A myopic perspective on AI
In a recent Red Herring magazine article, writer Geoffrey James said "pundits can't stop hyping the business opportunities of artificial intelligence" and described AI as a "technological backwater." Ray Kurzweil challenges this view, citing "hundreds of examples of narrow AI deeply integrated into our information-based economy" and "many applications beginning to combine multiple methodologies," a step towards the eventual achievement of "strong AI" (human-level intelligence in a machine).
Geoffrey James' myopic perspective on artificial intelligence ("Out of Their Minds," August 2002) harks back to the 1980s, when many observers equated AI with the single technique of "expert systems." It has always been my view that AI properly refers to a broad panoply of disciplines that emulate intelligent systems and behaviors. The reason technologists don't typically describe their projects as "using AI" is the same reason they don't describe them as "using computer science": either description is too broad to be useful. Far more informative are the many subfields of AI, such as robotics, natural language processing, character recognition, "quant" investing, and so on.
There are today hundreds of examples of narrow AI deeply integrated
into our information-based economy. Routing emails and cell phone
calls, automatically diagnosing electrocardiograms and blood cell
images, directing cruise missiles and weapon systems, automatically
landing airplanes, conducting pattern-recognition based financial
transactions, detecting credit card fraud, and a myriad of other
automated tasks are all successful examples of AI in use today.
Many major industries (e.g., medical drug discovery and the design of almost any product, including computers themselves) are increasingly reliant on these intelligent algorithms.
To call this a "backwater" is hardly a reasonable perspective.
These AI-based technologies simply did not exist or were in formative
stages only a decade ago. James is like those visitors to the rain
forest who plaintively ask "where are all these species I've
heard so much about?" when there are fifty species of ant alone
within fifty yards. Alan Turing predicted this, saying that intelligent
systems would become so deeply integrated in our society as to be
all but invisible.
As an aside, I found it interesting that James' primary example
of a successful AI company is ScanSoft, which used to be called
Kurzweil Computer Products, which I founded in 1974.
With virtually every industry making extensive use of intelligent algorithms, the intelligence of these systems is gradually becoming less narrow, with many applications beginning to combine multiple methodologies. "Strong AI" is not a separate endeavor; rather, it represents the culmination of these ongoing and accelerating trends.
It will always be easy to scoff at AI as long as there are tasks
at which humans are better, but the many derivatives of AI research
are becoming increasingly vital to our economy and civilization.
Mind·X Discussion About This Article:
Re: A myopic perspective on AI
Here are two sets of big questions, one fundamental and the other practical, just to spark some debate:
Fundamental questions: How likely is strong AI within the next 50 years? Assuming, as I do, that consciousness is an epiphenomenon of the brain, how close are we to being able to develop software as complex as that which operates the human mind? A top-down approach to writing such code doesn't appear too likely to succeed, but do we know enough to develop code that develops, like an organism? Final fundamental question: How close are we to being able to reverse-engineer the brain? Most neuroscientists I've spoken to doubt that it will be possible within this century...obviously there are folks here, and throughout the computer science community, who believe it will be possible within 30 years.
Practical questions: If investor and venture capitalist sentiment is accurately reflected by the Red Herring article, and with the government becoming more and more stingy with research funds, where is the development money for this technology to come from? Consumer demand for computer technology in the developed world is slowing...as is demand for mobile communications technology. Growth in both industries (at least in the West) is beginning to look like that in the auto industry: no longer exponential but asymptotic. Are there enough short- to mid-term bells and whistles in the pipeline to stoke demand...enough to fund this research? Are there enough consumers with enough wealth in the Third World to take up the slack? Semiconductor production costs are through the roof...and I've read a few articles proclaiming the death of Moore's Law. If so, is there a reasonable and inexpensive substitute for silicon chips available short-term?
BC
Re: A myopic perspective on AI
BC,
The question of "strong AI" itself needs refinement. I think that "consciousness" is possibly a distraction here. It is certainly an "epiphenomenon of the brain," but perhaps not merely of "any logical operations that mimic brain-like behavior." The issue of "artificial consciousness" is only one of ethics: it would be nice to know if you were "killing a sentient being" by unplugging a sufficiently able AI. Beyond this, all that matters is really performance.
Some might argue that an "unconscious" (as we experience the feeling) AI could never compose beautiful or "moving" music. I am not sure. Perhaps the "appreciative, esthetic right-brain" can be emulated by something that is never "conscious" except by some formal definition. But again, the import is one of ethics, not performance.
So the question becomes, "how would we decide that we have achieved strong AI?" What must it demonstrate to us that says "we have it now"? If we had super-intelligent (trans-human-level) AI, what would we expect it to do for us? What problems do we desire that it "solve" for us? What jobs do we desire that it "manage" for us?
I think people need to realize that there are distinctly different kinds of problems. Some are computationally intractable (the traveling salesman's tour), and for large sets you either settle for a good approximation via heuristics or must exhaust the entire problem space to arrive at the "optimum" solution. You don't need "super-AI" for such problems, only "BIG-AI": lots of processing and memory. You cannot "smart" your way to a shorter solution.
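For a concrete feel for that trade-off, here is a minimal sketch (purely illustrative; the random cities and function names are assumptions, not anything from the post) contrasting exhaustive search, which finds the true optimum but scales factorially, with a greedy nearest-neighbour heuristic that settles for a good-enough tour:

    # Illustrative sketch only: brute force vs. a nearest-neighbour heuristic
    # for a tiny traveling-salesman instance (all data here is made up).
    import itertools
    import math
    import random

    def tour_length(points, order):
        """Total length of the closed tour visiting points in the given order."""
        return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
                   for i in range(len(order)))

    def brute_force(points):
        """Exhaust the entire problem space: guaranteed optimum, factorial time."""
        n = len(points)
        best_rest = min(itertools.permutations(range(1, n)),
                        key=lambda rest: tour_length(points, (0,) + rest))
        return (0,) + best_rest

    def nearest_neighbour(points):
        """Greedy heuristic: always visit the closest unvisited city next."""
        unvisited = set(range(1, len(points)))
        order = [0]
        while unvisited:
            last = order[-1]
            nxt = min(unvisited, key=lambda j: math.dist(points[last], points[j]))
            order.append(nxt)
            unvisited.remove(nxt)
        return tuple(order)

    random.seed(1)
    cities = [(random.random(), random.random()) for _ in range(9)]
    print("optimal tour length  :", round(tour_length(cities, brute_force(cities)), 3))
    print("heuristic tour length:", round(tour_length(cities, nearest_neighbour(cities)), 3))

The heuristic runs almost instantly, and while its tour is typically somewhat longer than the optimum, no amount of cleverness removes the factorial blow-up of the exhaustive search, which is the point above.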
Other kinds of problems simply have no "objective best" solution, like the trade-off of X pounds of pollutants for Y pounds of product. People will approach the issue with different fundamental values, and no "super-AI" is going to decide the "correct value point".
I see AI doing a great job (eventually) acting as my "trusted agent". It can know who is knocking at my door, and let them in (or sound the alarm). It can "surf the web" looking for only the things it knows would be of interest to me. It can take my value system into account and decide when to buy and sell for me. The list goes on and on.
Demonstrate a few applications such as these, and the demand for "AI" will grow.
Cheers! ____tony b____
Re: A myopic perspective on AI
Harold,
> "in my imagination, strong ai will be able to weaponise art."
Interesting concept. Religion and politics are forms of weaponised art, in a sense.
> "it will be able to create such beautiful constructions that a human would be forever captured by them. he (the human) could think of nothing else for the rest of his life."
Are you suggesting this would be an intentional seduction, a deliberate humanity-pacifier? Or is this a metaphor for how humanity will naturally react to the inventive creations of a superior intelligence?
In either case, you see this as the end to "human striving to accomplish". Yes?
> "he would spend all of his time trying to imitate this great art, and would never have an original, inventive idea again."
Has anyone ever really had an "original, inventive idea", one that emerged on its own, independent of the influence of countless other ideas? Even the most creative individuals evolved their personalities in the exchange of "ideas" with the environment. Where is the boundary between "learned" and "created"?
> "have you noticed all of the lousy frank lloyd wright immitations that you see all over the place? that's a mild example."
There are always more followers than leaders.
But I take it you pose the question: "What becomes of human talent (and sense of worth) when artificial systems out-do us in all respects?"
A very good question, I think.
Cheers! ____tony b____
Re: A myopic perspective on AI
>> "in my imagination, strong ai will be able to weaponise art."
Hmmmmm... so would an art-based arms race develop, with humans creating the equivalent of Geordi and Data's Borg-destroying image to be implanted in the AI collective mind?
>> "it will be able to create such beautiful constructions that a human would be forever captured by them. he (the human) could think of nothing else for the rest of his life."
> Are you suggesting this would be an intentional seduction, a deliberate humanity-pacifier? Or is this a metaphor for how humanity will naturally react to the inventive creations of a superior intelligence.
As it turns out, humanity-pacifying art need be neither original, inspired, nor beautiful. It exists today and ensnares over 40% of the waking hours of some parts of the population in technologically advanced countries. It is called television.
>>In either case, you see this as the end to "human striving to accomplish". Yes?
>But I take it, you pose the question: "What becomes of human talent (and sense of worth) when artificial systems out-do us in all respects."
Cars (and many animals) are faster than humans, yet humans still run. Cameras take much better pictures than humans can paint, yet humans still paint (which is itself a technological product). Just because AI can answer some questions faster, more completely, or more accurately does not (necessarily) mean the end of human thinking.
Re: A myopic perspective on AI
I think coupling data mining with new ways of visualising data that more closely match what we are good at, i.e. our cognitive functions, is worth exploring. Instead of two-dimensional text, language, and thinking like Top-Down, Bottom-Up, etc., we could try conceptualising things using Outside-In, Inside-Out (shake it all about). No, seriously: a system isn't necessarily a two-dimensional thing. I had these thoughts about scanning through data graphically displayed like a broken volume/quantity chart, made up of custom slices of time instead of a whole section of time, using the cursor keys to step forward and back in time and letting our eyes look for repetition. So much quicker than writing data mining algorithms, although having never written any I can't claim that with authority.
It's four-dimensional data, not two or three. It's Intelligence Amplification (sorry, no link), and I think it is a good direction to go in too. It isn't necessarily in opposition to AI, as I have read once or twice. It's all part of the same technological progress, IMO.
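A rough sketch of that idea (everything here, from the synthetic series to the slice length and key bindings, is an assumption for illustration, not something from the post): reshape a long volume/quantity series into custom slices of time and step through them with the cursor keys, letting the eye hunt for repetition.

    # Illustrative sketch only: page through time-slices of a series with the
    # arrow keys and look for repetition by eye (the data below is synthetic).
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    slice_len = 200          # samples per custom "slice" of time
    n_slices = 50
    t = np.arange(n_slices * slice_len)
    # a repeating pattern buried in noise, standing in for a volume/quantity chart
    series = np.sin(2 * np.pi * t / 73) + 0.5 * rng.standard_normal(t.size)
    slices = series.reshape(n_slices, slice_len)

    idx = 0
    fig, ax = plt.subplots()
    line, = ax.plot(slices[idx])
    ax.set_ylim(series.min() - 0.1, series.max() + 0.1)
    ax.set_title("slice 0")

    def on_key(event):
        """Step forward/back through the slices with the right/left arrow keys."""
        global idx
        if event.key == "right":
            idx = (idx + 1) % n_slices
        elif event.key == "left":
            idx = (idx - 1) % n_slices
        else:
            return
        line.set_ydata(slices[idx])
        ax.set_title(f"slice {idx}")
        fig.canvas.draw_idle()

    fig.canvas.mpl_connect("key_press_event", on_key)
    plt.show()

This keeps the pattern-finding in the human eye, as the post suggests, rather than in a mining algorithm; swapping in a real series for the synthetic one is the only change needed.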