Response to Mitchell Kapor's "Why I Think I Will Win"
Permanent link to this article: http://www.kurzweilai.net/meme/frame.html?main=/articles/art0413.html
Ray Kurzweil responds to Mitch Kapor's arguments against the likelihood
of an AI passing a Turing Test by 2029, in this final counterpoint on
the bet: an AI will pass a Turing Test by 2029.
Published April 9, 2002 on KurzweilAI.net. An explanation of the bet
and its background, with rules and definitions, is available on
KurzweilAI.net, along with Ray's statement of why he thinks he will
win and Mitch Kapor's statement of why he thinks he won't.
Mitchell's essay provides a thorough and concise statement of the
classic arguments against the likelihood of Turing-level machines
in a several decade timeframe. Mitch ends with a nice compliment
comparing me to future machines, and I only wish that it were true.
I think of all the books and web sites I'd like to read, and of
all the people I'd like to dialog and interact with, and I realize
just how limited my current bandwidth and attention span are with
my mere hundred trillion connections.
I discussed several of Mitchell's insightful objections in my statement,
and augment these observations here:
"We are embodied creatures": True, but machines will
have bodies also, in both real and virtual reality.
"Emotion is as or more basic than cognition": Yes, I
agree. As I discussed, our ability to perceive and respond appropriately
to emotion is the most complex thing that we do. Understanding our
emotional intelligence will be the primary target of our reverse
engineering efforts. There is no reason that we cannot understand
our own emotions and the complex biological system that gives rise
to them. We've already demonstrated the feasibility of understanding
regions of the brain in great detail.
"We are conscious beings, capable of reflection and self-awareness."
I think we have to distinguish the performance aspects of what is
commonly called consciousness (i.e., the ability to be reflective
and aware of ourselves) versus consciousness as the ultimate ontological
reality. Since the Turing test is a test of performance, it is the
performance aspects of what is commonly referred to as consciousness
that we are concerned with here. And in this regard, our ability
to build models of ourselves and our relation to others and the
environment is indeed a subtle and complex quality of human thinking.
However, there is no reason why a nonbiological intelligence could
not build comparable models in its nonbiological brain.
Mitchell cites the limitations of the expert system methodology
and I agree with this. A lot of AI criticism is really criticism
of this approach. The core strength of human intelligence is not
logical analysis of rules, but rather pattern recognition, which
requires a completely different paradigm. This pertains also to
Mitchell's objection to the "metaphor" of "brain-as-computer."
The future machines that I envision will not be like the computers
of today, but will be biologically inspired and will be emulating
the massively parallel, self-organizing, holographically organized
methods that are used in the human brain. A future AI certainly
won't be using expert system techniques. Rather, it will be a complex
system of systems, each built with a different methodology, just
like, well, the human brain.
I will say that Mitchell is overlooking the hundreds of ways in
which "narrow AI" has infiltrated our contemporary systems.
Expert systems are not the best example of these, and I cited several
categories in my statement.
I agree with Mitchell that the brain does not represent the entirety
of our thinking process, but it does represent the bulk of it. In
particular, the endocrine system is orders of magnitude simpler
and operates at very low bandwidth compared to neural processes
(which themselves utilize a form of analog information processing
dramatically slower than contemporary electronic systems).
Mitchell expresses skepticism that "it's all about the bits
and just the bits." There is something going on in the human
brain, and these processes are not hidden from us. I agree that
it's actually not exactly bits because what we've already learned
is that the brain uses digitally controlled analog methods. We know
that analog methods can be emulated by digital methods but there
are engineering reasons to prefer analog techniques because they
are more efficient by several orders of magnitude. However, the
work of Caltech Professor Carver Mead and others has shown that
we can use this approach in our machines. Again, this is different
from today's computers, but will be, I believe, an important future
trend.
However, I think Mitchell's primary point here is not to distinguish
analog and digital computing methods, but to make reference to some
other kind of "stuff" that we inherently can't recreate
in a machine. I believe, however, that the scale of the human nervous
system (and, yes, the endocrine system, although as I said this
adds little additional complexity) is sufficient to explain the
complexity and subtlety of our behavior.
I think the most compelling argument that Mitchell offers is his
insight that most experience is not book learning. I agree, but
point out that one of the primary purposes of nonbiological intelligence
is to interact with us humans. So embodied AIs will have plenty
of opportunity to learn from direct interaction with their human
progenitors, as well as to observe a massive quantity of other full
immersion human interaction available over the web.
Now it's true that AIs will have a different history from humans,
and that does represent an additional challenge to their passing
the Turing test. As I pointed out in my statement, it's harder (even
for humans) to successfully defend a fictional history than a real
one. So an AI will actually need to surpass native human intelligence
in order to pass for a human in a valid Turing test. And that's
what I'm betting on.
I can imagine Mitchell saying to himself as he reads this "But
does Ray really appreciate the extraordinary depth of human intellect
and emotion?" I believe that I do and think that Mitchell has
done an excellent job of articulating this perspective. I would
put the question back and ask whether Mitchell really appreciates
the extraordinary power and depth of the technology that lies ahead,
which will be billions of times more powerful and complex than what
we have today.
On that note, I would end by emphasizing the accelerating pace
of progress in all of these information-based technologies. The
power of these technologies is doubling every year, and the paradigm
shift rate is doubling every decade, so the next thirty years will
be like 140 years at today's rate of progress. And the past 140
years were comparable to only about 30 years of progress at today's
rate, because we have been accelerating up to this point.
If one really absorbs the implications of what I call the law of
accelerating returns, then it becomes apparent that over the next
three decades (well, 28 years to be exact when Mitchell and I sit
down to compare notes), we will see astonishing levels of technological
progress.
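The arithmetic behind these equivalences can be sketched in a few lines. The decomposition below is my reconstruction of the stated figure, under the assumption that each coming decade proceeds at double the previous decade's average rate, with the first decade averaging twice today's rate:

```python
# Rough sketch (my reconstruction) of "the next 30 years will be like
# 140 years at today's rate": assume each coming decade proceeds at
# double the previous decade's average rate, starting at 2x today's.
def equivalent_years(decades: int) -> float:
    total = 0.0
    rate = 2.0  # first decade averages twice today's rate of progress
    for _ in range(decades):
        total += 10 * rate  # ten calendar years at this decade's rate
        rate *= 2           # paradigm-shift rate doubles each decade
    return total

print(equivalent_years(3))  # 20 + 40 + 80 = 140.0
```

Three decades forward gives 20 + 40 + 80 = 140 equivalent years, matching the figure above.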
Mind·X Discussion About This Article:
Learning through experience
In their debate over who will win the proposed Turing Test, Ray says: "I think the most compelling argument that Mitchell offers is his insight that most experience is not book learning. I agree, but point out that one of the primary purposes of nonbiological intelligence is to interact with us humans. So embodied AI's will have plenty of opportunity to learn from direct interaction with their human progenitors, as well as to observe a massive quantity of other full immersion human interaction available over the web."
I expect that a non-human entity will be able to pass the Turing Test well before 2029. I use the term non-human entity instead of computer because the being that passes the test will be much more than what we think of today as a computer. It will be an amalgam of artificial intelligence, robot, and distributed network.
A key factor that Ray did not mention is that this embodied AI will learn not only from "direct interaction with" humans and "full immersion human interaction available over the web", but also from its own subjective experience of being in the world. It will have the ability to independently sense the difference between hot and cold, wet and dry, loud and soft, bright and dark, gentle and rough, polite and rude.
In many ways these entities will be like human children discovering the world anew and with their own unique perspective. The act of cataloging such experiences and developing patterns of recognition, reaction, and response may in fact result in a kind of emotion. I look forward to getting to know some of these new creatures and becoming friends with them.
- Mike Treder, Incipient Posthuman
Re: Response to Mitchell Kapor's "Why I Think I Will Win"
>>>Take a good chess program for example. And feed it with enough CPU power and it will smash every human player.
No. Take a good chess program and feed it enough previous chess games, scenarios, and algorithms, and it will smash every human player. CPU power alone does not bring about artificial intelligence.
>>>It's true. It took 40 years of human endeavors to make one that good and to feed it with enough calculation power - but it was proven possible.
It took 40 years of refining algorithms and adding knowledge to prove it possible. Raw computing power just means that it can churn through those algorithms fast enough to compete in real time. If I put a 100 terahertz processor in my computer, it won't start beating chess masters all of a sudden.
>>>Most people however are still unable to comprehend this fact. They keep insist, that Deep Blue hasn't won. (Even if it didn't, with some extra MHz (let alone GHz) - it surely would.)
If you add a few gigahertz here or megahertz there, it won't do anything to improve the chess-playing ability of Deep Blue. Again, if I upgrade the processor in my computer, it won't figure out how to manage memory more efficiently, it won't figure out how to stop itself from crashing, and it won't figure out that I hate the damn Clippy icon in Microsoft Word. In fact, it won't figure out anything. It will continue to run the same software with the same features, only (blazingly) faster. Only when the power of the CPU is harnessed, with better software, will improvements be made.
>>>Soon, programs will take a lead in the enhancement of everything. Things will go much faster then.
How will the programs do that? We have to write the programs that "take the lead". Until we have those programs, raw CPU power is meaningless from an AI standpoint.
I'm not saying it isn't possible; in fact, I am a firm believer that intelligence in computers is indeed an attainable goal. However, if we look at the trends and say "computers will become intelligent on their own...look at those exponential curves!" we're kidding ourselves.
Re: Response to Mitchell Kapor's "Why I Think I Will Win"
> No. Take a good chess program and feed it enough previous chess games, scenarios, and algorithms, and it will smash every human player. CPU power alone does not bring about artificial intelligence.
You _can_ reverse-engineer human play. But you can also let the program calculate deep into the position, much deeper than any human can. And it will win, provided it has enough CPU power and RAM. Accumulating the old knowledge of what to play in certain circumstances is a matter of experience. A lot of computing power can give you a billion games played and analyzed every second.
For example, a simple program can always win with two bishops and the king against the king and two knights. No human can.
> Raw computing power just means that it can churn through those algorithms fast enough
Again - chess rules and enough computing power beat every Kasparov. The question is whether the entire computing power on Earth today is enough for that - but sooner or later, it will be.
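For a sense of what "enough computing power" means here, a back-of-the-envelope sketch. The numbers are my assumptions: an average branching factor of about 35 for chess, and a hypothetical machine evaluating a fixed number of positions per second:

```python
# Estimate brute-force search time: positions grow as branching**depth.
def search_time_seconds(depth: int, nps: float, branching: int = 35) -> float:
    return branching ** depth / nps  # positions examined / positions per second

# A hypothetical machine examining one billion positions per second:
for depth in (6, 8, 10, 12):
    print(f"depth {depth} plies: ~{search_time_seconds(depth, 1e9):.2g} s")
```

The exponential growth means each extra ply of full-width search costs roughly 35 times more computation, which is why hardware gains alone buy only a ply or two of extra depth.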
> If I put a 100 terrahertz processor in my computer, it won't start beating chess masters all of a sudden.
Give it enough storage capacity and time to work out good moves for many positions. You gave that to Kasparov and to humanity.
> If you add a few Gigahertz here or megahertz there, it won't do anything to improve the chess-playing ability
Of course it will! The program will scan deeper. It's ALL about that. ALL.
> Again, if I upgrade my processor in my computer, it won't figure out how to manage memory more efficiently
To a certain level it will. Better caching. But RAM will be improved also.
> it won't figure out how to stop itself from crashing,
Don't count much on this. Newer OSes crash less often.
> and it won't figure out that I hate the damn Clippy icon in Microsoft Word.
Clippy may be stupid - I wouldn't know. But maybe your hate has nothing to do with that.
> Only when the power of the CPU is harnessed, with better software, will improvements be made.
Not _only_. Sometimes additional CPU makes a big difference.
> How will the programs do that?
So-called genetic (evolutionary) algorithms can improve almost everything. They need as much CPU as possible. And memory. And bandwidth. And nothing else.
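A minimal sketch of the idea (mine, a toy "OneMax" problem rather than Koza's genetic programming): selection, crossover, and mutation improve a population using nothing but compute.

```python
import random

# Toy genetic algorithm: evolve bitstrings toward all ones ("OneMax").
random.seed(42)

def fitness(bits):
    return sum(bits)  # more ones = fitter

def evolve(length=20, pop_size=30, generations=60, mutation_rate=0.02):
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)    # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - bit if random.random() < mutation_rate else bit
                     for bit in child]           # point mutation
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # close to the maximum of 20 with these settings
```

Nothing in the loop knows anything about the problem beyond the fitness function; more CPU simply means more generations and bigger populations.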
> We have to write the programs that "take the lead".
Sometimes it's better to wait for something to evolve inside the computer. More and more often.
> Until we have those programs, raw CPU power is meaningless from an AI standpoint.
We already have. John Koza ... and others.
> if we look at the trends and say "computers will become intelligent on their own...look at those exponential curves!" we're kidding ourselves.
Of course. But from a certain point on ... it will happen. Puberty of our software is close.
- Thomas
Re: Response to Mitchell Kapor's
>>>But you can also let the program to calculate deep in the position. Much deeper than any human can. And it will win, provided it has enough CPU power and RAM storage.
*How*, exactly, does the computer learn to calculate chess moves? The CPU doesn't teach itself. There is no set of chess "rules" that, when followed, results in a win every time. If you find that kind of rule set, let me know; we could make a lot of money.
>>>For example, a simple program can always win with two bishops and the king against the king and two knights. No human can.
That is blatantly a false statement. Any combination of moves a computer can carry out, a human can.
>>>>If you add a few Gigahertz here or megahertz there, it won't do anything to improve the chess-playing ability
>>>Of course it will! The program will scan deeper. It's ALL about that. ALL.
OK, a faster computer will be able to scan more moves in the same amount of time. But let's say we allow a slower computer to run for a longer amount of time, so that the total number of calculations evens out. Both computers (slow and fast) will come up with the same moves.
Why can't we apply this to other types of artificial intelligence? Why can't I have an incredibly slow conversation with my computer? The answer is that we don't have any clue how to program computers to do that, and no amount of processing power will change that. Processing power may be an "enabling technology", but we still need a breakthrough in software.
>>>>Only when the power of the CPU is harnessed, with better software, will improvements be made.
>>>Not _only_. Sometimes additional CPU makes a big difference.
Stating your thesis over and over doesn't convince me of its validity.
Re: Response to Mitchell Kapor's
Actually, the previous poster is right, to a point! The evaluation function is still a matter of art and guesswork. However, if there is a computer built that will go 20 plies deep at any stage of the game, it will beat any human, even if the evaluation function is simple.

Better yet, program a typical opening book into a machine and put variable weighting on a list of criteria for good play: knight distance from the center, number of open ranks for rooks, diagonals for bishops, passed pawns, minimizing possible moves by the opposing player, etc. Then feed in the thousands of Grandmaster games on record, and let the program finish the drawn games, adjusting the weighting all the while!

The first chess programs were very bad, and people felt that chess was exclusively the domain of intelligent human experts. But programmers gradually extracted and emulated "mechanizable" components of chess play until there was very little art and judgment left over. Garry Kasparov made a minor but well-known (to grandmasters) mistake and just resigned because he knew he didn't have enough art and judgment left to overcome the brute-force power of Deep Blue. I expect that *all* tasks requiring art and judgment will have a sufficient proportion of mechanizable elements to be capably performed by computers.
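The weighted-criteria scheme described above can be sketched as a linear evaluation function. The feature names and weights here are my illustrative stand-ins, not Deep Blue's actual terms:

```python
# Score a chess position as a weighted sum of hand-picked criteria.
def evaluate(features: dict[str, float], weights: dict[str, float]) -> float:
    return sum(weights[name] * value for name, value in features.items())

# Hypothetical weights a tuning process might adjust game after game:
weights  = {"material": 1.0, "rook_open_files": 0.3,
            "passed_pawns": 0.5, "opponent_mobility": -0.2}
position = {"material": 2.0, "rook_open_files": 1.0,
            "passed_pawns": 0.0, "opponent_mobility": 8.0}
print(evaluate(position, weights))  # about 0.7
```

The tuning the poster describes then reduces to nudging the weights whenever the weighted score disagrees with actual game outcomes.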
Re: Response to Mitchell Kapor's "Why I Think I Will Win"
You underestimate <a href="http://en.wikipedia.org/wiki/Grover's_algorithm">Grover's Algorithm</a>.
It is a general algorithm for turning any "test all possibilities" problem into a quantum search, reducing the time to compute it to O(N^1/2) instead of the O(N) a normal computer would take, where N is the number of candidates. For example, it searches the solution space of any NP-complete problem in O(N^1/2) evaluations.
As a further example, assume that you have some fitness function that will return 1 for a set of arbitrary inputs that, for example, describe a self-replicating nano-assembler composed of iron, silicon & oxygen, and 0 for any other set of arbitrary inputs. Assume that you need 10^20 combinations to describe all the iron-silicon-oxygen structures that could produce this machine. Further assume that it takes 10,000 operations to analyze one configuration for fitness.
An ordinary computer running at 1 gigahertz would require 10,000 * 10^20 / 10^9 = 10^15 seconds =~ 30,000,000 years to test all combinations.
A 1 megahertz quantum computer running Grover's Algorithm would require 10,000 * 10^10 / 10^6 = 10^8 seconds =~ 3 years to return a correct combination.
When you want to find a satisfactory combination from a boatload of combinations, and you can't find another way to do it that's faster than O(N^1/2), Grover's Algorithm is the way to go.
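The arithmetic in the example above checks out; a quick sketch, reusing the poster's numbers:

```python
# Reproduce the classical-vs-Grover estimate from the example above.
ops_per_check = 1e4    # operations to test one configuration
candidates    = 1e20   # configurations in the search space

classical_s = ops_per_check * candidates / 1e9         # 1 GHz classical machine
grover_s    = ops_per_check * candidates ** 0.5 / 1e6  # 1 MHz quantum machine

SECONDS_PER_YEAR = 3.15e7
print(classical_s / SECONDS_PER_YEAR)  # ~3.2e7 years, about 30 million
print(grover_s / SECONDS_PER_YEAR)     # ~3.2 years
```

The square root on the candidate count is what collapses 10^20 checks into roughly 10^10, so even a much slower quantum machine wins by seven orders of magnitude.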
It seems to me that this leads to some very interesting possibilities when the possibility space is very large (10^20 or bigger).