Permanent link to this article: http://www.kurzweilai.net/meme/frame.html?main=/articles/art0413.html

    Response to Mitchell Kapor's "Why I Think I Will Win"
by   Ray Kurzweil

Ray Kurzweil responds to Mitch Kapor's arguments against the possibility of an AI passing a Turing Test by 2029, in this final counterpoint on the bet: an AI will pass a Turing Test by 2029.


Published April 9, 2002 on KurzweilAI.net. Click here to read an explanation of the bet and its background, with rules and definitions. Read why Ray thinks he will win here. Click here to see why Mitch Kapor thinks he won't.

Mitchell's essay provides a thorough and concise statement of the classic arguments against the likelihood of Turing-level machines in a several-decade timeframe. Mitch ends with a nice compliment comparing me to future machines, and I only wish that it were true. I think of all the books and web sites I'd like to read, and of all the people I'd like to dialog and interact with, and I realize just how limited my current bandwidth and attention span are with my mere hundred trillion connections.

I discussed several of Mitchell's insightful objections in my statement, and augment these observations here:

"We are embodied creatures": True, but machines will have bodies also, in both real and virtual reality.

"Emotion is as or more basic than cognition": Yes, I agree. As I discussed, our ability to perceive and respond appropriately to emotion is the most complex thing that we do. Understanding our emotional intelligence will be the primary target of our reverse engineering efforts. There is no reason that we cannot understand our own emotions and the complex biological system that gives rise to them. We've already demonstrated the feasibility of understanding regions of the brain in great detail.

"We are conscious beings, capable of reflection and self-awareness." I think we have to distinguish the performance aspects of what is commonly called consciousness (i.e., the ability to be reflective and aware of ourselves) versus consciousness as the ultimate ontological reality. Since the Turing test is a test of performance, it is the performance aspects of what is commonly referred to as consciousness that we are concerned with here. And in this regard, our ability to build models of ourselves and our relation to others and the environment is indeed a subtle and complex quality of human thinking. However there is no reason why a nonbiological intelligence would be restricted from similarly building comparable models in its nonbiological brain.

Mitchell cites the limitations of the expert system methodology and I agree with this. A lot of AI criticism is really criticism of this approach. The core strength of human intelligence is not logical analysis of rules, but rather pattern recognition, which requires a completely different paradigm. This pertains also to Mitchell's objection to the "metaphor" of "brain-as-computer." The future machines that I envision will not be like the computers of today, but will be biologically inspired and will be emulating the massively parallel, self-organizing, holographically organized methods that are used in the human brain. A future AI certainly won't be using expert system techniques. Rather, it will be a complex system of systems, each built with a different methodology, just like, well, the human brain.

I will say that Mitchell is overlooking the hundreds of ways in which "narrow AI" has infiltrated our contemporary systems. Expert systems are not the best example of these, and I cited several categories in my statement.

I agree with Mitchell that the brain does not represent the entirety of our thinking process, but it does represent the bulk of it. In particular, the endocrine system is orders of magnitude simpler and operates at very low bandwidth compared to neural processes (which themselves utilize a form of analog information processing dramatically slower than contemporary electronic systems).

Mitchell expresses skepticism that "it's all about the bits and just the bits." There is something going on in the human brain, and these processes are not hidden from us. I agree that it's actually not exactly bits, because what we've already learned is that the brain uses digitally controlled analog methods. We know that analog methods can be emulated by digital methods, but there are engineering reasons to prefer analog techniques, because they are more efficient by several orders of magnitude. However, the work of Caltech Professor Carver Mead and others has shown that we can use this approach in our machines. Again, this is different from today's computers, but will be, I believe, an important future trend.

However, I think Mitchell's primary point here is not to distinguish analog and digital computing methods, but to make reference to some other kind of "stuff" that we inherently can't recreate in a machine. I believe, however, that the scale of the human nervous system (and, yes, the endocrine system, although as I said this adds little additional complexity) is sufficient to explain the complexity and subtlety of our behavior.

I think the most compelling argument that Mitchell offers is his insight that most experience is not book learning. I agree, but point out that one of the primary purposes of nonbiological intelligence is to interact with us humans. So embodied AI's will have plenty of opportunity to learn from direct interaction with their human progenitors, as well as to observe a massive quantity of other full immersion human interaction available over the web.

Now it's true that AI's will have a different history from humans, and that does represent an additional challenge to their passing the Turing test. As I pointed out in my statement, it's harder (even for humans) to successfully defend a fictional history than a real one. So an AI will actually need to surpass native human intelligence in order to pass for a human in a valid Turing test. And that's what I'm betting on.

I can imagine Mitchell saying to himself as he reads this, "But does Ray really appreciate the extraordinary depth of human intellect and emotion?" I believe that I do, and I think that Mitchell has done an excellent job of articulating this perspective. I would put the question back and ask whether Mitchell really appreciates the extraordinary power and depth of the technology that lies ahead, which will be billions of times more powerful and complex than what we have today.

On that note, I would end by emphasizing the accelerating pace of progress in all of these information-based technologies. The power of these technologies is doubling every year, and the paradigm shift rate is doubling every decade, so the next thirty years will be like 140 years at today's rate of progress. And the past 140 years was comparable to only about 30 years of progress at today's rate of progress because we've been accelerating up to this point. If one really absorbs the implications of what I call the law of accelerating returns, then it becomes apparent that over the next three decades (well, 28 years to be exact when Mitchell and I sit down to compare notes), we will see astonishing levels of technological progress.

Mind·X Discussion About This Article:

Learning through experience
posted on 03/01/2002 8:22 AM by iph1954@msn.com

In their debate over who will win the proposed Turing Test, Ray says: "I think the most compelling argument that Mitchell offers is his insight that most experience is not book learning. I agree, but point out that one of the primary purposes of nonbiological intelligence is to interact with us humans. So embodied AI's will have plenty of opportunity to learn from direct interaction with their human progenitors, as well as to observe a massive quantity of other full immersion human interaction available over the web."

I expect that a non-human entity will be able to pass the Turing Test well before 2029. I use the term non-human entity instead of computer because the being that passes the test will be much more than what we think of today as a computer. It will be an amalgam of artificial intelligence, robot, and distributed network.

A key factor that Ray did not mention is that this embodied AI will learn not only from "direct interaction with" humans and "full immersion human interaction available over the web", but also from its own subjective experience of being in the world. It will have the ability to independently sense the difference between hot and cold, wet and dry, loud and soft, bright and dark, gentle and rough, polite and rude.

In many ways these entities will be like human children discovering the world anew and with their own unique perspective. The act of cataloging such experiences and developing patterns of recognition, reaction, and response may in fact result in a kind of emotion. I look forward to getting to know some of these new creatures and becoming friends with them.

- Mike Treder, Incipient Posthuman

Re: Response to Mitchell Kapor's "Why I Think I Will Win"
posted on 03/04/2002 11:17 AM by net.kurzweilai@rnix.com

Why is there still this desire to have a "machine" pass such a test? If I ask "Are you human?" why would I want a machine that responds "yes" to try and fool me? And if I ask "what is 987654321.123456789 * 123456789.987654321?", why would I want to have a machine simulate the time it would take a human to calculate it?
The Turing Test is quaint but antiquated.

Re: Response to Mitchell Kapor's "Why I Think I Will Win"
posted on 03/04/2002 2:53 PM by tomaz@techemail.com

I agree. ;)

- Thomas

Re: Response to Mitchell Kapor's "Why I Think I Will Win"
posted on 03/15/2002 2:45 PM by lothar63@hotmail.com

Neither of you guys seems to appreciate the significance of the Turing test. It's a very important milestone in AI development.

Paul

Re: Response to Mitchell Kapor's "Why I Think I Will Win"
posted on 03/15/2002 4:19 PM by tomaz@techemail.com

Humans (including myself) are quite stupid. Meat is not a very efficient way to compute - after all!

Today, computer programs are very intelligent in some (still narrow) areas. Far ahead of us.

Take a good chess program, for example. Feed it enough CPU power and it will smash every human player.

It's true. It took 40 years of human endeavor to make one that good and to feed it with enough calculation power, but it was proven possible.

Most people, however, are still unable to comprehend this fact. They keep insisting that Deep Blue didn't win. (Even if it didn't, with some extra MHz (let alone GHz) it surely would have.)

Soon, programs will take a lead in the enhancement of everything. Things will go much faster then.

Soon after, we will become intelligent too!

- Thomas




Re: Response to Mitchell Kapor's "Why I Think I Will Win"
posted on 03/15/2002 4:48 PM by -

>>>Take a good chess program for example. And feed it with enough CPU power and it will smash every human player.

No. Take a good chess program and feed it enough previous chess games, scenarios, and algorithms, and it will smash every human player. CPU power alone does not bring about artificial intelligence.

>>>It's true. It took 40 years of human endeavors to make one that good and to feed it with enough calculation power - but it was proven possible.

It took 40 years of refining algorithms and adding knowledge to prove it possible. Raw computing power just means that it can churn through those algorithms fast enough to compete in real time. If I put a 100 terahertz processor in my computer, it won't start beating chess masters all of a sudden.

>>>Most people however are still unable to comprehend this fact. They keep insist, that Deep Blue hasn't won. (Even if it didn't, with some extra MHz (let alone GHz) - it surely would.)

If you add a few gigahertz here or megahertz there, it won't do anything to improve the chess-playing ability of Deep Blue. Again, if I upgrade the processor in my computer, it won't figure out how to manage memory more efficiently, it won't figure out how to stop itself from crashing, and it won't figure out that I hate the damn Clippy icon in Microsoft Word. In fact, it won't figure out anything. It will continue to run the same software with the same features, only (blazingly) faster. Only when the power of the CPU is harnessed, with better software, will improvements be made.

>>>Soon, programs will take a lead in the enhancement of everything. Things will go much faster then.

How will the programs do that? We have to write the programs that "take the lead". Until we have those programs, raw CPU power is meaningless from an AI standpoint.

I'm not saying it isn't possible, in fact I am a firm believer that intelligence in computers is indeed an attainable goal. However, if we look at the trends and say "computers will become intelligent on their own...look at those exponential curves!" we're kidding ourselves.

Re: Response to Mitchell Kapor's "Why I Think I Will Win"
posted on 03/15/2002 5:39 PM by tomaz@techemail.com

> No. Take a good chess program and feed it enough previous chess games, scenarios, and algorithms, and it will smash every human player. CPU power alone does not bring about artificial intelligence.

You _can_ reverse engineer human play. But you can also let the program calculate deep into the position. Much deeper than any human can. And it will win, provided it has enough CPU power and RAM storage. Accumulating old knowledge about what to play in certain circumstances is a matter of experience. A lot of computing power can give you a billion games played and analyzed every second.

For example, a simple program can always win with two bishops and the king against the king and two knights. No human can.

> Raw computing power just means that it can churn through those algorithms fast enough

Again - chess rules and enough computing power beat every Kasparov. The question is whether the entire computing power on Earth today is enough for that, but sooner or later it will be.

> If I put a 100 terahertz processor in my computer, it won't start beating chess masters all of a sudden.

Give it enough storage capacity and time to render out good moves for many positions. You gave that to Kasparov and humanity.

> If you add a few Gigahertz here or megahertz there, it won't do anything to improve the chess-playing ability

Of course it will! The program will scan deeper. It's ALL about that. ALL.

> Again, if I upgrade my processor in my computer, it won't figure out how to manage memory more efficiently

To a certain level it will. Better caching. But RAM will be improved also.

> it won't figure out how to stop itself from crashing,

Don't count much on this. Newer OSes crash less often.

> and it won't figure out that I hate the damn Clippy icon in Microsoft Word.

Clippy may be stupid - I wouldn't know. But maybe your hate has nothing to do with that.

> Only when the power of the CPU is harnessed, with better software, will improvements be made.

Not _only_. Sometimes additional CPU makes a big difference.

> How will the programs do that?

So-called genetic (evolutionary) algorithms can improve almost everything. They need as much CPU as possible. And memory. And bandwidth. And nothing else.
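The kind of evolutionary loop Thomas is alluding to can be sketched in a few lines. This is a toy illustration only: the bit-string target, population size, and mutation rate are arbitrary assumptions, not anything from the post.

```python
import random

# Minimal genetic algorithm: evolve a bit-string toward an all-ones
# target using nothing but random mutation and fitness-based selection.

TARGET = [1] * 20

def fitness(genome):
    # Count how many bits already match the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Flip each bit independently with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=30, generations=200):
    population = [[random.randint(0, 1) for _ in range(20)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]        # selection
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)
```

Nothing here encodes any knowledge of the answer; pushing the fitness up is purely a matter of spending more CPU on generations and population, which is the point being made.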

> We have to write the programs that "take the lead".

Sometimes it is better to wait for something to evolve inside the computer. More and more often.

> Until we have those programs, raw CPU power is meaningless from an AI standpoint.

We already have. John Koza ... and others.

> if we look at the trends and say "computers will become intelligent on their own...look at those exponential curves!" we're kidding ourselves.

Of course. But from a certain point on ... it will happen. The puberty of our software is close.

- Thomas

Re: Response to Mitchell Kapor's
posted on 03/17/2002 10:39 PM by -

>>>But you can also let the program to calculate deep in the position. Much deeper than any human can. And it will win, provided it has enough CPU power and RAM storage.

*How*, exactly, does the computer learn how to calculate chess moves? The CPU doesn't teach itself. There are no chess "rules" that, when followed, result in a win every time. If you find that kind of rule set, let me know, we could make a lot of money.

>>>For example, a simple program can always win with two bishops and the king against the king and two knights. No human can.

That is blatantly a false statement. Any combination of moves a computer can carry out, a human can.

>>>>If you add a few Gigahertz here or megahertz there, it won't do anything to improve the chess-playing ability
>>>Of course it will! The program will scan deeper. It's ALL about that. ALL.

OK, a faster computer will be able to scan more moves in the same amount of time. But let's say we allow a slower computer to run for a longer amount of time, so that the total number of calculations evens out. Both computers (slow and fast) will come up with the same moves.

Why can't we apply this to other types of artificial intelligence? Why can't I have an incredibly slow conversation with my computer? The answer is that we don't have any clue how to program computers to do that, and no amount of processing power will change that. Processing power may be an "enabling technology", but we still need a breakthrough in software.

>>>>Only when the power of the CPU is harnessed, with better software, will improvements be made.
>>>Not _only_. Sometimes additional CPU makes a big difference.

Stating your thesis over and over doesn't convince me of its validity.


Re: Response to Mitchell Kapor's
posted on 03/19/2002 1:47 PM by tomaz@techemail.com

> *How*, exactly, does the computer learn how to calculate chess moves?

One example:

- program R0 picks a random (legal) move.

You can imagine that two random (R0) programs can play against each other.

Program R1 CHECKS every next move only by how good the potential move is in a number of R0 vs. R0 games.

In other words: R1 will open (select one of the 20 possible initial moves) by looking at which move is usually best in a hundred R0 vs. R0 games from that move on.

Not only the opening: it calculates every move that way. We now have the R1 strategy, which is better than R0.

R2 can be defined the same way. After enough steps we have a powerful program RN, which pumps its strength out of massive CPU calculation.
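The R0/R1 scheme above can be made concrete. Here is a minimal sketch for tic-tac-toe rather than chess; the function names and the 100-playout budget are illustrative assumptions, not from the post.

```python
import random

# R0: random legal play. R1: score each candidate move by its win
# record over many R0-vs-R0 playouts, and pick the best-scoring one.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == ' ']

def r0_move(board):
    # R0: pick a random legal move.
    return random.choice(legal_moves(board))

def rollout(board, to_move):
    # Finish the game with R0 playing both sides; return 'X', 'O', or None.
    board = board[:]
    while legal_moves(board) and winner(board) is None:
        board[r0_move(board)] = to_move
        to_move = 'O' if to_move == 'X' else 'X'
    return winner(board)

def r1_move(board, player, games=100):
    # R1: rate each candidate move by its win record over R0-vs-R0 playouts.
    opponent = 'O' if player == 'X' else 'X'
    best_move, best_score = None, -1.0
    for move in legal_moves(board):
        trial = board[:]
        trial[move] = player
        wins = sum(rollout(trial, opponent) == player for _ in range(games))
        if wins / games > best_score:
            best_move, best_score = move, wins / games
    return best_move
```

From a position with two X's on the top row, `r1_move` finds the completing square, since every playout from it is already won. R2 would be the same function with `r1_move` substituted for `r0_move` inside the playouts, and so on up to RN: each level buys strength purely with more computation.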

Do you understand this?

- Thomas

Re: Response to Mitchell Kapor's
posted on 03/21/2002 11:57 AM by -

>>>>In other words: R1 will open (select one of the 20 possible initial moves) by looking at which move is usually best in a hundred R0 vs. R0 games from that move on.

Try actually writing this program. You will quickly find that the most difficult part will be this "checking" algorithm that decides which move is "usually best". *That* is where the intelligence of the program lies. The CPU power may allow you to use more complex algorithms and heuristics for doing that, but it won't write those algorithms and heuristics for you.

Re: Response to Mitchell Kapor's
posted on 03/21/2002 1:27 PM by tomaz@techemail.com

I don't have time to write this program for the game of chess.

I've written it, however, for tic-tac-toe. The sixth level was virtually invincible. Just as I am - in that game.

The principle is the same for a large variety of games. Including chess.

> You will quickly find that the most difficult part will be this "checking" algorithm, that decides which move is "usually best".

Not at all! It is very easy. You just have to calculate which move (of about 30) has the best winning (non-losing) record.

- Thomas

Re: Response to Mitchell Kapor's
posted on 04/09/2002 12:47 AM by kcisobderf@yahoo.com

Actually, the previous poster is right, to a point! The evaluation function is still a matter of art and guesswork. However, if there is a computer built that will go 20 plies deep at any stage of the game, it will beat any human, even if the evaluation function is simple. Better yet, program a typical opening book into a machine and put variable weighting on a list of criteria for good play: knight distance from the center, number of open ranks for rooks, diagonals for bishops, passed pawns, minimizing possible moves by the opposing player, etc. Then feed in the thousands of Grandmaster games on record, and let the program finish the drawn games, adjusting the weighting all the while! The first chess programs were very bad, and people felt that chess was exclusively the domain of intelligent human experts. But programmers gradually extracted and emulated "mechanizable" components of chess play until there was very little art and judgment left over. Garry Kasparov made a minor but well-known (to grandmasters) mistake and just resigned, because he knew he didn't have enough art and judgment left to overcome the brute-force power of Deep Blue. I expect that *all* tasks requiring art and judgment will have a sufficient proportion of mechanizable elements to be capably performed by computers.
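The weighted-criteria scheme described above amounts to a linear evaluation function. A minimal sketch follows; the feature names and weights are hypothetical placeholders, not Deep Blue's actual terms.

```python
# Toy linear chess evaluation: a weighted sum of positional criteria,
# scored from White's point of view in rough pawn-equivalents.

def evaluate(features, weights):
    return sum(weights[name] * value for name, value in features.items())

# Hand-picked starting weights; the post suggests tuning values like
# these against a database of grandmaster games.
weights = {
    'material':          1.0,   # material balance, in pawns
    'knight_centrality': 0.1,   # knights' closeness to the center
    'rook_open_files':   0.2,   # rooks placed on open files
    'passed_pawns':      0.3,   # passed-pawn count difference
    'mobility':          0.05,  # legal-move count difference
}

# A hypothetical position, described only by its feature values.
position = {'material': 1.0, 'knight_centrality': 2.0,
            'rook_open_files': 1.0, 'passed_pawns': 0.0, 'mobility': 4.0}

score = evaluate(position, weights)  # 1.0 + 0.2 + 0.2 + 0.0 + 0.2 = 1.6
```

Tuning would then nudge the weights until the function's preferred moves agree with the grandmaster games, which is exactly the "adjusting the weighting all the while" step the post describes.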

Re: Response to Mitchell Kapor's
posted on 04/09/2002 1:57 PM by tomaz@techemail.com

We quite agree.

I want to say that this history of the best chess games, added to a chess program, is just an (optional) shortcut.

- Thomas

Re: Response to Mitchell Kapor's
posted on 05/01/2002 1:42 AM by jay_abbott@hotmail.com

Arguing on the internet is like winning the special olympics. Even if you win, you're still a retard.

Re: Response to Mitchell Kapor's
posted on 05/01/2002 2:36 PM by tomaz@techemail.com

It is about having the right answer - not winning a (stupid) zero-sum game.

Knowing something is a lot more than winning an Olympics.

A retard with enough knowledge becomes normal - and more.

Winning _any_ Olympics is nothing compared to that.

- Thomas

Re: Response to Mitchell Kapor's
posted on 11/29/2006 9:18 PM by cwalk10

You are an extremely ignorant man. You sit here and try to act all intelligent, yet don't have the intellect to know that you shouldn't call people retards. I've volunteered as a coach for the Special Olympics for the last two years, and those athletes know much more about life and caring and loving than you will ever know. You are a downright ignorant person to have made that comment.

Re: Response to Mitchell Kapor's "Why I Think I Will Win"
posted on 04/10/2002 12:51 AM by webcop0012@aol.com

>>Neither of you guys seems to appreciate the significance of the Turing test. It's a very important milestone in AI development.

Explain why, Paul. It seems that the Turing test relies on deception, not genuine interaction, to succeed. Why is it a milestone?

Could aliens pass the test? Would we question their intelligence after they managed to navigate their way to Earth?

Re: Response to Mitchell Kapor's "Why I Think I Will Win"
posted on 04/12/2002 12:43 PM by rkoch@rkoch.org

Why don't you try to use your common sense to figure out why such questions, while formally OK, make no sense in a Turing test?

BTW: The machine would not need to lie or pretend anything. If the machine says "No, I am a machine", is it true, or was it a human who lied? If the machine answers calculus questions immediately, is it a machine, or is it a human who is augmented with a direct machine link?

The Turing test does not forbid lying. Neither does it forbid 'enhanced humans'.

Re: Response to Mitchell Kapor's "Why I Think I Will Win"
posted on 04/12/2002 3:39 PM by net.kurzweilai@rnix.com

The Turing Test, while I do think it was a milestone in AI, is antiquated. The point of the Turing Test is to make a machine indistinguishable from a human. Not, as you propose, to make you confused as to which is which. Making a machine indistinguishable from a human will require that the machine emulate all the imperfections of a human. My question is: why would we spend the time to create such a machine? It should suffice to understand humans at that level without the need to emulate it in a machine.

Just as our understanding of the universe has evolved over time, so has our knowledge and understanding of the human mind, and so has our knowledge and understanding of computing machines. As knowledge and understanding progress, so do (or should) the criteria by which we define what we now understand better. This has not happened with the Turing Test. I think it should.

Re: Response to Mitchell Kapor's "Why I Think I Will Win"
posted on 04/12/2002 3:51 PM by net.kurzweilai@rnix.com

And before someone argues this: when I say the point is not to "confuse which is which", I mean that the Turing Test isn't interested in making the tester think that a human is actually a machine. In other words, it isn't interesting to have a human imitate a machine. That doesn't forward AI development in the least.

Re: Response to Mitchell Kapor's "Why I Think I Will Win"
posted on 04/12/2002 9:50 PM by grantc4@hotmail.com

It would probably be more productive to look at the Turing test not as a goal, but as a milestone on the way to an intelligence that surpasses human capabilities.

Re: Response to Mitchell Kapor's "Why I Think I Will Win"
posted on 04/26/2002 2:56 PM by nisus164@yahoo.com

If you want AI to work harmoniously with humans and convincingly BE a sentient life-form, you need to not only make it conscious, but give it a subconscious. Let it have abstract, random recurrences of past experiences. Make it re-analyze what it has already analyzed. For example, given new information or knowledge accumulated between the time it initially assessed a situation and the time it "remembered" and re-analyzed that same situation (now wiser), it might have done things differently. Realizing this can create subtle qualities such as humility, insecurity, and prudence.
Let it dream.

Re: Response to Mitchell Kapor's "Why I Think I Will Win"
posted on 11/04/2002 1:29 AM by Steve

Ray mentions all sorts of technology, e.g., brain scans, nanotechnology, etc., but fails to mention the one technological development that is going to make AI possible: Quantum Computing!

Re: Response to Mitchell Kapor's "Why I Think I Will Win"
posted on 11/25/2002 4:19 PM by quantum

It doesn't look like quantum computing will help much with AI. As far as we know, it can't quickly solve EXPTIME-complete problems like Chess. It can't quickly solve PSPACE-complete problems like Checkers. It can't even quickly solve NP-complete problems like the Traveling Salesman Problem. The only known case where quantum computers are much faster than ordinary computers is Shor's algorithm for factoring large integers or solving the discrete log problem. (There's also Grover's algorithm which gives a more modest speedup on blind search problems.) It's possible that quantum computers are of no use at all for AI, even if we could figure out how to build them on a large scale.

Re: Response to Mitchell Kapor's "Why I Think I Will Win"
posted on 03/03/2007 12:58 AM by bobbymartin

You underestimate <a href='http://en.wikipedia.org/wiki/Grover's_algorithm'>Grover's Algorithm</a>.

It is a general algorithm for turning any "test all possibilities" problem into a search problem, reducing the time to compute it to O(N^1/2) instead of the O(N) a normal computer would take. For example, it solves any NP-complete problem in O(N^1/2), where N is the number of candidate solutions.

As a further example, assume that you have some fitness function that will return 1 for a set of arbitrary inputs that, for example, describe a self-replicating nano-assembler composed of iron, silicon & oxygen, and 0 for any other set of arbitrary inputs. Assume that you need 10^20 combinations to describe all the iron-silicon-oxygen structures that could produce this machine. Further assume that it takes 10,000 operations to analyze one configuration for fitness.

An ordinary computer running at 1 gigahertz would require 10,000 * 10^20 / 10^9 = 10^15 seconds =~ 30,000,000 years to test all combinations.

A 1 megahertz quantum computer running Grover's Algorithm would require 10,000 * 10^10 / 10^6 = 10^8 seconds =~ 3 years to return a correct combination.
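The arithmetic in the two estimates can be checked directly, using the post's own assumed figures (10^20 configurations, 10,000 operations per test, 1 GHz vs. 1 MHz):

```python
import math

# Reproduce the two back-of-the-envelope estimates above.
N = 10**20              # candidate configurations (the post's assumption)
ops_per_test = 10_000   # operations to score one configuration

classical_hz = 10**9    # 1 GHz conventional machine
quantum_hz = 10**6      # 1 MHz quantum machine

# Exhaustive classical search: test all N combinations.
classical_seconds = ops_per_test * N // classical_hz            # 10^15 s

# Grover's algorithm: about sqrt(N) evaluations instead of N.
grover_seconds = ops_per_test * math.isqrt(N) // quantum_hz     # 10^8 s

YEAR = 365 * 24 * 3600
classical_years = classical_seconds / YEAR   # about 3.2e7 years
grover_years = grover_seconds / YEAR         # about 3.2 years
```

Both figures agree with the estimates above: roughly thirty million years classically versus roughly three years with Grover's quadratic speedup, even on a machine a thousand times slower.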

When you want to find a satisfactory combination from a boatload of combinations, and you can't find another way to do it that's faster than O(N^1/2), Grover's Algorithm is the way to go.

It seems to me that this leads to some very interesting possibilities when the possibility space is very large (10^20 or bigger).

Re: Response to Mitchell Kapor's "Why I Think I Will Win"
posted on 11/25/2002 4:43 PM by /:setAI

what I never could understand- is that even if we lived in some magical universe where ONLY biology was capable of sentience- all you would have to do is-

well- BUILD BIOLOGICAL AI!

this is happening anyway- and will probably end up being the primary form of AI/AL-

use what you have! what natural selection sculpted for us- it's probably easier to engineer and program a custom-engineered bioneural brain-computer thingy anyway- and easier to work out the problems like processing speed/ longevity/ fragility-

or most likely a hybrid modular network of various analog biologics and digital systems working together-

you don't HAVE to use nonbiological elements you know- and that fact should be enough to stop the critics- but they never seem to realize this childishly simple concept-

Re: Response to Mitchell Kapor's "Why I Think I Will Win"
posted on 03/03/2007 9:26 AM by doojie

Basically, we have evolved from biology to machine concepts. But the study of AI now forces us in the opposite direction. Modeling AI using evolutionary algorithms, mimicking nature, makes us seek to mirror our processes within biology rather than machines.

This will probably change our views on such things as hierarchy and chains of command to a much deeper degree. The "return to nature" movement is probably shaped by technology returning to biological processes.