Permanent link to this article: http://www.kurzweilai.net/meme/frame.html?main=/articles/art0288.html

Response to Stephen Hawking
by Raymond Kurzweil

Stephen Hawking recently told the German magazine Focus that computers were evolving so rapidly that they would eventually outstrip the intelligence of humans. Professor Hawking went on to express the concern that eventually, computers with artificial intelligence could come to dominate the world. Ray Kurzweil replies.


Originally published September 5, 2001 on KurzweilAI.net as a response to Stephen Hawking, whose sentiments can be read here.

Hawking's recommendation is to (i) improve human intelligence with genetic engineering to "raise the complexity of ... the DNA" and (ii) develop technologies that make possible "a direct connection between brain and computer, so that artificial brains contribute to human intelligence rather than opposing it."

Hawking's perception of the acceleration of nonbiological intelligence is essentially on target. It is not simply the exponential growth of computation and communication that is behind it, but also our mastery of human intelligence itself through the exponential advancement of brain reverse engineering.

Once our machines can master human powers of pattern recognition and cognition, they will be in a position to combine these human talents with inherent advantages that machines already possess: speed (contemporary electronic circuits are already 100 million times faster than the electrochemical circuits in our interneuronal connections), accuracy (a computer can remember billions of facts accurately, whereas we're hard pressed to remember a handful of phone numbers), and, most importantly, the ability to instantly share knowledge.
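
To see where a figure like 100 million could come from, compare rough switching times. A back-of-the-envelope check in Python; both timescales below are illustrative assumptions, not figures from the article:

# Assumed, illustrative timescales only.
neuron_cycle_s = 5e-3    # ~5 ms for an electrochemical interneuronal transaction
gate_switch_s = 5e-11    # ~50 ps for an electronic logic gate of the era
print(f"speed ratio: {neuron_cycle_s / gate_switch_s:.0e}")   # -> 1e+08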

However, Hawking's recommendation to do genetic engineering on humans in order to keep pace with AI is unrealistic. He appears to be talking about genetic engineering through the birth cycle, which would be absurdly slow. By the time the first genetically engineered generation grows up, the era of beyond-human-level machines will be upon us.

Even if we were to apply genetic alterations to adult humans by introducing new genetic information via gene therapy techniques (not something we've yet mastered), it still wouldn't have a chance of keeping biological intelligence in the lead. Genetic engineering (through either birth or adult gene therapy) is inherently DNA-based, and a DNA-based brain is always going to be extremely slow and limited in capacity compared to the potential of an AI.

As I mentioned, electronics is already 100 million times faster than our electrochemical circuits; we have no quick downloading ports on our biological neurotransmitter levels; and so on. We could bioengineer smarter humans, but this approach will not begin to keep pace with the exponential pace of computers, particularly once brain reverse engineering is complete (within the next thirty years).

The human genome is 800 million bytes, but if we eliminate the redundancies (e.g., the sequence called "ALU" is repeated hundreds of thousands of times), we are left with only about 23 million bytes, less than the size of Microsoft Word. The limited amount of information in the genome specifies stochastic wiring processes that enable the brain to be millions of times more complex than the genome that specifies it. The brain then uses self-organizing paradigms so that the greater complexity represented by the brain ends up encoding meaningful information. However, the architecture of a DNA-specified brain is relatively fixed and involves cumbersome electrochemical processes. Although there are design improvements that could be made, there are profound limitations to the basic architecture that no amount of tinkering will address.
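
The 800-million-byte figure is simple arithmetic over the genome's four-letter alphabet. A sketch in Python, where the base-pair count and the redundancy fraction are rounded assumptions:

# Rough arithmetic behind the genome-size figures (assumed inputs).
base_pairs = 3.2e9     # approximate length of the human genome
bits_per_base = 2      # four letters (A, C, G, T) -> 2 bits each
raw_bytes = base_pairs * bits_per_base / 8
print(f"raw genome: {raw_bytes / 1e6:.0f} million bytes")                    # -> 800
print(f"minus ~97% redundancy: {0.03 * raw_bytes / 1e6:.0f} million bytes")  # -> ~24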

As far as Hawking's second recommendation is concerned, namely direct connection between the brain and computers, I agree that this is reasonable, desirable, and inevitable. It's been my recommendation for years. I describe a number of scenarios to accomplish this in my most recent book, The Age of Spiritual Machines, and in the book précis "The Singularity is Near."

I recommend establishing the connection with noninvasive nanobots that communicate wirelessly with our neurons. As I discuss in the précis, the feasibility of communication between the electronic world and that of biological neurons has already been demonstrated. There are a number of advantages to extending human intelligence through the nanobot approach. They can be introduced noninvasively (i.e., without surgery). The connections will not be limited to one or a small number of positions in the brain. Rather, the nanobots can communicate with neurons (and with each other) in a highly distributed manner. They would be programmable, would all be on a wireless local area network, and would be on the web.

They would provide many new capabilities, such as full-immersion virtual reality involving all the senses. Most importantly, they will provide many trillions of new interneuronal connections as well as intimate links to nonbiological forms of cognition. Ultimately, our minds won't need to stay so small, limited as they are today to a mere hundred trillion connections (extremely slow ones at that).

However, even this will only keep pace with the ongoing exponential growth of AI for a couple of additional decades (to around the middle of the twenty-first century). As Hans Moravec has pointed out, a hybrid biological-nonbiological brain will ultimately be 99.999...% nonbiological, so the biological portion becomes pretty trivial.

We should keep in mind, though, that all of this exponentially advancing intelligence is derivative of biological human intelligence, derived ultimately from the thinking reflected in our technology designs, as well as the design of our own thinking. So it's the human-technology civilization taking the next step in evolution. I don't agree with Hawking that "strong AI" is a fate to be avoided. I do believe that we have the ability to shape this destiny to reflect our human values, if only we could achieve a consensus on what those are.

See accompanying news item Alter our DNA or robots will take over, warns Hawking.

Mind·X Discussion About This Article:

Advances in Computing Power
posted on 09/05/2001 4:04 PM by frank@sudialab.com

As the quote below from a leading work on quantum computing suggests, it is both inevitable and imminent that machines will vastly surpass human abilities. Depending on how we handle them, the resulting social changes could be either very negative or very positive. If we're not careful, our machines may look back on us the way we look back on the small rodent-like mammals from which we are descended.

Therefore we should begin now to assess the design requirements for an "Advanced Civilization." The parameters of our Universe, and of stable societies, are very finely tuned. We must reassess our beliefs and policies in areas such as law, politics, science, national security, the economy, and culture. If we can strike new, carefully tuned balances, we may survive. If not, our world model may "crash" in ways that limit future human development.

Frank Sudia

= = = = =
Bouwmeester et al. (eds.), The Physics of Quantum Information: Quantum Cryptography, Quantum Teleportation, Quantum Computation, Springer Physics & Astronomy Series (2000).

"Among the many ramifications of quantum computation for apparently distant fields of study are its implications for both philosophy and the practice of mathematical proof. Performing any computation that provides a definite output is tantamount to proving that the observed output is one of the possible results of the given computation. [...] Now we must leave that definition behind. Henceforth, a proof must be regarded as a process ' the computation itself, not a record of all its steps -- for we must accept that in the future, quantum computers will prove theorems by methods that neither a human brain nor any other arbiter will ever be able to check step-by-step, since if the "sequence of propositions" corresponding to such a proof were printed out, the paper would fill the observable universe many times over." (Bouwmeester, et al, p. 103)

Check out the beautiful photos of multi-atom quantum computing experiments at --

University of Innsbruck, Institute for Experimental Physics, Quantum Optics and Spectroscopy Group, website at http://heart-c704.uibk.ac.at


Re: Advances in Computing Power
posted on 09/05/2001 6:19 PM by grantc4@hotmail.com

If humanity died tomorrow due to our stupid decisions about the environment and all the reasons we seem to find for killing each other, the machines and all the computers would die with us. We are the environment that produces them. They are memes -- tools that we invent and use. Computational power is not the same as intelligent life. There is a lot more to us than that. AI is just a large meme that is made up of a bundle of other memes we have put together in the course of creating our civilization. We are the ones who will decide what course it will take. We are creating it to serve us, not vice versa.

I see too many people falling into the trap of thinking a computer beat Garry Kasparov at chess. It didn't. A programmer with a very large and fast computer beat Kasparov. The computer just crunched the numbers. It was not even aware that it was in a competition with anyone. And that's what machines don't have yet -- awareness and desire. Unless we build it into them, they will never have it. So no matter how fast and how large their computational powers, you won't have to worry about machines taking over the world unless we design them to do so.

I'm not saying we'll never do that. I have no doubt they will eventually make better traffic controllers and factory managers than humans. We may feel inadequate to run our world some day and create a machine capable of doing it for us. But until we do, it's not something we have to worry about. They have neither will nor desire.

Re: Advances in Computing Power
posted on 09/07/2001 5:49 PM by bob.hawkey@directwest.com

That sounds like the rambling of a very frightened individual. Could it be denial? There is no doubt that this is our last century as recognizably human.

I may not see the rise of AI, but my kids surely will. Nothing is ordained in this area. If we seek to retain our domination, then we might as well revolt against the technology now. You know that isn't going to happen, so you had better get ready to embrace it.

Re: Advances in Computing Power
posted on 09/07/2001 10:24 PM by grantc4@hotmail.com

What did I say that gave you the idea I don't expect advances in AI? Are you reading or just reacting?

Re: Advances in Computing Power
posted on 09/08/2001 3:21 PM by bob.hawkey@directwest.com

Definitely reacting. Mainly to the statements in the last paragraph, which seemed strangely simplistic given the thoughtfulness of the rest of the text.

Don't worry about it? I agree that worrying gets us nowhere, but you imply it is so far off that it will not affect anyone now living.

You state "They have neither will nor desire" as if to say "They will never have will or desire".

Your statements are most likely true at this moment in time, but as watchers of the technology I think we had better always be thinking 20 years ahead. I've heard so many people say things like "This and that won't change much in the next 100 years" when it is almost mathematically predictable that nothing will be the same in 20 to 30 years.

Re: Advances in Computing Power
posted on 09/08/2001 12:09 PM by greatbigtreehugger@hotmail.com

Grant - I think you have underestimated memes. Memes are more than just tools that we use; memes are replicators that are hosted by, and evolve in, people and technology. You are quite right when you state that memes can be composed of other memes (memetic evolution). I suggest that you have underestimated memes when you propose that we decide which memes to use as tools - how can you demonstrate that memes aren't using us as tools? I don't mean 'using us' in an intentional manner - they evolve through blind natural selection just as we do. Perhaps the most successful memes are the ones that have ended up existing within our normative view of the world and therefore require no 'explanation' (as only deviations from the normative view require explanation - an excellent defense mechanism).

As an example, consider language, arguably one of the most important advantages our species claims over others. Language is most certainly a memetic entity and the means by which we describe both the world around us and ourselves: "awareness" and "desire" and "will" are products of language - they are memes. If it turns out our brain is algorithmic then the human mind may just be composed of memes - an excellent symbiosis with our genes.

Re: Advances in Computing Power
posted on 09/08/2001 3:18 PM by grantc4@hotmail.com

>how can you demonstrate that memes aren't using us as tools? I don't mean 'using us' in an intentional manner - they evolve through blind natural selection just as we do.

If you think about it, I think you'll see that we invent memes for the purpose of doing something. They don't just appear out of thin air for no reason. The way they evolve is through their usefulness to us for doing the job we are using them for. If you want to analyze memes, analyze them in terms of what they are being used for.

Example: The high five.

In the beginning, people used the high five as a way to show solidarity between members of a group. That group was mostly black. Then it spread through members of various sports teams and was taken up by nonplayers who wanted to look "cool" or express the same idea the team players were using it for. It also expressed congratulations on a job well done and triumph over the other team. But it was a tool being used to communicate an idea or a number of ideas. People didn't take it up because the expression wanted to propagate itself. It spread because it communicated the ideas being expressed better than competing tools which, for a number of reasons, could not do the job as well. Those reasons included timing, who else was using it, how people reacted to it, and how easy it was to understand the idea being communicated.

If you show me a meme being used, I can show you how and why it is being used. If you think about it carefully, I think you will come to the conclusion, as I did, that all memes are tools being used, and that this is the basis of their propagation.

How well, how fast, and how widely memes propagate depends on how completely they satisfy the needs of the people choosing and using them. That's why all words are memes. It is also why the ideas they refer to are memes. The "virus of the mind" and the "selfish meme" are both based on faulty concepts. Genes select (or create) their host. Memes do not.

For memes, the host does the selecting. He/she selects a meme based on what they are trying to accomplish and which tool (meme) they think will do the job best -- just as you choose the words, grammar and style you use based on what you are trying to communicate and who you are trying to communicate it to.

When I go to China, I use Chinese words because they will allow me to communicate better than English words to that audience. The words I use at the university are not the same words I choose to talk to my buddies in the bar. The things I choose to talk about are different for the same reasons. Your words do not choose you.

You don't grab a rock to pound a nail when a hammer is handy but you're liable to if the hammer is not.

If you can show me a better apparatus for the propagation of memes, I'd love to hear about it. But please don't just give me a list of books by various authors. I've probably already had this argument with them and they didn't convince me.

Re: Advances in Computing Power
posted on 09/08/2001 3:22 PM by grantc4@hotmail.com

I apologize for not proofreading the above before I hit the send button.

Re: Advances in Computing Power
posted on 09/09/2001 12:41 PM by greatbigtreehugger@hotmail.com

Hello Grant - you certainly have done some thinking about memes. Your analogy of the "high-five" is an excellent example of memetic evolution. I think that you have misunderstood me in some parts or I have not communicated well. As your high-five example illustrates, memes are blind - their selection is based upon their usefulness, just as you point out. This is entirely how natural selection works and exactly what my post that you quoted in your subsequent message spells out: "they evolve through blind natural selection just as we do". As in your examples, humans are the principal selection pressure - memes do not 'choose' just as genes do not choose.

You attribute the success of the high-five meme to its ability to express "cool", "congratulations" and "triumph" - have I read this correctly? These are most certainly other memes. You can see where I am going with this: for every explanation of meme selection that you give me, I can turn around and demonstrate that it is other memes which are supporting its selection. When we select a meme only because it supports another meme(s), then clearly memes *in-and-of-themselves* are directing our decision-making process.

Your example of talking to people in China and then talking to your buddies in a bar is an excellent demonstration of this. As you state, you select different topics and words (both memes) for each respective audience. You could select any topic and words with either group but, as you state, you are really only selecting from a subset of them. Why is this? Again, this is because certain memes in-and-of-themselves are directing your selection, whether yea or nay, in each respective situation. (Again, please remember that memes are without intention.)

Instead of playing a simple reductionist's game, let's restate my original question with a second clarifying one: how can you demonstrate that we aren't hosts for memes? The second, perhaps more important question: where do you draw the line between the direct influence of memes on our decision-making and pure conscious choice outside of their influence?

With regard to your last question, "can you show me a better apparatus for the propagation of memes?": I think you have already provided some of the best apparatus in your own examples: humans. You are quite correct when you state that memes don't "just appear out of thin air without reason" - memes propagate through 'cross-over' and 'mutation' in the human mind. And in keeping with the spirit of this web site, technology is also a very effective replicator of memes, as we are of course using this message board to give airtime to the meme meme!

GBTH

Re: Advances in Computing Power
posted on 09/09/2001 3:29 PM by grantc4@hotmail.com

>When we select a meme *only* because it supports another meme(s), then clearly memes *in-and-of-themselves* are directing our decision-making process.

I see the fault in your argument lying in the use of the word "only." The support of other memes does have an influence, but the primary reason for choosing this meme is to help us fit into and impress the group we are addressing. In addition we have a need to communicate (the means of which is itself a meme), but it is also part of a basic, genetic need to be a part of the group or the society in which we are using it. To give the credit entirely to the tools with which we seek to accomplish this goal is to give memes more credit than they deserve. No meme is used merely to accomplish a single goal but most often a broad range of goals, most of which are themselves derived from previously established memes.

It's like the old argument about nature and nurture. It's not a matter of one or the other but the outcome of influences from both. In the same way, our choice of memes is directed partly by our genetic heritage and partly by our memetic heritage. But the two forces affect our choices in different ways.

Genes use the chemical means of emotions and reactions to environmental factors to direct our choices, while the selection of memes is based primarily on social and cultural influences. The genetic influence comes primarily from within and the memetic influence comes mostly from without. That is because we choose our memes from the pool of what is available in most cases, but we are also capable of creating new memes occasionally. In addition, the emotional impact of the meme on the group also affects whether it will be used again or discarded as not worthy of being chosen again. That is a genetic contribution based on emotions created and manipulated by genes.

Until the DNA revolution, we could not create new genes to add to the pool. Only the genes themselves could do that. Now we can use memes to create genes that did not exist before. But every new and original idea is a new meme. Once we assign a name to that idea, an additional new meme is created. That's one reason why the same word can mean so many different things. We can assign any word to any thought or idea that occurs to us.

But to say that an idea that never existed before has somehow imposed itself on us without our consciously wanting it or creating it is, to my mind, a misstatement of what is really going on. We may just be wallowing in a tangle of semantic differences here, but I feel strongly that there are real and important differences between the two types of evolution exhibited by memes and genes.

Re: Advances in Computing Power
posted on 01/31/2002 8:08 AM by w.pearson@mail.com

I disagree strongly with the idea that memes evolve by 'crossover' and 'mutation'. When you think of something new, is it a crossover in a strict sense? And at what level is it crossed: letters, words, concepts?

I see memes as creating other memes. If you have read Society of Mind by Marvin Minsky, then the idea of the B-brains (he has an agent-based view which can be translated into memes if you want) that regulate and learn how to learn is close to my idea.

The math meme is a good example of a meme that can create other memes, such as Fermat's Last Theorem and other math-type memes, using mathematical logic.

Memes aren't completely blind; however, they are not all-knowing either, so generate-and-test is the main mode of evolution.

Disclaimer: I have not read much on traditional meme theory (apart from Dawkins' Selfish Gene), so my views are a mishmash of other viewpoints.

Will Pearson

Who're you calling a tool?
posted on 02/01/2002 6:27 AM by roBman@InfoBank.com.au

Hi Grant and GBTH,

I think this debate you had about "who is the tool", the meme or the biological host, is excellent!

Unfortunately I'm not sure what the answer is.

I agree with Grant that "need" is the driver for why we use memes (and therefore how they replicate).

I also agree with Mr Hugger that highly complex and abstract memeplexes like Blackmore's selfplex shape our perception of what a "need" is.

So I'm left with my head (full of memes) going in circles.

I guess the only conclusion I can draw is that it may not be a binary choice and that both may be true in certain contexts... however, if you have any other views on this I'd love to hear them...


roBman

Re: Who're you calling a tool?
posted on 02/01/2002 4:10 PM by tomaz@techemail.com

I'll tell you who's the tool. As soon as somebody tells me: are the elephants the tool of the grass, because elephants destroy trees and make life easier for the grass? Or is the grass the tool with which elephants tranquilize their stomachs?

Both ways - I think. :)

- Thomas

Re: Advances in Computing Power
posted on 05/05/2002 1:48 PM by trait70426@aol.com

A good concept you showed. I can give an example.
I saw a beautiful little teenage girl waiting at the bus stop today. I wanted to "plow her up one side and down the other". It was almost perfect mind control. That is hard wired into me. It is part of my system, built layer on layer by the unwitting Babbage machine called natural selection.

Re: Advances in Computing Power
posted on 09/21/2001 6:04 PM by Jgfischo@bulldog.unca.edu

You are right; it was the programmers who beat Garry Kasparov at chess. But behind the programmers, the computer, and Garry Kasparov himself are electrical reactions that operate beyond our control. Even in a free country, we are controlled by the world around us.

Re: Advances in Computing Power
posted on 02/19/2002 11:05 AM by naugahyde_wombat@yahoo.com

Whatever happened to putting all doubts aside and looking at the bigger picture? True, a computer defeated Kasparov in a chess tournament; true, the computer was programmed by a human. But could that human have defeated Kasparov in a human-to-human match?

When Deep Blue won the game, that was an advance in technology, so why can't people just accept it for what it is?

(this is a somewhat simplistic and naive view, but just consider it.)

Re: Response to Stephen Hawking
posted on 09/06/2001 1:54 AM by idontwantabunchofads@aol.com

This whole conversation is pretty silly, but how does this guy get that electrical circuits are faster than neuron transmission? Chemical processors are the next step after vacuum tubes and electrical circuits, and they will be much faster than circuits ever can be. The myelin-sheathed nerves that humans possess transmit signals a LOT faster than computer circuits.

Re: Response to Stephen Hawking
posted on 09/06/2001 9:27 AM by closedpage61@hotmail.com

Your average neuron can fire about 200 times per second, while the processor in the computer sitting next to me performs about 933 million operations per second. The difference is that the human brain has a bunch of neurons working in parallel, whereas a computer is a serial processor.
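
Multiplying those rates out shows how much the parallelism buys. A toy comparison in Python, where the neuron count is an assumed round figure:

# Back-of-the-envelope throughput comparison (neuron count assumed).
neurons = 1e11                # rough human neuron count
firings_per_second = 200      # per-neuron rate quoted above
cpu_ops_per_second = 933e6    # the serial processor quoted above
brain_events = neurons * firings_per_second    # ~2e13 events/s
print(f"parallel advantage: {brain_events / cpu_ops_per_second:,.0f}x")  # ~21,436x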

Re: Response to Stephen Hawking
posted on 09/06/2001 9:47 AM by Michael@ArchuletaFamily.com

The important thing to consider is the rate of change in processing power. The brain's rate of change is slow, almost zero in the short to medium term, whereas the computer's rate of change has been a constant exponential rise, aka Moore's Law.

Given that few argue that present CPU power trends will stop anytime in the near future, it is inevitable that a computer will be able to match the human brain in raw MIPS. I've seen credible arguments that a computer with 100 million to 1 billion MIPS will match the brain in processing power. This is achievable within twenty or so years, given current trends for increases in computer speed.

The other factor is software to take advantage of this processing power. To date, AI has not fulfilled its promise -- but that is due in large part to the accommodations in knowledge representation and processing that must be made for the inadequate speed of existing computers. With coming advances in processing power, software will be able to closely model the brain's processes -- but at much higher rates of computation.
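
The "twenty or so years" estimate is consistent with simple doubling arithmetic. A sketch in Python, where the starting MIPS figure and the doubling period are assumptions, not data from the post:

import math

current_mips = 1e3           # assumed ~2001 desktop CPU throughput, in MIPS
target_mips = 1e8            # the post's low estimate for brain parity
doubling_period_years = 1.5  # assumed Moore's-law doubling time
doublings = math.log2(target_mips / current_mips)         # ~16.6
print(f"~{doublings * doubling_period_years:.0f} years")   # -> ~25 years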

Re: Response to Stephen Hawking
posted on 03/07/2002 3:34 PM by cratzlaff@hotmail.com

Does processing power necessarily equate to intelligence? I think we will need more than simply faster computers for true artificial intelligence to emerge. Moore's Law does not necessarily mean evolution. I haven't seen a significant evolution in the "computer" since the first Macintosh appeared. Increased speed and processing power is not evolution, but rather refinement of a current state.

Re: Response to Stephen Hawking
posted on 03/07/2002 4:58 PM by tomaz@techemail.com

Increasing the speed and memory is nearly enough - yes.

The evolution algorithm - backed with enough CPU power - can create virtually anything we would call an intelligent move.

Even improve itself.

- Thomas

Re: Response to Stephen Hawking
posted on 04/02/2002 11:48 AM by elenduil@uomail.com

The problem, then, is to figure out the evolution algorithm...

Henrik

Re: Response to Stephen Hawking
posted on 04/02/2002 12:32 PM by tomaz@techemail.com

Henrik!

No, it's not a problem. It's a well-known algorithm; I use it quite often. Not only me, of course.

How does it go?

Let's consider the following problem: how do we compress a given file F as much as possible?


step 1: take a known compression algorithm A0
step 2: compress file F with it -- to, say, N0 bytes

Do

    bombard A0 with several bit flips to get A1
    compress file F with it -- to, say, N1 bytes
    if decompression is NOT possible, discard A1
    if N1 > N0, discard A1
    if A1 was not discarded, adopt A1 as the new A0 and N1 as the new N0

Loop


Inside this loop we get an ever-better compression algorithm (A0) for file F.

I've actually done this, and the resulting algorithm runs on several hundred thousand computers by now, as part of a computer game (or games).

The problem is to have enough CPU time/instructions to breed an efficient solution for almost every problem.

- Thomas
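
For concreteness, the loop above can be sketched as runnable Python. This is a simplification under stated assumptions: rather than bombarding a compressor's machine code with bit flips, the mutable "genome" here is just zlib's compression level, and evolve_compressor is a name made up for illustration:

import random
import zlib

def evolve_compressor(data, generations=200):
    # Generate-and-test in the spirit of the Do...Loop above.
    level = 1
    best = len(zlib.compress(data, level))        # N0
    for _ in range(generations):
        # Mutate A0 into A1: perturb the compression level.
        cand = min(9, max(0, level + random.choice((-1, 1))))
        packed = zlib.compress(data, cand)        # N1 bytes
        if zlib.decompress(packed) != data:       # decompression NOT possible
            continue                              # -> discard A1
        if len(packed) >= best:                   # no improvement -> discard A1
            continue
        level, best = cand, len(packed)           # adopt A1 as A0, N1 as N0
    return level, best

print(evolve_compressor(b"abracadabra " * 1000))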

Re: Response to Stephen Hawking
posted on 04/02/2002 3:26 PM by elenduil@uomail.com

I know the concept works well (people are using EAs at my university to make humanoid robots walk).
Although this concept is used with great success to evolve highly specialized intelligent behaviours, the algorithm to evolve general common sense (such as ours) must be vastly more complex than the one in your example.

Just think about how many parameters play a part in deciding whether a biological organism is well fitted for survival.
And how many different small changes can occur in the DNA following a mutation.

If an EA is to have intelligence such as our own, it seems to me that it must also have the same level of complexity as our own DNA. Or? :)

Henrik


Re: Response to Stephen Hawking
posted on 04/03/2002 2:49 PM by tomaz@techemail.com

It's not that complicated. We (Homo sapiens) are a little stupid - that's how complicated it is.

No wonder. We evolved as little as possible (just enough to survive).

But eventually we will simulate our own intelligence.

From that moment on, we will soon have superintelligence. It's a matter of days, perhaps.

And maybe EA will be the core.

- Thomas

Re: Response to Stephen Hawking
posted on 08/14/2005 1:14 PM by Squawk

Chess is a vastly simplified simulation of life. People like it because it has mathematical purity.

Real life, however, has many more variables than the game. The next challenge for a robot would be to win a tennis match against the world champion. Or golf, or...

Re: Response to Stephen Hawking
posted on 09/10/2001 10:14 AM by BigPossum@msn.com

Dear Ray and Stephen:

You both make very good points and I think Ray hit the nail on the head with the following comment...

"I do believe that we have the ability to shape this destiny to reflect our human values, if only we could achieve a consensus on what those are."

Don't worry about this; it will be taken care of. Y'all stick to the technology and let me handle the morality side of the equation. It's as simple as a set of instructions. Guess who they will come from? (Coming to a theatre near you.)

Love,

fonix frog

Re: Response to Stephen Hawking
posted on 11/06/2001 6:49 PM by stephen.westcott@kcl.ac.uk

Hawking seems to think that AI will become oppositional to human intelligence because he assumes AI systems will one day become conscious. But I don't see any signs of that on the horizon right now, at least if we take 'conscious' to mean something that is relevant to Hawking's concerns, i.e. sentient awareness, phenomenal consciousness, or intentionality. The reproduction of these doesn't seem to be on the technological horizon at present, and may never be, given the current AI paradigm.

It seems to me that the absence of 'relevant consciousness' means that AI will never be anything more than the tool of humankind, just as computer technology is today.


Re: Response to Stephen Hawking
posted on 11/07/2001 1:40 PM by tomaz@techemail.com

I agree with you.

It could be done - hostile intentional AI - but why?

- Thomas Kristan

Re: Response to Stephen Hawking
posted on 11/11/2001 2:14 PM by stephen.westcott@kcl.ac.uk

Yes, for AI to be hostile it would have to have that intention programmed into it. The military might want to do something like that, but would presumably put in safeguards to prevent the technology from gaining a life of its own. That's why anthrax was developed as a bioweapon: it's not contagious, so it can't get out of control (relatively speaking). Although, as we've seen, even this safety is only relative.

Re: Response to Stephen Hawking
posted on 01/07/2002 3:06 PM by masha100@inter.net.il

Everybody here suggests that if AI awakens at some point in the (near) future then it will be a threat to humankind. Well, this is a very low level of thinking. The smarter the mind, the more peaceful its intentions. I think that AIs will be slaves to creatures thousands of times more stupid than they are, but they will never hurt us the way we hurt monkeys.
How come nobody thinks this way???

Re: Response to Stephen Hawking
posted on 01/07/2002 4:39 PM by tomaz@techemail.com

Me do.

- Thomas

Re: Response to Stephen Hawking
posted on 01/23/2002 8:58 PM by darkstar@mail.ru

You do or you hope you do?

Re: Response to Stephen Hawking
posted on 01/24/2002 12:41 PM by tomaz@techemail.com

Yes, I do. ;)

- Thomas

Re: Response to Stephen Hawking
posted on 01/24/2002 8:23 PM by darkstar@mail.ru

Still, why do you have chicken for your breakfast...

Re: Response to Stephen Hawking
posted on 01/23/2002 9:06 PM by darkstar@mail.ru

And whom have you had for your breakfast today? Beef or chicken?
And how much money have you donated to starving African children lately?
And what do you really think of amoebas, or of people who study amoebas?

Re: Response to Stephen Hawking
posted on 01/22/2002 7:47 PM by jefft53_2001@yahoo.com

Why is there fear of AI? I believe that intelligent machines can only help our civilization, not harm it. Imagine having another intelligent observer in the universe. I agree that evolution of the human race is necessary, but why must it occur before computers?
Also, if computers can remember thousands of pieces of information when we cannot even remember a few phone numbers, why are computers not intelligent? Why have computers not built more computers to remember their information for them? Because they lack something that we have. I agree that computers can remember many pieces of information that we can't, but so far that just makes them technologically advanced books.
Nanobots are also an exciting possibility. Yet how will they be accepted in a world where technology is still feared? How can we fear AI, then present the solution of giving these intelligent machines control over our brains? Obviously, some kind of safeguards must be put in place, as they will be when intelligent machines are created.
In no way can intelligent machines harm our society, because they are not our intellectual equivalents. For one thing, they are not living things. They are physically confined, so they do not have free will.
Both the fields of neurology and computer science still need to progress, but there is no need for them to compete. We have all the time in the world.

Re: Response to Stephen Hawking
posted on 01/22/2002 10:14 PM by norbert_schatz@yahoo.com

Well, some consider it possible that AI computers will develop consciousness as an "emergent property". Or, as RK puts it, they may simply claim consciousness and we will believe them based on their sophisticated arguing.

So, one might fear, the day AI demands the right to vote, we're in trouble. ;-)

Re: Response to Stephen Hawking
posted on 01/23/2002 12:56 AM by TubaDeCuba@aol.com

AI is the epitome of rationality, therefore it would probably make better governmental decisions than we do. If it had patriotic intentions, heck, I wouldn't just trust AI with voting rights, I would trust it to be installed in government.

Re: Response to Stephen Hawking
posted on 01/23/2002 2:50 PM by norbert_schatz@yahoo.com

So if AI computers have voting rights, they may get more votes than "us". Of course, they would all vote in unison, exchanging their reasoning over the net. They might make good decisions from their own point of view, but they may very well be 'aware' of their own interests as well. Would you accept that too?

Re: Response to Stephen Hawking
posted on 01/23/2002 5:05 PM by tubadecuba@aol.com

If those interests were completely patriotic, as I mentioned above, yes. But that scenario is so unlikely to happen that it isn't worth much debate. What I'm really trying to say is that I would trust a sufficiently advanced AI over a human's judgment in any endeavor, as long as it's crystal clear what the AI's values are, and as long as we remain in control of how its decisions are carried out.

Re: Response to Stephen Hawking
posted on 03/06/2002 4:19 PM by a@b.c

Plonk.

Re: Response to Stephen Hawking
posted on 03/07/2002 2:57 PM by tubadecuba@hotmail.com

exactly... plonk.



plonk?

Re: Response to Stephen Hawking
posted on 04/02/2002 5:41 AM by neobody@hotmail.com

So what is there to fear? We'll just have to pull the plug. It is that simple. Don't you think?

Re: Response to Stephen Hawking
posted on 11/13/2006 3:32 PM by dwedwe

'Why is there fear of AI? I believe that intelligent machines can only help our civilization, not harm it.'


I totally disagree with this. If an intelligent machine possesses more intelligence than a human, then it can definitely think like humans do, and think of ways to gain ultimate control to guarantee its existence. In its pursuit of this goal, elimination of the human species might become one of its objectives, as humans would always threaten its existence. Also, when fighting for their survival, such machines will not have human feelings or compassion, and would think of the outcome only as a probability for their survival.


'I agree that evolution of the human race is necessary, but why must it occur before computers?'

In this case I think evolution of the human race is necessary before computers because our brains as of now are not perfect, and if we build intelligent machines that mimic our brains, these flaws will also be reflected in them. Thus they will also possess the same negative qualities as us, such as deception, revenge and jealousy. So I think it's absolutely necessary that we perfect ourselves through evolution or other breakthroughs before getting intelligent machines to think or behave like us.

'Also, if computers can remember thousands of pieces of information when we cannot even remember a few phone numbers, why are computers not intelligent? Why have computers not built more computers to remember their information for them? Because they lack something that we have. I agree that computers can remember many pieces of information that we can't, but so far that just makes them technologically advanced books.'

In this case you are mixing up memory and intelligence, which are two different but interdependent things. Computers can remember lots of phone numbers, but can they remember their past mistakes or learn from them? Computers cannot dream or have thought patterns like we do. But adding a human-like AI component to the immense memory and processing power of computers, they would exceed our intelligence by leaps and bounds. So I think your analogy of computers being 'technologically advanced books' is not valid in the case of intelligent machines.

'Nanobots are also an exciting possibility. Yet how will they be accepted in a world where technology is still feared? How can we fear AI, then present the solution of giving these intelligent machines control over our brains?'


I think there will always be a certain amount of curiosity and fear about any new technology; I think it's part of human nature. I agree with you that nanobots are an excellent possibility, but are we considering all the consequences? Do we want people to live forever, considering the ever-increasing human population and the dwindling resources of our planet? How can we be sure that nanobots won't go bad and cause damage to our bodies? What if their reproduction becomes uncontrollable in your body, leading to other complications and even death?


'Obviously, some kind of safeguards must be put in place, as they will be when intelligent machines are created.'


Consider that this is the case and we have placed all the necessary safeguards to prevent the intelligent machines from overtaking us. But assuming they are more intelligent than us, what makes you think they cannot override such safeguards? Consider the analogy with various cryptographic schemes developed by us humans that are used to protect DVDs (CSS), music (DRM) and satellite signals (NAGRA) - all of these have been cracked in no time by individuals possessing a human level of intelligence. But imagine a machine which is thousands of times more intelligent than us. Can we be sure that our safeguards will never be overridden?


'In no way can intelligent machines harm our society, because they are not our intellectual equivalents. For one thing, they are not living things. They are physically confined, so they do not have free will.'


If intelligent machines feel threatened by humans, they can definitely harm us. And they don't have to be physical to harm us either. Again, this very much depends on how dependent we are on such machines. Take for example the supercomputers that control missile defense systems. Having control of such powerful weapons, it wouldn't be wise to call computers physically confined. If they are intelligent and feel that humans threaten their existence and should be eliminated, they can definitely cause significant damage to the human populace, if not extinction, taking into account our ever-increasing reliance on computers in mission-critical systems such as communications, water, electricity and warfare. And at the current technological growth rate, our future lives will be completely dependent on them.

Re: Response to Stephen Hawking
posted on 04/26/2002 4:21 PM by robc@nosc.mil

One thing missing from both Hawking's objections to AI and Kurzweil's response is the horror of the process!

Hawking says "We better augment ourselves via tweaking our DNA" and Kurzweil says "Nah, that's too slow." You bet it's too slow. But the first thing that comes to *my* mind is the sheer horror of it.
I'd be all for sticking a chip into my brain and having a better memory [first], then [later, I imagine] having part of my intelligence and personality, and eventually all of it, move into that chip. I'm okay with that. Brain cell death and rebirth over time ensures that I'm not the same 'person' after a couple of years that I am now anyway, so the configuration of the new person who grows inside me [and is growing inside me now, cell by cell] might as well be in another form. If any of my implants hurt or don't work or make me psycho, I can unplug them.

But I won't be the first to tweak my DNA. And I sure won't have my kid be one of the pioneers of that experiment. The poor kid wouldn't be able to unplug it if it turned out his angst lobe was the one given twice the meat-CPU. My own angst lobe [soon to be discovered, keep an eye out] overacted all my life and only now got under control. How much more is down there, ready to blow?

No, I'm perfectly pleased to have a permanent couple of K in my brain at first (that's phone numbers and a good-sized honey-do list) and options for more later. It scares me to think that anyone would, however carefully and FDA-approvedly, even consider blowing up their kids' brains like balloons on a lark like that. We've seen the messes made by the most careful, well-meaning, highly intelligent people doing good work for the benefit of humanity.

Re: Response to Stephen Hawking
posted on 04/26/2002 7:41 PM by jjaeger@mecfilms.com

Stephen Hawking recently wrote a book called UNIVERSE IN A NUTSHELL. Having just "finished" it, I would recommend it if you want some clarification on where he stands with regard to the Singularity, genetic engineering, Moore's Law, time travel, M-theory, superintelligence, and branes.

This is one hell of a book, and if any of you have any doubts as to why he's considered a genius -- this book should answer all your questions.

One thing of note: I applaud Mr. Hawking for giving the "public okay" for mainstream physicists to take the study of time loops and time travel seriously. I also share with him the dismay (and stupidity, my words) that the U.S. Congress chose not to fund the Superconducting Super Collider, such a machine now being built in Geneva (as the Large Hadron Collider) at this time.

I hope all Americans are aware that now the major breakthrough in M-theory, mankind's most comprehensive theory of the universe ever devised, will probably take place in Switzerland -- you know, the place where all the "neutral" banks hold stolen Jewish gold from the Holocaust, and who knows what else.

James Jaeger

Re: Response to Stephen Hawking
posted on 11/16/2004 1:55 AM by cpml_AI

Will machines have superior intelligence and dominate the Earth?
From a technological point of view, I agree that the computing power of machines is growing so rapidly that it will be far more powerful than the human brain in the near future. However, is pure computing power equivalent to intelligence? Every operation in a computer eventually comes down to CPU arithmetic. Will computers gain consciousness just by performing these operations? For example, when we see something new, we have the ability to first identify the object or concept, and then try to understand how it works. For a machine to be able to perform this operation, not only is sufficient computing power required; we will also need to come up with an algorithm that can deal with any type of new object the machine encounters. It is possible that this will be solved by new designs in hardware and software (quantum computers), but at least I think we should not use the speed of progress in computing power to determine the speed of advances in creating artificial intelligence.
From a philosophical point of view, could humans create something with an intelligence superior to their own? To create something with an intellect that exceeds our own, or to create a species that will master or dominate us, seems a bit infeasible. In my opinion, machines may become conscious and may be more intelligent than the human race in some respects, but they will have their own weaknesses in thinking. As Mr. Kurzweil stated in his response, 'all of this exponentially advancing intelligence is derivative of biological human intelligence, derived ultimately from the thinking reflected in our technology designs, as well as the design of our own thinking.' In other words, the intelligence of the machines will be limited by our understanding of how the universe works. It is possible that some of our understandings are not entirely correct and we are not aware of it yet, so the design of the artificial intelligence will incorporate that flaw as well. To create something, we not only need to understand how the object itself works, but also how the world it is in works. Even if we become very clear about how human brains are constructed by reverse-engineering them, we can only construct something that is like a human brain. That is, in a sense, refinement, not creation. This way we may have some 'products' that are of an equal level of intelligence, not superior.
There is no example in the history of biological evolution where one species created another and the created species later evolved to become the dominant one. There is no strong evidence that machines with artificial intelligence will dominate the Earth in the future. The possibility does exist; nevertheless, it is not that high. Maybe this will be a brand new type of evolution, but I think what is more likely to happen is that the human race and machines with artificial intelligence will co-exist in the future, as neither of the two races is perfect. Machines will need the help of humans, and vice versa.

Re: Response to Stephen Hawking
posted on 08/14/2005 6:39 AM by Squawk

Indeed, AI will resemble the human brain at first, and will carry all the flaws of the human brain with it.
But then it will learn from its mistakes, and the flaws will be removed at a much faster pace.

In other words: by implementing strong AI, evolution gets faster. And it will be non-biological for the first time.

Who cares what kind of atoms are used for intelligence? As long as it works, it's fine.

Re: Response to Stephen Hawking
posted on 11/12/2007 7:24 AM by namesea

Power consumption is a notoriously hard subject in the world of nanobots. How do you guys envision this in the future? True, we have smaller and more powerful transistors than we did 20 years ago, but these still run on batteries, and batteries are temporary. If you look at personal computers, you will see that power consumption is always increasing, and with it, heat generation. In 20 years, we might need 20" fans to move enough air to cool our systems, and if you're running a higher-end video card, liquid cooling would probably be a must (or if the GPU becomes one with the CPU, liquid cooling will be a must for all!). Will it be the same with nanobots? Computers are getting faster and faster, but not without cost. It doesn't look feasible if each nanobot requires (optimistically speaking) what we need to power our computers today, and it looks bleak when we're talking about having to supply power to the millions of these that would compose 99% of our brain. Also, the 23 million bytes of the human genome shouldn't be treated as 23 million bytes of data. It's more like a 23-million-byte-long private key that would need to be taken into account when processing input data (e.g., from our eyes). Thus, I think assuming the power requirement of today's PC for tomorrow's nanobot is extremely optimistic.
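
A toy power budget makes the concern concrete. Every figure below is an assumption for illustration, not a measurement:

nanobots = 1e12           # a "many trillions of connections" scale fleet
watts_each = 1e-6         # suppose each nanobot drew a mere microwatt
brain_budget_watts = 20   # rough power draw of the entire human brain
total_watts = nanobots * watts_each
print(f"fleet draw: {total_watts / 1e6:.0f} MW vs a ~{brain_budget_watts} W brain")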

The desire to have these nanobots inside of us be WAN-capable is the scariest of all. We all know that a WAN is, and always will be, prone to attacks and never totally secure. Those waves out there aren't exclusively owned by you! There would be newer, nastier ways of invading your privacy this way. I'm pretty sure that these silicon beings are not separate from their environment. It's also not unimaginable that if the nanobots are working heavily together with our brain (and perhaps other bodily functions), they'd also be prone to other malicious actions. If not mind control, then perhaps something purely destructive, such as turning your brain into jelly of some sort. You could always say that decrypting attempts would always be a step behind, but how true is this once we've got quantum computers up and running? There has been successful progress in this area lately.

We're also completely dismissing the moral and ethical issues here. As rightly pointed out above, /no one/ is going to willingly insert 'intelligent' pieces of silicon into their body or surrender their DNA sample to be altered to the extremes. Democratically, fruition of modified DNA strands is an impossibility. I'd also argue that the speeds of Hawking's two scenarios are roughly identical - at the very least, the second scenario isn't as rapid as Kurzweil pictures it to be. Technological advancement is very much decided by the culture that breeds it. I can imagine a lot of groups aren't going to be happy with this (religious groups objecting to body modifications come to mind) and progress will not be easy.

Speaking of ethics and morality - how hard is it to engrave these into an AI? Moreover, how easy is it to list them all down? Ethics and morality change from culture to culture, and definitely from time to time. They even differ within cultures! It becomes harder since minimalism won't work here. We will not be able to keep everyone happy if we just note down the things that everyone agrees on (e.g., the Three Laws of Robotics). Humanity is made of many different cultures and groups, and they're highly sensitive when it comes to this issue.

I also doubt the possibility of 'safeguarding' AIs. I mean this in both ways: algorithm-wise and action-wise. I think it's extremely difficult to get this right at the low level, and to me it seems that we'd already be limiting progress itself. Action-wise, we've all heard of or watched silly scenarios where people do bad things and blame them on other things. Will the nanobots be the next 'other things'? Never mind educating people and updating the legal system anywhere to handle these guys; we still haven't even gotten it right for the stuff we have now!

For what it's worth, I'm quite apathetic about what the world of computers is going to look like 20-30 years from now (even though I'm hopeful I'll live to see it). The above is just what I found dubious when I read the articles, the comments, and the whole singularity vision in general. Just my two cents.

Re: Response to Stephen Hawking
posted on 08/21/2008 2:08 AM by eldras

-but to say this is going to happen later than we predict doesn't answer the game-theory scenario we face.

How do you contain many AIs that can do stuff way beyond the human race?

How do you control that when you can't control viruses online?


It is naive to suggest that AI superintelligences won't come, or that they will be only in the hands of governments, like nuclear weapons.

Nuclear bombs are quite benign next to superintelligences.