Response to Stephen Hawking
Permanent link to this article: http://www.kurzweilai.net/meme/frame.html?main=/articles/art0288.html
Stephen Hawking recently told the German magazine Focus that computers were evolving so rapidly that they would eventually outstrip the intelligence
of humans. Professor Hawking went on to express the concern that eventually, computers with artificial intelligence could come to dominate the world. Ray
Kurzweil replies.
Originally published September 5, 2001 on KurzweilAI.net as a response to Stephen Hawking, whose sentiments can be read here.
Hawking's recommendation is to (i) improve human intelligence with genetic engineering to "raise the complexity
of ... the DNA" and (ii) develop technologies that make possible "a direct connection between brain and computer, so that
artificial brains contribute to human intelligence rather than opposing it."
Hawking's perception of the acceleration of nonbiological intelligence
is essentially on target. It is not simply the exponential growth of computation and communication that is behind it, but also our mastery of human intelligence itself through the exponential advancement of brain
reverse engineering.
Once our machines can master human powers of pattern recognition and cognition, they will be in a position to combine these human talents with inherent advantages
that machines already possess: speed (contemporary electronic circuits are already 100 million times faster than the electrochemical circuits in our interneuronal
connections), accuracy (a computer can remember billions of facts accurately, whereas we're hard pressed to remember a handful of phone numbers), and, most importantly, the
ability to instantly share knowledge.
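A back-of-the-envelope sketch of that speed ratio, with assumed (illustrative, not measured) cycle times:

```python
# Rough comparison of biological vs. electronic switching speed.
# Both figures are order-of-magnitude assumptions, not measurements.

neuron_cycle_s = 5e-3    # ~5 ms per electrochemical signaling cycle (assumed)
gate_switch_s = 5e-11    # ~50 ps per logic-gate transition (assumed)

speedup = neuron_cycle_s / gate_switch_s
print(f"Electronics is roughly {speedup:,.0f} times faster")  # 100,000,000
```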
However, Hawking's recommendation to do genetic engineering on humans in order to keep pace with AI is unrealistic. He appears to be talking about genetic engineering
through the birth cycle, which would be absurdly slow. By the time the first genetically engineered generation grows up, the era of beyond-human-level machines will be upon us.
Even if we were to apply genetic alterations to adult humans by introducing new genetic information via gene therapy techniques (not something we've yet mastered), it still
won't have a chance to keep biological intelligence in the lead. Genetic engineering (through either birth or
adult gene therapy) is inherently DNA-based and a DNA-based brain is always going to be extremely slow and limited in capacity compared to the potential of an AI.
As I mentioned, electronics is already 100 million times faster than our electrochemical circuits; we have no quick downloading ports on our biological neurotransmitter levels, and so on. We could bioengineer smarter humans, but this approach will not begin to keep pace with the exponential progress of computers, particularly once brain reverse engineering is complete (within thirty years from now).
The human genome is 800 million bytes, but if we eliminate the redundancies (e.g., the sequence called "ALU" is repeated
hundreds of thousands of times), we are left with only about 23 million bytes, less than Microsoft Word. The limited amount of information
in the genome specifies stochastic wiring processes that enable the brain to be millions of times more complex than the
genome which specifies it. The brain then uses self-organizing paradigms so that the greater complexity represented by
the brain ends up representing meaningful information. However, the architecture of a DNA-specified
brain is relatively fixed and involves cumbersome electrochemical processes. Although there are design improvements that could be made, there are profound limitations to the basic architecture
that no amount of tinkering will address.
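The byte figures above follow from simple arithmetic. A minimal sketch, assuming a genome of roughly 3.2 billion base pairs at two bits per base:

```python
# Where the "800 million bytes" figure comes from; the base-pair count
# is an approximate, assumed value.

base_pairs = 3.2e9     # approximate human genome length (assumed)
bits_per_base = 2      # four possible bases (A, C, G, T) -> 2 bits each

raw_bytes = base_pairs * bits_per_base / 8
print(f"Uncompressed genome: ~{raw_bytes / 1e6:.0f} million bytes")  # ~800

# Removing redundancy (e.g., the repeated ALU sequence) leaves the text's
# ~23 million bytes -- roughly a 35x reduction.
compressed_bytes = 23e6
print(f"Effective compression: ~{raw_bytes / compressed_bytes:.0f}x")  # ~35x
```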
As far as Hawking's second recommendation is concerned, namely direct connection between the brain and computers, I agree that this is reasonable, desirable, and inevitable. It's been my recommendation for years. I describe a number of scenarios to accomplish this in my most recent book, The Age of Spiritual Machines, and in
the book précis "The Singularity
is Near."
I recommend establishing the connection with noninvasive nanobots that communicate wirelessly with our neurons. As I discuss in the précis, the feasibility of communication
between the electronic world and that of biological neurons has already been demonstrated. There are a number of advantages to extending human intelligence through the nanobot
approach. They can be introduced noninvasively (i.e., without surgery). The connections will not be limited to one or a small number of positions in the brain. Rather, the
nanobots can communicate with neurons (and with each other) in a highly distributed manner. They would be programmable, would all be on a wireless local
area network, and would be on the web.
They would provide many new capabilities, such as full-immersion virtual reality involving all the senses. Most importantly, they will provide many trillions of new
interneuronal connections as well as intimate links to nonbiological forms of cognition. Ultimately, our minds won't need to stay so small, limited as they are today to a
mere hundred trillion connections (extremely slow ones at that).
However, even this will only keep pace with the ongoing exponential growth of AI for a couple of additional decades (to around the mid-twenty-first century). As Hans Moravec has pointed out, a hybrid biological-nonbiological brain will ultimately be 99.999...% nonbiological, so the biological portion becomes pretty trivial.
We should keep in mind, though, that all of this exponentially advancing intelligence is derivative of biological
human intelligence, derived ultimately from the thinking reflected in our technology
designs, as well as the design of our own thinking. So it's the human-technology civilization
taking the next step in evolution. I don't agree with Hawking that "strong AI" is a fate to be avoided. I do believe
that we have the ability to shape this destiny to reflect our human values, if only we could achieve a consensus on what those are.
See accompanying news item Alter our DNA or robots will take over, warns Hawking.
Mind·X
Discussion About This Article:
Advances in Computing Power
As the quote below from a leading work on quantum computing suggests, it is both inevitable and imminent that machines will vastly surpass human abilities. Depending on how we handle them, the resulting social changes could be either very negative or very positive. If we're not careful, our machines may look back on us the way we look back on the small rodent-like mammals from which we are descended.
Therefore we should begin now to assess the design requirements for an "Advanced Civilization." The parameters of our Universe, and of stable societies, are very finely tuned. We must reassess our beliefs and policies in areas
such as law, politics, science, national security, the economy, and culture. If we can strike new, carefully tuned balances, we may survive. If not, our world model may "crash" in ways that limit future human development.
Frank Sudia
= = = = =
Bouwmeester et al. (eds.), The Physics of Quantum Information: Quantum Cryptography, Quantum Teleportation, Quantum Computation, Springer Physics & Astronomy Series (2000).
"Among the many ramifications of quantum computation for apparently distant fields of study are its implications for both philosophy and the practice of mathematical proof. Performing any computation that provides a definite
output is tantamount to proving that the observed output is one of the possible results of the given computation. [...] Now we must leave that definition behind. Henceforth, a proof must be regarded as a process -- the computation itself, not a record of all its steps -- for we must accept that in the future, quantum computers will prove theorems by methods that neither a human brain nor any other arbiter will ever be able to check step-by-step, since if
the "sequence of propositions" corresponding to such a proof were printed out, the paper would fill the observable universe many times over." (Bouwmeester, et al, p. 103)
Check out the beautiful photos of multi-atom quantum computing experiments at --
University of Innsbruck, Institute for Experimental Physics, Quantum Optics and Spectroscopy Group, website at http://heart-c704.uibk.ac.at
Re: Advances in Computing Power
>how can you demonstrate that memes aren't using us as tools? I don't mean 'using us' in an intentional manner - they evolve through blind natural selection just as we do.
If you think about it, I think you'll see that we invent memes for the purpose of doing something. They don't just appear out of thin air for no reason. The way they evolve is through their usefulness to us for doing the job we are using them for. If you want to analyze memes, analyze them for what they are being used for.
Example: The high five.
In the beginning, people used the high five as a way to show solidarity between members of a group. That group was mostly black. Then it spread through members of various sports teams and was taken up by nonplayers who wanted to
look "cool" or express the same idea the team players were using it for. It also expressed congratulations on a job well done and triumph over the other team. But it was a tool being used to communicate an idea or a number of
ideas. People didn't take it up because the expression wanted to propagate itself. It spread because it communicated the ideas being expressed better than competing tools which, for a number of reasons, could not do the job
better. Those reasons included timing, who else was using it, how people reacted to it, and how easy it was to understand the idea being communicated.
If you show me a meme being used, I can show you how and why it is being used. If you think about it carefully, I think you will come to the conclusions, as I did, that all memes are tools being used and that is the basis of
their propagation.
How well, fast and widely memes are propagated depends on how completely they satisfy the needs of the people choosing and using them. That's why all words are memes. It is also why the ideas they refer to are memes. The
"virus of the mind" and the "selfish meme" are both based on faulty concepts. Genes select (or create) their host. Memes do not.
For memes, the host does the selecting. He/she selects a meme based on what they are trying to accomplish and which tool (meme) they think will do the job best -- just as you choose the words, grammar and style you use based on what
you are trying to communicate and who you are trying to communicate it to.
When I go to China, I use Chinese words because they will allow me to communicate better than English words to that audience. The words I use at the university are not the same words I choose to talk to my buddies in the bar. The
things I choose to talk about are different for the same reasons. Your words do not choose you.
You don't grab a rock to pound a nail when a hammer is handy but you're liable to if the hammer is not.
If you can show me a better apparatus for the propagation of memes, I'd love to hear about it. But please don't just give me a list of books by various authors. I've probably already had this argument with them and they didn't convince me.
Re: Advances in Computing Power
Hello Grant - you certainly have done some thinking about memes. Your analogy of the "high-five" is an excellent example of memetic evolution. I think that you have misunderstood me in some parts or I have not communicated well.
As your high-five example illustrates, memes are blind - their selection is based upon their usefulness, just as you point out. This is entirely how natural selection works and exactly what my post that you quoted in your
subsequent message spells out: "they evolve through blind natural selection just as we do". As in your examples, humans are the principal selection pressure - memes do not 'choose' just as genes do not choose.
You attribute the success of the high-five meme to its ability to express "cool", "congratulations" and "triumph" - have I read this correctly? These are most certainly other memes. You can see where I am going with this: for
every explanation of meme selection that you give me, I can turn around and demonstrate that it is other memes which are supporting its selection. When we select a meme only because it supports another meme(s), then clearly memes
*in-and-of-themselves* are directing our decision-making process.
Your example of talking to people in China and then talking to your buddies in a bar is an excellent demonstration of this. As you state, you select different topics and words (both memes) for each respective audience. You could
select any topic and words with either group but, as you state, you are really only selecting from a subset of them. Why is this? Again, this is because certain memes in-and-of-themselves are directing your selection, whether
yea or nay, in each respective situation. (Again, please remember that memes are without intention.)
Instead of playing a simple reductionist's game, let's restate my original question with a second clarifying one: how can you demonstrate that we aren't hosts for memes? The second, perhaps more important question: where do you
draw the line between the direct influence of memes on our decision-making and pure conscious choice outside of their influence?
With regards to your last question: "can you show me a better apparatus for the propagation of memes?" I think you have already provided some of the best apparatus in your own examples: humans. You are quite correct when you
state that memes don't "just appear out of thin air without reason" - memes propagate through 'cross-over' and 'mutation' in the human mind. And in keeping with the spirit of this web site, technology is also a very effective
replicator of memes as we are of course using this message board to give airtime to the meme meme!
GBTH
Re: Advances in Computing Power
>When we select a meme *only* because it supports another meme(s), then clearly memes *in-and-of-themselves* are directing our decision-making process.
I see the fault in your argument lying in the use of the word "only." The support of other memes does have an influence, but the primary reason for choosing this meme is to help us fit into and impress the group we are
addressing. In addition we have a need to communicate (the means of which is itself a meme) but it is also part of a basic, genetic need to be a part of the group or the society in which we are using it. To give the credit
entirely to the tools with which we seek to accomplish this goal is to give memes more credit than they deserve. No meme is used merely to accomplish a single goal but most often a broad range of goals, most of which themselves are derived from previously established memes.
It's like the old argument about nature and nurture. It's not a matter of one or the other but the outcome of influences from both. In the same way, our choice of memes is directed partly by our genetic heritage and partly by our memetic heritage. But the two forces affect our choices in different ways.
Genes use the chemical means of emotions and reaction to environmental factors to direct our choices while the selection of memes is based primarily on social and cultural influences. The genetic influence comes primarily from
within and the memetic influence comes mostly from without. That is because we choose our memes from the pool of what is available in most cases but we are also capable of creating new memes occasionally. In addition, the
emotional impact of the meme on the group also affects whether it will be used again or discarded as not worthy of being chosen again. That is a genetic contribution based on emotions created and manipulated by genes.
Until the DNA revolution, we could not create new genes to add to the pool. Only the genes themselves could do that. Now we can use memes to create genes that did not exist before. But every new and original idea is a new meme.
Once we assign a name to that idea, an additional new meme is created. That's one reason why the same word can mean so many different things. We can assign any word to any thought or idea that occurs to us.
But to say that an idea that never existed before has somehow imposed itself on us without our consciously wanting it or creating it is, to my mind, a misstatement of what is really going on. We may just be wallowing in a tangle of semantic differences here, but I feel strongly that there are real and important differences between the two types of evolution exhibited by memes and genes.
Re: Response to Stephen Hawking
Why is there fear of AI? I believe that intelligent machines can only help our civilization, not harm it. Imagine having another intelligent observer in the universe. I agree that evolution of the human race is necessary, but
why must it occur before computers?
Also, if computers can remember thousands of pieces of information when we cannot even remember a few phone numbers, why are computers not intelligent? Why have computers not built more computers to remember their information for them? Because they lack something that we have. I agree that computers can remember many pieces of information that we can't, but so far that just makes them technologically advanced books.
Nanobots are also an exciting possibility. Yet how will this be accepted in a world where technology is still feared? How can we fear AI, then present the solution of giving these intelligent machines control over our brains?
Obviously, some kind of safeguards must be put in place, as they will be when intelligent machines are created.
In no way can intelligent machines harm our society because we are not their intellectual equivalents. For one thing, they are not living things. They are physically confined, so they do not have free will.
Both the fields of neurology and computer science still need to progress, but there is no need for them to compete. We have all the time in the world.
Re: Response to Stephen Hawking
'Why is there fear of AI? I believe that intelligent machines can only help our civilization, not harm it.'
I totally disagree with this. If an intelligent machine possesses more intelligence than a human, then it can definitely think like humans and devise ways to gain ultimate control to guarantee its existence. In its pursuit to accomplish this, elimination of the human species might become one of its goals, as humans would always threaten its existence. Also, when fighting for their survival, such machines will not have the same human feelings or compassion and would think of the outcome as a probability for their survival.
'I agree that evolution of the human race is necessary, but why must it occur before computers?'
In this case I think evolution of the human race is necessary before computers because our brains as of now are not perfect, and if we build intelligent machines that mimic our brains, these flaws will also be reflected in them. Thus they will also possess the same negative qualities as us, such as deception, revenge and jealousy. So I think it's absolutely necessary that we perfect ourselves through evolution or other breakthroughs before getting intelligent machines to think or behave like us.
'Also, if computers can remember thousands of pieces of information when we cannot even remember a few phone numbers, why are computers not intelligent? Why have computers not built more computers to remember their information for them? Because they lack something that we have. I agree that computers can remember many pieces of information that we can't, but so far that just makes them technologically advanced books.'
In this case you are mixing up memory and intelligence, which are two different but interdependent things. Computers can remember lots of phone numbers, but can they remember their past mistakes or learn from them? Computers cannot dream or have thought patterns like we do. But by adding the human-like AI component to the immense memory and processing power of computers, they would exceed our intelligence by leaps and bounds. So I think your analogy of computers being 'technologically advanced books' is not valid in the case of intelligent machines.
'Nanobots are also an exciting possibility. Yet how will this be accepted in a world where technology is still feared? How can we fear AI, then present the solution of giving these intelligent machines control over our brains?'
I think there will always be a certain amount of curiosity and fear over any new technology. I think it's a part of human nature. I agree with you that nanobots are an excellent possibility, but are we considering all the consequences? Do we want people to live forever, considering the ever-increasing human population and the exhausting resources of our planet? How can we be sure that nanobots won't go bad and cause damage to our bodies? What if their reproduction becomes uncontrollable in your body, leading to other complications and even death?
'Obviously, some kind of safeguards must be put in place, as they will be when intelligent machines are created.'
Suppose this is the case and we have placed all the necessary safeguards to prevent the intelligent machines from overtaking us. But assuming they are more intelligent than us, what makes you think they cannot override such safeguards? Consider the analogy with various cryptographic schemes developed by us humans that are being used in the protection of DVDs (CSS), music (DRM), and satellite signals (NAGRA) - all of these have been cracked in no time by individuals possessing a human level of intelligence. But imagine a machine which is thousands of times more intelligent than us. Can we be sure that our safeguards will never be overridden?
'In no way can intelligent machines harm our society because we are not their intellectual equivalents. For one thing, they are not living things. They are physically confined, so they do not have free will.'
If intelligent machines feel threatened by humans, they can definitely harm us. And they don't have to be physical to harm us either. Again, this very much depends on how dependent we are on such machines. Take for example the supercomputers that control missile defense systems. Having control of such powerful weapons, it wouldn't be wise to call computers physically confined. Assuming they are intelligent and feel that humans threaten their existence and should be eliminated, they can definitely cause significant damage to the human populace, if not extinction, taking into account our ever-increasing reliance on computers in mission-critical systems such as communications, water, electricity, and warfare. And at the current technological growth rate, our future lives would be completely dependent on them.
Re: Response to Stephen Hawking
One thing missing from both Hawking's objections to AI and Kurzweil's response is a horror of the process!
Hawking says "We better augment ourselves via tweaking our DNA" and Kurzweil says "Nah that's too slow." You bet it's too slow. But the first thing that comes to *my* mind is the sheer horror of it.
I'd be all for sticking a chip into my brain and having a better memory [first] then [later, I imagine] having part of my intelligence and personality, then eventually, all of it, move into that chip. I'm okay with that. Brain
cell death and rebirth over time ensures that I'm not the same 'person' after a couple years that I am now anyway, so the configuration of the new person who grows inside me [and is growing inside of me now, cell by cell] might
as well be in another form. If any of my implants hurt or don't work or make me psycho, I can unplug them.
But I won't be first to tweak my DNA. And I sure won't have my kid be one of the pioneers of that experiment. The poor kid wouldn't be able to unplug it if it turns out his angst lobe was the one given twice the meat-CPU. My own angst lobe [soon to be discovered, keep an eye out] overreacted all my life and only now got under control. How much more is down there ready to blow?
No, I'm perfectly pleased to have a permanent couple of K in my brain at first (that's phone numbers and a good sized honey-do list) and options for more later. It scares me to think that anyone would, however carefully and
FDA-approvedly, even consider blowing up their kids' brains like balloons on a lark like that. We've seen the messes made by the most careful, well-meaning, highly intelligent people doing good work for the benefit of humanity.
Re: Response to Stephen Hawking
Stephen Hawking recently wrote a book called UNIVERSE IN A NUTSHELL. Having just "finished" it, I would recommend this if you want some clarification on where he stands with regards to the Singularity, genetic engineering,
Moore's Law, time travel, M-theory and super intelligence and branes.
This is one hell of a book, and if any of you have any doubts as to why he's considered a genius -- this book should answer all your questions.
One thing of note: I applaud Mr. Hawking for giving the "public okay" for mainstream physicists to take seriously the study of time loops and time travel. I also share with him the dismay (and stupidity, my words) that the U.S. Congress chose not to fund the Superconducting Super Collider, a similar machine being made in Geneva (as the Large Hadron Collider) at this time.
I hope all Americans are aware that now the major breakthrough in M-theory, Mankind's most comprehensive theory of the universe ever devised, will probably take place in Switzerland -- you know, the place where all the "neutral" banks hold stolen Jewish gold from the Holocaust, and who knows what else.
James Jaeger
Re: Response to Stephen Hawking
Will machines have superior intelligence and dominate the Earth?
From a technological point of view, I agree that the computing power of machines is growing so rapidly that it will be far more powerful than the human brain in the near future. However, is pure computing power equivalent to
intelligence? Every operation in a computer eventually comes down to CPU arithmetic. Will computers gain consciousness just by performing these operations? For example, when we see something new, we have the ability to first identify the object or concept, and then try to understand how it works. For a machine to perform this operation, not only is sufficient computing power required; we will also need to come up with an algorithm that can deal with any type of new object the machine encounters. It is possible that this will be solved by new designs in hardware and software (quantum computers), but at least I think we should not use the speed of computing power progress to determine the speed of advances in creating artificial intelligence.
From a philosophical point of view, could humans create something with intelligence superior to their own? To create something with an intellect that exceeds our own, or to create a species that will master or dominate us, seems a bit infeasible. In my opinion, machines may become conscious and they can be more intelligent than the human race in some aspects, but they will have their own weaknesses in thinking. As Mr. Kurzweil stated in his
response: 'that all of this exponentially advancing intelligence is derivative of biological human intelligence, derived ultimately from the thinking reflected in our technology designs, as well as the design of our own
thinking.' In other words, the intelligence of the machines will be limited by our understanding of how the universe works. It is possible that some of our understandings are not entirely correct and we are not aware of it yet, so the design of the artificial intelligence will incorporate those flaws as well. To create something, we not only need to understand how the object itself works, but also how the world it is in works. Even if we become very clear about how human brains are constructed by reverse-engineering them, we can only construct something that is like a human brain. That is in a sense refinement, not creation. This way we may have some 'products' that are of an equal level of intelligence, not superior.
There is no example in the history of biological evolution where one species created another and the created species later evolved to become the dominant one. There is no strong evidence that machines with artificial intelligence will dominate the Earth in the future. The possibility does exist; nevertheless, it is not that high. Maybe this will be a brand new type of evolution, but I think what is more likely to happen is that the human race and machines with artificial intelligence will co-exist in the future, as neither of the two races is perfect. Machines will need the help of humans, and vice versa.
Re: Response to Stephen Hawking
Power consumption is a notoriously hard subject in the world of nanobots. How do you guys envision this in the future? True, we have smaller and more powerful transistors than 20 years ago, but the fact is, these are just batteries and they're temporary. If you look at personal computers, you will see that power consumption is always increasing, and with it, heat generation. In 20 years, we might need 20-inch fans to carry enough air to cool our systems, and if you're running a higher-end video card, liquid cooling would probably be a must (or if the GPU becomes one with the CPU, liquid cooling will be a must for all!). Will this be the same with the nanobots? Computers are getting faster and faster, but not without a cost. It doesn't look feasible if each nanobot requires (optimistically speaking) what we need to power our computers today, and it looks bleak when we're talking of having to supply power to millions of these composing 99% of our brain. Also, the 23 million bytes of the human genome shouldn't be treated as 23 million bytes of data. It's more like a 23-million-byte-long private key that would need to be taken into account when processing input data (e.g., from our eyes). Thus, I think the power requirement of today's PC for tomorrow's nanobot is an extremely optimistic assumption.
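That power worry can be made concrete with a quick estimate; the brain's power budget, the swarm's share of it, and the nanobot count below are all assumed figures:

```python
# If a nanobot swarm may dissipate only a fraction of the brain's own
# power budget, the per-bot allowance is tiny. All figures are assumed.

brain_power_w = 20.0     # whole brain dissipates roughly 20 W (assumed)
swarm_fraction = 0.1     # let the swarm use at most 10% of that (assumed)
nanobot_count = 1e9      # hypothetical nanobot population (assumed)

per_bot_w = swarm_fraction * brain_power_w / nanobot_count
print(f"Power allowance per nanobot: {per_bot_w * 1e9:.0f} nW")  # 2 nW
# A desktop PC draws on the order of 100 W -- more than ten orders of
# magnitude above this per-bot budget.
```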
The desire to have these nanobots inside of us be WAN-capable is the scariest of them all. We all know that WAN is and always will be prone to attacks and never be totally secure. Those waves out there aren't exclusively owned by you! There would be newer, nastier ways of invading your privacy this way. I'm pretty sure that these silicon beings are not separate from their environment. It's also not unimaginable that if the nanobots are working heavily together with our brain (and perhaps other bodily functions), they'd also be prone to other malicious actions. If not mind control, then perhaps something purely destructive such as turning your brain into jelly of some sort. You could always say that decrypting attempts would always be behind, but how true is this once we've got quantum computers up and running? There has been successful progress in this area lately.
We're also completely dismissing the moral and ethical issues here. Rightly pointed out above, /no one/ is going to willingly insert 'intelligent' pieces of silicon into their body or surrender their DNA sample to be altered to the extremes. Democratically, fruition of modified DNA strands is an impossibility. I'd also argue that the speeds of Hawking's two scenarios are somewhat identical; at the very least, the second scenario isn't as rapid as Kurzweil is picturing it to be. Technological advancement is very much decided by the culture that breeds it. I can imagine a lot of groups aren't going to be happy with this (religious groups objecting to body modifications come to mind) and progress will not be easy.
Speaking of ethics and morality: how hard is it to engrave this in the AI? More, how easy is it to list all these down? Ethics and morality change from culture to culture, and definitely from time to time. They even differ within cultures! It becomes harder since minimalism won't work here. We will not be able to keep everyone happy if we just manage to note down the things that everyone agrees on (e.g., the Three Laws of Robotics). Humanity is made of many different cultures and groups, and they're highly sensitive when it comes to this issue.
I also doubt the possibility of 'safeguarding' AIs. I mean this in both ways: algorithm-wise and action-wise. I'm thinking it's extremely difficult to get this right on the low level, and to me it seems that we're already limiting progress itself. Action-wise, we've all heard/watched silly scenarios where people do bad things and blame them on other things. Will the nanobots be the next 'other things'? Never mind educating and updating the legal system anywhere to handle these guys; we still haven't even gotten it right for the stuff we have now!
For what it's worth now, I'm quite apathetic to what the world of computers is going to look like 20-30 years from now (even though I'm hopeful I'll live to see it). The above are just what I've found dubious when I read the
articles and the comments and the whole singularity vision in general. Just my two cents.