Permanent link to this article: http://www.kurzweilai.net/meme/frame.html?main=/articles/art0170.html

When Machines Outsmart Humans
by Nick Bostrom

Artificial intelligence is a possibility that should not be ignored in any serious thinking about the world in 2050. This article outlines the case for thinking that human-level machine intelligence might well appear within that time frame. It then explains four immediate consequences of such a development, and argues that machine intelligence would have a revolutionary impact on a wide range of the social, political, economic, commercial, technological, scientific and environmental issues that humanity will face in the next century.


Originally published 2000 at www.nickbostrom.com. Published on KurzweilAI.net April 30, 2001.

The annals of artificial intelligence are littered with broken promises. Half a century after the first electronic computer, we still have nothing that even resembles an intelligent machine, if by 'intelligent' we mean possessing the kind of general-purpose smartness that we humans pride ourselves on. Maybe we will never manage to build real artificial intelligence. The problem could be too difficult for human brains ever to solve. Those who find the prospect of machines surpassing us in general intellectual abilities threatening may even hope that this is the case.

However, neither the fact that machine intelligence would be scary, nor the fact that some past predictions were wrong, is a good ground for concluding that artificial intelligence will never be created. Indeed, to assume that artificial intelligence is impossible or will take thousands of years to develop seems at least as unwarranted as to make the opposite assumption. At a minimum, we must acknowledge that any scenario about what the world will be like in 2050 which simply postulates the absence of human-level artificial intelligence is making a big assumption that could well turn out to be false.

It is therefore important to consider the alternative possibility, that intelligent machines will be built within fifty years. In the past year or two, there have been several books and articles published by leading researchers in artificial intelligence and robotics that argue for precisely that projection. This essay will first outline some of the reasons for this, and then discuss some of the consequences of human-level artificial intelligence.

We can get a grasp of the issue by considering the three things that are required for constructing an effective artificial intelligence. These are: hardware, software, and input/output mechanisms.

The requisite input/output technology already exists. We have video cameras, speakers, robotic arms, etc., that provide a rich variety of ways for a computer to interact with its environment, so this part of the problem is essentially trivial.

The hardware problem is more challenging. Speed rather than memory seems to be the limiting factor. We can make a guess at the computer hardware that will be needed by estimating the processing power of a human brain. We get somewhat different figures depending on what method we use and what degree of optimisation we assume, but typical estimates range from 100 million MIPS to 100 billion MIPS (1 MIPS = 1 million instructions per second). A high-end PC today has about one thousand MIPS. The most powerful supercomputer to date performs at about 10 million MIPS. This means that we will soon be within striking distance of meeting the hardware requirements for human-level artificial intelligence. In retrospect, it is easy to see why the early artificial intelligence efforts in the sixties and seventies could not possibly have succeeded: the hardware available then was pitifully inadequate. It is no wonder that human-level intelligence was not attained with less processing power than that of a cockroach.
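
As a rough illustration of where figures in this range come from, here is a minimal back-of-the-envelope sketch in Python. The neuron count, synapses per neuron, and signalling rate used below are round-number assumptions chosen purely for illustration; they are not figures taken from the article.

    # Back-of-the-envelope estimate of the brain's processing power in MIPS.
    # All three inputs are rough, assumed round numbers for illustration only.
    NEURONS = 1e11              # assumed ~100 billion neurons
    SYNAPSES_PER_NEURON = 1e3   # assumed ~1,000 synapses per neuron (low end)
    SIGNALS_PER_SECOND = 100    # assumed ~100 synaptic events per second

    # Treat each synaptic event as roughly one "instruction".
    ops_per_second = NEURONS * SYNAPSES_PER_NEURON * SIGNALS_PER_SECOND
    mips = ops_per_second / 1e6

    print(f"Estimated processing power: {mips:.0e} MIPS")
    # With these assumptions the result is about 1e10 MIPS (10 billion MIPS),
    # which falls inside the 100 million to 100 billion MIPS range quoted above.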

Looking forward, we can predict with a rather high degree of confidence that hardware matching that of the human brain will be available in the foreseeable future. IBM is currently working on a next-generation supercomputer, Blue Gene, which will perform over 1 billion MIPS. This computer is expected to be ready around 2005. We can extrapolate beyond this date using Moore's Law, which describes the historical growth rate of computer speed. (Strictly speaking, Moore's Law as originally formulated was about the density of transistors on a computer chip, but this has been closely correlated with processing power.) For the past half century, computing power has doubled every eighteen months to two years. Moore's Law is really not a law at all, but merely an observed regularity. In principle, it could stop holding true at any point in time. Nevertheless, the trend it depicts has been going strong for a very extended period of time, and it has survived several transitions in the underlying technology (from relays to vacuum tubes, to transistors, to integrated circuits, to very-large-scale integration, VLSI). Chip manufacturers rely on it when they plan their forthcoming product lines. It is therefore reasonable to suppose that it may continue to hold for some time in the future. Using a conservative doubling time of two years, Moore's Law predicts that the upper-end estimate of the human brain's processing power will be reached before 2019. Since this represents the performance of the best supercomputer in the world, one may add a few years to account for the delay before that level of computing power becomes available for doing experimental work in artificial intelligence. The exact numbers don't matter much here. The point is that human-level computing power has not been reached yet, but almost certainly will be attained well before 2050.
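
The extrapolation in the previous paragraph can be reproduced with a few lines of Python. The starting point (roughly 1 billion MIPS around 2005) and the two-year doubling time come from the text above; the target of 100 billion MIPS is the upper-end brain estimate quoted earlier.

    import math

    # Figures taken from the surrounding text; the arithmetic is only an illustration.
    start_year = 2005          # Blue Gene expected around 2005
    start_mips = 1e9           # over 1 billion MIPS
    target_mips = 1e11         # upper-end estimate of the brain's processing power
    doubling_time_years = 2    # conservative Moore's Law doubling time

    doublings_needed = math.log2(target_mips / start_mips)
    years_needed = doublings_needed * doubling_time_years

    print(f"Doublings needed: {doublings_needed:.1f}")
    print(f"Upper-end estimate reached around {start_year + years_needed:.0f}")
    # About 6.6 doublings, i.e. roughly 13 years, giving a date of about 2018,
    # consistent with the article's "before 2019".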

This leaves the software problem. It is harder to analyze in a rigorous way how long that problem will take to solve. (Of course, the same holds for those who feel confident that artificial intelligence will remain unobtainable for an extremely long time: in the absence of evidence, we should not rule out either alternative.) Here we will address the issue by outlining two approaches to creating the software, and presenting some general plausibility arguments for why they could work.

We know that the software problem can be solved in principle. After all, humans have achieved human-level intelligence; so it is evidently possible. One way to build the requisite software is to figure out how the human brain works, and copy nature's solution.

It is only relatively recently that we have begun to understand the computational mechanisms of biological brains. Computational neuroscience is only about fifteen years old as an active research discipline. In this short time, substantial progress has been made. We are beginning to understand early sensory processing. There are reasonably good computational models of primary visual cortex, and we are working our way up to the higher stages of visual cognition. We are uncovering the basic learning algorithms that govern how the strengths of synapses are modified by experience. The general architecture of our neuronal networks is being mapped out as we learn more about the interconnectivity between neurones and how different cortical areas project onto one another. While we are still far from understanding higher-level thinking, we are beginning to figure out how the individual components work and how they are connected up.
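
A simple example of the kind of learning rule referred to here is Hebbian plasticity, in which a synapse is strengthened when the neurons on both sides of it are active at the same time. The Python sketch below is a generic textbook rule, offered only to illustrate what an algorithm governing synaptic strengths looks like; it is not taken from the article.

    import random

    # Toy Hebbian learning: strengthen a synapse when pre- and post-synaptic
    # activity coincide, with a small decay so the weight stays bounded.
    learning_rate = 0.1
    decay = 0.01
    weight = 0.5

    for _ in range(1000):
        pre = random.random() < 0.3      # pre-synaptic neuron fires this step?
        post = random.random() < 0.3     # post-synaptic neuron fires this step?
        if pre and post:                 # coincident activity strengthens the synapse
            weight += learning_rate
        weight -= decay * weight         # passive decay toward zero
        weight = min(weight, 1.0)        # keep the weight in a sensible range

    print(f"Final synaptic weight: {weight:.2f}")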

Assuming continuing rapid progress in neuroscience, we can envision learning enough about the lower-level processes and the overall architecture to begin to implement the same paradigms in computer simulations. Today, such simulations are limited to relatively small assemblies of neurones. There is a silicon retina and a silicon cochlea that do the same things as their biological counterparts. Simulating a whole brain will of course require enormous computing power; but as we saw, that capacity will be available within the next decade or two.
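
To give a concrete, if toy-scale, sense of what such simulations involve, here is a minimal leaky integrate-and-fire neuron in Python. This is a standard textbook model used throughout computational neuroscience; the particular parameter values below are assumptions chosen only so that the example runs and produces spikes.

    # Minimal leaky integrate-and-fire neuron: a toy instance of the kind of
    # low-level model used in computational neuroscience simulations.
    dt = 1e-4            # time step (s)
    tau = 0.02           # membrane time constant (s)
    v_rest = -0.065      # resting potential (V)
    v_thresh = -0.050    # spike threshold (V)
    v_reset = -0.065     # reset potential after a spike (V)
    r_m = 1e7            # membrane resistance (ohm)
    i_in = 2e-9          # constant input current (A), an arbitrary test value

    v = v_rest
    spike_times = []
    for step in range(int(0.5 / dt)):            # simulate half a second
        dv = (-(v - v_rest) + r_m * i_in) * (dt / tau)
        v += dv
        if v >= v_thresh:                        # threshold crossed: emit a spike
            spike_times.append(step * dt)
            v = v_reset                          # reset and integrate again

    print(f"Spikes in 0.5 s: {len(spike_times)}")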

The product of this biology-inspired method will not be an explicitly coded, mature artificial intelligence. (That is what the so-called classical school of artificial intelligence unsuccessfully tried to create.) Rather, it will be a system that has the same ability as a toddler to learn from experience and to be educated. The system will need to be taught in order to attain the abilities of adult humans. But there is no reason why the computational algorithms that our biological brains use would not work equally well when implemented in silicon hardware.

Another, more "science-fiction-sounding" approach has been suggested by some nanotechnology researchers. Molecular nanotechnology is the anticipated future ability to manufacture a wide range of macroscopic structures (including new materials, computers, and other complex gadgetry) to atomic precision. Nanotechnology will give us unprecedented control over the structure of matter. One application that has been proposed is to use nano-machines to disassemble a frozen or vitrified human brain, registering the position of every neurone and synapse and other relevant parameters. This could be viewed as the cerebral analog to the human genome project. With a sufficiently detailed map of a particular human brain, and an understanding of how the various types of neurones behave, one could emulate the scanned brain on a computer by running a fine-grained simulation of its neural network. This method has the advantage that it would not require any insight into higher-level human cognition. It's a purely bottom-up process.
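
As a hedged illustration of the scale such a scan might involve, the Python sketch below estimates the raw data volume of a synapse-level brain map. Every number in it is an assumption chosen only to convey the order of magnitude; none of the figures come from the article.

    # Rough, assumption-laden estimate of the data volume of a synapse-level
    # brain map; all of these figures are illustrative assumptions.
    NEURONS = 1e11              # assumed neuron count
    SYNAPSES_PER_NEURON = 1e4   # assumed synapses per neuron
    BYTES_PER_SYNAPSE = 10      # assumed bytes to record connectivity, strength, type

    total_bytes = NEURONS * SYNAPSES_PER_NEURON * BYTES_PER_SYNAPSE
    print(f"Approximate map size: {total_bytes / 1e15:.0f} petabytes")
    # With these assumptions, roughly 10 petabytes of raw data, before any
    # behavioural parameters of the individual neurone types are included.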

These are two strategies for building the software for a human-level artificial intelligence that we can envision today. There may be other ways that we have not yet thought of that will get us there faster. Although it is impossible to make rigorous predictions regarding the time-scale of these developments, it seems reasonable to take seriously the possibility that all the prerequisites for intelligent machines - hardware, input/output mechanisms, and software - will be produced within fifty years.

In thinking about the world in the mid-21st century, we should therefore consider the ramifications of human-level artificial intelligence. Four immediate implications are:

· Artificial minds can be easily copied.

An artificial intelligence is based on software, and it can therefore be copied as easily as any other computer program. Apart from hardware requirements, the marginal cost of creating an additional artificial intelligence after you have built the first one is close to zero. Artificial minds could therefore quickly come to exist in great numbers, amplifying the impact of the initial breakthrough.

· Human-level artificial intelligence leads quickly to greater-than-human-level artificial intelligence.

There is a temptation to stop the analysis at the point where human-level machine intelligence appears, since that by itself is quite a dramatic development. But doing so is to miss an essential point that makes artificial intelligence a truly revolutionary prospect: namely, that it can be expected to lead to the creation of machines with intellectual abilities that vastly surpass those of any human. We can predict with great confidence that this second step will follow, although the time-scale is somewhat uncertain. If Moore's Law continues to hold in this era, the speed of artificial intelligences will double at least every two years. Within fourteen years after human-level artificial intelligence is reached, there could be machines that think more than a hundred times more rapidly than humans do. In reality, progress could be even more rapid than that, because there would likely be parallel improvements in the efficiency of the software that these machines use. The interval during which the machines and humans are roughly matched will likely be brief. Shortly thereafter, humans will be unable to compete intellectually with artificial minds.
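
The "more than a hundred times" figure follows directly from the stated doubling time, as this short Python check shows.

    doubling_time_years = 2
    years_after_human_level = 14
    speedup = 2 ** (years_after_human_level / doubling_time_years)
    print(speedup)   # 128.0: doubling every two years gives over a 100x speedup in 14 years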

· Technological progress in other fields will be accelerated by the arrival of artificial intelligence.

Artificial intelligence is a true general-purpose technology. It enables applications in a very wide range of other fields. In particular, scientific and technological research will be done more effectively when conducted by machines that are cleverer than humans. One can therefore expect that overall technological progress will be rapid.

Machine intelligences may devote their abilities to designing the next generation of machine intelligence. This next generation will be even smarter and might be able to design its successors in even less time. Some authors have speculated that this positive feedback loop will lead to a "singularity" - a point where technological progress becomes so rapid that genuine superintelligence, with abilities unfathomable to mere humans, is attained within a short time span. However, it may turn out that there are diminishing returns in artificial intelligence research beyond some point. Maybe once the low-hanging fruit has been picked, it gets harder and harder to make further improvements. There seems to be no clear way of predicting which way it will go.
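
The two possibilities sketched here, runaway positive feedback versus diminishing returns, can be contrasted with a toy model in Python. The growth rules below are invented purely for illustration and carry no predictive weight; they merely show how differently the two regimes behave over the same number of design generations.

    # Toy comparison of two regimes of recursive self-improvement.
    # Both update rules are invented solely for illustration.

    def run(generations, improve):
        ability = 1.0                    # 1.0 = human-level
        for _ in range(generations):
            ability = improve(ability)
        return ability

    # Regime 1: each generation improves in proportion to its own ability
    # (positive feedback, roughly exponential growth).
    feedback = run(20, lambda a: a * 1.5)

    # Regime 2: diminishing returns once the low-hanging fruit is picked.
    diminishing = run(20, lambda a: a + 1.0 / a)

    print(f"Positive-feedback regime after 20 generations: {feedback:.0f}x human-level")
    print(f"Diminishing-returns regime after 20 generations: {diminishing:.1f}x human-level")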

· Unlike other technologies, artificial intelligences are not merely tools. They are potentially independent agents.

It would be a mistake to conceptualise machine intelligence as a mere tool. Although it may be possible to build special-purpose artificial intelligence that could only think about some restricted set of problems, we are considering here a scenario in which machines with general-purpose intelligence are created. Such machines would be capable of independent initiative and of making their own plans. Such artificial intellects are perhaps more appropriately viewed as persons than machines. In economics lingo, they might come to be classified not as capital but as labor. If we can control the motivations of the artificial intellects that we design, they could come to constitute a class of highly capable "slaves" (although that term might be misleading if the machines don't want to do anything other than serve the people who built or commissioned them). The ethical and political debates surrounding these issues will likely become intense as the prospect of artificial intelligence draws closer.

Two overarching conclusions can be drawn. The first is that there is currently no warrant for dismissing the possibility that machines with greater-than-human intelligence will be built within fifty years. On the contrary, we should recognize this as a possibility that merits serious attention. The second conclusion is that the creation of such artificial intellects will have wide-ranging consequences for almost all the social, political, economic, commercial, technological, scientific and environmental issues that humanity will confront in the next century.

© 2000 Nick Bostrom

Mind·X Discussion About This Article:

Reply to the Artificial Intelligence Concept
posted on 09/21/2001 6:45 PM by Non of your business

I know the missing key to artificial intelligence, but I would rather choose not to share it. You see, once labour (easily replaced by robotics), energy, and holdings are taken care of, the rich will become unstoppable. Let me explain. Once the rich have unlimited labour and energy they will be extremely powerful, but they will still need humans to do research, to program, to set things up, and to perform extremely complex tasks for them. If they no longer need humans for that, then they have no reason not to replace all humans with a super-intellectual computer that can think a thousand times faster, perform even more complex tasks more easily, raise no ethical issues, obey with complete loyalty, and, best of all, not cost a dime. This will greatly diminish the need for human services. A replacement for labour is soon to come; replacements for energy and holdings are soon to come. If energy and production are knocked down, only two things remain: holdings (for instance, owning land and its production rights) and thinking. Once labour, energy, and thinking are all handled by machines, we will no longer need humans, so the poor will get poorer and poorer and sell their land (holdings) to eat and survive, and the rich will get richer and richer. The middle class will be destroyed (I am part of the middle class), the poor will be destroyed, and a bunch of whiny, stupid, lucky brats will get all the power. So I shall never give up the key, for that would be illogical (most scientists are middle class).

Re: Reply to the Artificial Intelligence Concept
posted on 09/22/2001 5:44 AM by tomaz@techemail.com

The concept of a few people having everything, while others have nothing at all, is largely naive.

At the very least, unlimited wealth would result in unlimited charity.

- Thomas

Re: Reply to the Artificial Intelligence Concept
posted on 05/15/2002 1:33 PM by Citizen Blue

Thomas, I think you are correct when you say that unlimited wealth gives occasion for unlimited charity, although I believe that this newfound power of technology will give rise to a different social situation. My only thought is that there are still beliefs, i.e. cultural biases, that the wealthy may want to disseminate by utilizing this resource; or they may still think that power is important in regulating their specific genes for the future; after all, isn't it really about the selfish survival of patterns, e.g. genes? In the larger social context there will most likely be specific constraints on how AI is implemented, for the sake of the basic rights of mankind. I see it as an important political issue.

Re: Reply to the Artificial Intelligence Concept
posted on 05/16/2002 6:04 AM by peter@xeven.com

With about 6 billion brains on this planet, only a small fraction of them can be considered really intelligent.

I do not regard copying as an act of intelligence.

How individuals learn is pretty important in acquiring intelligence, but real intelligence is a certain view of the world that is not shared by other brains. It is the key to making a difference, to changing the world, instead of merely reacting to it.

Maybe it is possible to create learning systems that can come close to overall human intelligence, but the key required for intelligence is to be found in metaphysics, not in physics. So my opinion in this matter is: if a certain human can reach a high metaphysical level, he or she can copy that level into a machine and simulate a high intelligence. Having machines think about metaphysics will be as hard for them as it is for us, because we have to teach them to think about it. So I have no fear that someday there will be machines that think better than every human.

Re: Reply to the Artificial Intelligence Concept
posted on 05/19/2002 3:32 AM by Citizen Blue

Isn't it possible for AI to become so involved with us that after a while no one will be able to tell which is which? Maybe not duplicating us per se, but merging with us, to the point that we are it and it is we.

Re: Reply to the Artificial Intelligence Concept
posted on 06/25/2002 10:02 AM by peter@xeven.com

Interesting concept. I had not thought about this until now.

But yes, I think that if we integrate other intelligence into our intelligence, we will after a while not even notice it is there. Probably the first thing we will do is develop a form of symbiotic intelligence, rather than a stand-alone form of intelligence.

But my focus is completely on the stand-alone form of intelligence, the mathematics that comes along with it, and the ethics that play a part in it.

So I believe that this stand-alone intelligence will not evolve faster than human intelligence, but I think that if symbiotic intelligence is developed, it will team up with human intelligence. This will result in internal competition between intelligences, in order to have the best evolutionary system in the end.

Re: Reply to the Artificial Intelligence Concept
posted on 06/26/2002 12:55 AM by azb0@earthlink.net

If you really feel that "machine intelligence" can never attain the "metaphysical", then you must feel that humans are somehow more than "meat-bots" (essentially, machines). Otherwise, I can see no a priori reason why humans cannot spawn a form of ... "machine" ... that will surpass the purely human capacity for innovation, understanding, and even empathy. Naturally, such a construct would need to be able to evolve on its own, at which point (I say) all bets are off regarding its intellectual (and even emotional) limitations.

I suppose the common belief that "we are more than machines" comes largely from the assumption that "mechanical" means "strictly causal". Thus for humans to be merely machines would imply that we "invent nothing, contribute nothing", since we are essentially songs spewing from a player-piano, whose melodies were written long ago.

This follows, since a hard string of causalities can exercise no choice.

However, (if I understand strong QM correctly), this universe allows events that have no preceding cause. Bell's theorem (as I understand it) demonstrates the incompatibility of QM with any "hidden variables" theory that would (ostensibly) allow the calculation of all eventualities. I interpret this to mean that the universe cannot be reduced to some fixed amount of "information" or "pattern" from which all future could be deduced.

In essence, the universe does not possess the information necessary to "know" when that particular atom of carbon-14 will decay into nitrogen-14. It is a spontaneous event.

Thus, at foundation at least, that which is "purely mechanical" does not equate to that which is "purely causal".

My line of reasoning implies that purely deterministic algorithms, although capable of producing "effectively unpredictable results", still fall short of the capabilities that may result when strongly unpredictable components are part of the assembly.

My Fantasy: Extrapolating from the observation that increasingly complex molecules support quantum energy differences of finer and finer degree (say, the energy boundary between the cis and trans states practically vanishes), a point is reached wherein the "energy of a thought" is enough to take advantage of QM tunnelling. If the human meat-computer can thus incorporate spontaneity at its foundation ... who is to say that we cannot produce less traditional architectures that also incorporate spontaneity at this level?

Cheers.

____tony____

Re: When Machines Outsmart Humans
posted on 11/11/2001 5:14 PM by jjaeger@mecfilms.com

>We get somewhat different figures depending on what method we use and what degree of optimisation we assume, but typical estimates range from 100 million MIPS to 100 billion MIPS. (1 MIPS = 1 Million Instructions Per Second). A high-range PC today has about one thousand MIPS.

MIPS mean Machine Instructions Per Second, not Million Instructions Per Second.

James Jaeger

MIPS correction
posted on 05/13/2002 7:41 PM by just99@hotmail.com

MIPS - millions of instruction per second
look it up james.

-as

oops
posted on 05/13/2002 7:57 PM by just99@hotmail.com

MIPS = Million Instructions Per Second
oh well i made a mistake too...

Re: MIPS correction
posted on 05/13/2002 8:37 PM by wildwilber@msn.com

Yep

MIPS = Million Instructions Per Second

Re: When Machines Outsmart Humans
posted on 04/10/2003 9:42 AM by jvbelle

Please note the assumption: (super-)intelligence increases (somewhat linearly) along with the power of the hardware. Not necessarily true. I doubt whether hardware that's twice as powerful will result in even a noticeable increase in intelligence. There is some correspondence, yes, but not nearly as strong as we may think (I think ;-). Even if the hardware (and software) is powerful enough to become equivalent to the human brain, it will take quite a long time before it surpasses human intelligence. I suggest a log-linear (if that) link between hardware power/complexity and intelligence. Secondly, superior intelligence does not equate to "super-intelligence": someone (or something) with an IQ of, say, 300 may not prove able to outsmart most of the people most of the time. It's one thing to devise a new TOE, quite another to keep on discovering dramatically new things.

Re: When Machines Outsmart Humans
posted on 11/12/2006 4:00 PM by quickcup

I am a little late on this one, but perhaps a little closer to the singularity too.

First, I appreciate how the original author has been able to synthesize many of the ideas and theories from other articles on this site in a way that makes them clear and understandable.

I think it is pretty safe to say, as the article points out, that the software problem will be the biggest obstacle in the way of achieving human-level intellect in machines. The most conservative hardware requirement estimate suggests we will need CPU speeds about 10,000 times that of today's fastest supercomputer. Even if Moore's Law doesn't hold out in this respect and it is not possible to cram that much raw horsepower into a single box, a distributed system might make up the difference. This is probably the most natural architecture anyway, since the brain we are trying to model operates in a highly parallel fashion. This is reasonable, but I also feel it takes a substantial leap of faith to say that the software will evolve enough to keep pace with the hardware. Certainly the step-by-step procedural-type algorithms won't advance beyond human capability, because they only reflect how the programmer himself would solve an input problem if given sufficient time and the capability to never make an error. In response, people sometimes say we need more abstract programming languages. But every example of these we have simply operates one layer above a procedural engine that executes the translated code. Even the translation process is procedural. So really, this "give it an input and the computer will crank out an output" style of insightless functional software has never really evolved. It's a flat line on a timescale graph, meaning it is unreasonable to extrapolate where it will be by 2050 or any other year. This fundamental dependence on procedural algorithms at least partially suggests an intrinsic limitation of our hardware model. Not only does it need to grow in computational power, but it needs an architectural evolution too. When I think or problem-solve, rarely do I consciously run an algorithm in my head.

This brings us to the proposed solution of making a crude model of the human brain using some amalgamation of hardware and software. This seems like a good idea. Model each region of the brain and the channels of communication that interconnect them. Then flip the switch and let the primitive connections expand, in a chain-reaction-like fashion, into something more complex than we can understand or even need to understand, which is essentially the basic principle of a simulation. Of course, as pointed out, if it is a model of a human brain then it must gain knowledge like one too. However, the idea that you could train N brain models in parallel to be experts each on a different subject, then copy all the results to create a single model that is a super expert on N subjects, seems to contradict the original principle. If the knowledge a machine gains is represented as a set of neural networks, should it be as simple as extracting one set and appending it to another to gain double the knowledge? It shouldn't seem that easy, because if the first set was formed as the result of a complex iterative process, then it is much more likely that two bodies of knowledge fed through the learning mechanism would form a much more interleaved data structure. This might not seem like too big a disadvantage: it will only cost N times the hardware, and one of the machines will still be able to solve the problem you pose to it. But I think it places a restriction on any machine surpassing human intellect. Being the owner of a breadth of knowledge enhances all your knowledge beyond its component value. It inspires new approaches and solutions not derivable in isolation. So in essence we might create a series of expert computers, no better and no worse than the human masters of the same fields. Not a bad start.

Also, about the concern that these strong AIs will cause profound economic disruption by narrowing the job market down to the small set of jobs that can only be performed by a person: I agree that it is a valid worry. I am also in agreement with the poster who said it should be a political issue. Dissemination regulations will have to be placed on the technology so that it can be eased in, perhaps over a very long transition period, to enable people to acclimatize their lives to it.

Re: When Machines Outsmart Humans
posted on 11/12/2006 8:05 PM by NanoStuff

Holy hell, quickcup has a time machine.

Re: When Machines Outsmart Humans
posted on 11/13/2006 6:28 AM by Extropia

Someone commented that people do not like to think of themselves as 'machines'.

This is probably because they compare their own capabilities with the machines we have today. And let's face it, who wants to be some clunky mechanical automaton with all the intelligence of an insect? Even our most sophisticated robots are orders of magnitude less capable than we are.

But there is no physical law that I know of which states that artificial life must forever be the lesser of nature. Already, crude AIs in video games display non-deterministic behaviour more akin to biology than to the strictly causal and mechanical responses one expects of machines. As our hardware climbs towards the capacity of mammalian brains, and as we learn the biological principles of intelligence, expect a new definition of what a machine is and a better appreciation of the merging of nature and technology.

Re: When Machines Outsmart Humans
posted on 11/13/2006 12:18 PM by michie

It seems that the ability of machines to outsmart humans really will depend on whether or not we can create the software for artificial intelligence. The use of neuroscience is crucial in order to obtain at least a solid level of artificial intelligence that can be compared to that of a human. As for the hardware, it appears from Moore's Law that speed will not be the main problem in trying to create artificial intelligence equal to or greater than that of a human. It seems as if we are jumping ahead of ourselves, since we have not even gotten to the human level yet, and I believe that we will not achieve this level by 2019, regardless of whether the speed is available or not. Also, input/output mechanisms are not something that we need to focus on, as we already have them, as stated by Bostrom.

The dependence on neuroscience for creating human-level artificial intelligence is what I believe will make or break our ability to get to the human level. If we cannot understand how the brain functions, then achieving the human level of artificial intelligence is highly improbable. Once the software and speed have been created to make human-level artificial intelligence, the ability to mass-produce this artificial intelligence will probably be very easy.

The capacity to create human-level artificial intelligence may not live up to our expectations, because as speed continues to increase, as it has over time, the ability to go beyond human-level artificial intelligence could, in principle, grow without bound. With that in mind, I agree with Bostrom that a problem could arise if artificial intelligence surpasses the human level, as we would be unable to compete with it. I also believe that if we go beyond human-level artificial intelligence, society will find itself unable to control the artificial intelligence that is evolving, especially if ethical issues become intertwined with it. Since we have not reached human-level artificial intelligence, the role of ethics seems very uncertain. I think that if we do obtain human-level artificial intelligence, we should stop there, but I doubt that level will suffice. If the speed to produce beyond-human-level artificial intelligence exists, there is no reason why someone would stop there when we have the resources to make it 'better'.

I do agree with Bostrom that human-level and beyond-human-level artificial intelligence will definitely advance technical progress in other fields, since it seems redundant to let this technology remain idle if we have the speed and software to do more. The use of artificial intelligence may not help us as much as we seem to think it will. If we have artificially intelligent 'beings' doing our work, not only in computer science but throughout society, what would we do? I do not think that artificial intelligence will take over all aspects of human life, but if we have it creating minds faster than our own, it seems as if we will be like obsolete computers that are just too slow.

The main obstacle that needs to be overcome is the software to create at least human-level artificial intelligence, since at the rate we are going we will achieve the speed of a human brain by 2019, according to Moore's Law. What really matters is not when we get to human-level artificial intelligence or beyond, but what we do with it, since I believe we will get to that level, just not by 2019.