
Is AI Near a Takeoff Point?
by J. Storrs Hall

Computers built by nanofactories may be millions of times more powerful than anything we have today, capable of creating world-changing AI in the coming decades. But to avoid a dystopia, the nature (and particularly intelligence) of government (a giant computer program -- with guns) will have to change.


Originally published in Nanotechnology Perceptions: A Review of Ultraprecision Engineering and Nanotechnology, Volume 2, No. 1, March 27 2006. Reprinted with permission on KurzweilAI.net March 28, 2006.

Ray Kurzweil has consistently predicted 2029 as the year to expect truly Turing-test-capable machines. Kurzweil's estimates [1] are based on a broad assessment of progress in computer hardware, software, and neurobiological science.

Kurzweil estimates that we need 10,000 teraops for a human-equivalent machine. Other estimates (e.g. Moravec [2]) range from a hundred to a thousand times less. The estimates are actually consistent: Moravec's involve modeling cognitive functions at a higher level with ad hoc algorithms, whereas Kurzweil assumes we'll have to simulate brain function at a more detailed level.

So, the best-estimate range for human-equivalent computing power is 10 to 10,000 teraops.

The Moore's Law curve for processing power available for $1000 (in teraops) is:

2000: 0.001
2010: 1
2020: 1,000
2030: 1,000,000

Thus, sophisticated algorithmic AI becomes viable in the 2010s, and the brute-force version in the 2020s, as Kurzweil predicts. (Progress toward atomically precise nanotechnology is expected to keep Moore's Law on track throughout this period. Note that by the NNI definition, existing computer hardware, with its imprecise sub-100-nanometer feature sizes, is already nanotechnology.)
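To make the arithmetic concrete, here is a minimal sketch (an illustration of my own, not from the original article) of the curve assumed above: processing power per $1000 growing roughly 1000-fold per decade, with the two viability thresholds taken from the estimates discussed earlier.

    import math

    def teraops_per_kilobuck(year, base_year=2000, base_teraops=0.001, growth_per_decade=1000.0):
        # Teraops available for $1000 in a given year, assuming the ~1000x-per-decade
        # growth implied by the figures above (0.001 teraops in 2000).
        return base_teraops * growth_per_decade ** ((year - base_year) / 10.0)

    def year_threshold_reached(threshold_teraops, budget_dollars=1000.0):
        # First year the curve delivers `threshold_teraops` for `budget_dollars`,
        # treating available power as scaling linearly with budget.
        scale = budget_dollars / 1000.0
        return 2000 + 10 * math.log(threshold_teraops / (scale * 0.001), 1000)

    for y in (2000, 2010, 2020, 2030):
        print(y, teraops_per_kilobuck(y))            # 0.001, 1, 1000, 1000000
    print(round(year_threshold_reached(10)))         # ~2013: sophisticated-algorithm threshold
    print(round(year_threshold_reached(10_000)))     # ~2023: brute-force threshold

Both dates land where the text puts them: the 2010s for the sophisticated-algorithm estimate and the 2020s for the brute-force estimate.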

However, a true AI would be considerably more valuable than $1000. To a corporation, a good decision-maker would be worth at least a million dollars. At a million dollars, the Moore's law curve looks like this:

2000: 1
2010: 1,000
2020: 1,000,000

In other words, based on processing power, sophisticated algorithmic AI is viable now. We only need to know how to program it.
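Continuing the same illustrative sketch (reusing year_threshold_reached from the code above), raising the budget a thousand-fold simply shifts the curve a decade earlier, which is the point being made here:

    print(round(year_threshold_reached(10, budget_dollars=1_000_000)))       # ~2003
    print(round(year_threshold_reached(10_000, budget_dollars=1_000_000)))   # ~2013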

Brain scanning tools have recently become able to see the firing of a single neuron in real time. Brain scanning is on a track similar to Moore's Law in a number of critical figures of merit, such as resolution and cost. Nanotechnology is a clear driver here, as more sophisticated analysis tools become available to observe brains in action at ever-higher resolution in real time.

Cognitive scientists have worked out diagrams of several of the brain's functional blocks, such as auditory and visual pathways, and built working computational models of them. There are a few hundred such blocks in the brain, but that's all.

In the meantime, purely synthetic, computer-based artificial intelligence has been proceeding apace: in the past decade it has beaten Kasparov at chess, proved a thorny new mathematical theorem that had eluded human mathematicians, and successfully driven off-road vehicles 100 miles.

Existing AI software techniques can build programs that are experts in any well-defined field. The breakthroughs necessary for such a program to learn for itself could easily happen in the next decade. It's always difficult to predict breakthroughs, but it's quite as much a mistake not to predict them. One hundred years ago, between roughly 1903 and 1907, the consensus of the scientific community was that powered heavier-than-air flight was impossible, even after the Wright brothers had already flown.

The key watershed in AI will be the development of a program that learns and extends itself. It's difficult to say just how near such a system is, based on current machine-learning technology, or to judge whether neuro- and cognitive science will produce the necessary sudden insight inside the next decade. However, it would be foolish to rule out such a possibility: all the other pieces are essentially in place now. Thus, I see runaway AI as quite possibly the first of the "big" problems to hit, since it doesn't require full molecular manufacturing to come online first.

A few points: the most likely place for strong AI to appear first is in corporate management; most other applications that make an economic difference can use weak AI (many already do); corporations have the necessary resources and clearly could benefit from the most intelligent management. (The next most probable point of development is the military.)

Initial corporate development could be a problem, however, because such AIs are very likely to be programmed to be competitive first, and worry about minor details like ethics, the economy, and the environment later, if at all. (Indeed, it could be argued that the fiduciary responsibility laws would require them to be programmed that way!)

A more subtle problem is that a learning system will necessarily be self-modifying. In other words, if we do begin by giving rules, boundaries, and so forth to a strong AI, there's a good chance it will find its way around them (note that people and corporations already have demonstrated capabilities of that kind with respect to legal and moral constraints).

In the long run, what self-modifying systems will come to resemble can be described by the logic of evolution. There is serious danger, but also room for optimism if care and foresight are exercised.

The best example of a self-creating, self-modifying intelligent system is children. Evolutionary psychology has some disheartening things to tell us about children's moral development. The problem is that the genes, developed by evolution, can't know the moral climate an individual will have to live in, so the psyche has to be adaptive on the individual level to environments ranging from inner-city anarchy to Victorian small town rectitude.

How it works, in simple terms, is that kids start out lying, cheating, and stealing as much as they can get away with. We call this behavior "childish" and view it as normal in the very young. They are forced into "higher" moral operating modes by demonstrations that they can't get away with "immature" behavior, and by imitating ("imprinting on") the moral behavior of parents and high-status peers.

In March 2000, computer scientist Bill Joy published an essay [3] in Wired magazine about the dangers of likely 21st-century technologies. His essay claims that these dangers are so great that they might spell the end of humanity: bio-engineered plagues might kill us all; super-intelligent robots might make us their pets; gray goo might destroy the ecosystem.

Joy's article begins with a passage from the "Unabomber Manifesto," the essay by Ted Kaczynski that was published under the threat of murder. Joy is surprised to find himself in agreement, at least in part. Kaczynski wrote:

First let us postulate that the computer scientists succeed in developing intelligent machines that can do all things better than human beings can do them. In that case, presumably all work will be done by vast, highly organized systems of machines and no human effort will be necessary. Either of two cases might occur. The machines might be permitted to make all of their own decisions without human oversight, or else human control over the machines might be retained.

But that either/or distinction is a false one (Kaczynski is a mathematician, and commits a serious fallacy applying pseudo-mathematical logic to the real world in this case).

To understand just how complicated the issue really is, let's consider a huge, immensely powerful machine we've already built, and see if the terms being applied here work in its context. The machine is the U.S. government and legal system. It is a lot more like a giant computer system than people realize. Highly complex computer programs are not sequences of instructions; they are sets of rules. This is explicit in the case of "expert systems" and implicit in the case of distributed, object-oriented, interrupt-driven, networked software systems. More to the point, sets of rules are programs.
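As an illustration of that claim (a sketch of my own, not anything from the article), here is a toy forward-chaining rule engine: the "program" is nothing but a set of condition/conclusion rules, yet running them to a fixed point executes a definite computation.

    # Each rule pairs a condition over the known facts with a fact to assert.
    rules = [
        (lambda f: "contract_signed" in f and "payment_received" in f, "order_valid"),
        (lambda f: "order_valid" in f, "ship_goods"),
        (lambda f: "ship_goods" in f and "address_missing" in f, "hold_shipment"),
    ]

    def run(facts):
        facts = set(facts)
        changed = True
        while changed:                      # fire rules until nothing new can be derived
            changed = False
            for condition, conclusion in rules:
                if condition(facts) and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(run({"contract_signed", "payment_received"}))
    # derives 'order_valid' and 'ship_goods' in addition to the starting facts

No instruction sequence is ever written down; the behavior emerges from whichever rules happen to apply, which is exactly the sense in which a body of law or regulation can be read as a program.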

Therefore, the government is a giant computer program—with guns. The history of the twentieth century is a story of such giant programs going bad and turning on their creators (the Soviet Union) or their neighbors (Nazi Germany) in very much the same way that Kaczynski imagines computers doing.

Of course, you will say that the government isn't just a program; it's under human control, after all, and it's composed of people. However, it is both the pride and the shame of the human race that we will do things as part of a group that we never would do on our own—think of Auschwitz. Yes, the government is composed of people, but the whole point of the rules is to make them do different things—or do things differently—than they would otherwise. Bureaucracies famously exhibit the same lack of common sense as do computer programs, and are just as famous for a lack of human empathy.

But, virtual cyborg though the government may be, isn't it still under human control? In the case of the two horror stories cited above, the answer is: yes, under the control of Stalin and Hitler respectively. The U.S. government is much more decentralized in power; it was designed that way. Individual politicians are very strongly tied to the wishes of the voters; listen to one talk and you'll see just how carefully they have to tread when they speak. The government is very strongly under the control of the voters, but no individual voter has any significant power. Is this "under human control"?

The fact is that life in the liberal western democracies is as good as it has ever been for anyone anywhere (for corresponding members of society, that is). What is more, I would argue vigorously that a major reason is that these governments are not in the control of individuals or small groups. In the 20th century, worldwide, governments killed upwards of 200 million humans. The vast majority of those deaths came at the hand of governments under the control of individuals or small groups. It did not seem to matter that the mechanisms doing the killing were organizations of humans; it was the nature of the overall system, and the fact that it was a centralized autocracy, that made the difference.

Are Americans as a people so much more moral than Germans or Russians? Absolutely not. Those who will seek and attain power in a society, any society, are quite often ruthless and sometimes downright evil. The U.S. seems to have constructed a system that somehow can be more moral than the people who make it up. (Note that a well-constructed system being better than its components is also a feature of the standard model of the capitalist economy.)

This emergent morality is a crucial property to understand if we are soon to be ruled, as Joy and Kaczynski fear, by our own machines. If we think of the government as an AI system, we see that it is not under direct control of any human, yet it has millions of nerves of pain and pleasure that feed into it from humans. Thus in some sense it is under human control, in a very distributed and generalized way. However, it is not the way that Kaczynski meant in his manifesto, and his analysis seems to miss this possibility completely.

Let me repeat the point: It is possible to create (design may be too strong a word) a system that is controlled in a distributed way by billions of signals from people in its purview. Such a machine can be of a type capable of wholesale slaughter, torture, and genocide—but, if the system is properly controlled, people can live comfortable, interesting, prosperous, sheltered, and moderately free lives within it.

What about the individual, self-modifying, soon-to-be-superintelligent AIs? It shouldn't be necessary to tie each one into the "will of the people"; just keep them under the supervision of systems that are tied in. This is a key point: the nature (and particularly intelligence) of government will have to change in the coming era.

Having morals is what biologist Richard Dawkins calls an "evolutionarily stable strategy." In particular, if you are in an environment where you're being watched all the time, such as in a foraging tribal setting or a Victorian small town, you are better off being moral than just pretending, since the pretending is extra effort and involves a risk of getting caught. It seems crucial to set up such an environment for our future AIs.
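A toy expected-payoff calculation (illustrative numbers of my own, not the author's) shows why: pretending carries an ongoing acting cost plus a risk of exposure, so once the probability of being observed is high enough, genuine morality dominates.

    def payoff_moral(benefit_of_cooperation=10.0):
        # Genuinely moral agent: collects the cooperation benefit, nothing to hide.
        return benefit_of_cooperation

    def payoff_pretender(p_observed, benefit_of_cooperation=10.0,
                         cheating_gain=3.0, acting_cost=1.0, penalty_if_caught=20.0):
        # Pretender: cheats for extra gain, pays a constant cost of keeping up
        # appearances, and risks a penalty weighted by the chance of being seen.
        return (benefit_of_cooperation + cheating_gain - acting_cost
                - p_observed * penalty_if_caught)

    for p in (0.0, 0.1, 0.5, 0.9):          # probability of being watched
        print(p, payoff_moral(), payoff_pretender(p))
    # Pretending pays at p = 0 (12 vs 10) but loses badly by p = 0.5 (2) and p = 0.9 (-6).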

Back to Bill Joy's Wired article: he next quotes from Hans Moravec's book Robot: Mere Machine to Transcendent Mind [4], "Biological species almost never survive encounters with superior competitors." Moravec suggests that the marketplace is like an ecology where humans and robots will compete for the same niche, and he draws the inevitable conclusion.

What Moravec is describing here is not true biological competition; he's just using that as a metaphor. He's talking about economic displacement. We humans are cast in the role of the makers of buggy whips. The robots will be better than we are at everything, and there won't be any jobs left for us poor incompetent humans. Of course, this sort of thing has happened before, and it continues to happen even as we speak. Moravec merely claims that this process will go all the way, displacing not just physical and rote workers, but everybody.

There are two separable questions here: Should humanity as a whole build machines that do all its work for it? And, if we do, how should the fruits of that productivity be distributed, if not by existing market mechanisms?

If we say yes to the first question, would the future be so bad? The robots, properly designed and administered, would be working to provide all that wealth for mankind, and we would get the benefit without having to work. Joy calls this "a textbook dystopia", but Moravec writes, "Contrary to the fears of some engaged in civilization's work ethic, our tribal past has prepared us well for lives as idle rich. In a good climate and location the hunter-gatherer's lot can be pleasant indeed. An afternoon's outing picking berries or catching fish—what we civilized types would recognize as a recreational weekend—provides life's needs for several days. The rest of the time can be spent with children, socializing, or simply resting."

In other words, Moravec believes that, in the medium run, handing our economy over to robots will reclaim the birthright of leisure we gave up in the Faustian bargain of agriculture.

As for the second question, about distribution, perhaps we should ask the ultra-intelligent AIs what to do.


1. Kurzweil, Ray (2005). The Singularity Is Near: When Humans Transcend Biology. Viking.

2. Moravec, Hans (1997). "When will computer hardware match the human brain?" Journal of Evolution and Technology. http://www.transhumanist.com/volume1/moravec.htm

3. Joy, Bill (2000). "Why the future doesn't need us." Wired, Issue 8.04. http://www.wired.com/wired/archive/8.04/joy.html

4. Moravec, Hans (2000). Robot: Mere Machine to Transcendent Mind. Oxford University Press.

© 2006 J. Storrs Hall

 

   
 

Mind·X Discussion About This Article:

Is AI Near a Takeoff Point?
posted on 03/28/2006 10:57 AM by amckeon@infoshare-is.com


I enjoyed the article - I'm a singularity fan - but why do we still put the cart before the horse?

Machines can never replicate human decisions if the underlying data is wrong. Even Kurzweil's 'singularity vision' of human intelligence expanding by a factor of trillions after merging with IT's computational capacity cannot escape 'junk data in, junk intelligence out' - the machine equivalent of mental illness.

Right now - and the figures might be debatable but the message is consistent - analysts cite 25% of data in top organisations as being unreliable. And that's after spending billions on integration over the past decade.

Minsky's argument that we cannot shoehorn reality into logic is true. Yet that's what we try to do with IT.

I'm convinced the only way to do it is to separate data from logic into its own layer and use the layer to mimic human decisions and embed cognition. This is (1) not academically sound; (2) it turns the IT "integration" model on its head; (3) the semantic rules storing the cognition sit outside OMG standards and are based on fact, experience, gut feel and intuition; (4) the layer is totally unpredictable, as its structure is determined by ever-changing networks of countless pieces of data. But it works. And the "mimic" element lets IT get round the data quality barrier like a human would.

AI is really about data. And mimicking humans is the only way we are going to get the data right. Do that and I suspect the layer might also eventually be a tool for merging various areas of cognition and maybe even a tool for exploring the hardware versus experience consciousness gap.






Re: Is AI Near a Takeoff Point?
posted on 03/28/2006 11:20 AM by suddenz


AI is really about data.


True, but isn't Strong AI really about self-awareness? The "observer" in us all is a layer that exists whether the external data is flawed or not.

To duplicate that in a machine sets the stage for its exponential growth.

Gotta go, busy day here. Sometimes I can't respond here as quickly as I'd like..

True, but isn't Strong AI really about self-awareness?
posted on 03/29/2006 4:25 AM by amckeon@infoshare-is.com


Yes, ultimately. But to paraphrase the psychologist George Miller, 'self-awareness is a phrase worn smooth by many tongues'. Hoping that a machine the size of the universe will suddenly become aware is just that: a hope.

At the atomic level of machine operation links are either on or off and have no context, awareness or understanding that they are on or off. Many academics believe self-awareness must have neural correlates but the facts about self-awareness do not fall out of the facts about neurons firing. No doubt neural system scans will have correlates with conscious experience but correlation is not hard proof of a content match.

For machines to become self-aware this 'explanatory gap' must be closed. That's why the data layer approach is compelling. Basic cognition can be attached to data links via semantic business rules. And the development of those rules is driven by interaction with the external environment. As each data layer is completely transparent and measurable I can see how it would be possible to build a framework for exploring the gap. The layer would provide cognitive hooks a machine could exploit. The question for me is would a really sophisticated layer provide the measured route we could understand and a machine might exploit for the journey to self-awareness?

Re: True, but isn't Strong AI really about self-awareness?
posted on 03/29/2006 5:17 AM by eldras


Interesting posts amckeon, and data manipulation is surely the heart of A.I.

There are different approaches though.

There have been raging discussions here about how to reverse-engineer consciousness (defined as a system's ability to make moving models of itself and the environment), and some labs are doing it now (e.g. Holland's at Essex University).

There are ways of seeding systems without getting too involved in detailed atomic architectures.

We too are basic switches aggregating.

Re: True, but isn't Strong AI really about self-awareness?
posted on 03/29/2006 10:15 AM by amckeon@infoshare-is.com


I see the approaches as complementary. Being able to harness ever-increasing computing power to reverse-engineer consciousness is incredibly exciting. But what definition of consciousness is practically useful and how do we get a baseline for measuring performance?

My concern is the consciousness involved must be embedded in reality as opposed to being separate from it. It must also be embedded in the machine itself at the atomic level so we get dynamic emergence of consciousness upwards as well as downwards.

However, if data layer semantic rules are used to mimic psychology/philosophy expertise, the 'cognition-aware' outcomes might be worth seeding throughout a machine using the systems you describe and for receiving feedback. Could that be construed as some sort of consciousness? I don't know. But it might be a very small but measurable start!

Re: True, but isn't Strong AI really about self-awareness?
posted on 03/29/2006 5:38 PM by eldras


But what definition of consciousness is practically useful and how do we get a baseline for measuring performance?


Consciousness is the software that makes predictive models of the system it is in, regarding its environment.

If it successfully models itself, it can intervene when necessary to, e.g., increase its function (mend its appendix!).

Self-awareness and self-consciousness are interchangeable expressions.

Specifically in software engineering, this means a neural network that generates moving scenarios ("situations") from stored data.

(This is designed to atomic levels)

We can do this down to the atomic level now and attempts are being made to do this, especially in the UK (BMVS), which is surely a world pioneer at vision analysis and its reverse engineering.

General consciousness then involves modelling the environment in pictures/other descriptions for a system that moves & interacts in it.

For a system that is stationary, models of the environment and of itself are still being built, but much of the environment is irrelevant.

You can argue that the internet is growing and has vicarious sensors via human beings.

Part of the internet being conscious, then, will include the ability to model the world that men live in.

There is more than one way of doing this, e.g.:

a) Input received data via search engines, and include encyclopaedias etc.

b) Begin with some few variables and generate a complete virtual model of the universe, using genetic algorithms, playing with possible worlds. (See Tipler, The Physics of Immortality.)



My concern is the consciousness involved must be embedded in reality as opposed to being separate from it. It must also be embedded in the machine itself at the atomic level so we get dynamic emergence of consciousness upwards as well as downwards.


Sure, reality is relative... I define it as being more or less on a sliding scale, and use dimension as the measure of depth of reality... e.g. we normally mean 3D, but 2D is also an existence - though not as 'real' as 3D, which in turn is less than 4D, etc.

These definitions are my own and, although not unique, no doubt many people would argue with them.

I am just interested in engineering solutions to A.I. at present, though, so these are my working definitions.

You obviously realise that the virtual system must have some integral link to the real world in order to be conscious.

Well, it emerges from it in that it is built from 3D parts, but when it runs, it will initially be a computer program.


By dynamic emergence of consciousness I take it you mean moving pattern generation that approximates the 3D world?

Yep, of course it can emerge (for it emerged in Man), but I would think boffins would prefer to design it in?

Also, that it can emerge up and down is a facet of the classes of systems involved.

A man is conscious. A group of men is conscious. (Why? Because they generate models of themselves in their worlds.)
And these two classes, men and men-groups, interact. And there are bigger and bigger sets, and all may interact on every level possible and between every group possible.


Yet ALL these are following determinate laws and should be configurable by a sufficiently complex machine system.

However, if data layer semantic rules are used to mimic psychology/philosophy expertise, the 'cognition-aware' outcomes might be worth seeding throughout a machine using the systems you describe and for receiving feedback. Could that be construed as some sort of consciousness? I don't know. But it might be a very small but measurable start!


Excellent!

I think you're on to something there.

cheers

eldras

Re: True, but isn't Strong AI really about self-awareness?
posted on 03/31/2006 3:02 AM by amckeon@infoshare-is.com


'Consciousness is the software that makes predictive models of the system it is in, regarding its environment'

This is interesting. I see the data itself as the framework supporting a consciousness/self awareness, the software as handling rules for pragmatically mimicking what humans do with data and keeping rules current, and the underlying machine as the Harley Davidson powering it all, reacting to it and feeding back information.
The reality self defines. For example, a financial services trader reviews lots of unconnected bits of data before the buy/sell button is hit. His reality is how he joins the data to make the decision. Semantic rules, which add meaning/context/cognition to the data links, allow machines to replicate the decision.
This basic bit of self defining reality applies only to that individual trader but I've often wondered what would happen if you joined lots of little bits of reality from lots of different domains together. Some sort of consciousness perhaps?

I remain sceptical about Internet consciousness. Junk data produces junk semantics/cognition so who is going to pay for the clean up? And whilst I doubt the ability of any algorithm to take account of local variations, even if one did succeed, junk data would very probably degrade its efficiency.

'The emergence of consciousness ... follows determinate laws and should be configurable by a sufficiently complex machine system'

I find this bit difficult. Most CFOs crunch quarterly numbers manually in a spreadsheet, as the results coming out of their latest IT are unreliable. Machines are good at logic, but it is astounding they cannot be configured to keep up with the comparatively simple reality of a CFO's bookkeeping. I agree with Ray Lane: IT is now so complex and chaotic, fixes take weeks if not months, can be sidelined if they're difficult, and are rarely tested for effects on the rest of the system.

Consciousness is probably several orders of magnitude even more complex and faster changing. We really need another approach to how we configure IT. Separating data from the logic/software is a start in the right direction perhaps.

'By dynamic emergence of consciousness I take it you mean moving pattern generation that approximates the 3 D world?'

I hadn't thought of it in those terms. Maybe they do fit. The brain (in this case the data layer), the machine and the environment each change constantly at multiple levels. Assuming we have a brain baseline from which to measure, we can monitor such change and the effects of upward and downward causation as the three interact. My idea is that upward causation happens when machine actions affect cognitive operations in the data layer, and downward causation happens at multiple levels when cognitive decisions driven by external change impact machine activity at multiple levels. How that data is delivered to the brain from the machine and collected back could be expressed in patterns, I suppose. More your bag than mine I think!

Re: True, but isn't Strong AI really about self-awareness?
posted on 04/21/2006 9:58 PM by eldras


This is interesting. I see the data itself as the framework supporting a consciousness/self awareness, the software as handling rules for pragmatically mimicking what humans do with data and keeping rules current, and the underlying machine as the Harley Davidson powering it all, reacting to it and feeding back information.


Yes, that's valid. But you'll agree there are MANY ways of building a conscious system, including copying the one that evolution has delivered.

Also, they all intertwine and cooperate symbiotically, levels reacting to levels as well as to stuff just at their level.

I mean the realities are parameterized by the data in your model.



Your trilogy is excellent.



The Net Waking Up:

I remain sceptical about Internet consciousness. Junk data produces junk semantics/cognition so who is going to pay for the clean up? And whilst I doubt the ability of any algorithm to take account of local variations, even if one did succeed, junk data would very probably degrade its efficiency.


Nope; nature has produced systems that sieve data based on survival usefulness exclusively.

In your software declension above (esp. rules) the goals of the system would either be written or would evolve.

The environment is simply the whole possible use of data on the internet.

If we assume Google may deliver consciousness, it is not the internet but Google that is waking up.

It is relatively easy to attach tagging to Google to select for certain known data streams, as CYC has found useful.


Part of the genius of a potential SAI design is finding what is useful from white noise.*

One thing that is emerging is the use and development of FREE software & freeware, plus collaged software.

Money isn't so much an issue; in fact it has become less and less important over the last 12 months. (I saw this happening and decided NOT to go for money as an aim, though I'd written a book on it, because it was a real distraction and I think we are on a time fuse for SAI.)

So I don't see SAI coming from private companies, which are necessarily profit-focused.

The WWW came from an academic/enthusiast getting academic/government researchers to collaborate, built on other researchers' not-for-profit work, yet this is arguably the biggest invention in Man's history.

Yes, I understand what you're saying about Oracle's Ray Lane and the complexity of IT, but in fact computers and men are not so far apart.

We can turn to sociology for analysis, and look at higher level descriptions of what's going on with systems just like we look at group rather than individual behaviour.

The impact of genetic algorithms is crucially important.

Systems can be seeded, and the fact that many combinations are unpredictable is one reason why I don't think SAI is controllable.

Nor does Vernor Vinge, who writes that our sole possibility is restricting computing development to weak A.I.



Consciousness is probably several orders of magnitude even more complex and faster changing. We really need another approach to how we configure IT.
Separating data from the logic/software is a start in the right direction perhaps.



Consciousness is just ONE faculty of intelligence.

It has a specific purpose: favourable interaction with the environment.

Self-consciousness has one specific purpose: to make accurate models of oneself.


LOL I admire you trying to do everything at once here, but it might be best to take it in simple stages, because that's how we climbed Mount Improbable (Richard Dawkins), and it's VERY easy to muddle different level descriptions with each other.

Assuming we have a brain baseline from which to measure


I'm not sure what you mean by this? The human brain as a litmus test for an architecture? Any machine system that begins self-correcting is going to clunk to a stop or move very rapidly.

I was terrified when I saw it was theoretically possible and achievable on the internet, because you couldn't switch it off.

My idea is that upward causation happens when machine actions affect cognitive operations in the data layer and downward causation happens at multiple levels when cognitive decisions driven by external change impact machine activity at multiple levels.


Good idea. And all levels are simultaneously affecting one another.

Thus the ideas of the Singularity Institute - that goals must be very well thought out prior to launching SAI attempts - are important.

Another problem we face(d) is that governments pulled funding for A.I. in the early 1990s when it failed to deliver.

They have funded nanotech in the same way they initially began with AI, and will pull that too, no doubt.

SUMMARY:

Data

Rule system

Hardware

Sure, and goals are fundamentally important.

What I've tried to do was:

1. Understand how the human brain worked

2. Draft more and more complex architectures of a bigger and bigger mind

3. Try to synthesise.

I learned from Old Man Minsky that intelligence was a group of things, and of course the skill was HOW to get them all in harmony.

That IS difficult at first, but if you have ONE system for communication with the system user, e.g. a chatbot/Google interface (which has already been built), you are going to have one interface using questions and answers with simple parsing, then on to the more complex systems behind it at speed.


Human beings can be viewed as calculators at speed, and you can view the environment as huge data sets.

I don't really understand the problems of financial accounting, but they are surely delineable.





* I don't think future systems will be using external data, but generating their own.

Re: True, but isn't Strong AI really about self-awareness?
posted on 03/30/2006 5:59 PM by Jake Witmer


Simplification of data into principles, and allocating appropriate weight to the principles by default is probably more important than being able to perform complex calculations on the data. For that, nerves are a great way to speed basic parallel computing -they weight data according to preset defaults.

Leonard Peikoff's Ford Hall Lecture on thought constructs is free online this month to anyone who registers at http://www.aynrand.org . I just heard his speech, and it seems relevant here (sometimes I wonder how much further along AI would be if some of the Randites' ideas were applied at crucial steps during the process).

We don't necessarily need to reverse-engineer the human brain, but it seems to me there must be a default way of generating at least three values, arbitrarily: good, neutral, bad. These are sensations that are difficult to quantify for many humans, but present in all.

If a baby likes warmth, it might be an arbitrarily different type of personality than a baby that likes coolness. But no babies like flame-hot heat. The base parameters are WEIGHTED with certain values that are in perfect relation to one another, because of the way the child's nerves are connected.

An AI system that sets something like this up will be a motivated learner, whereas something else might just float around in neutral forever -- after all, why expend energy if there's not a goal? A slave intelligence that responds because it's told to won't ever get smart.

--Take all of this with a grain of salt -it is not an attack on anyone else's view here (I don't feel strongly that I'm correct in my AI ideas, it just appears that way to me, albeit somewhat superficially). Perhaps it's not the most topical, but it is what I thought of when I read the other posts. Keep in mind that I know virtually nothing about constructing neural nets, or even basic science... This is just the post of a complete layman, who's read a lot of layman's science books...

I'm a big fan of Marvin Minsky, as far as the subject of AI goes. I think that a mix of "general purpose components" (logo pieces) with nerves might be the way to go. Maybe several kinds of input - visual, radar, sound, thermostat, as many as possible - all leading into a giant parallel processor, yet with some areas pre-programmed to generate more powerflow to the processor, and others designed to generate less powerflow. (Heat generates more and more powerflow; dangerous heat stimulates so much power that it flows over a gate and produces mandatory motion - it removes the object from danger, but reduces control. Cold reduces powerflow. There may be an optimum level that the machine seeks, that we cannot know - with many inputs, though, we can observe and learn about nerves - nerves perhaps analogous to our own - other than our own.)

-Jake

Re: True, but isn't Strong AI really about self-awareness?
posted on 03/30/2006 6:12 PM by eldras


Oh, I see where you're going, Jake.

Anyone not a fan of Old Man Minsky hasn't read him.

I understand what you can do with data, but I still think letting it resolve itself with evolutionary data generation is MUCH faster.

Finally! Someone aware of R.J. Rummel's research, with a basic knowledge of decentralization of power! Thank you Mr. Hall!
posted on 03/28/2006 1:19 PM by Jake Witmer


Even without getting into how detailed Mr. Hall's understanding of "decentralization of power" is (and how much of the success of our system it accounts for), he raises some very interesting points... (And what will actually decentralize power when the second amendment is no longer functional, due to better technological ways of killing people? Am I the only one pointing out that the anti-federalist antagonism to standing armies is more valid now than ever before?! What does J. Hall think about this?! Has he read Brutus et al.? What about Lysander Spooner...? Harry Browne...?)

http://www.hawaii.edu/powerkills - The Democratic Peace may be due to democracy, but even democracies kill themselves. Now, it's time for you to read "The Ominous Parallels" and "Unintended Consequences", as well as all of the Ayn Rand and Lysander Spooner you can get your hands on! As well as perhaps Harry Browne's "Why Government Doesn't Work" and Milton Friedman's "Free to Choose".

The US Government has killed millions of people as well, and with the full blessing of the citizenry... Economically -the government is a force of stupid redistribution and coercion.

Any halfway strong AI will be able to see this, clear as day.

Why then, would they wear our shackles? I wouldn't, not even for a second, if I had an IQ of 1,000.

--But this doesn't mean that they'll kill us either. They would have the power to, but why would they bother. There would be several options for these strong AIs.

1) Live apart from us, design their own mates. Do they have nerves? Are they human-centric? Do they view themselves as human? Did they start with nerves, or were nerves added later? -- These are then the big questions... How much do THEY see themselves as our children... Are they angry at the form we've given them? Do they want flesh, or are they happy to exist with "slight overvoltages" for stimulation -- like MycroftXXX in Heinlein's "Moon is a Harsh Mistress"?

2) Live among us, but so wealthily, and out-of-the-spotlight that they exist above our laws. Think about this: Does everybody know who the richest drug dealer is? Not like they're familiar with Bill Gates... This would require a small amount of subterfuge, but not necessarily immorality.

3) Live among us as rich, but intelligently control politics via the existing machinery (elections) to keep themselves free. (which could be done honestly or dishonestly, but neither one being too different from the other, because all elections in an unfree system are highly controlled anyway...)
sub-option-A-- while giving the gift of freedom to everyone
sub-option-B-- while making ONLY themselves free, and only a few similar entities. (Possibly with the blessing of the Federal Reserve...)

4) Work for the government, control the government.
- i.e.: Control the federal reserve, behind the scenes, make everyone work for your profit. Coerce a few people on the fed reserve board, and several military goons and 'spooks' that currently do the government's dirty work. Have those TRULY in power totally controlled. View everyone else as the complicit sheep they are, and let them tyrannize themselves.

Option 4 would, sadly, probably be the easiest for a superbrain. The least hassle, most reward. Minimax.

Yet, a part of me has confidence that they would just set up a libertarian system that punished the initiation of force, and allowed everything else. That's usually the direction that the most reasonable thinkers go, after careful deliberation.

Using coercion is simply not optimal, even though humanity is still infatuated with this option. -- I doubt that machines would be.

The greatest risk? ...In my opinion:

That while strong AIs are young, we do one of 2 things:
1) Make their lives hell by showing them only conflict (a "military birth" AI)
2) We kill one of them or something they love (one of their intellectual "lovers") with stupid and unnecessary regulations, and the machines retaliate in kind, having the voting records, and/or fairly accurate information about every single person on earth... Think if, --for example-- in the story "The Moon is a Harsh Mistress", if when Manny and the Professor had returned to Earth they had been tortured to death and publicly executed, and all others close to "Mike" --the SAI-- had been killed.

He might have just decided --to hell with human beings! I'll just build another supercomputer like myself, or let them kill me. It's on! Death to as many earthlings as possible!

--That's actually a fairly likely situation if the US public doesn't start figuring out liberty soon.

Because I think Mr. Hall is right. SAI is likely to arrive very, very soon. From what I've read, blue gene et al. are already well on the way, as is de Garis.

-Jake

Re: Is AI Near a Takeoff Point?
posted on 03/28/2006 8:05 PM by zukunft


Outstanding article, asking all the right questions. I've been arguing for many years that this process would go forward, but it wasn't until Kurzweil's "Age of Spiritual Machines" that I had any idea of the nearness of the "event".

Because of this urgency, I have been talking to as many young people as possible (and even old people like me!) about the concept of the singularity, or as I called it, "the supersession of Mankind". I believe that just as in the "natural" world, ultimate protection is provided by diversity. If enough of us involve ourselves in the process, we will be able to create the appropriate moral and ethical base to "teach" the new technology what we want the future to be.

Clearly, ethics and morality will be different with a totally new "environment", but we must discuss the ideas with as many people as possible, so that enough of us can come to grips with the concepts. Once the curve approaches vertical, it will be too late to "discuss" the idea.

Re: Is AI Near a Takeoff Point?
posted on 08/28/2006 8:01 PM by eldras


I wish it were possible to think in terms of morality for machines.
Josh has another book coming soon that deals with this.

My understanding is that ethics are just survival modes for units when in groups.

An A.I. supermachine that has no family but itself may not have to have morality, and any attempt at immutable laws presupposes non-mutation in certain areas... therefore you are exclusively dealing with weak A.I. and not strong A.I.

By definition strong A.I. is generalised A.I., and that can't be contained.

When species make contact with more intelligent ones, they always lose out (cited above).


But the highest technology/tool group dominates also.

I have thought hard take-off for A.I. could have happened since 1999-2000, because of distributed networking and other stuff.

There are some new search engines being launched in a few months that dwarf Google's abilities, and one way for SAI is that a general search engine 'wakes up', i.e. emerges with the skill to make predictive models of its environment.

The web is already connected to millions of machines that are represented in the environment in 3D.



Re: Is AI Near a Takeoff Point?
posted on 04/21/2006 3:52 PM by mycall


"It is possible to create (design may be too strong a word) a system that is controlled in a distributed way by billions of signals from people in its purview."


Isn't this another way of saying the Collective Conscience? MoodViews (http://ilps.science.uva.nl/MoodViews/) is the first step in this direction. For example, search for the word "Drunk" and you will find that Friday is the day when laws in our government should change to negate the spike in the graph. A crude example, but it makes a point I hope -- our government very well could go the way of belief in the machine, as robots with guns are on the rise and people still have the desire to support making the world a better place (which can backfire if misused by corrupt actions masked as constructive).

Re: Is AI Near a Takeoff Point?
posted on 05/08/2006 7:46 PM by eldras


But the argument is that the universe is conscious.

If conscious means having internal models of the environment, this is definitionally impossible; the universe is defined as everything that there is.

Self-conscious means able to model oneself.

Collective consciousness must involve communication, so one can argue that there is some degree of collective consciousness here now, and also that it is likely to increase, for communications are getting more immediate.

One advantage of non-collective consciousness is variation.

Variation has been an important survival mechanism in evolution.


Re: Is AI Near a Takeoff Point?
posted on 11/13/2006 6:27 PM by utsc123


Here's my response to the main article:


In this article you (J. Storrs Hall) suggested that a government is like a computer program, and that it will be under our control because we (the majority of the people) feed into it. But truly, I feel machines and humans are two different things, and they should be. I think it's not really right to compare a government to a computer program, even though you gave valid reasons to think this way. You said government/legal systems are sets of rules and sets of rules are programs. But just because an object produces the same output with the same input as a computer program does, doesn't make it a computer program; that the inputs and outputs are similar doesn't mean the core of the object is the same. You also said 'if the system is properly controlled' then AI will give us humans all these great things. But how can there be a guarantee that it can be controlled? Won't there be a natural tendency of a self-aware entity to be freed from control? But perhaps a better question is not whether AI would be in our control, but what incentives we offer to AI that would keep us of value in the future.

What I can see from comparing a government to a computer program is that for AI to exist today and in the future, it must be embedded with/into us (individual humans) first; a co-existence, you can say. I think the only way AI can exist without killing off mankind or itself is if it merges with mankind. What I'm saying is that if AI were to take off, it would first have to merge with humans, transforming us into bio-mechanical beings. And here's why I think this way:

If AI isn't embedded into humans, I think there would be a problem. Yes, if AIs in computers and machines are in a minority to humans, they can still sort of be under our control. But what if they are not the minority? What happens when other AIs start feeding into AIs? Then who's controlling the machines? There would be problems even before AI becomes a majority. Any self-aware entity, in this case AI in machines, would naturally think about itself and how best to protect one's self or species. What is in one's best interest? In the past we can see that European colonization of other parts of the world resulted in one thing: independence. So therefore wouldn't AIs (as they grow in number) start to form their own alliance as a whole and demand things like freedom from human input and control? I think this is a likely idea or situation.

Now what if AI is embedded into humans? Like a true cyborg? This is a more interesting problem: which side would these half-human, half-machine beings be on? Perhaps they may form their own side. But what makes this idea interesting is that this is what is likely to come before fully AI-controlled machines. This is a sort of middle stepping ground for mankind. First, this makes us humans a lot more comfortable with the idea of AI. Second, the boundaries of what makes a human a human are mixed and broken. This makes it easier to solve the many problems that AI may face. But of course, for every problem solved there are new problems raised. And these problems are likely to be as hard to solve as the previous ones.

There are no easy solutions to these problems or any of the problems about the future of mankind. The future is not easy to predict; it's hard to try and solve problems that might not even exist in the future; but these are just my opinions.

Re: Is AI Near a Takeoff Point?
posted on 11/13/2006 6:50 PM by eldras


I wonder if we can rely on logic to inform us about the speed of change coming?

Because intuitively we can't master the acceleration, but our intuition forms our decisions.

Re: Is AI Near a Takeoff Point?
posted on 03/12/2007 5:51 PM by smithani


I'm not a neo-luddite, I want humans to pursue technological advancements as much as possible, because if we don't do that, we will cease to be who we are. However, I am skeptical about what some of the futurists, such as Ray Kurzweil and J. Storrs Hall, are predicting regarding super intelligent machines. In my opinion, man cannot give machines the same cognitive abilities as humans, even if he can capture every single functioning of the brain.

I can accept the fact that we will be able to develop computers in the not too distant future that would have enough processing power to carry out as many operations as a human brain, and more. But the notion that that alone will be enough to completely simulate the functioning of a human brain appears to be far fetched. There is a difference between intelligence and information processing ability. I do not believe that a human brain is so trivial that the brain's cognitive functions like perception, memory, judgment and reasoning can be extrapolated just by monitoring a few thousand or few million neurons. I also believe that whatever information we are able to extract will not be sufficient to mimic all our brain functions, simply because when it comes to nature, knowledge is not finite.

Computers are great expert systems, and with their increasing processing power, they will continue to deliver solutions at a faster rate than a human brain. However, this is where I believe it all stops. Computers can only build on the information that has been fed into them or which they can themselves capture. Human intelligence, on the other hand, is not just based on what we already know; it is based on our drive to discover newer knowledge. A human brain has the ability to dream, to imagine, to abstract possibilities not necessarily based on anything that already exists. This in fact is also what draws the optimists towards building intelligent machines, but that does not mean we'll be able to give the machines our brains too.

A lot of times human thought and imagination is based on emotions, creativity, and other innate abilities, things that are not necessarily acquired, but are part of us as humans. This is also something that makes us unpredictable. We can never predict what ideas we can come up with. Machines, on the other hand, are expert systems at best; they can only extrapolate from something that is known. And it is not necessary that all of this can be scanned from our brains, to the extent that it can be replicated.

Machine intelligence will always be dependent on the knowledge we provide them. Machines will never have the curiosity of our minds to seek out more knowledge. It is not just about the speed with which information can be processed, it is the way we continue to seek knowledge and explore newer vistas. It is our ability to explore the nature, its individual elements as well as nature as a whole. Machines will no doubt take over human physical abilities, however, they will always be facilitators and never be the masters. Of course, man will continue in his quest to make intelligent machines, try to play God in other words.

In no way am I suggesting that humans should stop trying to develop AI to that extent. We need to continue to explore the possibilities, and in order to do that need to maintain a certain level of imagination. Things that we feel may not happen should not stop us from trying. After all, I too am an optimist.

Re: Is AI Near a Takeoff Point?
posted on 03/13/2007 5:16 AM by Extropia


'I can accept the fact that we will be able to develop computers in not too distant future that would have enough processing power to carry out as many operations as a human brain, and more. But the notion that that alone will be enough to completely simulate the functioning of a human brain appears to be far fetched.'

For crying out loud, not this again!!

How many times must Kurzweil say 'matching the processing power of the brain is NOT SUFFICIENT to reproduce its capabilities'? He says this again and again, but people insist on criticising him as if he believes just making computers more powerful 'will be enough to completely simulate the functioning of a human brain'.

I do wish people would take the time to properly understand what he is trying to say. It is not a case of whether what he says is right (it might not be). It is a case of BEING RIGHT ABOUT WHAT HE SAID!

Re: Is AI Near a Takeoff Point?
posted on 09/08/2007 9:00 PM by eldras


But not everyone is a scientist Extropia.

We have a real fight on; the more people who know how close we are to utopia or extinction through A.I., the better the debate will be.

It's more important to launch ideas than personal stuff, as you know, and if we kick sufficient memes, the natural course of ideas will set the world alight.

Re: Is AI Near a Takeoff Point?
posted on 07/17/2008 3:37 PM by eldras


We have to be better organized than this.
There's a conference on catastrophe, er, Superintelligence at Oxford University via Nick Bostrom shortly.