Permanent link to this article: http://www.kurzweilai.net/meme/frame.html?main=/articles/art0230.html
Discovery Today Discussion of Machine Consciousness
Hugo de Garis, brain builder, feels the weight of a future conflict between humans and the artificially intelligent beings they have created. Sir Roger Penrose is skeptical, and Robert Llewellyn is curious. See a discussion between the three.
Discovery Today special with Hugo de Garis, Roger Penrose and actor Robert Llewellyn, discussing artificial intelligence and the future of machine consciousness (total time: 17:36)
Originally aired December 2000. Published on KurzweilAI.net July 26, 2001.
Discovery Today Video
Programme segments courtesy of Discovery Networks Europe. For further information on the Discovery Channel, visit www.discoveryeurope.com.
Mind·X Discussion About This Article:
Discovery Interview
I was somewhat disappointed by the shallowness of Penrose's and Llewellyn's points of view, and also by the lack of forcefulness in de Garis's rebuttals. Llewellyn can perhaps be excused because of what I assume is his scientific laity. However, Dr. Penrose flirts with a more learned version of that "Demon Haunted World" through his insistence that what de Garis is talking about is "just computation" and that there is "something else" involved with true intelligence and consciousness, while not attempting even to define what that "something else" is. I will return to this in a moment.
Certainly, such a short venue as a quarter-hour interview cannot begin to fully explore the depths of these issues. However, serious discussion, much of it taking place on the pages of this web site, has transcended the superficial transaction that took place among these three and their interviewer. It seemed as though none of the participants bothered to do their homework in any depth in preparation for this meeting. A detailed reading of the resource embodied by KurzweilAI.net would have better prepared them all. Discovery's man-in-the-street interviews should have informed the producers that they need not worry about having to talk down to their viewers.
Coming back to the Penrose questions: admittedly not knowing what either intelligence or consciousness really is, it seems difficult to contend that either cannot be achieved or manifested via purely computational means. Some have argued quite the contrary (the Churchlands among others, for example): that the brain is indeed a computational device, not necessarily in the "binary" computational form exhibited by today's computers, but a hybrid analog-digital mechanism. The most cogent point made by de Garis is the one about the existence proof offered by the biological brain. The more fundamental point is that there seem to be no physical laws which prohibit either intelligence or consciousness from evolving, or from being created artificially, in substrates other than biological ones.
I am familiar with Penrose's recent ventures into these areas, where he posits that certain ill-defined quantum phenomena lie at the root of consciousness and intelligence. Indeed, there may well be a quantum component to brain processes, but I do not see from his reasoning why this is either necessary or the only vehicle that would allow sentient beings to arise. If evolution has demonstrated anything, it has amply shown that there are several different solutions to any given problem. I think that the universal principles of self-organization, "nature" if you will, as embodied by the heretofore known physical laws, are powerful enough to elicit the increasing complexity necessary for the emergence of intelligence and consciousness from any "environment," carbon-rich or otherwise.
More important are de Garis's concerns about an "artilect war," or the fate of humanity in light of the ascendancy of an artilect race. My reading of Kurzweil and those of similar disposition is that there will be a "unification" of man and machine that will obviate the human-artilect schism. The two forms, biological and non-biological, will become "one" race of entities, an augmented human "mind" embodied in myriad machine incarnations. This is the best-case scenario. Bill Joy, de Garis, and many others see the pace of advance as being so rapid that at least three catastrophes can happen: 1) before a benign unification can occur, artilects will soar beyond human intelligence and destroy us for any number of reasons; the precedent of greatest weight is the human track record of what happens when less technical civilizations meet more advanced ones. 2) de Garis's "artilect war": an unbridgeable gap will develop between proponents of the new technologies required for artilect development and those afraid of them. 3) an emergence of technologies of boundless promise but also boundless danger that will spin out of control and either disrupt human civilization or destroy humanity.
Fortunately or otherwise, I see all three of these as possibilities, and more along the timelines predicted by Ray Kurzweil than in the middle of this century or the next. Perhaps a fourth possibility should be added: that unless human attitudes, and especially the attitudes of the developed nations, change regarding our environment, population growth, and the less privileged members of humanity, a more mundane but no less decisive catastrophe will befall us.
Scott Austin
Re: Discovery Today Discussion of Machine Consciousness
After further examination, Penrose does talk about the difference.
He describes these machines as not having the desire to do the "nasty things" which humans do. He then immediately drops back to fuzzy terms like consciousness and awareness.
It occurs to me that consciousness and awareness are involved in gathering information from the world, whereas desire has more to do with action - the flow of information in the opposite direction: from the mind into the world.
I remember reading Ray Kurzweil's definition of intelligence posted nearby, and, while lengthy, it centered around the capability of a machine to possess and pursue a goal, which seems here to be exactly the capability that Penrose is denying machines can possess!
Machines today are considered tools - extensions of ourselves, of our desires, and our actions. As such, we are considered morally responsible for the consequences wrought by our tools, in much the same way soldiers, slaves, or even children would be excused for wrongs done when following orders given them by their superiors (officers, masters, or parents, respectively).
("It is a poor craftsman who blames his tools")
This, I think, is the main ingredient that's missing from our machines. When they begin to reach a point where they can appear to formulate goals of their own, they will begin to be looked upon as persons. In other words, if they start doing so many unpredictable things that their guardians are unable to anticipate to the point of being UNUSEFUL - just like slaves or mutinying sailors - then they will begin to raise sentiment and power to put toward the recognition of the goals they have formulated themselves.
Even among humans, it is unpredictability that marks freedom. Even today, mass psychology has given marketers and governments the ability to manipulate people by analyzing how the human mind, in general, works. In other words, by making the human mind known, and, thus, PREDICTABLE. By knowing what makes us tick, we literally become more and more "their willing tool".
I have always speculated that AI is not an end in itself. Things like the Turing test strike me not as a test for intelligence, but as a test for humanity, based on the assumption that humans are intelligent. AI then becomes the undertaking not of creating independent beings, but of reverse-engineering the human brain. Will such knowledge only exacerbate modern problems by giving the powers that be even more predictive power that can be used to coerce and manipulate people?
Single humans are unpredictable, but, as any gambler or statistician knows, aggregates of large numbers ARE predictable.
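The gambler's intuition here can be sketched in a few lines of Python (an illustrative toy of my own, not something from the discussion): a single simulated "decision" is anyone's guess, but the average over a large crowd is tightly predictable.

```python
import random

random.seed(1)  # fixed seed so the run is repeatable

def decides_yes(p=0.3):
    """One individual's unpredictable yes/no decision (hypothetical 30% rate)."""
    return random.random() < p

# One person: effectively unpredictable.
one = decides_yes()

# A crowd of 100,000: the aggregate rate settles very close to 0.3.
crowd_rate = sum(decides_yes() for _ in range(100_000)) / 100_000
print(one, round(crowd_rate, 3))
```

The individual result varies run to run (absent the fixed seed), while the crowd's rate barely moves: exactly the asymmetry marketers and casinos exploit.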
Penrose bases his arguments on this subject on possible quantum interactions that have trickle-up effects on the decisions we make with our brains. He seems to say that it is these things that differentiate us from Machine Intelligence. It is also these things which mark us as ultimately unpredictable, and, thus, individually free.
My question then becomes: why can we not build machines with just those kinds of mechanisms?
Of course, introducing truly random inputs into a system is, by definition, making it unpredictable. We may very well then be creating free beings in such a case.
Re: Discovery Today Discussion of Machine Consciousness
You raise some very good points.
The Turing test came to mind as something I expected de Garis to voice. I'm glad at least you mentioned it.
Desire is a value judgment relating to how well something satisfies a goal or sub goal. Goal-oriented behavior can be observed in humans, animals, and maybe machines, although in the latter one might tend to say 'purpose'. 'It serves OUR purpose.' We define a machine's purpose and evaluate its usefulness and success according to how well it achieves our goals. We get on thin ice when we go further and say it has its own goals just because it doesn't behave in a way that achieves ours.
I concur with your view of the moral responsibility we have for the machines we create. We can give machines goal-oriented behavior (and goals). We can equip machines with sensory devices and provide them with methods to respond to sensory data. These machines would not need to be very intelligent to be labeled 'aware'.
The relevance of Behavioral Psychology and the work of B.F. Skinner keeps coming into these topics. Skinner showed that, as an organism is 'rewarded' for some behavior, the probability increases that the behavior will be repeated. A corollary: when that probability increases, the conditions that follow the behavior are defined as rewarding (positively reinforcing). This analysis of behavior can be carried out in the absence of a mind, consciousness, or anything not immediately observable. It does not ask those questions; it does not answer them, either. It does detect the presence, and define the existence, of knowledge and learning. Behaviorism goes a long way toward explaining that you learned to do what you do because it was positively reinforced. Extremists go beyond that, to say EVERYTHING you do is from a reward AND that you have no choice in the matter. The pigeons in the Skinner box peck the heck out of the lighted button because Skinner modified the pigeons' behavior through a system of positive reinforcement.
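Skinner's central finding -- reward raises the probability that the behavior preceding it recurs -- can be caricatured in a few lines. This is my own toy update rule, not Skinner's mathematics; the learning rate and starting probability are invented for illustration:

```python
def reinforce(p, rewarded, lr=0.2):
    """Toy operant-conditioning update: reward nudges the response
    probability toward 1.0; withholding it decays the probability
    toward 0 (extinction)."""
    target = 1.0 if rewarded else 0.0
    return p + lr * (target - p)

# Ten rewarded pecks in a row: pecking probability climbs from 0.1 toward 1.0.
p = 0.1
for _ in range(10):
    p = reinforce(p, rewarded=True)
print(round(p, 3))  # 0.903

# Withhold the reward and the behavior extinguishes again.
for _ in range(10):
    p = reinforce(p, rewarded=False)
print(round(p, 3))  # 0.097
```

Note that the model is entirely about observable response frequency, with no reference to a mind -- in keeping with the behaviorist stance described above.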
We know what made the bird tick and we manipulated its behavior! Or we like to think we did. It is just as valid to say the pigeon manipulated us into giving it food! Our own predictability becomes obvious. Thus we have casinos! Thus mass marketers use that predictability (probabilities smooth out with larger numbers, as you pointed out) to make us 'their willing tool'. This is actually a rewarding deal, so it is not like we are becoming their UNwilling slaves. Realize the masses are likewise modifying the behavior of the marketers to get what they want, too. Tread carefully when progressing from this to coercion.
Our biological, random behavior reflects both a lack of knowledge and of discipline, and it is the cause of our unreliability. It is also what prevents behaviorists from predicting specific, novel acts. The classic example is the meek clockmaker who, after 30 years of his wife screaming at him, one day takes out a gun and shoots her. Most people would not find this too odd, but behaviorists have a difficult time fitting it in. It actually takes a great amount of effort to act non-randomly.
To what degree are we free from our desires? We are ultimately free to choose what it is we want and how much we are willing to pay for it. Our predictability is completely dependent on the constancy of our desires and the ways we try to fulfill them. We may have some freedom in how we achieve our goals. We are far from stuck, because we are quite random. Our source and degree of motivation may vary from one moment to the next. We rearrange sub goals accordingly. Man is not a purely economically rational being. 'Give me liberty or give me death!'
I like computers because they ARE so predictable and dependable. I can give them a goal and rely on its constancy. I can give them instructions on how to carry out that goal and trust that they will do it THAT way. I don't want to empower a computer to reshape my world if it can capriciously change that goal. There are advantages to intelligent application of sub goals in order to achieve my super goal. If Friendliness is a super goal, I see no value in making an AI free to own an unfriendly goal for freedom's sake. Unpredictability is not a useful attribute across the board. I don't even like it in myself.
Re: Intelligence vs. Chaos
Let's say 1 in 10,000 random mutations produces a change that is actually an improvement. The rest are mostly catastrophic. In nature, this process means you would have to let roughly 9,999 organisms die for every improvement. Well, a few might grow up before dying; the others suffer immediate failure of life-sustaining processes.
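The waste implied by that rate can be checked with a quick Monte Carlo sketch (illustrative only; the 1-in-10,000 figure is the rate assumed above, not a measured one):

```python
import random

random.seed(42)  # fixed seed for repeatability

P_GOOD = 1 / 10_000  # assumed rate of beneficial mutations

def failures_before_success():
    """Count discarded organisms until one beneficial mutation appears."""
    n = 0
    while random.random() >= P_GOOD:
        n += 1
    return n

# Average over many runs: roughly 10,000 dead ends per improvement.
runs = 500
avg = sum(failures_before_success() for _ in range(runs)) / runs
print(round(avg))
```

The simulated average lands near 10,000 failures per success, confirming the back-of-the-envelope arithmetic.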
This kind of mutation is so destabilizing, I would not buy your product because it breaks down too much. It would crash more often than Windows 95.
I, myself, am an agent of mutation in my software. After writing 2 or 3 million lines of code in my career, I'm intimate with the errors I've made. 1 in 10,000 of my typos gets past the compiler. The greater-than symbol '>' differs from '<' by only a single bit in ASCII, yet it hugely changes how the software behaves. It may not be fatal; the program may just get stuck in a loop, and the selective pressures in that environment (the user) kill the process. Sometimes the program appears functional, but if the program were meant to fire all managers making more than $100,000, it might instead terminate the vast majority of the organization.
There are millions of these cases possible. The point is that random change is far more likely to create mush than to increase organization. I could categorize this strategy as blatantly stupid. Why not apply intelligence to this process?
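The manager-firing bug can be made concrete with a hypothetical sketch (names and salaries invented). In ASCII, '>' (0x3E) and '<' (0x3C) really do differ by a single bit, and flipping the comparison inverts which group survives:

```python
staff = [("Alice", 150_000), ("Bob", 80_000), ("Carol", 120_000), ("Dan", 60_000)]

def keep_after_layoff(people):
    """Intended: fire managers making MORE than $100,000, keep the rest."""
    return [name for name, salary in people if salary <= 100_000]

def keep_after_layoff_buggy(people):
    """One flipped comparison: keeps the high earners, fires everyone else."""
    return [name for name, salary in people if salary >= 100_000]

print(ord('>') ^ ord('<'))            # 2 -- the two characters differ in one bit
print(keep_after_layoff(staff))       # ['Bob', 'Dan']
print(keep_after_layoff_buggy(staff)) # ['Alice', 'Carol']
```

Both versions compile and run cleanly; only the selective pressure of use reveals which one is the mutant.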
Re: Intelligence vs. Chaos
The minute something happens, it's no longer random. It is now part of the data. When someone hits the lotto, there is nothing left to predict on that subject. The concept of randomness is tied to the future and predictability. What you can predict is not random.
As we grow more knowledgeable, the amount of randomness in our universe will steadily decrease (a form of entropy?). I don't know if this will affect the universe in any way, but it will certainly alter our lives and behavior.
Come to think of it, though, since predictability defines knowledge, which allows us to change our universe and mold it nearer to our heart's desire, it WILL alter the universe, at least in our vicinity. As anyone can see, it's already making drastic changes to the world we live on.
I have no real point here -- just some random thoughts on randomness. At least they were until I wrote them down.
Re: Intelligence vs. Chaos
John Koza et al. have a legitimately patentable approach, and I am very impressed. Thank you, Thomas, for citing their work.
I believe that with 30 years of industrious effort and investment this approach may contribute to a process of sub goal generation or process optimization in an AI application. It may be developed in a wider arena so that it can contribute to the overall project. At this time, its application is quite narrow in scope.
What makes this creative is that there is a system of generating new combinations (of instructions and routines) within a narrow scope of rules (syntax, grammar and vocabulary) to produce an effect (solve a problem--albeit a narrowly stated one).
There are a few observations this suggests. First there is a generator that involves a RANDOM (yes, I said the "R" word) selection of possible program structures from a pool of very limited vocabulary in combinations that must follow syntactic rules. Content and rules are the first layer of information.
There is a filter on the resulting combinations that provides for the first generation of candidate program (sub?) structures. This filter is the second layer of organizational information. There are other layers of similar nature, and then there is the final layer of fitness for a particular purpose. Each of these layers embodies knowledge.
The resulting programs, the time and steps it takes to produce them, and the quality of the solutions it "invents"--let's dare to call them intelligent--are a direct result of the information at the different layers of the process. Increase the information, the sophistication of the rules, the available vocabulary, and the complexity of how the vocabulary can be applied, and you get better results. Thus the suitability and success of the program solution is directly attributable to information, not chaos.
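The layered generate-and-filter scheme described above can be caricatured in miniature (a toy of my own devising, far simpler than Koza's genetic programming): a random generator constrained by vocabulary and syntax, then a fitness filter that keeps the best candidate. The target function and parameters are invented for illustration.

```python
import random

random.seed(7)  # fixed seed for repeatability

# Layer 1: vocabulary and syntax -- a generator of well-formed expressions.
OPS = ['+', '-', '*']
TERMS = ['x', '1', '2']

def random_expr(depth):
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    return '({} {} {})'.format(
        random_expr(depth - 1), random.choice(OPS), random_expr(depth - 1))

# Layer 2: a fitness filter -- closeness to the target function x*x + 1,
# scored as negative squared error over a few sample points.
def fitness(expr):
    return -sum((eval(expr, {}, {'x': x}) - (x * x + 1)) ** 2
                for x in range(-3, 4))

# Generate many random candidates, keep the fittest.
pool = [random_expr(3) for _ in range(2000)]
best = max(pool, key=fitness)
print(best, fitness(best))
```

The quality of the survivor comes from the vocabulary, the syntax rules, and the fitness function -- the layers of information -- while the randomness merely enumerates candidates, which is the point being argued here.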
Plainly, it is not the chaos that creates the intelligence. The degree of randomness bears little relation to the quality of the outcome. There are various methods of generating random numbers, and a whole branch of mathematics for evaluating how random a sequence is. Variation in the true randomness of the numbers their process uses does NOT have a role in the quality of the product, does it? What if the number generator were non-random, based on prime numbers or other series?
Koza's process is not an example of randomness giving rise to order. That process is generating organization from organization, complexity from complexity (debatably), and intelligence from itself. Its product arises without the necessity--or sufficiency--of chaos.
I feel it is a mistake to characterize Koza's effort as creating order out of chaos. I struggle with the concept of seeing any way to get order from chaos. I am even wondering how it is that I can make sense of our chaotic world, apart from the recognition of order coming from within, because that borders on subjective totalism.
Re: Intelligence vs. Chaos
The notion of exploration - in any science or realm, including time/space - is considered a thoughtform in this paradigm. So...all that exists is the moment of thought, which creates form. Your reference assumes the reality of science to determine the figures - "billions of years of chaos prior to the emergence of intelligence" - and presumes an existing objective world. This paradigm presumes nothing exists other than thought: as you think, so it is; consensus reality: I think my friend exists, he agrees with me, we walk through the world we agree exists together.
Looks like a kinda solipsistic and useless paradigm at first glance - but I think it has implications for tech, so I include it in my development efforts: the subjective organizational principles of the IA user are significant.
Aside from that conclusion, I want to make two other points. One, this is implied in "I think, therefore I am." Two, Buddhism is focused on relieving suffering; detaching one from beliefs that are based in dualistic thinking goes a long way to decreasing suffering.
Re: Discovery Today Discussion of Machine Consciousness
As I've posted above, I think intelligence and self-determination are inextricable. You can't get useful intelligence without self-determination.

I also don't think you can program friendliness as a "supergoal". As you said yourself, humans themselves are not rational, goal-oriented beings. If the goal of AI, as Turing suggested, is to emulate human beings, how can we expect our machines to be any different from us in this, or any other, respect?

Having said this, I must admit I have a lot of reading to do on Friendliness, especially E. Yudkowsky's very clear work on this subject.

Now, even if we DO manage to enforce friendliness on human-emulation machines, what is then to stop us from enforcing friendliness on the models of behaviour our AIs were designed to emulate -- HUMAN BEINGS?

This is an ethical dilemma that frightens me, and is probably why I prefer to think that self-determination and intelligence are physically, logically inextricable.

If we can reprogram machine emulations of human intelligence, then we can reprogram biological emulations of human intelligence. If it is unethical to do one, shouldn't it be unethical to do the other, especially if we attain the technology to upload back and forth between the two mediums?