Permanent link to this article: http://www.kurzweilai.net/meme/frame.html?main=/articles/art0230.html

    Discovery Today Discussion of Machine Consciousness
by Discovery Today
Hugo de Garis

Hugo de Garis, brain builder, feels the weight of a future conflict between humans and the artificially intelligent beings they have created. Sir Roger Penrose is skeptical, and Robert Llewellyn is curious. See a discussion between the three.


Discovery Today special with Hugo de Garis, Roger Penrose and actor Robert Llewellyn, discussing artificial intelligence and the future of machine consciousness (total time: 17:36)

Originally aired December 2000. Published on KurzweilAI.net July 26, 2001.

Discovery Today Video

Programme segments courtesy of Discovery Networks Europe. For further information on the Discovery Channel, visit www.discoveryeurope.com.

Mind·X Discussion About This Article:

Discovery Interview
posted on 07/30/2001 4:22 AM by alansaustin@home.com

I was somewhat disappointed with the shallowness of Penrose's and Llewellyn's particular points of view. I was also disappointed by de Garis's lack of forcefulness in his rebuttals. Llewellyn can perhaps be excused because of what I assume is his scientific laity. However, Dr. Penrose flirts with a more learned version of that "Demon-Haunted World" through his insistence that what de Garis is talking about is "just computation" and that there is "something else" involved with true intelligence and conscienceness, while not attempting to even define what that "something else" is. I will return to this in a moment.

Certainly, such a short venue as a quarter-hour interview cannot begin to fully explore the depths of these issues. However, serious discussion, much of it taking place on the pages of this Web site, has transcended the superficial exchange that took place among these three and their interviewer. It seemed as though, in preparation for this meeting, none of the participants bothered to do their homework in any depth. A detailed reading of the resource embodied by KurzweilAI.net would have better prepared them all. Discovery's man-in-the-street interviews should have informed the producers that they need not be worried about having to talk down to their viewers.

Coming back to the Penrose questions: if, admittedly, we do not know what either intelligence or conscienceness really is, then it is difficult to contend that either cannot be achieved or manifested via purely computational means. Some have argued quite the contrary (the Churchlands among others, for example): that the brain is indeed a computational device, not necessarily in the "binary" computational form exhibited by today's computers, but a hybrid analog-digital mechanism. The most cogent point made by de Garis is the one about the existence proof offered by the biological brain. The more fundamental point is that there seem to be no physical laws which prohibit either intelligence or conscienceness from evolving or being created artificially in substrates other than biological ones.

I am familiar with Penrose's recent ventures into these areas, where he posits that certain ill-defined quantum phenomena lie at the root of conscienceness and intelligence. Indeed, there may well be a quantum component to brain processes, but I do not see from his reasoning why this is either necessary or the only vehicle that would allow sentient beings to arise. If evolution has demonstrated anything, it has amply shown that there are several different solutions to any given problem. I think that the universal principles of self-organization, "nature" if you will, as embodied by the heretofore known physical laws, are powerful enough to elicit the increasing complexity necessary for the emergence of intelligence and conscienceness from any "environment," carbon-rich or otherwise.

More important are de Garis's concerns about an "artelect war," or the fate of humanity in light of the ascendancy of an artelect race. My reading of Kurzweil and those of similar disposition is that there will be a "unification" of man and machine that will obviate the human-artelect schism. The two forms, biological and non-biological, will become "one" race of entities, an augmented human "mind" embodied in myriad machine incarnations. This is the best-case scenario. Bill Joy and de Garis, and many others, see the pace of advance as being so rapid that at least three catastrophes can happen: 1) before a benign unification can occur, artelects will soar beyond human intelligence and destroy us for any number of reasons (the scenario with the greatest precedent, given the human track record of what happens when less technical civilizations meet more advanced ones); 2) de Garis's "artelect war," in which an unbridgeable gap develops between proponents of the new technologies required for artelect development and those afraid of them; 3) an emergence of technologies of boundless promise but also boundless danger that will spin out of control and either disrupt human civilization or destroy humanity.

Fortunately or otherwise, I see all three of these as possibilities, and more along the timelines predicted by Ray Kurzweil than in the middle of this century or the next. Perhaps a fourth possibility should be added: that unless human attitudes, and especially the attitudes of the developed nations, change regarding our environment, population growth, and the less privileged members of humanity, a more mundane but no less decisive catastrophe will befall us.

Scott Austin

Re: Discovery Interview
posted on 08/06/2001 1:48 AM by Nate96b10@hotmail.com

Forgive me but, with respect: It's conscIOUSness.

Re: Discovery Interview
posted on 11/03/2001 6:48 AM by hookysun@hotmail.com

Yet you don't corect his /egregIOUS/ mispeling of artilect? Bonk!
Maybe the machine super-Race (I hyphenated to preserve the 'no double consonants' rule heh*3) will run our genome through an ultimate spel checker. That'l fix us.

Re: Discovery Today Discussion of Machine Consciousness
posted on 08/07/2001 9:24 AM by jwayt

I was completely disappointed! At least two of these guests, de Garis and Penrose, seemed qualified to make informed, significant contributions to the discussion. The screenwriter, Llewellyn, had imagination and was the token layperson who had to have done extensive research on AI. All good selections for a panel. So what happened?

This could have been a discussion taped in 1970. The reactions were simplistic, the arguments lame, and the statements immature by today's standards.

Ten thumbs down.

Re: Discovery Today Discussion of Machine Consciousness
posted on 08/07/2001 9:50 AM by grantc4@hotmail.com

Penrose has long been anti-machine consciousness. They should have had someone like Daniel Dennett, who has been exploring the idea from a positive point of view for many years now. Asking Penrose to comment was much like asking an atheist to comment on the nature of God. If you've read Shadows of the Mind, you already know where he's coming from. He doesn't believe in cultural evolution, either. He has argued with Dennett in the New York Review of Books about memes. He didn't have a good word to say about them. That's all right, of course, but it doesn't make him a good candidate to speculate on machine consciousness on a national network.

Re: Discovery Today Discussion of Machine Consciousness
posted on 08/07/2001 10:31 AM by jwayt

Penrose never defined the term (consciousness) he claimed could never exist in a machine. He didn't bother. 'Whatever it can do, however "clever", a machine cannot be conscious.' Maybe he had a clear definition in his mind, but he never gave it words--nor was it challenged by de Garis when it should have been.

The unsubstantiated prejudice in a "scientist" nearly offended me. Certainly, Penrose was able to take enough time to outline his position, considering the subject was in question.

While I suspect there is some unclear quality of human intelligence that MAY continue to elude an artilect, I remain open to the possibility that a machine could become conscious. I believe a frog is conscious of the fly it eats, or the cat of its mouse. I have a narrow definition.

Re: Discovery Today Discussion of Machine Consciousness
posted on 08/12/2001 11:06 PM by bitspotter@yahoo.com

Perhaps his statement that machines could never be conscious WAS a definition.

Re: Discovery Today Discussion of Machine Consciousness
posted on 08/12/2001 11:41 PM by bitspotter@yahoo.com

After further examination, Penrose does talk about the difference.

He describes these machines as not having the desire to do the "nasty things" which humans do. He then immediately drops back to fuzzy terms like consciousness and awareness.

It occurs to me that consciousness and awareness are involved in gathering information from the world, whereas desire has more to do with action - the flow of information in the opposite direction: from the mind into the world.

I remember reading Ray Kurzweil's definition of intelligence posted nearby, and, while lengthy, it centered around the capability of a machine to possess and pursue a goal - which seems here to be exactly the capability that Penrose is denying that machines can possess!

Machines today are considered tools - extensions of ourselves, of our desires, and our actions. As such, we are considered morally responsible for the consequences wrought by our tools, in much the same way soldiers, slaves, or even children would be excused for wrongs done when following orders given them by their superiors (officers, masters, or parents, respectively).

("It is a poor craftsman who blames his tools")

This, I think, is the main ingredient that's missing from our machines. When they begin to reach a point where they can appear to formulate goals of their own, they will begin to be looked upon as persons. In other words, if they start doing so many unpredictable things that their guardians are unable to anticipate them, to the point of being UNUSEFUL - just like slaves or mutinying sailors - then they will begin to raise sentiment and power to put toward the recognition of the goals they have formulated themselves.

Even among humans, it is unpredictability that marks freedom. Even today, mass psychology has given marketers and governments the ability to manipulate people by analyzing how the human mind, in general, works. In other words, by making the human mind known, and, thus, PREDICTABLE. By knowing what makes us tick, we literally become more and more "their willing tool".

I always speculate that AI is not an end in itself. Things like the Turing test strike me as not a test for intelligence, but a test for humanity, based on the assumption that humans are intelligent. AI then becomes the undertaking not of creating independent beings, but of reverse-engineering the human brain. Will such knowledge only exacerbate modern problems by giving the powers that be even more predictive power that can be used to coerce and manipulate people?

Single humans are unpredictable, but, as any gambler or statistician knows, aggregates of large numbers of them ARE predictable.

Penrose bases his arguments on this subject on possible quantum interactions that have trickle-up effects on the decisions we make with our brains. He seems to say that it is these things that differentiate us from Machine Intelligence. It is also these things which mark us as ultimately unpredictable, and, thus, individually free.

My question then becomes: why can't we build machines with just those kinds of mechanisms?

Of course, introducing truly random inputs into a system is, by definition, making it unpredictable. We may very well then be creating free beings in such a case.

Re: Discovery Today Discussion of Machine Consciousness
posted on 08/13/2001 10:05 AM by grantc4@hotmail.com

Being free means not needing someone else to tell you what to do and how to do it. Why should we invent machines that do whatever they want to do instead of what we made them to do? It's like inventing a hammer that strikes wherever it feels like striking. Sometimes it hits the nail, and sometimes it hits your thumb. Unpredictability is not a great feature in a tool. Of course, some programmers have tried to claim that this is a feature rather than a glitch in their work, but I don't buy it. Literally.

Re: Discovery Today Discussion of Machine Consciousness
posted on 08/14/2001 12:40 AM by bitspotter@yahoo.com

"Why should we invent machines that do whatever they want to do instead of what we made them to do? "

I can think of any number of reasons.
Why do people have children? They don't do what they're told! :)

Why do people buy AIBOs, or Pokémon? Why do we run these Artificial Life simulated worlds?

To play. To discover. To entertain. To wonder. To learn. To create a legacy. To create something that may be greater than themselves, in whatever fashion they wish to imagine.

As the complexity of our tools increases, we may have no choice but to build our machines with such random inputs. There may be no other way to give them the intelligence to accomplish certain things than to give them the same mechanisms that make them, and us, free and independent thinkers.

Freedom and intelligence may very well be inextricable.

IMHO, Asimov's laws of robotics are unenforceable on the physical level. Even if they are, I think the same knowledge that would be usable to control robots would be equally applicable to humans, since it is human intelligence we are attempting to simulate - a thought that should give anyone the shivers.

"Unpredictability is not a great feature in a [t]ool. "

my point exactly.

Re: Discovery Today Discussion of Machine Consciousness
posted on 11/03/2001 7:08 AM by hookysun@hotmail.com

Although "IMHO, Asimov's laws of robotics are unenforceable on the physical level", it is still worthy to attempt programming Friendliness (read Creating Friendly AI (a near-book-length treatment available on the Singularity Institute's site which will also refute that "...it is human intelligence we are attempting to simulate..." when it is minds in general, rather, thus abolishing the thought that "...should give anyone the shivers.")).
We REALLY must toss out all this anthropomorphism!

Re: Discovery Today Discussion of Machine Consciousness
posted on 08/13/2001 5:19 PM by jwayt

You raise some very good points.
The Turing test came to mind as something I expected de Garis to voice. I'm glad at least you mentioned it.

Desire is a value judgment relating to how well something satisfies a goal or subgoal. Goal-oriented behavior can be observed in humans, animals, and maybe machines, although in the latter case one might tend to say 'purpose'. 'It serves OUR purpose.' We define a machine's purpose and evaluate its usefulness and success according to how well it achieves our goals. We get on thin ice when we go further to say it has its own goals just because it doesn't behave in a way that achieves our own.

I concur with your view of the moral responsibility we have for the machines we create. We can give machines goal-oriented behavior (and goals). We can equip machines with sensory devices and provide them with methods to respond to sensory data. These machines would not need to be very intelligent to be labeled 'aware'.

The relevance of Behavioral Psychology and the work of B.F. Skinner keeps coming into these topics. Skinner showed that, as an organism is 'rewarded' for some behavior, the probability increases that the behavior will be repeated. A corollary is that when that probability increases, the conditions that follow the behavior are defined as rewarding (positively reinforcing). This understanding of behavior works in the absence of a mind, consciousness, or anything not immediately observable. It does not ask those questions; it does not answer them, either. It does detect the presence and define the existence of knowledge and learning. Behaviorism goes a long way toward explaining that you learned to do what you do because it was positively reinforced. Extremists go beyond that, to say EVERYTHING you do is from a reward AND that you have no choice in the matter. The pigeons in the Skinner box peck the heck out of the lighted button because Skinner modified their behavior through a system of positive reinforcement.
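A minimal sketch of that core reinforcement idea (Python; the numbers and update rule are made up for illustration, not Skinner's actual model): each reward nudges up the probability that the same behavior is emitted on the next trial.

```python
import random

# Toy operant-conditioning loop: pecking the lighted button sometimes yields food,
# and each rewarded peck increases the probability of pecking again.
p_peck = 0.1                      # initial probability of the behavior
learning_rate = 0.2

for trial in range(100):
    pecked = random.random() < p_peck
    rewarded = pecked and random.random() < 0.8   # hypothetical reinforcement schedule
    if rewarded:
        p_peck += learning_rate * (1.0 - p_peck)  # reward strengthens the behavior

print(round(p_peck, 2))   # typically close to 1.0 after enough rewarded trials
```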

We know what made the bird tick and we manipulated its behavior! Or we like to think we did. It is just as valid to say the pigeon manipulated us into giving it food! Our own predictability becomes obvious. Thus we have casinos! Thus mass marketers use that predictability (probabilities smooth out with larger numbers, as you pointed out) to make us 'their willing tool'. This is actually a rewarding deal, so it is not like we are becoming their UNwilling slaves. Realize the masses are likewise modifying the behavior of the marketers to get what they want, too. Tread carefully when progressing from this to coercion.

Our biological, random behavior reflects a lack of knowledge and discipline, and it is the cause of our unreliability. It is also what prevents behaviorists from predicting specific, novel acts. The classic example is the meek clockmaker who, after 30 years of experiencing his wife scream at him, one day takes out a gun and shoots her. Most people would not find this too odd, but behaviorists have a difficult time fitting it in. It actually takes a great amount of effort to act non-randomly.

To what degree are we free from our desires? We are ultimately free to choose what it is we want and how much we are willing to pay for it. Our predictability is completely dependent on the constancy of our desires and the ways we try to fulfill them. We may have some freedom in how we achieve our goals. We are far from stuck, because we are quite random. Our source and degree of motivation may vary from one moment to the next. We rearrange subgoals accordingly. Man is not a purely economically rational being. 'Give me liberty or give me death!'

I like computers because they ARE so predictable and dependable. I can give them a goal and rely on its constancy. I can give them instructions on how to carry out that goal and rely on their doing it THAT way. I don't want to empower a computer to reshape my world if it can capriciously change that goal. There are advantages to intelligent application of subgoals in order to achieve my supergoal. If Friendliness is a supergoal, I see no value in making AI free to own an unfriendly goal for freedom's sake. Unpredictability is not a useful attribute across the board. I don't even like it in myself.

Re: Discovery Today Discussion of Machine Consciousness
posted on 08/14/2001 12:54 AM by bitspotter@yahoo.com

"We get on thin ice when we go further to say it has its own goals just because it doesn't behave in a way that achieves our own. "

I am reminded of the phrase "It has a mind of its own". I don't think this is much of a stretch at all.

I mean, sure, you may be pridefully motivated to say your vacuumbot has no goals of its own because you never know what room it's going to clean next, but your state of mind will change when it's holding a shotgun to your face, asking for a paycheck. =D

Software is often used as an example of an unpredictable machine. It very often doesn't do what its user intended. But the thing is that software applications don't obey their users so much as they obey their PROGRAMMERS.

I like to say, in my more cynical moments, that Microsoft doesn't program software for users, as much as the software programs USERS for Microsoft!

Re: Discovery Today Discussion of Machine Consciousness
posted on 08/14/2001 6:07 AM by jwayt

When people say "It has a mind of its own," it is a colorful anthropomorphism that falls into use as a joke. Are you truly ready to argue that the hammer that hit my thumb and not the nail has a mind?

>>"Software is often used as an example of an unpredictable machine. It very often doesn't do what it's user intended. "
In programming computers for 20 years, I know all too well the computer NEVER, ever does what I WANT it to, only what I TELL it to do. It may be 20 years before there is a direct interface between brain and computer. AND IT WON'T CHANGE WHAT I JUST SAID! I just won't need a keyboard.

You are right about software modifying the behavior of its users. There are SW design techniques that help users learn how to use the tool.

Re: Discovery Today Discussion of Machine Consciousness
posted on 08/14/2001 12:33 PM by bitspotter@yahoo.com

Hammers don't live up to other, simpler, more fundamental criteria to be considered minds.

On another hand, what does it mean to emancipate a hammer? Not much.

"If you love something, set it free.
If it returns to you, it is yours.
If it does not, it never was."

Re: Discovery Today Discussion of Machine Consciousness
posted on 08/14/2001 12:38 PM by bitspotter@yahoo.com

AI software will, indeed, by definition do something other than what we tell it to.

All that is required is to randomly pick instructions.

Software behaviour is dictated by several inputs. They can be data, they can be program, or they can be random.

When an application works on data, you don't need to tell it what the data is.

When the application's program instructions are its own data input, you are directing it even less.

When its instructions are random, you are not directing it - telling it what to do - at all.
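To make that spectrum concrete, here is a rough sketch (Python; the action names are invented for illustration) of one tiny interpreter driven three ways: by a fixed program, by the data it happens to read, and by random choice.

```python
import random

# The same interpreter, directed to three different degrees.
ACTIONS = {
    "clean":   lambda: print("cleaning"),
    "charge":  lambda: print("charging"),
    "explore": lambda: print("exploring"),
}

def run(instructions):
    for name in instructions:
        ACTIONS[name]()          # execute whatever it was handed

fixed_program = ["clean", "charge"]                               # we tell it exactly what to do
data_driven   = sorted(ACTIONS)[:2]                               # behavior depends on the data it reads
randomized    = [random.choice(list(ACTIONS)) for _ in range(2)]  # we are no longer directing it

for program in (fixed_program, data_driven, randomized):
    run(program)
```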

Communicating to another human being does not constitute telling them what to do - does it?

Intelligence vs. Chaos
posted on 08/14/2001 3:21 PM by jwayt

AI should be an outgrowth of what we tell it, not a contradiction.

Two out of three: Behaviors come from data & program (well, data), not random bits (noise), nor from non-salient repetitions (monotony).

>>"When it's instructions are random, you are not directing it - telling it what to do - at all."

You are so right! When instructions are random, they are not instructions or information, and I am not telling anything.

You are not the only one who thinks chaos is the way to AI. I differ because chaos is antithetical to intelligence. Maybe it's a paradox the someone can explain to me.

Re: Intelligence vs. Chaos
posted on 08/15/2001 2:23 AM by bitspotter@yahoo.com

Genetic algorithms are informed by random input that, when significantly combined with ordered programs and data, produces intelligent and unpredictable entities.

A process of which, I might add, we are a result.
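For readers who haven't seen one, here is a minimal genetic-algorithm sketch (Python; the target and parameters are invented for illustration): random mutation supplies the variation, while an ordered fitness function and selection supply the filtering.

```python
import random

TARGET = [1] * 20                                    # hypothetical goal: a genome of all 1s

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # random input: each bit flips with small probability
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[:10]                        # ordered filter: keep the fittest
    population = [mutate(random.choice(parents)) for _ in range(50)]

best = max(population, key=fitness)
print(generation, fitness(best))
```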

Re: Intelligence vs. Chaos
posted on 08/15/2001 6:40 AM by jwayt

Let's say 1 in 10,000 random mutations allows a change that is actually an improvement. The rest are mostly catastrophic. In nature, this process means you would have to let 9,999 organisms die. Well, 99 might grow up and die; the others suffer immediate failure of life-sustaining processes.

This kind of mutation is so destabilizing that I would not buy your product, because it breaks down too much. It would crash more often than Windows 95.

I, myself, am an agent of mutation in my software. After writing 2 or 3 million lines of code in my career, I'm intimate with the errors I've made. 1 in 10,000 of my typos gets past the compiler. The greater-than symbol '>' is only one or two bits different from '<' and makes a huge change in how the software behaves. It may not be fatal. The program may just get stuck in a loop. The selective pressures in that environment (the user) kill the process. Sometimes the program appears functional, but if the program were meant to fire all managers making more than $100,000, it would terminate the vast majority of the organization instead.
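A tiny illustration of that one-character flip (Python; the data is hypothetical): the "fire the highly paid managers" rule, with '>' mistyped as '<', fires almost everyone else instead.

```python
employees = [("Alice", 150_000), ("Bob", 45_000), ("Carol", 52_000)]

to_fire_intended = [name for name, pay in employees if pay > 100_000]  # intended rule
to_fire_buggy    = [name for name, pay in employees if pay < 100_000]  # one flipped comparison

print(to_fire_intended)  # ['Alice']
print(to_fire_buggy)     # ['Bob', 'Carol']
```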

There are millions of these cases possible. The point is that random change is far more likely to create mush than to increase organization. I could categorize this strategy as blatantly stupid. Why not apply intelligence to this process?


Re: Intelligence vs. Chaos
posted on 08/15/2001 9:15 AM by jwayt

Genetic algorithms are "informed" by random input ONLY EXTREMELY RARELY. That's why sex and brains were evolved. Brains allow successful behavior to be recorded. The results of unsuccessful behaviors can also be recorded and averted. As the enviroment changes, previously successful actions extinguish, in favor of more rewarding ones. The program hasn't really changed, just the data about the environment.

This is the building block of intelligence. While genetic information is a form of memory, tampering with it at random almost always spells death. Sex is just a way of shuffling sets of good genes in a more stable way. Sex represents a paradigm shift in evolutionary process. But brains are yet another, far superior shift.

Why mess with the operating system? Why mess with the application software? Why not just manipulate the data? Why not collect it in an intelligent way?

Re: Intelligence vs. Chaos
posted on 08/15/2001 9:46 PM by bitspotter@yahoo.com

Sex and brains ARE random processes.

They simply have more ordered filtration processes.

Re: Intelligence vs. Chaos
posted on 08/16/2001 1:19 AM by bitspotter@yahoo.com

And the rarity of such occurrences doesn't invalidate my point. The consequences of a single randomized bit can be amplified by the logical, ordered structures that filter it.

Re: Intelligence vs. Chaos
posted on 08/16/2001 8:29 AM by jwayt

No, it doesn't invalidate your point, which I think was that the random genetic process eventually gave rise to intelligence (once sex and then brains emerged).

While true, do you think this is the fastest, surest way to go from today's computers to tomorrow's AI? With what you propose, I don't believe you would have the resources or the time to see its emergence in your lifetime. Nature needed unimaginable resources and billions of years.

Re: Intelligence vs. Chaos
posted on 08/16/2001 9:22 AM by tomaz@techemail.com

bitspotter is right!

A cleverly designed process can 'convert random to order' even with the modest CPU power we have today.

It is a well-documented area of research, and of applications even. Does the name John Koza mean anything to you?

I do it too, all the time. ;)

- Thomas Kristan

Re: Intelligence vs. Chaos
posted on 11/03/2001 10:06 AM by grantc4@hotmail.com

The minute something happens, it's no longer random. It is now part of the data. When someone hits the lotto, there is nothing left to predict on that subject. The concept of randomness is tied to the future and predictability. What you can predict is not random.

As we grow more knowledgeable, the amount of randomness in our universe will steadily decrease (a form of entropy?). I don't know if this will affect the universe in any way, but it will certainly alter our lives and behavior.

Come to think of it, though, since predictability defines knowledge, which allows us to change our universe and mold it nearer to our heart's desire, it WILL alter the universe, at least in our vicinity. As anyone can see, it's already making drastic changes to the world we live on.

I have no real point here -- just some random thoughts on randomness. At least they were until I wrote them down.

Re: Intelligence vs. Chaos
posted on 11/04/2001 6:25 AM by tomaz@techemail.com

> What you can predict is not random.

Correct.

> As we grow more knowledgeable, the amount of randomness in our universe will steadily decrease (a form of entropy?).

Well, as it presently looks, that's not the case. We always lose much more information to rising entropy than we gain in new data.

> I don't know if this will affect the universe in any way, but it will certainly alter our lives and behavior.

We will be able to control every atom - I guess. The entropy will be in randomly distributed photons.

The generalized second law of thermodynamics says that.

- Thomas Kristan

Re: Intelligence vs. Chaos
posted on 11/05/2001 6:50 AM by john.b.davey@btinternet.com

"We will be able to control every atom - I guess. The entropy will in randomly distributed photons."
What about when the atoms emit the photons themselves ? Or do you keep all the atoms in the universe at absolute zero ? Or do you forbid all atoms from chainging enegrgy levels randomly ?

Re: Intelligence vs. Chaos
posted on 11/05/2001 12:47 PM by tomaz@techemail.com

As long as they remain stable. But sooner or later they will decay too.

So sooner or later, the world we build will fall apart.

Later, but it will.

- Thomas Kristan

Re: Intelligence vs. Chaos
posted on 11/05/2001 4:26 PM by john.b.davey@btinternet.com

I can't quite work out if this is the answer to the question. We control atoms on an individual basis, then they decay with the rest of the world.

OK.

Re: Intelligence vs. Chaos
posted on 11/05/2001 4:41 PM by tomaz@techemail.com

Atoms do decay. Very slowly. But they do.

Do we have a chance, to prevent that?

I don't know. Perhaps not.

> they decay with the rest of the world.

What rest of the world? Space? Maybe.

- Thomas Kristan


Re: Intelligence vs. Chaos
posted on 08/17/2001 5:35 AM by bitspotter@yahoo.com

She did indeed need an astronomical amount of resources, which have steadily built up in exponential curves.

Now not only do we have access to the resources that exist today, but the resources are, to all appearances, continuing their exponential growth.

This phenomenon is thoroughly explored in other threads.

Re: Intelligence vs. Chaos
posted on 08/17/2001 12:53 PM by jwayt

John Koza et al. have a legitimately patentable approach, and I am very impressed. Thank you, Thomas, for citing their work.

I believe that with 30 years of industrious effort and investment this approach may contribute to a process of subgoal generation or process optimization in an AI application. It may be developed in a wider arena so that it can contribute to the overall project. At this time, its application is quite narrow in scope.

What makes this creative is that there is a system of generating new combinations (of instructions and routines) within a narrow scope of rules (syntax, grammar and vocabulary) to produce an effect (solve a problem--albeit a narrowly stated one).

There are a few observations this suggests. First, there is a generator that involves a RANDOM (yes, I said the "R" word) selection of possible program structures from a pool with a very limited vocabulary, in combinations that must follow syntactic rules. Content and rules are the first layer of information.

There is a filter on the resulting combinations that provides for the first generation of candidate program (sub?) structures. This filter is the second layer of organizational information. There are other layers of similar nature, but then there is the final layer of fitness for a particular purpose. Each of these layers embodies knowledge.

The resulting programs, the time/steps it takes to produce them, and the quality of the solutions it "invents"--let's dare to call them intelligent--are a direct result of the information at the different layers in the process. Increase the information, the sophistication of the rules, the available vocabulary, and the complexity of how the vocabulary can be applied, and you get better results. Thus the suitability and success of the program solution is directly attributable to information, not chaos.
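A rough sketch of that layered generate-and-filter idea (Python; the vocabulary, target function, and parameters are invented for illustration and are not Koza's actual system): random content is drawn under syntactic rules, and an ordered fitness filter does the selecting.

```python
import random, operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}
TERMINALS = ["x", 1, 2, 3]

def random_expr(depth=2):
    # layer 1: random selection from a limited vocabulary, always syntactically valid
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    return (random.choice(list(OPS)), random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, x):
    if expr == "x":
        return x
    if isinstance(expr, tuple):
        op, left, right = expr
        return OPS[op](evaluate(left, x), evaluate(right, x))
    return expr                      # a numeric constant

def fitness(expr):
    # layer 2: the filter embodying the purpose (here, match x*x + 1 on sample points)
    return -sum(abs(evaluate(expr, x) - (x * x + 1)) for x in range(-5, 6))

candidates = [random_expr() for _ in range(5000)]
best = max(candidates, key=fitness)
print(best, fitness(best))
```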

Plainly, it is not the chaos that creates the intelligence. The degree of randomness has little to do with the outcome. There are various methods of generating random numbers and a whole branch of math to evaluate how random a sequence is. Variation in the true randomness of the numbers their process uses does NOT have a role in the quality of the product, does it? What if the number generator were non-random, based on prime numbers, or other series?

Koza's process is not an example of randomness giving rise to order. That process is generating organization from organization, complexity from complexity (debatably), and intelligence from itself. Its product arises without the necessity--or sufficiency--of chaos.

I feel it is a mistake to characterize Koza's effort as creating order out of chaos. I struggle with the concept of seeing any way to get order from chaos. I am even wondering how it is that I can make sense of our chaotic world, apart from the recognition of order coming from within, because that borders on subjective totalism.

Re: Intelligence vs. Chaos
posted on 08/17/2001 3:26 PM by grantc4@hotmail.com

Try reading THE ORIGINS OF ORDER or AT HOME IN THE UNIVERSE by Stuart A. Kauffman. Chaos doesn't create order so much as order grows out of it as a natural process. The seeds of order are in the atomic and molecular structure of our universe. Some elements tend to be attracted by others and stick together in ways that we call orderly. That means the structure created by this process is predictable. When structures become too complex, they tend to break down and return to a condition of unpredictability, which we call chaotic. But Kauffman has a much better handle on it than I do and has spent nearly 700 pages of time and effort to explain his ideas.

Re: Intelligence vs. Chaos
posted on 08/17/2001 9:56 PM by bitspotter@yahoo.com

Would the intelligence of the process be possible without the randomness?

That's the pivotal question.

Re: Intelligence vs. Chaos
posted on 08/17/2001 10:13 PM by grantc4@hotmail.com

It would have nothing to contrast itself with. It becomes intelligence when the order carries information that continues the process to create more order out of the elements of chaos. It's sort of like asking whether you can have one side of a coin that doesn't have another side. If you did, it wouldn't be a coin. If you didn't have chaos, everything would be order. If you didn't have order, everything would be chaos. The two terms would not be meaningful. What the universe has is plenty of both.

minor note
posted on 08/18/2001 4:13 AM by bitspotter@yahoo.com

I'm assuming we're using chaos as a synonym for disorder?

I'm not sure where I got it from, but somewhere down the line I picked up the notion that chaos was the intersection of disorder and order, rather than just synonymous with disorder.

Re: Intelligence vs. Chaos
posted on 08/18/2001 4:32 AM by yoshineilalers@earthlink.net

How about simultaneous creation and symbiotic purposefulness? Chaos is created so that intelligence has a field to play in. They are mutually dependent, raising the questions of what created them and what holds them, more than which came first or whether one can exist without the other. Given intelligence and chaos as a pair: what are they for? We know the process; we only need to discover the questions and their sources to discover the answers and their addresses.

Re: Intelligence vs. Chaos
posted on 08/19/2001 5:22 PM by jwayt

Is this more than your garden-variety dualism? We had chaos for billions of years, so naturally some intelligence had to show up sooner or later. Is it that kind of thing?

Re: Intelligence vs. Chaos
posted on 08/20/2001 2:03 AM by yoshineilalers@earthlink.net

The notion of exploration - in any science or realm, including time/space - is considered a thoughtform in this paradigm. So...all that exists is the moment of thought, which creates form. Your reference assumes the reality of science to determine the figures - "billions of years of chaos prior to the emergence of intelligence" - and presumes an existing objective world. This paradigm presumes nothing exists other than thought: as you think, so it is; consensus reality: I think my friend exists, he agrees with me, we walk through the world we agree exists together.

Looks like a kinda solipsistic and useless paradigm at first glance - but I think it has implications for tech, so I include it in my development efforts: the subjective organizational principles of the IA user are significant.

Aside from that conclusion, I want to make two other points. One, this is implied in "I think, therefore I am." Two, Buddhism is focused on relieving suffering; detaching one from beliefs that are based in dualistic thinking goes a long way to decreasing suffering.

Re: Intelligence vs. Chaos
posted on 08/20/2001 2:13 AM by yoshineilalers@earthlink.net

If I limit my focus to the part of your question that I see as asking "Do you suggest that intelligence arrived from chaotic interaction?" I answer: No; I'm suggesting that chaos and intelligence arrived simultaneously as mutually dependent parts of a whole. Intelligence, as an organizing principle, needs disorganization; chaos is only disorganized when it is viewed from an intelligent perspective. Is that closer to answering the question you meant?

Re: Intelligence vs. Chaos
posted on 08/18/2001 7:32 AM by jwayt

Buried in the middle, I addressed that pivotal issue:
"Plainly, it is not the chaos that creates the intelligence. The degree of randomness is not anywhere close to the outcome. There are various methods of generating random numbers and a whole branch of math to evaluate how random a sequence is. Variation in the true randomness of the numbers their process uses does NOT have a role in the quality of the product, does it? What if the number generator was non-random, based on prime numbers, or other series?"

In fact, there were a specific set of numbers used to select the logical components that produced the useful solution. ANY non-random process that produces that number set will do.

ANY process that chooses the correct initial components, in a statisticly smaller number of tries, will demonstrate knowledge.

Re: Intelligence vs. Chaos
posted on 08/18/2001 9:40 AM by grantc4@hotmail.com

My use of the word chaos is grounded in my own philosophy. It does not coincide precisely with the way the Santa Fe Inst. is using it. It is more closely akin to randomness than the edge between randomness and order. I see complexity as the edge between chaos and order. All I have to offer is my own world view and I can't make any claims that it is superior to anyone else's. Only that it serves me well enough.

Re: Intelligence vs. Chaos
posted on 08/19/2001 12:42 AM by bitspotter@yahoo.com

There is at least one product which absolutely requires truly random numbers for maximum quality: Cryptography.

Re: Intelligence vs. Chaos
posted on 08/19/2001 5:18 PM by jwayt

Quite so! It's kind of like embedding the message in some noise. I've seen encrypted messages much larger than the message itself. But it's more than that. There's a complex transformation whose pattern looks random. A cryptologist could explain it better than I. Are there any cryptologists in the house?
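For a concrete (and deliberately simple) illustration, here is a one-time pad sketch in Python; it is not what any particular cipher in this discussion uses, but it shows why the key material really must come from a true or cryptographic random source such as os.urandom.

```python
import os

message = b"attack at dawn"
key = os.urandom(len(message))                         # unpredictable key material
ciphertext = bytes(m ^ k for m, k in zip(message, key))
recovered  = bytes(c ^ k for c, k in zip(ciphertext, key))

print(ciphertext.hex())   # looks like noise
print(recovered)          # b'attack at dawn'
```

If the key came from a predictable process instead, an attacker who learns that process could regenerate the key and strip the "noise" right back off.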

Re: Intelligence vs. Chaos
posted on 08/19/2001 8:46 PM by bitspotter@yahoo.com

I might also suggest, though I haven't time to thoroughly examine it at the moment, that it is critical for human intelligence to depend on random influences.

If it does not, then it is a fully deterministic machine, as knowable and predictable as a hammer or a modern computer, regardless of being of a higher order of complexity.

There is then no difference at all between a machine and a human brain - we ARE machines. How do you reconcile this with your assertion that machines not only are not conscious, but can never be?

Re: Intelligence vs. Chaos
posted on 08/20/2001 8:14 AM by jwayt

I suggest you reread my post on 8/13. Nowhere do I suggest machines CAN'T be conscious. That was Penrose's assertion. (Isn't it nice how I connected this to the original topic?)

I seem to be asserting that intelligence arises from organization, in spite of chaos (non-dualism). There is plenty of random behavior in humans. I confront the issue every time I write software that interacts with humans. Much of my design involves feedback to improve the appropriate responses, mostly due to random responses, like typos.

Statistically, knowledge is demonstrated as different from random behavior. Another side of it is to say there is a likelihood someone has knowledge if they get all the answers right.

One could say behavior is deterministic in the sense that the observed acts are determined by one's goals and by the strategies engaged to achieve them. I only go that far.

Re: Intelligence vs. Chaos
posted on 08/22/2001 4:07 PM by greatbigtreehugger@hotmail.com

Caught this today: chaotic patterns in the neocortex of the human brain:

http://unisci.com/stories/20013/0820011.htm

GBTH

New Chaos versus old entropy
posted on 08/22/2001 5:36 PM by jwayt

Here is a special case of chaos. It is a complex pattern that only looks random. It's like our weather systems. There are definite structures with "Strange Attractors" and such, as included in "Chaos Theory". This is distinctly NOT good ol' randomness.

Most of the uses of "chaos" in this thread have been the garden-variety, noisy type of random entropy. All of my uses have been meant as entropic. I'm sorry if we're not on the same page.

Raise your hand if you meant the new kind.

Re: New Chaos versus old entropy
posted on 08/23/2001 10:33 AM by grantc4@hotmail.com

If you have enough information, you can track the path of every molecule in a system and predict where it will end up at some future instant. That makes it predictable, but not necessarily orderly. I define chaos as the opposite of order, and order requires something more than predictability. Order also has structure and information built into that structure. Chaos, in my mind, is anything that lacks that combination of elements. Most things in life have both structure and randomness. The structure arises out of randomness. That's why I equate it to chaos.

Take life, for instance. The structure of DNA makes the forms life takes predictable. But the way genes combine when cells divide has elements of randomness. The effects of an ever-changing environment and the availability of elements to work with inject randomness into evolution. As we get more sophisticated in gathering information, we increase our ability to predict what these random elements will do, but that does not make them orderly.

Re: Intelligence vs. Chaos
posted on 08/15/2001 8:25 PM by bitspotter@yahoo.com

"AI should be an outgrowth of what we tell it, not a contradiction."

How are we supposed to learn anything if we make sure it can only do what we tell it?

Re: Discovery Today Discussion of Machine Consciousness
posted on 08/14/2001 12:58 AM by bitspotter@yahoo.com

"Realize the masses are likewise modifying the behavior of the marketers to get what they want, too. Tread carefully when progressing from this to coercion. "

Good point, and well taken. Coercion is the negotiation of behaviour under threat of harm. The perception of what is harmful or undesirable is relative. There are those who would stretch these definitions to fit their own opinions or agendas - marketers as well as activists.

It runs the gamut from communists who would see lives of relative, but fair, poverty as criminal, to pharmaceutical companies who would rather protect their patents than save more lives.

Coercion = negative reinforcement
posted on 08/14/2001 6:40 AM by jwayt

There are two ways to modify behavior: positive and negative reinforcement. Coercion is the threat of negative reinforcement.

Although you quoted it, you really didn't address it. To modify someone's behavior by reward, you must also modify YOUR OWN behavior. You attached a stigma to being "manipulated". Paraphrasing, "With their predictive power, they manipulate the masses [into behavior which they reward]." The best flag of manipulation is the presence of an offer of a huge prize (or penalty). Gambling uses the jackpot trick to get you to put quarters into a machine. Over time, the odds are rigged to pay out LESS money than is put in, but the jackpot skews our judgement. Somehow it's greater fun to win than it costs to play. Skinner saw this as a reward schedule that has been reduced in frequency and randomized. It also works to give your wife/girlfriend flowers on a random schedule!

Re: Discovery Today Discussion of Machine Consciousness
posted on 08/14/2001 1:10 AM by bitspotter@yahoo.com

As I've posted above, I think intelligence and self-determination are inextricable. You can't get useful intelligence without self-determination.

I also don't think you can program friendliness as a "supergoal". As you said yourself, humans themselves are not rational, goal-oriented beings. If the goal of AI, as Turing suggested, is to emulate human beings, how can we expect our machines to be any different from us in this, or any other, respect?

Having said this, I must admit I have a lot of reading to do on Friendliness, especially E. Yudkowsky's very clear work on this subject.

Now, even if we DO manage to enforce friendliness on human-emulation machines, what is then to stop us from enforcing friendliness on the models of behaviour our AIs were designed to emulate -

HUMAN BEINGS?

This is an ethical dilemma that frightens me, and is probably why I prefer to think that self-determination and intelligence are physically, logically inextricable.

If we can reprogram machine emulations of human intelligence, then we can reprogram biological emulations of human intelligence. If it is unethical to do one, shouldn't it be unethical to do the other, especially if we attain the technology to upload back and forth between the two mediums?

AI or AH?
posted on 08/14/2001 6:55 AM by jwayt

The way you use it, "AI" means artificial HUMAN. You have mixed the two together. I seriously doubt Alan Turing favored the creation of an artificial human. We don't need artificial humans; what do we do with the 6 billion authentic ones we already have?

But consider: you are the manager of some task and you have two helpers. Would you rather they come back to you after completing each instruction so you have to tell them what to do next, or would you like to tell them the big goal (the task) and have them use their intelligence to figure out how to do it?

Most would prefer the latter. If the helpers were AI, they would remember the goal better, and they would go off gambling. ;)

Re: AI or AH?
posted on 08/15/2001 8:37 PM by bitspotter@yahoo.com

Of course it would be more desirable to have slave machines. That's not my concern.

What I'm saying is that I'm not sure that it is technically feasible to do so, given that we humans, with our sacredness and human rights, are the sole model for intelligence that we have to reverse-engineer and emulate.

And if it is, considering that humans are the model being emulated, the same techniques that allow us to restrict and control the friendliness of AIs - effectively enslaving them - are, by extension, equally applicable to biological human minds, because they are the MODEL upon which the machine intelligence is based.

The former makes machine slavery impossible. The latter makes HUMAN slavery possible. What is good for machines like us is good for us, too.

We'll have to explore and see.

Re: AI or AH?
posted on 08/16/2001 6:48 AM by jwayt

Yudkowsky, where are you? If you read "Creating Friendly AI", you would find him setting goals that were intended to avoid enslavement and other classic adversarial scenarios. He found Asimov's 3 robotic rules unfriendly. Friendliness involves advice you could give anyone--Human or AI--that would be willingly accepted as helpful, not coercive. I suspect you would approve of these guidelines. His work is well constructed, morally. I highly recommend reading it.

I merely gave examples of two helpers. What I said was equally true for humans as it was for machines. You mentioned slaves. But forgive me if I made the mistake of thinking you were responding to something I posted.

It occurred to me that a third person might read this and think we are irreconcilable, but actually I believe we share many of the same values.

Re: Discovery Today Discussion of Machine Consciousness
posted on 08/14/2001 5:18 PM by grantc4@hotmail.com

A good book to read on the subject of humanity's approach to this subject is "The Origins of Virtue: Human Instincts and the Evolution of Cooperation" by Matt Ridley. We tend to work on a "tit for tat" basis. If someone is nice to us, we tend to be nice back. And vice versa. Each journey leads down the path first taken. Action and reaction are complementary and first impressions do count.

Re: Discovery Today Discussion of Machine Consciousness
posted on 08/15/2001 5:46 PM by jwayt

Use caution when quoting. I said, "Man is not a purely economically rational being." I DID NOT SAY, "humans themselves are not rational, goal-oriented beings." On the contrary, I believe us to be the most rational specimens we can find at this moment. Even without that, we are most certainly goal-oriented.

And where did Alan Turing suggest artificially emulating human beings?

Re: Discovery Today Discussion of Machine Consciousness
posted on 08/15/2001 8:28 PM by bitspotter@yahoo.com

It's implicit in the structure of the Turing Test.

It's a crib, really, based on the assumption that humans are intelligent.

Dennet on Turing
posted on 08/16/2001 6:55 AM by jwayt

No need to crib. Read it for yourself:

http://www.kurzweilai.net/meme/frame.html?main=/articles/art0099.html

Re: Discovery Today Discussion of Machine Consciousness
posted on 07/17/2002 5:21 AM by trait70426@aool.com

Oh, you're saying it's a game player? I'll shoot ya pool, Machine. I'll cheat you blind.

Re: Discovery Today Discussion of Machine Consciousness
posted on 11/05/2001 6:27 AM by john.b.davey@btinternet.com

I think the 'definition' point is in many ways a red herring used by individuals such as Dennett to pretend that it doesn't really exist, since a third-party 'objective' definition of consciousness seems elusive. But how do I explain to a blind man what the colour red looks like? How do I explain to a man deaf since birth what a violin sounds like? How do I account for the unequivocality of my subjective experiences, i.e. that a pain is not a 'form' of, say, a visual experience, but a unique sensation? These things seem odd if we are to believe that the brain is just a computational device which simply can't bridge the gap from syntax-only to semantics.

I think the facts are that subjective mental experiences are qualitative and semantic, and just don't lend themselves to third-party definition: but that doesn't mean that subjective experiences don't actually EXIST.

There is no reason to suppose that subjective experiences can't be objectively caused physical phenomena like any other. In the absence of a physical-cause theory of consciousness, a third-party definition of consciousness will always be elusive, as will a third-party definition of pain, the experience of the colour red, the feeling of sexual desire, and the feeling of fear.

Re: Discovery Today Discussion of Machine Consciousness
posted on 08/14/2001 12:27 AM by bitspotter@yahoo.com

What did you expect? It's a 20-minute television spot!

Re: Discovery Today Discussion of Machine Consciousness
posted on 05/29/2002 4:43 PM by Citizen Blue

If there is no room for randomness, and everything is predictable, then doesn't it follow that nothing new would really have to be thought, because everything would be known? Also, I have always perceived that that which is static is dead. I think a dimension with no randomness would be tantamount to being in an absolute vacuum. I think that the point of having machines is to limit the chaos in a specific sphere; we could easily be harmonized with machines, and our environment would see less chaos, worry, and excessive bad stress.

Re: Discovery Today Discussion of Machine Consciousness
posted on 11/15/2002 5:44 PM by Allan Blackwell

That ought to read "among the three", not
"between... ".