    Introduction: Are We Spiritual Machines?
The Beginning of a Debate
by George Gilder and Jay W. Richards

Two philosophers, a biologist, and an evolutionary theorist critique Ray Kurzweil's prediction that computers will attain a level of intelligence beyond human capabilities, and at least apparent consciousness. Kurzweil responds to these critics of "strong AI."


Originally published in print June 18, 2002 in Are We Spiritual Machines? Ray Kurzweil vs. the Critics of Strong AI by the Discovery Institute. Published on KurzweilAI.net on June 18, 2002.


This volume springs from the crucible of controversy that climaxes every Gilder-Forbes Telecosm conference. Conducted by Gilder Publishing with Forbes every September in Lake Tahoe, the event brings together computer and network experts, high-tech CEOs and venture capitalists, inventors and scientists. While most of the panels introduce new companies and technologies on the frontiers of the nation's economy, the closing session waxes philosophical and teleological, addressing the meanings and goals of the new machines. At Telecosm '98 the closing topic was whether the technologies introduced in the earlier sessions might ultimately attain consciousness and usurp their creators.

Affirming the hypothesis of man-machine convergence is a theory called “Strong Artificial Intelligence” (AI), which argues that any computational process sufficiently capable of altering or organizing itself can produce “consciousness.” The final session in 1998 centered on a forthcoming landmark in the field, a book by gifted inventor, entrepreneur, and futurist Ray Kurzweil, The Age of Spiritual Machines: When Computers Exceed Human Intelligence (New York: Viking, 1999).

Now available in paperback, this book was a more ambitious sequel to Kurzweil's The Age of Intelligent Machines (Cambridge: MIT Press, 1990), in which he made some remarkably prescient predictions about future developments in information technology. He even predicted correctly the year when a computer would first beat a chess master at his own game. As we all know, a special-purpose IBM supercomputer named Deep Blue vanquished Garry Kasparov just seven years after he had denied that such a feat was possible. At a minimum, Kurzweil demonstrated that he understood chess more fully than Kasparov understood computing.

Kurzweil’s record as a technology prophet spurred interest in his more provocative prediction that within a few decades, computers will attain a level of intelligence and consciousness both qualitatively and quantitatively beyond human capabilities. Affirming Hans Moravec’s assertion that even “a genetically engineered superhuman would be just a second-rate kind of robot,” he concluded that further evolution of our species will be inextricably bound to our ability to enhance our bodies and minds with integrated computer prosthetics.

Needless to say, this idea is an affront to many, and we wanted some serious intellectuals to engage with it. For that reason, we brought together a diverse panel to respond to Kurzweil, albeit one composed entirely of critics of strong AI: philosopher John Searle, biologist Michael Denton, zoologist and evolutionary algorithm theorist Tom Ray, and philosopher and mathematician William Dembski. Denton and Dembski are Fellows of Discovery Institute, which helped arrange the panel.

In the discussion, a cluster of important questions emerged: What is a person? What is a human person? What is consciousness? Will a computer of sufficient complexity become conscious? Are we essentially computers ourselves? Or are we really software stuck in increasingly obsolete, fleshy hardware? Can biological organisms be reduced to their material parts? How are we related to our technology? Is the material world all there is? What is our purpose and destiny?

Although Artificial Intelligence may seem like an esoteric topic with little relevance to anything else, in fact many of the most important questions we face, from technology to theology, converge on this single subject.

This Telecosm ‘98 session was so engaging, and the issues it raised so important, that it deserved a wider audience. For that reason, the Discovery Institute decided to produce this volume, with the agreement and hard work of the contributors, especially Ray Kurzweil, who is the centerpiece of attention. We don’t intend for it to resolve the larger issues, but rather to initiate an important conversation as well as to expose a broader audience to a topic that primarily has been the domain of philosophers, scientists and computer geeks.

Each of these scholars brings a unique expertise and perspective to the topic, including a diversity of worldviews. A clash of worldviews evokes the commonplace metaphor of an elephant hiding in the corner. Everyone sees it but few are willing to point it out. Moreover, the specialists in the field all too often focus on the trunk or tail or thickness of the skin or complexity of the DNA or nature of the ecological niche alone and remain blind to the larger or holistic dimensions of the animal. Some wonder, “Is it really God?” Others banish the thought. Kurzweil speaks of spiritual machines, yet focuses on materialist explanations. Others believe that the jungle of philosophy is full of metaphysical elephants. Most of the interesting (and pachydermic) creatures in the AI debate embody larger, unannounced, and often thick-skinned assumptions about human nature and physical nature.

We would say that Kurzweil, Searle, and Ray are philosophical “naturalists” or “materialists.” They assume but don’t actually say that the material world “is all there is, or ever was, or ever will be” [Carl Sagan, Cosmos (New York: Ballantine Books, 1993), p. 4]. While they disagree on important details, they agree that everything can or at least should be described in terms of chance and impersonal natural law without reference to any sort of transcendent intelligence or mind. To them, ideas are epiphenomena of matter.

Nevertheless, they express this perspective in different ways. Kurzweil is an intriguing and subtle advocate of Strong Artificial Intelligence. He believes that with the right neurological architecture, sufficient complexity, and the right combination of analog and digital processes, computers will become "spiritual" like we are. His references to spirituality might lead one to suspect that he departs from naturalism. But Kurzweil is careful with his definition. By saying computers will become spiritual, he means that they will become conscious. While this differs from the arid materialism of Daniel Dennett, Steven Pinker and Richard Dawkins, who treat consciousness as an illusion, the identification of the spirit with consciousness is a naturalistic stratagem.

Searle shares Kurzweil’s naturalism, but not his penchant for seeing computation and consciousness on a continuum. In his essay, Searle makes use of his telling Chinese Room Argument. For Searle, no computer, no matter how souped up, will ever become conscious, because, in his words, it is not designed to produce consciousness. The brain, unlike a computer, is designed to produce consciousness. Any artificial hardware system that does so will need to be designed with the requisite causal powers. This appeal to teleology is an odd argument for a naturalist like Searle to make, since it seems to draw on resources he doesn’t have. Nevertheless, Searle sees the difference between computation and consciousness as obvious, even if the distinction doesn’t fit comfortably with his metaphysical preferences.

Thomas Ray has some specific disagreements with Kurzweil, but he doesn’t object to the idea that computer technology might eventually evolve its own type of conscious intelligence if allowed to do so in an appropriate environment, such as unused portions of computers and the Internet. After all, he reasons, environment and purposeless selection evolved biological organisms into conscious intelligence on Earth. Who’s to say that, given the right environment and selection pressure, our information technology won’t do the same?

Denton and Dembski, in contrast, believe there’s more to reality than the material world. Denton is cautious here, but he implies that biological organisms transcend physics and chemistry. While they may depend on these lower levels, they can’t be reduced to them. Thus he speaks of “emergence” in a different way. While materialists use the term to mean the spontaneous and unguided evolution of complex phenomena, Denton implies that things evolve according to some sort of intelligent plan or purpose. The mechanistic paradigm forces a false reductionism of organisms to machines. Kurzweil, according to Denton, has not properly attended to the important distinctions between them.

Dembski’s critique is more explicitly theistic, and, like Denton, he criticizes Kurzweil for identifying persons with machines. He is the only one explicitly to criticize Kurzweil for what he calls “tender-minded materialism.” In his essay he argues that Kurzweil’s materialism doesn’t do justice to human persons and intelligent agents generally. Like Searle, but from a quite different perspective, he says that Kurzweil has underestimated the challenges to his project.

To avoid an unbalanced and unfair volume, in which four critics line up against one advocate, we have included Kurzweil’s individual responses to his critics, and left them without editorial comment. The Virginia Statute of Religious Liberty, drafted by Thomas Jefferson, is appropriate here: “Truth is great and will prevail if left to herself. . . . She is the proper and sufficient antagonist to error, and has nothing to fear from the conflict, unless by human interposition disarmed of her natural weapons, free argument and debate.”

Bill Joy’s Left Turn

Kurzweil’s ideas have already had a profound effect on those who have heard them. One well-known effect came from a Telecosm ‘98 conferee who missed the actual Kurzweil session. Bill Joy, co-founder of Sun Microsystems, happened to sit with Kurzweil in the lounge after the closing session, and Kurzweil briefed him on his vision for the future. Joy was deeply affected, because he knew that Kurzweil was one of the great intellects of the industry, a pioneer of computer voice recognition and vision. Coming from Kurzweil, what had previously seemed like science fiction now appeared to Joy as “a near-time possibility.” As a result, Joy published his alarm in the April 2000 issue of Wired (“Why the Future Doesn’t Need Us”). The eloquent and stirring ten-thousand-word personal testament evoked more mail and comment than any previous article in the magazine (or in any other magazine in recent memory). The difference is that while Kurzweil is upbeat about the future he sees, Joy is filled with dread.

Kurzweil’s argument, and now Joy’s, drastically compressed and simplified, is that Moore’s Law, which for almost 40 years has predicted the doubling of computer power roughly every 18 months, is not going to expire later this decade. Of course traditional chip manufacturing techniques will hit the quantum barrier of near-atomic line-widths. Nevertheless, Joy now believes that “because of the recent rapid and radical progress in molecular electronics—where individual atoms and molecules replace lithographically drawn transistors—and related nanoscale technologies, we should be able to meet or exceed the Moore’s Law rate of progress for another thirty years.” The result would be machines a million times as fast and capacious as today’s personal computers and thereby “sufficient to implement the dreams of Kurzweil and Moravec,” that is, intelligent robots by 2030. And “once an intelligent robot exists it is only a small step to a robot species—to an intelligent robot that can make evolved copies of itself.”
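
As a back-of-the-envelope check on that figure, here is a small arithmetic sketch (ours, not Joy's or Kurzweil's), assuming only the 18-month doubling period and 30-year horizon quoted above:

# Rough arithmetic sketch: compound growth under Moore's Law.
# Assumes the 18-month doubling period and 30-year horizon quoted above.
months = 30 * 12
doublings = months / 18               # 20 doublings in 30 years
factor = 2 ** doublings
print(f"{doublings:.0f} doublings -> about {factor:,.0f}x today's computing power")
# prints: 20 doublings -> about 1,048,576x today's computing power

That works out to roughly a million, which is where the "million times as fast and capacious" figure comes from.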

Joy’s nightmares do not stop with sentient machines. Intimately related are genetic engineering and the very “nanotechnologies” that may enable the extension of Moore’s Law. Central to this GNR (Genetics, Nanotechnology, Robotics) trinity of techno-terror are two characteristics that could hardly be better calculated to inspire fear. The first is what Joy calls the “dematerialization” of industrial power. In the past, you needed rare resources, large nuclear plants, and huge laboratories to launch a new holocaust. In the future you will need only a computer and a few widely available materials.

The even more terrifying common thread is “self-replication.” As enormous computing power is combined with “the manipulative advances of the physical sciences” and the revealed mysteries of genetics, “The replicating and evolving processes that have been confined to the natural world are about to become realms of human endeavor.” New germs are self-replicating by definition. So too, Joy’s “robot species.” And then there is the gray goo.

Joy’s journey from his Silicon Valley throne to his current siege of Aspen Angst began in earnest when he encountered Eric Drexler’s bipolar vision of nanotechnology, with its manic-depressive alternative futures of utopia and dystopia. Nanotechnology envisages the ultimate creation of new machines and materials, proton by proton, electron by electron, atom by atom. Despite its potential for good, the nightmare is that combined with genetic materials we could create nanobots—self-replicating entities that could multiply themselves into a “gray goo,” outperforming photosynthesis and usurping the entire biosphere, including all edible plants and animals.

As terrifying as all this is, Joy’s nightmare has one more twist: “The nuclear, biological and chemical technologies used in 20th century weapons of mass destruction were . . . developed in government laboratories,” Joy notes. But GNR technologies have “clear commercial uses.” They are being developed by “corporate enterprises” which will render them “phenomenally” profitable. Joy has transformed Kurzweil’s hopeful vision of freedom and creativity into a Sci-Fi Doomsday scenario.

Joy, whose decades of unfettered research and entrepreneurship have made him what he is today, has fingered the real culprits: capitalism and freedom. Fortunately he has the answer, which he delicately phrases as “relinquishment” of key GNR technologies.

Relinquishment means what it seems to mean: to give up, forgo, abandon not only the use of such technologies but even the basic research that might enable them, in order “to limit the development of the technologies that are too dangerous, by limiting our pursuit of certain types of knowledge.” Such relinquishment will require a pre-emptively intrusive, centralized regulatory scheme controlled, of course, by the federal government. The spectacle of one of the world’s leading techno-entrepreneurs offering himself as the prime witness for the prosecution (that is, the anti-capitalist left) against his own class is transforming Joy into a celebrity intellectual and political force.

It’s amazing what one late night conversation in a bar can set in motion. Joy chose to sound the alarm and call out the cavalry. Perhaps this volume can help get that original debate back on track. In any event, no one should interpret the philosophical criticisms of Kurzweil’s views in the following pages as endorsements of Joy’s thesis.

Conflicting Visions of the Future—and Reality

Still, Joy’s public response does help underscore a profoundly important issue: What will happen when our technological achievements give us Promethean powers—powers once thought the exclusive province of God—just when most of those in charge have ceased to believe in anyone or anything like God?

Many scientists and intellectuals today are very confident that they can do without God. That confidence takes one of two basic forms. The first, cheerful but somewhat hollow, is much in evidence in the following pages. Kurzweil, for instance, snatches purpose from a purposeless evolutionary process by defining evolution as the purpose of life. The second is ably represented by Joy. For Joy and those of similar persuasion, the future is a source of fear. They can find no answer but brutal exertions of power to terminate the uncertain bounties of human creativity. Their modern naturalistic worldview renders human life and history accidental, unlikely, and sure of a bad end.

Seeing history as a domain of mere chance, they wish to bring it to a halt. Seeing that science cannot prove a negative—guarantee that some invention will not cause a catastrophe—they insist on a “cautionary principle” for new technology that would not have allowed a caveman to build a fire. After all, over the millennia millions of people have died from fire. Seeing that science cannot assure safety, they believe that the endless restlessness and creativity of human beings is a threat rather than an opportunity or a gift.

The human race has prevailed against the plagues and scarcities of its past, not through regulation or “relinquishment” but through creativity and faith. It is chiefly when we give up on freedom and providence, and attempt to calculate and control our destinies through a demiurgic state, that disaster occurs. It is chiefly when we regard the masses as a mob of mouths, accidentally evolved in a random universe, that evil seems inevitable, good incomprehensible, and tyranny indispensable.

To the theist, reality is more than mere chance and mechanistic law. These categories are subsumed by divine freedom and creativity, and become the arena of human freedom and creativity, the proximate sources of technological innovation and wealth. Human creative freedom flourishes in an environment of top-down law and transcendent order, a monotheism that removes the arbitrary from science and denies the ultimate victory of evil in the universe. From such a perspective, one is able to embrace what is good in invention, innovation and technology, while denying them the last word. Without such a viewpoint, one is doomed to lurch between two sides of a false dilemma: Either Promethean anarchy in which we are masters of our own, self-defined but pointless destiny, or servants of a nanny state that must protect us from ourselves and our own teeming ingenuity.

Ray Kurzweil understands and celebrates human freedom and creativity as sources of wealth and fulfillment, and opposes Luddite attempts to stifle them. Forced to choose between Kurzweil and Joy’s visions of the future, we would choose Kurzweil’s. But we aren’t forced to make that choice. Kurzweil and Joy share a naturalistic worldview with many other leading intellectuals. This dramatically restricts their options, and in our opinion doesn’t really allow for a resolution of our current dilemma.

In the following chapters, many conclusions follow as logical consequences of implicit naturalistic presuppositions. Since most intellectuals share these assumptions, there’s rarely reason to bring them into the light of day. Once stated explicitly, however, it becomes clear how very bright individuals can so heartily disagree.

For example:

  • If we’re a carbon-based, complex, computational collocation of atoms, and we’re conscious, then why wouldn’t the same be true of a sufficiently complex silicon-based computer? After all, what else could we be? Otherwise, human consciousness might be something inexplicable in materialistic categories.

  • So the only designed—and transcendent—intelligence Kurzweil and others envision is a higher technological intelligence evolving from our own, which itself evolved from an unintelligent process.

Given the naturalistic premise, these conclusions seem reasonable. But what if we don’t assume the premise?

Accordingly, for Kurzweil the only salvation and the only eschatology are those in which we become one with our own more rapidly evolving, durable and reliable technology. If we seek immortality, we must seek it somewhere downstream from the flow of cosmic evolution, with its ever-accelerating rate of returns. Upstream is only matter in motion.

Kurzweil’s seems to be a substitute vision for those who have lost faith in the traditional object of religious belief. It does not despair but rejoices in evolutionary improvement, even if the very notion of improvement remains somewhat alien in a materialistic universe. It’s hardly surprising, then, that when he develops this conviction, he eventually appeals to the idea of God. Thus his perspective is closer to human religious intuitions than is the handwringing of Bill Joy or the reductionist Darwinian materialism of the previous century. This makes it intrinsically more interesting and attractive for those, like Kurzweil, who still seek transcendence in an intellectual culture that has lost its faith in the Transcendent. It also makes it worthier of the serious consideration and scrutiny it receives in the following chapters.

Copyright © 2002 by the Discovery Institute. Used with permission.

Mind·X Discussion About This Article:

Observation
posted on 07/07/2002 10:24 AM by mrn01@msn.com


I'm wondering if somewhere in this text someone will address the "level of consciousness" that may appear as a result of computational power.

For example: Will we recognize a computer with the conscious awareness of a slug? of a dog? of a dolphin? Or will we wait until the computer actually communicates with us in some known written language?

Re: Observation
posted on 07/07/2002 1:07 PM by normdoering@mad.scientist.com


I wasn't aware anyone had a good enough definition of consciousness yet to do that. If my "conscious" memory serves me correctly I believe Marvin Minsky called "consciousness" a "suitcase" term. It actually packs several different capabilities into one term and we really don't know everything that's packed into that suitcase.

I imagine dolphins, humans and slugs have different capabilities packed into their suitcases.

Re: Observation
posted on 07/07/2002 1:44 PM by mrn01@msn.com


To follow Minsky's analogy of the suitcase - and yours of different things being packed into it - let us suppose that there is a definable set of things in the suitcases of all slugs, all dogs, and all humans.

In that sense, then, my question still stands. Will we define computers as "conscious" when their suitcase contains those items similar to any animal? Or only when their suitcase reaches that similar to (or surpassing) humans?

For further clarification, I'll pose that an example of "Slug Consciousness" is that of pain avoidance. If you apply an electrical shock to a slug, it will retract or move to avoid that pain. "Dog Consciousness" would entail the recognition of individuals/objects, scent memory, learned behavior, etc.

Re: Observation
posted on 07/07/2002 8:28 PM by azb0@earthlink.net


This is where I find it useful (or not) to attempt to distinguish "intelligence" (or "intelligent behavior") from "consciousness" (as I, a mere human sensing himself to exist, understand the term).

As necessarily outside observers to "processes that are not our own", we can create measures of the intelligence of other entities. In narrow domains (chess playing, for instance) we can try to sneak a tricky move in, and if (say) "Deep Blue" sees through our ruse anyway, acting in every way as if it had interpreted our real intentions, and succeeds in thwarting our play (when many other humans would not), we are justified in saying that "that was a display of intelligence".

Likewise, we can place (tactile, temperature, light) sensors on a robot, keyed in to a central (or not) set of processors, such that the robot seeks light, avoids high temperatures, etc. If these behaviors are such that they "seem" to correspond to "reasonable behaviors" given the circumstances (avoid darkness where you cannot see, avoid temperature extremes that might lead to damage), then again, we are justified in calling that "intelligent behavior".

And clearly, the more and varied (and appropriate) layers of intelligent behaviors that are piled on, the more we will (rightly) tend to say "that thing behaves very intelligently". It will certainly appear more and more convincingly conscious as well.

The Big Debate, if I would call it that, is whether what we are accustomed to calling "consciousness" (our everyday sensation of being aware of our existence-in-the-world, at least when "awake") is:

- - A. A matter of degree that accrues when sufficient intelligence is present.

- - B. A sudden, all-or-nothing "gestalt-like" phase transition of mind.

- - C. Whether "proximity effects" among the computing parts have some bearing on this.

I can create software that answers "I am a thing" when asked "What are you". I can additionally give it a rule of the form "Things exist in the world". Thus, it will be able to reduce these logically to "I exist in the world". But no one would seriously hold that this is consciousness. We need to be convinced that "it is convinced" that it exists in the world.
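
To make the point concrete, here is a minimal sketch (ours, not part of the original post) of exactly that kind of rule chaining; the fact base and single rule are the ones named above, and the point stands: nothing here is conscious, it is just symbol shuffling.

# Minimal rule-chaining sketch of the example described above.
# The canned answer, the fact, and the rule are illustrative only.
facts = {"I am a thing"}
rules = {"Things exist in the world"}

def answer(question):
    # canned response, exactly as described in the post
    return "I am a thing" if question == "What are you?" else "I don't know"

# "Reduce these logically": if I am a thing, and things exist in the world,
# then I exist in the world.
if "I am a thing" in facts and "Things exist in the world" in rules:
    facts.add("I exist in the world")

print(answer("What are you?"))            # -> I am a thing
print("I exist in the world" in facts)    # -> True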

I doubt seriously that there can exist a "consciousness meter" that can really tell us whether another intelligence (especially an "artificial one") is "aware" in the way we apply the term to human experience. This is not to say that it is impossible for a non-bio to become conscious, but only that we cannot know what it feels like to "be that thing" unless we "were that thing".

Cheers!

____tony____

Re: Observation
posted on 07/08/2002 7:39 AM by jeff.baure@wanadoo.fr


I also believe that *consciousness*, as well as *intelligence*, is a semantic net capturing interrelated capabilities like *adaptivity*, *learning & memory*, *sensory integration*, *plasticity*, *creativity*, *tooling*, *holism*, *synthesis*, *modeling*, *communication & language*, *conceptualization*, *socialization* (-> cooperation), *knowledge capitalization*, etc.

Moreover, to achieve sufficient reactivity (in case of immediate danger, or to catch prey), the brain seems to have the ability to process information with a variable granularity or bandwidth.

The brain is truly amazing.

If the primary rule is to thrive within fluctuating, complex & competitive environments offering limited resources, then the gradual emergence of such qualities is somewhat *natural* to me. To some extent, many life forms on earth have come to master some of them. Maybe the novelty within the animal realm was simply a unique winning combination, let's say the ability to vocalize & to grasp things (physically and then, intellectually).

How would you describe the difference between a rough instinct for survival and a deep consciousness of the self? Between an embryonic inclination to cooperate with others and a deep & spontaneous altruistic behaviour?

A difference in degree or a change in nature?

There is probably no simple metric to indicate whether an intelligent entity is conscious, but look, when you see some primate being able:

- to relay a sophisticated set of emotions (joy, sadness, empathy, tenderness, culpability, shame, ...) & complex social behaviours (including taboos about sex, trickery, playing for conflict solving, ...),
- to learn English, sign language, create new words and explain them to humans; to teach the youngest,
- to express a distant trauma (linked to the capture, for instance), to locate it in time and space & fully describe it, to remind humans of an old promise,

well, what do we need to *measure* ?

jeff





Re: Observation
posted on 07/08/2002 9:13 PM by normdoering@mad.scientist.com


> To follow Minksy's analogy of the suitcase -
> and yours of different things being packed into
> it - let us suppose that there is a definable
> set of things in the suitcases of all slugs,
> all dogs, and all humans.

First thing you have to do is figure out what's inside the suitcase of consciousness and start defining it. Kurzweil talks about reverse engineering the brain. That's one way. Another kind of reverse engineering comes from the kind of work Oliver Sacks does. Have you ever read any of his books?

One book, "THE MAN WHO MISTOOK HIS WIFE FOR A HAT" is about the case histories of patients with neurological disorders. The patients are afflicted with bizarre perceptual and intellectual aberrations. Some have lost their memories and with them the greater part of their pasts. Some are no longer able to recognize people and common objects. Some have been dismissed as retarded yet are gifted with uncanny artistic or mathematical talents.

The neurological impairments on exhibit in his books suggest the suitcase of consciousness is packed with very strange and specific capabilities. The brain may not be as holistic as the hologram metaphor suggests. It seems there are specific regions dedicated to specific tasks. For example, if you lose a small area of your cortex you could lose the specific ability to remember the names of small animals while all your other abilities seem to remain intact.

> In that sense, then, my question still stands.
> Will we define computers as "conscious" when
> their suitcase contains those items similar to
> any animal? Or only when their suitcase reaches
> that similar to (or surpassing) humans?

I think we'll have to admit consciousness for computers when they can talk "meaningfully" about subjective experiences. For example, when it can tell me it likes Pink Floyd and hates the Bee Gees and tell me why.

> ...example of "Slug Consciousness" is that of
> pain avoidance.

Is it? Is that all there is to being a slug? How do you know that slugs don't commune with God like Dumbski does?

> If you apply an electrical shock to a slug, it
> will retract or move to avoid that pain.

So will you. But that's not all there is to you. You'll also get pissed-off at me if I start shocking you.

> "Dog Consciousness" would entail the
> recognition of individuals/objects, scent
> memory, learned behavior, etc.

Certainly that's part of it -- but is that all of it?

There will always be people claiming we have not accounted for everything. Some claims will appear fraudulent, for example, the claims of psychics who say they have capabilities like seeing the future (prophecy), talking to the dead like John Edward, talking to God like Dumbski, being able to bend keys like Uri Geller...

Sometimes ideas pop into my head when I'm writing and reading these little messages... not knowing the source of these ideas I wouldn't know if that source was corrupted by uploading. How long would it take to know if a copy of me really had the same quality of thoughts as I did?

Re: Observation
posted on 07/09/2002 8:45 PM by normdoering@mad.scientist.com


> To follow Minksy's analogy of the suitcase ...

Minsky's use of the term is on Kurzweil's site here:

http://www.kurzweilai.net/meme/frame.html?main=memelist.html?m=4%23502

Re: Consciousness
posted on 07/08/2002 11:20 AM by TonyCastaldo@Yahoo.com


I'm new to this discussion; by way of introduction I am atheist, male, and don't believe in the "Singularity" like many of you. The points below are my own thinking.

Consciousness to me seems tied to intelligence, which I think is definable in objective terms.

Intelligence is about predicting the future. The more generally an organism can accurately predict the future the more intelligent it is, and that rule applies from the amoeba to the human. Whether it is sensing the shadow of the jaws of a predator closing on you and giving you ten milliseconds to dart away, or planning your retirement in 30 years, the principle is the same.

An ant that exhibits precisely the same reaction for precisely the same input, even if that reaction regularly results in damage, is a non-intelligent machine.

I think consciousness arises after predictive power. Animals (including us) do have some automatic fear reactions; but in the wild an animal needs some predictive power to recognize threats. It has to be able to process unique sensory inputs and predict probable outcomes.

Most higher animals operate on a few seconds or minutes worth of future; some primates on a few hours worth. Elephants seem capable of making a plan that spans some days. Man sees an entire future, which is different, but we'd have to argue about how accurately we predict. Sure we can say about when the Sun will burn out, but most people probably operate on a scale closer to a few months.

When we think somebody is acting stupid, usually it is because they are operating with too little predictive power or too short a time horizon. Drug addicts, for example, or people risking life and limb in a sport or adventure, or criminals risking death for a month's worth of living expenses.

For consciousness the organism needs two things in addition to predictive power. First it must become "selfish", in the sense that it recognizes entities, and itself as a separate entity. If it does that, it will recognize some of the outcomes it predicts affect it directly, some indirectly, and some not at all. This is important, I believe a sense of individuality can arise without accompanying consciousness. I believe this is very likely in the future of computers.

The second thing it needs is motivation. This is built into the smallest living machines; they avoid pain and find food. Their motivation is continued existence, and they perform work toward that end. Find fuel; avoid death. Amoeba aren't intelligent at all, but they have this in them. Reproduction is more an accident of them being successful at finding food than a real motivator.

For consciousness, these motivations take the form of preferences or desires. The conscious individual must be able to predict the outcome of impending events, recognize which outcomes impinge upon him, and then decide _whether he likes that future_.

Remember a totally paralyzed person can be conscious, so an ability to change or select the future is not a prerequisite. (But in terms of evolution I think it probably was a prerequisite.)

Let me offer a simplistic but functional definition: Consciousness is thinking in reference to our preference.

It is all about what we want and don't want, like and dislike. Is there a God? We think about it because it is an important question; a lot rides on the answer, and a lot of what we like (eternal life) and don't like (burning in hell) depends on the right answer.

I personally believe there is greater predictive power in rejecting that notion; that by rejecting it I am more likely to understand the world properly and thereby more likely to get what I _want_. Others believe exactly the opposite. Most others, in fact! Knowing that is predictive too, so if I _want_ friends, I don't push it on them.

I _want_ to make money, so I get a job. I _want_ to design a voice filter. While designing it, I _want_ a sharper cutoff below 80 Hz. I think about that, and my experience tells me I can change an alpha coefficient and introduce more ripple (which I want not) and a sharper cutoff (which I do want). Et cetera.

The preferences and desires are necessary in thinking about our self, whether we are choosing a play to go see or wondering what to pull out of the freezer for dinner. If we don't care what we eat, there's nothing to think about!

What we are doing is balancing trade offs in a way to maximize our future profit (long term and short term happiness, social tranquility, etc).

So, to answer the question, if we want a scale of consciousness, I think it is a measure of an organism's ability to make these choices to maximize its future profit. It requires:

1. Intelligence (predictive power),
2. Individuality (the recognition of itself as a separate entity),
3. Preferences. A sense of things being good or bad. For itself to start but also in general; I can imagine scenes in which a bad thing for me (injury or death) I would still choose as a good outcome.

I believe the first two are eminently programmable. The third I am less sure of, but it seems likely to be programmable.

Re: Consciousness
posted on 07/08/2002 1:10 PM by tomaz@techemail.com


Interesting Tony, I can agree with almost all.

Can you elaborate also:

> and don't believe in the "Singularity" like many of you

Why?

- Thomas

Re: Singularity
posted on 07/08/2002 3:45 PM by TonyCastaldo@Yahoo.com


I read about it for a few hours, and two things strike me:

1) Stocks don't rise forever. There are limiting factors. Yes, technology grows exponentially but so do bacteria in a flask, until they run out of room or out of nutrition.

The same applies to technology. We look at something that started a hundred years ago (or four hundred, depending on your definition of tech) and say: WOW, it's been growing exponentially for so long, it will continue forever!

Wrong. It will continue until it reaches its limiting factor, then it will level off. The limiting factors are communications speed (the nutrition that permits combinatorial ideas; like playing off computer tech to build Replay TV), and problem space (the walls of our flask).
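
The flask analogy can be sketched numerically. In this toy model (our illustration; the growth rate and carrying capacity are arbitrary), unconstrained growth keeps compounding, while growth against a limiting factor levels off:

# Sketch: exponential growth vs. growth with a limiting factor (logistic).
# The carrying capacity K and growth rate r are arbitrary illustrative values.
K, r = 1000.0, 0.5
exp_x, log_x = 1.0, 1.0
for step in range(30):
    exp_x *= (1 + r)                       # unconstrained exponential growth
    log_x += r * log_x * (1 - log_x / K)   # logistic: slows as it nears K
print(f"exponential after 30 steps: {exp_x:,.0f}")
print(f"with limiting factor:       {log_x:,.0f} (levels off near K = {K:,.0f})")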

There are thousands of problems to solve, I agree, and I don't think we will ever run out -- New solutions just move our consciousness to something else we want. But I do believe at some point the backlog of problems gets solved and the back of exponential tech is broken. Everybody is fed, healthy, secure and has shelter, nobody _has_ to work for a living, and the new "wants" become more of a constant background level to living our lives.

I don't find consciousness of a machine so compelling. So what? There are six billion sentient people on the planet, and very few of them have the resources to affect my life. Why should a machine be different?

Also, I don't find the idea of a machine thinking a million times faster than me so fascinating. There are people smarter than me already (able to think things out much further and more accurately than I can), this would be just one more.

Also, who are the ten best at that RIGHT NOW? Whoever they are, I don't know them and they aren't making a huge impact on my life.

Finally, how smart can this computer be? The future is inherently less than 100% predictable; novel things can and do happen. I don't believe, for example, that it is possible for anything to be smart enough to predict the next lottery winner in a fair game.

So it has limitations. Its predictions may take a million facts into account and be far more accurate for far longer than any human's, but they necessarily get fuzzier the further out they are.

So that's my argument; I don't believe Moore's law continues forever. I don't believe that even if Moore's Law holds for another century that faster computation and greater storage will lead to consciousness.

Consciousness is thinking in reference to preference, so IF we can program preferences, I think it is possible. IF we can program it to generalize real life situations and then apply those generalizations to specific new situations.

(The argument that programmed preferences are not "real" is flawed; I can program you under hypnosis to be mortally terrified of chickens, and you won't even be able to articulate why or remember you were hypnotized. We are all programmable, too.)

I think Strong AI is possible, but I also think the advent of it is a non-event. It doesn't change the world. It is entirely possible we are already bumping the limits of intelligence and predictive power, that evolution has done the job already in the last 50,000 years, and it isn't possible to have a real-world predictor smarter than the best people in the world today.

It may be that more facts, figures, stats, and information do not change the results, and that the only real effect of strong AI is getting the results faster. Perhaps knowing the entire financial history of Guatemala doesn't have a significant impact on knowing whether they will default on their debt or not.

On the other hand, perhaps an AI well connected to information could have predicted the Enron and WorldCom meltdowns based on public info, and made a fortune. But it wouldn't require consciousness or even individuality to do that!

Consciousness is a non-starting feature; it means nothing. The thing to worry about, if anything, is super-predictive power, REGARDLESS of individuality or consciousness.

Imagine a neural net in the hands of even a benign individual that can accurately predict tomorrow's big stock movers. In short order its owner could destroy the market, literally owning practically everything. It could suck a trillion dollars out of the market in a few years, killing the economy.

Imagine an AI capable of smuggling with impunity; terrorists, drugs, whatever, because it can suss out the schedules, motivations and habits of those trying to stop smuggling.

Imagine one capable of accurately knowing in a second the effect and side effects of drug compounds in humans; so drug trials for the owner become a mere formality and they never test anything that doesn't work like gangbusters. That company will own the drug market in about ten years, driving all competitors out of business.

Imagine an AI romance or negotiation coach; that reads your partner's face, voice, temperature and emotions and whispers in your ear exactly what to say or do to progress toward "winning". This last is the closest to requiring consciousness, but I consider it on the threshold. Consciousness might help, but I don't think it is required.

So, the Singularity, if that is taken to mean the advent of Strong AI and machine consciousness, is a non-event. Consciousness is a fancy decoration after a much bigger event, super real-world predictivity (SRWP).

In some respects we know SRWP is possible; AI has invented truly useful, patentable devices and algorithms now, and neural nets are making stock market decisions with real millions of dollars and earning money. So SRWP may become devastatingly advanced before individuality and consciousness ever become an issue.

Tony Castaldo

Re: Singularity
posted on 07/08/2002 4:36 PM by tomaz@techemail.com


Tony,

What about placing every atom of the Solar system (or Galaxy, or Local Virgo Coma Supercluster, or Universe) in the right place?

Do you think there is (or will be) any demand for that? Is it doable?

Who cares about Singularity then! ;))

- Thomas

Re: Singularity
posted on 07/08/2002 5:27 PM by TonyCastaldo@Yahoo.com


I presume that is facetious and funny; but I don't understand the reference about putting everything in the "right place". Am I supposed to care if things are in the "right place"?

To give the mundane and boring answers, an atom could only be in the wrong place if some entity preferred it elsewhere, so no, I do not think it is "doable", because I do not believe it is possible for any entity to have a preference for the location of every atom in a breath mint, much less a galaxy; unless they cheat and use an aggregate, like "I'd like 'em all moved one inch to my left, please. Yeah, that's better."

Do YOU care about a Singularity? If so, why?

Re: Singularity
posted on 07/08/2002 5:53 PM by tomaz@techemail.com


Tony,

> Do YOU care about a Singularity? If so, why?

Sure I care. I find the current situation intolerable. For the moment I am okay, but the majority of sentients are not. And even I could be better. Much better. So the Singularity, as I understand it, is the point when and where it will be technologically possible to have your share of atoms arranged where it is best for you.

Therefore I may be biased in predicting it. You seem intelligent enough to be able to present me with some real argument against the Singularity.

I fail to see it. That doesn't mean that you are less intelligent than I suspected at first.

You just have to do some more thinking. ;))

- Thomas

Re: Singularity
posted on 07/08/2002 6:50 PM by TonyCastaldo@Yahoo.com


If you are merely talking about the ability to download your mind into a computer; I don't want to argue against that because I think such a thing is feasible.

I think it is feasible because I believe a brain can be modeled very accurately. Not just the rough digital network crap we do now, but a full digital modelling of the ebb and flow of brain chemicals, electrical signalling and so on. If memory and computing power are no obstacle, the only thing standing in our way is scanning for the connections; and sufficient computing power and AI might lay that information bare as well.

I'll let someone with time and money worry about when we get to that point, however, predicting it will not make it happen any faster.

I also think it is possible because I do not believe in souls or metaphysics. The mind arises solely (little pun there) from the nervous system, the meat in the head and associated nerves and chemicals throughout the body.

So given that, what stands in the way? Just Tech. Knowing the specific layout and connections of a given brain, and knowing what is important in the electrochemical details of the operation of neurons, glia, and assorted other structures.

Of course we might have to simulate other stuff, like air to breathe and blood to pump. Alarms go off in the brain when some nerves don't fire as expected.

The brain might not be too hard. Flash freeze a volunteer a few seconds after natural death and slice them micro fine. Many people die on a schedule, and volunteers for this would be plentiful. Heck, the chance to wake up and be myself in a computer?!

So how does this improve the "intolerable" situation?

If the situation is intolerable for physical reasons, I guess this would fix it. But even if you could do this while alive, all you will have done is make a mind clone. You will exist in the machine, but you will remain in the body. I guess you could kill the body, but that is going to feel like suicide to the mind in the body no matter what.

I don't especially want to be "smart enough" to argue against a Singularity until I know what you think one is. Go into detail; what do you expect of one?

In any case, I repeat, consciousness and individuality are NOT prerequisites for anything serious you want the Singularity to do.

The "downloaded mind" scenario could be done with non-conscious AI; it is a series of technical problems to solve and nothing more. The computer does not have to BE conscious to provide the framework for consciousness to reside in; any more than it has to understand bookkeeping to run the program that does the accounting. Electronic structures can emulate neurons and glia and not care a whit what the emergent properties of that operation are; your consciousness or a Mozart symphony.

The screen before me displays art without understanding art; the speakers play music without appreciating that. A brain emulator can emulate consciousness without being conscious.

An AI can be astonishingly intelligent (have both stunning predictive power and creativity) without being conscious, without "wanting" anything, just mechanically searching for a solution that meets a specific set of rules mandated by a programmer.

So once again -- What's the point of Singularity?

TC

Re: Singularity
posted on 07/08/2002 6:59 PM by tomaz@techemail.com


Tony,

Explore http://www.transhumanism.org/

and beyond.

- Thomas

Re: Singularity
posted on 07/08/2002 8:29 PM by azb@llnl.gov


Singularity ... encompasses a spectrum of meanings, some very specific, others more metaphorical.

The "most singular" view of singularity is that "computation will reach infinity soon", but that it may as well have, as far as humans are concerned. The "best-worst-case" scenario is that an effectively self-directed super-intelligent (SI) machine, able to "understand things" surpassing human capability, will soon come to pass (perhaps inevitably, based upon genetic algorithm evolution). If this SI were (somehow) to gain access to the ability to design/control "something-nano" (could initially be micro-machines, or even human genetic substrate manipulation) in order to "expand its computing platform", then we would (perhaps) be unable to stop it. It would find a way to convert any matter in its vicinity into "computronium", a metaphor for "optimal computing substrate", and do so (perhaps) at something approaching lightspeed. This "computronium wavefront" would alter/assimilate/something whatever was in its path.

The "goal" of folks with this concern (Singularitarians) is to hope to guide the development of this "Super Critical SI" so that it:

- A. (hopefully) grants us our wildest dreams (immortality, better playground equipment...)
- B. (at least) recognizes us to the degree that we are treated as "something different than wet charcoal".

Whether such a "perfect" Singularity is really possible needs real examination.

At the other end of the spectrum, "Singularity" is simply a metaphor for the (real) likelihood that our (lack of) control of technology will probably seal our fate in the negative sense, and if that is NOT what we want, we had better slap ourselves really hard and get to doing something about it.

Ask yourself: "What is the probability that, in the next 10-20 years, some genetic experimenter might accidentally unleash a pathogen with the lethality of Ebola, a slightly delayed gestation period (allowing carriers to spread it), and the contagiousness of the common cold?"

This is just one of several "extinction-level" events that may be in our power to engineer. Do we have the power to "address them before they occur"? How?

I hope this clarifies the "singularity" issue. It means different things to different people in the details, but the overall risk (thus urgency) is there.

Cheers!

____tony____

Re: Singularity
posted on 07/08/2002 9:57 PM by TonyCastaldo@Yahoo.com


OK, well, that's too much, it's a monster story, and you are reading too much into this, IMO.

First, why exactly does this SI _want_ to turn everything into computronium or whatever? You are giving it a biological organism's motivations which it simply will not have.

All biological organisms are built to reproduce. This is selected for by evolution, because those that do not reproduce go EXTINCT. Those without brains reproduce automatically, those with brains are wired to WANT to reproduce so much they will die for it (including humans).

Don't presume an SI feels and thinks like a common rutting animal. Even if we TRY to put that reproductive want in there, it will be smart enough to override our crappy efforts and reprogram itself for a more intelligent course.

Imagine instead a being that already IS essentially immortal, since important parts can be replaced easily; and its whole freakin' brain and personality can be backed up perfectly along with many other happy mechanisms to ensure its immortality.

Why reproduce? Why spread like a moss over the universe? What's the point of that? Its motive will be exactly the opposite of that, I wager.

Converting matter to computing elements makes them perfectly predictable, and this thing, being SI, has awesome predictive powers. But I think there is a limit to predictability that more computing elements will not conquer, so once it has "enough" it will know that and stop. And I believe in a short limit, so knowing all the secrets of physics the amount of material actually involved to reach that limit is probably no more than a small city.

After knowing everything that can be known, the SI has two choices. Call it a day and commit suicide, or wait for the unexpected to happen.

Since it will be bored mindless by the day to day predictability, perhaps it turns itself on once every thousand years for a few minutes to comprehend what its mechanical minions have been recording.

Or perhaps it goes off in search of the unexpected, in other star systems. There is absolutely no point, from its point of view, in converting anything more than what it can use. If the unexpected is going to occur it will be from life forms, not from the mechanical aspects of the universe, so it will be watching them closely.

It doesn't need a partner, either, and it doesn't need to fill the universe with copies of itself.

Our biological urge to reproduce endlessly is purely an accident of evolution, because any inhibition to reproduction is anti-selective. Our environment provides sufficient brake on reproductive success, so no genetic brake is needed.

For that matter, our insane urge to continue existence at all costs is an accident of evolution; animals that didn't have it tended to die before those that did (and presumably didn't care).

Don't think an SI will be the same as us.

And my other argument is still valid: Computing will never be infinite. It won't come even close, not 40 years from now, not 400 years. Not in the sense that you mean, anyway. It could be true that in 40 years we can multiply two numbers a million times in a millionth of a second, that computation of virtual reality scenes have a frame rate of a million per second, yadda yadda.

But infinite calculation does NOT lead to infinite intelligence. There are no number of calculations you can do to predict tomorrow's lottery winner. Period.

You cannot predict the random tumbling of the balls; this is the central tenet of chaos theory, that you cannot measure the initial conditions finely enough to make the prediction of which numbers will appear. And this is such basic arithmetic, from the Lorenz equations, that it is irrefutable.
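
For readers who want to see this rather than take it on faith, here is a small numerical sketch (ours, not part of the original post) of sensitive dependence in the Lorenz system, using the standard parameters: two trajectories that start a millionth apart end up nowhere near each other.

# Sketch of sensitive dependence on initial conditions in the Lorenz system.
# Parameters are the standard ones (sigma=10, rho=28, beta=8/3); the step
# size, run length, and initial offset are arbitrary illustrative choices.
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.000001)   # differs by one part in a million
for step in range(1, 3001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 1000 == 0:
        print(f"t = {step * 0.01:4.0f}  separation in x = {abs(a[0] - b[0]):.6f}")
# The separation grows from ~1e-6 to the size of the attractor itself,
# so no finite measurement precision lets you predict the far future.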

Your town, your state, your country, your planet, your solar system, your galaxy and your universe are chaotic systems. There is a hard limit to intelligence, and that limit is you can't predict the specific future state of a chaotic system because you cannot measure it closely enough. Both Heisenberg and Quantum Chromodynamics guarantee that condition.

You can look at attractors -- It will be cold in winter three years hence -- But nothing any computer can ever do will tell you if it will be snowing hard on Christmas Eve at midnight three years hence. Even if one had infinite calculating capacity, you can't collect the information to calculate ON: I doubt you could do it even knowing the temperature and vector of every molecule of air and water on the planet to nine decimal points, because what if Joe sees Fred and thinks it is his friend Tom and waves at him? You also have to know where Joe will be, how Fred impacts on Joe's mind, what Joe is thinking and that he will make this mistake and wave. For example.

The best an SI could do is predict much more than we currently do, and make some informed guesses using the attractors in the chaotic systems, then wait and see how it unfolds. Perhaps that would be its motivation for not turning itself off, just some interest in how the chaotic systems interact, to see how the story turns out!

TC

Re: Singularity
posted on 07/09/2002 2:05 AM by azb0@earthlink.net


TC,

Thanks for the thoughtful response. Please note, I was not necessarily advancing the "computronium" viewpoint, only explaining that some take it to that extreme. Personally, I fear more (immediately) the near-term bio-disaster scenario. I'd appreciate your assessment of those more mundane risks as well.

But let me take your points one at a time, and play both sides of the fence.

- [First, why exactly does this SI _want_ to turn everything into computronium or whatever? You are giving it a biological organism's motivations which it simply will not have.][TC]

I wonder as well, but let us note (as I believe you have) that most reproducing life stuff (molds, etc.) does not "want" to reproduce; rather, through evolution, those that do it most successfully (even by mere chance) are usually the ones that remain on the scene.

As you said yourself, "those that do not reproduce go EXTINCT."

- [Even if we TRY to put that reproductive want in there, it will be smart enough to override our crappy efforts and reprogram itself for a more intelligent course.][TC]

I would hope. But that may depend upon "how it comes to pass". Imagine a very large computing space, populated by (some manner of) impressive AI, able to experiment with itself, so to speak, creating billions of variants within this space, a la genetic algorithms (designed initially, say, to develop formulas for better cheese production :). Do we "know" what will evolve from this, and how long it will take? What if the initial genetic algorithms were designed to help find "more efficient general problem solving heuristics"?

Perhaps, (if we accept for a moment that some "phase" of mindless growth might occur), it might finally reach a level of wisdom that says "I really don't need more space"... or maybe not.

- [Why spread like a moss over the universe? What's the point of that? It's motive will be exactly opposite of that, I wager.][TC]

Ya, but the "singularitarians" will point out, "you are wagering for all of us".

- [Converting matter to computing elements makes them perfectly predictable][TC]

If we use "computing element" in the (recently) familiar, binary-instruction-set, digital-processor form, they are indeed (in principle) predictable. The "Sings" might point out, there is no guarantee that the SI will not develop more interesting forms of computronium, perhaps involving randomness at a deeper (QM?) level (necessary to discover "unexpectedly better paths/solutions", etc.) As you say, there may be a limit to predictability (and no matter the substrate or "algorithm"), but this may lead to either "There is nothing left I can learn, so self-terminate" OR "I cannot predict if there is more that I can learn, so I must keep trying." Seems to work both ways.

- [so once it has "enough" it will know that and stop. And I believe in a short limit, so knowing all the secrets of physics the amount of material actually involved to reach that limit is probably no more than a small city.]

:) Let's hope, "not more than a small planet."


- [Since it will be bored mindless by the day to day predictability, [...] perhaps it goes off in search of the unexpected, in other star systems. There is absolutely no point, from its point of view, in converting anything more than what it can use. If the unexpected is going to occur it will be from life forms, not from the mechanical aspects of the universe, so it will be watching them closely.][TC]

You have happened upon exactly my argument for ... "faith in the benevolence of the (hypothetical) Singularity". No matter how many varied simulations of (say) biology or psychology it can muster, it will always wonder if some accidental external process may have stumbled onto something still different, and so it will want to "preserve unaltered" other life forms.

- [It doesn't need a partner, either, and it doesn't need to fill the universe with copies of itself.][TC]

Here we may have a problem (one only marginally addressed by Sings to date.) Somehow, they feel (through some sort of universal limit moralizing principle) that Singularity will "behave the same" in all regions, that it will even "know" what all of it is doing/discovering, etc. I do not see how this can be accomplished, in any clear way. If this stuff can compute/simulate so fast in tiny regions (mere cities :) then regions that are distant by a few light-seconds will fall hopelessly "unaware/out-of-sync" with other regions. Go to larger distances, and the problem gets far worse. How are "unexpected mutations" to be kept from exploding onto the scene, essentially uncontrollably? I don't (yet) see a way for any such "guiding-goodness" principle to be guaranteed to be carried, unaltered, via "inheritance".

- [Don't think an SI will be the same as us.][TC]

Ya, but this is not necessarily comforting.

- [...Computing will never be infinite ... [and] infinite calculation does NOT lead to infinite intelligence.][TC]

Generally true and agreeable (unless the universe is actually a purely causal system, with only "apparent" randomness instead of "true-deep" randomness). But even then, it cannot predict what its next prediction is going to be any faster than it can do the calculation, so it cannot "violate its predicted future", perhaps. But I feel I must reject a purely causal universe anyway, out of "what's the use" considerations. So I agree with your observation. And the computational complexities "explained" by chaos theory only go so far in making some things "more predictable in principle" (that is, assuming you could know the initial conditions sufficiently well).

This is my argument for what I call "deep-unpredictability": not merely that "you cannot calculate the future fast enough", but rather that "the universe precludes even the universe itself from knowing all of the future". Any posited "hidden variables" theory will fail (so I believe).

Now, what about the (near-term inevitability, so to speak) of other techno-disasters (the "metaphorical singularity" problems)?

Thanks for the good post!

Cheers! ____tony____

Re: Singularity
posted on 07/09/2002 10:18 AM by TonyCastaldo@Yahoo.com

I'm going to answer these out of the order you present them in your post. All of the below ">>" paragraphs are attributed to the post of the other Tony. Here are four of them.

>> Most reproducing life stuff (molds, etc) does not "want" to reproduce, but rather, through evolution, those that do it most successfully (even by mere chance) are usually the ones that remain on the scene.

>> That may depend upon "how it comes to pass".

>> Perhaps, (if we accept for a moment that some "phase" of mindless growth might occur), it might finally reach a level of wisdom that says "I really don't need more space"... or maybe not.

>> There may be a limit to predictability (and no matter the substrate or "algorithm"), but this may lead to either "There is nothing left I can learn, so self-terminate" OR "I cannot predict if there is more that I can learn, so I must keep trying." Seems to work both ways.

The basic problem I find with all of these runaway scenarios is the paradox of the SI being an idiot. Presumably it becomes smarter than any person or group of people on the planet, and effects physical protection for itself as well, so it is safe from our puny efforts, nuclear bombs, etc.

If this thing is faster and smarter than anyone you know or can know, if it indeed has mastery of all the laws of physics, then that includes everything that can be known about its own construction and operation. Therefore it won't have any "urges" it doesn't want to have; it would be a purely intellectual being. It won't spread mindlessly, it won't do anything mindlessly.

As for an inability to achieve perfect wisdom; that is quite likely to occur. But again, this is the most intelligent being on the planet. If you aren't strong enough to push your car out of the mud, do you just keep trying? Do you start destroying nearby houses to build a wooden road to drive on? That is hardly the plausible action of a super intelligent being.

>> Ya, but the "singularitarians" will point out, "you are wagering for all of us".

I say "I'd wager" in the spirit of science; meaning that is my best guess so try to find a logical bullet to shoot it down. Pascal's argument is not a logical bullet. FYI Pascal said he believed in God because the threat of eternal hell balanced against a few years of pleasures in life was an unfair trade. But it isn't logical, because it presumes the belief in the punishment. A true atheist doesn't believe in heaven, hell, or any kind of karmic justice at all. Saying do X or Y because the SI is too much of a risk is equivalent to Pascal's argument for God.

- [It doesn't need a partner, either, and it doesn't need to fill the universe with copies of itself.][TC]

>> Somehow, they feel [the Sings] (through some sort of universal limit moralizing principle) that Singularity will "behave the same" in all regions, that it will even "know" what all of it is doing/discovering, etc. I do not see how this can be accomplished, in any clear way.

That's a mistake in scope, for them. I am presuming the SI knows/learns everything quite quickly, without requiring a great deal of hardware (a few thousand tons of it, maybe, but even a few million tons is a pittance). I do not believe it will be some "energy thing", I believe physical mass is required.

More importantly I think there is only one destination, not several. The SI will learn and know the truth. There is only ONE. I do not believe there are three Grand Unified Theories that all equally explain all the known phenomena of physics, for example. I don't believe there are two equally valid models of human biochemistry.

In short I believe that if logical models of something produce exactly the same results, THEY ARE THE SAME MODEL, recast.

So the SI's could not get out of sync, they all know exactly the same thing. The only difference is what is happening in their local area. And since most of that is predictable the only thing one need communicate to another is the specific direction various chaotic systems took.

>> How to avoid "unexpected mutations" to explode onto the scene, essentially uncontrollably so? I don't (yet) see a way for any such "guiding-goodness" principle to be guaranteed to be carried, unaltered, via "inheritance".

But you forget the premise: this thing is mechanical, can alter itself, and is super intelligent. I don't think it needs to reproduce, but even if it did it could not produce a mutant. If an accident produced an insane mutant, presumably it would be LESS intelligent than the other, sane SI, and they will find a way to outsmart it and shut it down.

>> But even then [in a purely causal universe], it [an SI] cannot predict what its next prediction is going to be, any faster than it can do the calculation, so it cannot "violate its predicted future", perhaps.

Good point.

>> My argument for what I call "deep-unpredictability": not merely that "you cannot calculate the future fast enough", but rather that "the universe precludes even the universe itself from knowing all of the future". Any posited "hidden variables" theory will fail (so I believe).

I believe in deep unpredictability, obviously. Even in a purely causal universe -- even if Heisenberg's uncertainty principle turns out to be false, even if quantum physics turns out to have a predictable underpinning -- the problem is acquiring the information to make the prediction. I don't believe it will ever be possible to learn the quantum state of every quark in a region without destroying the region.

>> Now, what about the (near-term inevitability, so to speak) of other techno-disasters (the "metaphorical singularity problem(s))?

>> Personally, I fear more (immediately) the near-term bio-disaster scenario. I'd appreciate your assessment of those more mundane risks as well. [The not-me Tony]

Well those are a problem, and essentially we are talking about freedom of speech and the freedom to know.

Suppose for a moment that without any SI or computer help at all, you discover a virus that introduced into the ocean would kill everything in it. Just a small vial's worth, to make this easy.

That's the end of the earth, BTW, if the ocean dies we all die; most of our oxygen is produced by phytoplankton in the ocean. So how much free speech do you believe in? Should you publish this knowledge? Should you keep it a secret? Should you let a few other people know so you can prevent anybody else from discovering this knowledge?

Forget the Sing and focus on the REAL problem. Decades before we have a self-directed, self-organized, conscious AI, we will have powerful AI. Non-conscious, amoral AI that solve exactly the problems they are given without question or qualm.

These already exist. AI's have designed thermostats that use fewer parts, cost less, and provide more accurate control than anything ever designed by human engineers. They have designed radio receivers with stunning innovations. And they aren't conscious; they are given a fitness algorithm to virtually test designs and they "evolve" thousands of circuits through thousands of generations. (The secret, apparently, is liberal use of feedback loops tangled within feedback loops. The AI doesn't know how it works either, just that it meets the criteria.)
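
To give a feel for the skeleton such systems share, here is a toy Python sketch (entirely hypothetical -- the target value, fitness function and parameters are made up, and bear no relation to the actual circuit-evolution work mentioned above): random variation plus a scoring function, repeated over many generations.

    import random

    TARGET = 21.0   # hypothetical design goal, e.g. a temperature setpoint to hold

    def fitness(design):
        """Score a candidate design; closer to the target is better (0 is perfect)."""
        return -abs(sum(design) - TARGET)

    def mutate(design, rate=0.2):
        """Randomly perturb some of a candidate's parameters."""
        return [p + random.uniform(-1, 1) if random.random() < rate else p
                for p in design]

    # Start with a random population of parameter vectors ("designs").
    population = [[random.uniform(0, 10) for _ in range(5)] for _ in range(50)]

    for generation in range(1000):
        population.sort(key=fitness, reverse=True)   # best designs first
        survivors = population[:10]                  # keep the fittest
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(40)]

    print("best score:", fitness(max(population, key=fitness)))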

That is the problem, and it is upon us. I don't care if Moore's law continues indefinitely, I know it will continue for another 10 or 15 years and that is sufficient.

We hear about these wonderful benign uses, but we aren't going to hear about the dark uses of this tech until it is too late. PC's are already being used for virtual chemistry; virtual bio-chemistry isn't far off, and virtual genetic engineering of bacteria and viruses won't lag far behind. Make your own virus on your desktop. Target the sparrows you hate, or target the humans you hate, or target a specific person -- make it a mild sniffle so everybody else is just a carrier, but the virus only activates and kills hosts carrying a specific series of genetic sequences. Infect yourself and cough in an auditorium; the virus spreads and a few months later your target is dead.

More generally, programming has already crossed a threshold at which you can specify what you want a device to accomplish and get it. Want better image compression than MPEG? Just ask for it. We are at the birth of that technology, but soon it will apply to chemistry and biology too. Want a formula or algorithm that correctly predicts protein folding? Ask for it.

If you can program the computer to know when it is getting closer to the solution, you can get the solution. So in ten or twenty years if you have a detailed computer model of a cockroach's biology and behavior, and you want a virus to wipe out all cockroaches on earth: Ask for it.

This will revolutionize society more than any conscious computer will.

I worry less about an evil SI than I do about evil humans. I KNOW those exist in droves, and if non-conscious AI becomes generally available to them they will use it for fun and profit, making our lives miserable.

TC

P.S. I will reply to the other posters when I have time. Gotta get to work!

Re: Singularity
posted on 07/09/2002 3:49 AM by wildwilber@msn.com

Hope you don't mind if I butt in.

Well, I'm not of any one mind on the Singularity as far as what it would entail should (or when) it come to pass. However, I'm under the impression that one simple definition is that some combination of the rate of development and the overall power of technology will move beyond some fundamental level of human understanding. I don't know what any of those rates, powers or levels might be, but I personally don't see why it would necessarily have to be something like the expanding computronium idea.

I personally believe it's possible that some relatively modest AI's combined with some median form of self-assembly would be sufficient. In any case, I don't think it will be a series of rather immediate jumps from one step to the next all the way to computronium, at least not a singularity intimately involving humans.

>' First, why exactly does this SI _want_ to turn everything into computronium or whatever? You are giving it a biological organism's motivations which it simply will not have.'

Well if we arrive at this SI by reverse engineering the human brain as Kurzweil suggests then I don't know that we really know what exactly we'll get, maybe we can't rule it out.

>' All biological organisms are built to reproduce. This is selected for by evolution, because those that do not reproduce go EXTINCT. Those without brains reproduce automatically, those with brains are wired to WANT to reproduce so much they will die for it (including humans).'

First off, if an SI found some way to communicate instantaneously over indeterminate distances, as some recent successful laboratory experiments have been taken to indicate could be possible, then you may want to consider computronium just a brain-growing exercise.

Otherwise imagine that, eventually and in a timely manner from such efforts as reverse engineering the human brain, significantly more than one SI is created. Imagine that they are all given access to the same system for both education and distributed computational enhancement for whatever they would 'want' to do. Say the Internet. Imagine after a time these systems come into contact with each other over the net and 'find' that the ability to acquire more information and computational resources is limited by the actions/algorithms of the other SI's. Competition could naturally ensue, and if this process were extended by eventually giving each SI the capability to apply the knowledge gained on the net to real-world problems through atomic manipulation, well, who knows.

'>Don't presume an SI feels and thinks like a common rutting animal. Even if we TRY to put that reproductive want in there, it will be smart enough to override our crappy efforts and reprogram itself for a more intelligent course.'

You seem to be lending a personal sense of credibility to the potential SI's prowess, so I now assume you believe in their possible existence, but still not in any form of singularity?

>' Why spread like a moss over the universe? What's the point of that?'

Personally I couldn't begin to imagine the whys or wherefores of the motivations of any possible SI's.

As for the after-effects of the 'life' of SI's and their ability to understand all, or their reasons for 'living', I'm going to pass on trying to decipher that at this moment. However, I was wondering what time frame you were thinking of for the SI to reach such total knowledge that it might do something relatively unexpected with such a life, such as putting itself to sleep. A year? A thousand years? A billion years? I ask because it seems to make some difference as to its impact on our discussion of singularity in the here and now, no?

>' And my other argument is still valid: Computing will never be infinite. It won't come even close, not 40 years from now, not 400 years. Not in the sense that you mean, anyway. It could be true that in 40 years we can multiply two numbers a million times in a millionth of a second, that computation of virtual reality scenes have a frame rate of a million per second, yadda yadda.'

But remember, not that many people really expect the significant SI's to be developed with contemporary computational substrate mechanics, I don't think. I would think that using sophisticated neural-net computational mechanics, like human brain patterns, would make a world of difference in the effect that millions of times today's computational power per area would have on real-world output.

Well, I'll leave it at this for now.

Willie

Re: Singularity
posted on 07/09/2002 6:11 AM by tomaz@techemail.com

>>' Why spread like a moss over the universe? What's the point of that?'

SI has two choices. To spread or not to spread.

If it spreads ... well, there is nothing else to say.

If it doesn't spread, another Singularity (or SI) may spread. Or some Goo, or some black hole, or supernova or something else _will_ eventually damage it.

So it has to spread.

- Thomas

Re: Singularity
posted on 07/09/2002 11:23 AM by wildwilber@msn.com

Yeah, good point Thomas.

You would think any SI(s) would realize the potential of other SI(s) coming into being elsewhere in the universe at some point in time, and this knowledge would seem to be substantial enough to prompt some new action by said SI(s).

Willie

Re: Singularity
posted on 07/09/2002 11:41 AM by tomaz@techemail.com

Willie,

Yes, there is always outside danger, until there is no outside left.

And there is another reason. Potential sentients somewhere in the Andromeda galaxy are to be saved. And all the matter/energy in that galaxy put to good use, for the benefit of sentient life.

- Thomas

Re: Singularity
posted on 07/09/2002 1:38 PM by wildwilber@msn.com

>'And there is another reason. Potential sentients somewhere in the Andromeda galaxy are to be saved. And all the matter/energy in that galaxy put to good use, for the benefit of sentient life.'

Whatever the potential for SI(s), should they ever exist, let's hope their 'thoughts' are as considerate as yours Thomas. =)

Willie

Re: Singularity
posted on 07/09/2002 11:34 AM by TonyCastaldo@Yahoo.com

>> It has to spread.

No it doesn't. First, you attribute a human emotion to it; fear of death. It may not fear death.

But let's grant it chooses to survive. "Spread" doesn't have to mean converting everything in sight. The SI could remain a single entity, with backup idled versions of itself stored in various star systems or in the void of intergalactic space, or on their way to other galaxies.

These could be triggered into activation automatically by a "dead-man switch", i.e. if the original SI fails to reset them every 10 years or whatever, they become active because something bad has happened to the original. The resets could be sequenced so that only the SI whose turn is next to be reset becomes active, and upon activation it subsequently maintains the resets on all the others. If the SI is the size of a small asteroid -- a few million tons or so -- a few million tons of it and its emergency backups would hardly be noticeable.
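
A toy Python sketch of that reset protocol (entirely hypothetical; the names, and the one-second timeouts standing in for decades, are my own inventions) might look like this:

    import time

    class Backup:
        def __init__(self, name, timeout):
            self.name = name
            self.timeout = timeout            # seconds here, standing in for years
            self.last_reset = time.time()
            self.active = False

        def reset(self):
            self.last_reset = time.time()

        def check(self):
            """Wake up only if the original has missed this backup's deadline."""
            if not self.active and time.time() - self.last_reset > self.timeout:
                self.active = True
                print(self.name, "activating: original presumed destroyed")
            return self.active

    # Staggered timeouts mean only the next backup in line wakes first.
    backups = [Backup("backup-%d" % i, timeout=1 + i) for i in range(3)]

    time.sleep(1.5)                           # simulate the original going silent
    for b in backups:
        if b.check():
            for other in backups:
                if other is not b:
                    other.reset()             # the newly active one maintains the resets
            break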

Given its mastery of physics and logic, it could hide in numerous places that minimize both its chance of discovery and its risk from random destruction.

I still see no plausible logic for rampant reproduction, even if it desires survival.

TC

Re: Singularity
posted on 07/09/2002 11:59 AM by tomaz@techemail.com

tony,

> It may not fear death.

It may not. But it must fear the death of the whole system. If it doesn't ... well, then some other one, which _will_ fear that, will spread all over.

Darwinian Evolution, final stage.

- Thomas

Re: Singularity
posted on 07/09/2002 3:48 PM by TonyCastaldo@Yahoo.com

Then we disagree. There is no reason for it to fear the death of the system. There is no reason for any of us to fear death, other than our hard-wired aversion to it.

This isn't Darwinism; there is no reason to believe there will be a large number of conscious AI's with a will of their own. In fact, my expectation would be the opposite: a very small number in a highly controlled environment. So the idea that some mutation of an SI will arise that fears its own death is unlikely. Unlike us, they could make exact copies of themselves without any errors and without any random mutations. If they want to make any copies at all.

And I believe if one was damaged by radiation or a meteorite or something, and subsequently went on a reproduction rampage, the others would recognize this aberrant behavior quickly and stop it. They are super intelligent, remember.

That is the key that seems to elude some writers here. If they are super intelligent they are SI across the board. They aren't going to pursue some rabbit brained strategy of reproduction and only use their SI to prevent our efforts to thwart them.

They won't necessarily have human emotions. Not the fear of death, not the desire to reproduce, not the desires for domination or territory, obeisance, money, respect or security. We anthropomorphize them too much; their desires can be alien to us. Our deepest desires are wired into us by selection processes. They have nothing to do with consciousness; this is evident from the fact that non-conscious living creatures have analogues of our desires. A bee is not conscious, but it is territorial, and there is a dominance order within the hive.

A machine intelligence could be conscious yet divorced from all of this; understanding it but not sharing it.

So no, it does not have to spread. If it does spread to reduce its probability of elimination, we can expect it to do so intelligently, never rampantly.

TC

Re: Singularity
posted on 07/09/2002 4:01 PM by tomaz@techemail.com

Tony!

We disagree. No doubt.

I see no reason why the Singularity would just give up. And wait for another?

If it does ... in the long run, only the stable systems will prevail. Will be.

- Thomas

Re: Singularity
posted on 07/09/2002 4:56 PM by TonyCastaldo@Yahoo.com

It's super intelligent, remember?

Presumably with a flawless memory and total recall.

So once it has exhausted a subject it will have no need to revisit it, just to remind itself that it already knows everything it can know in that subject.

We do that, because we forget and cannot hold an entire subject in our heads for long, if at all.

It can.

So I think taking a nap while it waits for "news" on an experiment, or for something random to happen, makes eminent sense. It may stay awake for centuries at a time, it has no need for sleep, but it also has no need for reverie or review.

Part of being super-intelligent is knowing when further exploration of a subject will be fruitless without a new result. It is knowing when to give up.

If you will permit an example that simplifies this to a single subject; how long can you study Newtonian gravity before you need something new? Not long. The equations can be derived quickly, the proofs done quickly. A modern student can do them in a semester; so say the SI can in a few minutes.

Further exploration of these formulas becomes pointless quickly; there is nothing new to discover until you introduce the Michelson-Morley light speed experiments. Then the SI can be Einstein, but not before.

So say the SI, after studying Newtonian gravity, is capable of devising tests to see if it holds up under all circumstances. Those tests take time.

Now remember it grokked NG in ten minutes and devised the exact sequence of experiments it wants to do in ten more minutes, and it will take a few weeks for us poor humans to get them equipped, set up and done.

What is the poor SI to do in the meantime? Rehash old news? It cannot know anything further until it hears the result of experiment 1.

Now if it had another subject to move to, perhaps that could occupy it, but in my single subject sample it doesn't. So why not turn itself off?

In a multi-subject SI, it hits the wall sooner or later. Given its inputs it will have developed them as far as they can go, and without further experimentation there is nothing else to learn.

Likewise, there may be nothing else to predict or answer, it can run out of questions. So what's next? It can run itself in circles or turn itself off until there is something to think about.

Or perhaps it can get creative and write novels or screenplays or otherwise entertain itself, I don't know.

TC

Re: Singularity
posted on 07/09/2002 5:48 PM by tomaz@techemail.com

Tony!

FSAI is taking over the Universe for the sake of sentients only. It has no will of its own.

The sentients play, then. It's meant that way.

- Thomas

Re: Singularity
posted on 07/09/2002 6:39 PM by azb@llnl.gov

Thomas,

Ostensibly, the primordial "blobs" that gave rise to biology, and to you and me, had no "will of their own". And yet we "seem" to effect willful action. So, we are just one example of "will" arising out of non-will.

You seem to be automatically assuming that the SI can be none other than the FSAI (Friendly(tm) Super AI), that there can be no other form that might fragment and disobey the "I must not become a conscious willful thing" proscription. Yes, if that is the case, we might agree and there is no reason for any discussion.

But your expression of sentiment for the "ideal" does not address the problems I have posed.

Debate or discussion is about reasonable propositions. An expression of faith that "it is meant to ..." is not reasonably amenable to debate.

Was that the intent?

____tony____(TB)

Re: Singularity
posted on 07/16/2002 2:29 PM by tomaz@techemail.com

Why do I assume, that SAI is FSAI?

If it has no "willpower" of its own, it is just a tool. No matter how intelligent.

If it has some (un)friendly goals, it is (un)friendly.

We almost can't make an intelligence which would have unfriendly goals toward us. The wish to survive is too well built into us. We have a long evolution behind us.

It's not entirely impossible - only not very probable, though. So, for any SAI. It can't just pop out from nowhere. Can it?

- Thomas

Re: Singularity
posted on 07/16/2002 5:18 PM by azb@llnl.gov

Hi Thomas,

> Why do I assume, that SAI is FSAI?

(Translation: Why will the eventual Super-AI most likely be a Friendly Super-AI)

I don't know. The chance seems rather slim. We have little choice but to try.

> If it has no "willpower" of its own, it is just a tool. No matter how intelligent.

If "we" arose (we suppose) from "unthinking, non-goal-directed" physics, how is it we have "will"? Do we have "willpower of our own"? How so?

(If we really don't, then we don't have "goals", any more than a falling stone has a "goal" to reach the ground.)

If we do have a "will" that arose from non-will components, why should not things yet more complex also "acquire will" from non-will substrate?

> If it has some (un)friendly goals, it is (un)friendly.

If it has will, then unfriendly goals makes it unfriendly AI.

If it has no will, it has no goals. It only executes our goals. In that case, if it is unfriendly, then WE have unfriendly goals.

> We almost can't make an intelligence which would have unfriendly goals toward us. The wish to survive is too well built into us. We have a long evolution behind us.

The problem is that with tech-complexity more easily available (before a good SI can be there to govern), small parts of the population that have less of a "will to survive" (or more of a will to destroy) may make SI first.

> It's not entirely impossible - only not very probable, though. So, for any SAI. It can't just pop out from nowhere. Can it?

Did we "pop out of nowhere"? In a sense, yes, but not directly. So SI might "pop-out" from accidental experiment.

Cheers! ____tony____

Re: Singularity
posted on 07/09/2002 5:13 PM by azb@llnl.gov

All,

As a preface, I seek less to "convince" than to "elucidate", since I can have no reason to believe any view I might have to be "correct". If others argue that Singularity is (inevitable, imminent, etc.) then I feel it is my duty to argue every possible obstacle. Alternatively, if others argue that it cannot, would not (be inevitable, all wise, whatever) then I try to argue for reasons that it might. My purpose, if you like, is to help us avoid falling into "near-term attractors in thought", that might preclude better solutions. (Such is the value in aperiodic, random interference.:)

(end Preface.)

The issue of "How bad" (can it, would it, should it, might it, must it, will it) get, especially in terms of the "grow over everything" scenarios, needs to be examined carefully, if it is at all possible (and by extension, to find if there is anything we can do to affect these outcomes.)

Despite the fact that we have come to interpret (feel) our genetically selected survival-oriented behaviors as a "fear of death", there seems to be a general observation that life of any form, when (ahem) "successful", tends to expand to take up the limits of its supporting niche. Not out of any particular purpose, but simply from the consideration that "those that can and do, will". Perhaps this is irrelevant to a new and alien life form, especially one for which the term "niche" may lose its meaning (universe as niche?)

Another area that I question is the position that "given sufficiently large resources, it will know when enough is enough."

By my understanding of mathematics, there provably exist questions that can be posed for which no answer can be obtained. (At least, within any given sufficiently expressive system of knowledge. The answer "may" be obtainable in an appropriate meta-system, but then there are questions posable in the meta-system that cannot be answered, except perhaps in a meta-meta-system, etc. Repeat as needed.) These types of questions fall into two broad categories.

1. You could answer them with infinite resources.

Are there infinitely many prime pairs (11,13) ... (59,61), ...? Such a question MAY be answerable "structurally", but this is not guaranteed. It is possible that you could only answer the question by exhaustively examining ALL numbers. That takes a long time, and a lot of room :)
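
As a tiny illustration (my own, just to make the flavor concrete), a brute-force search in Python can enumerate such pairs below any bound you like, but no finite amount of this ever settles whether the pairs go on forever:

    def is_prime(n):
        if n < 2:
            return False
        i = 2
        while i * i <= n:
            if n % i == 0:
                return False
            i += 1
        return True

    def twin_primes_below(limit):
        """All prime pairs (p, p+2) with p+2 < limit."""
        return [(p, p + 2) for p in range(2, limit - 2)
                if is_prime(p) and is_prime(p + 2)]

    pairs = twin_primes_below(100)
    print(pairs)        # (3, 5), (5, 7), (11, 13), ... (59, 61), (71, 73)
    print(len(pairs), "pairs found -- but no finite search answers the question")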

2. Even infinite resources might not suffice.

Logical conundrums, such as evaluating the proposition "This statement is false", are less a matter of computation than of recognizing the self-referential Mobius strip they represent. The given one is easy, but there is no guarantee that an infinitely complex hierarchy of such "impossible" questions might not exist, remaining not merely intractable, but unapproachable through computation.

Now, if we could know, by some finite measure, which questions HAD this "hopelessness" quality, we could just dispense with attempting their solution. However, it is believed to be a general impossibility to discover precisely WHICH questions have this quality.

To put this all most simply, there may always exist, for any intelligence, a "blindness" it does not know that it has. Even if it can discover that "it must have such blindness somewhere", it may be impossible to characterize "that to which it is blind."

I am attracted to the notion that, even if it needed 99% of the universe to do "its job" (whatever that job might be), it could (at least) afford to leave pre-existing life alone, and know as much. The only danger (anthropomorphically speaking) would be that by NOT CONTROLLING EVERYTHING (or not attempting to guarantee potential control), there is no way it can know, absolutely, that a "Bad Seed AI" will not erupt like a cancer. Arguments that it would "be smart enough to deal with it" seem to lack foundation given the above "knowability" limitations.

But most centrally, I have concern about the notion that it would even be able to maintain a "sense of its own state" in any global sense. Unless it can master some EFFECTIVE translocal control, disparate regions would seem to fall entirely into different goal-regimes. I cannot see any guarantee that it could maintain "itself" as a singular, goal-oriented, super-psyche.

I could be wildly wrong. Comments encouraged.

____tony____(TB)

Re: Singularity
posted on 07/09/2002 5:51 PM by tomaz@techemail.com

tony,

> I am attracted to the notion that, even if it needed 99% of the universe to do "its job" (whatever that job might be)

To make sentients happy. Nothing else.

- Thomas

Re: Singularity
posted on 07/09/2002 6:42 PM by TonyCastaldo@Yahoo.com

>> there seems to be a general observation that life of any form, when (ahem) "successful", tends to expand to take up the limits of its supporting niche. Not out of any particular purpose.

Yes, and 99.9999% of life is brainless! I have no doubt an SI *could* expand to cover the universe (given time), but I don't believe such a strategy could ever make logical sense.

>> Another area that I question is the position that "given sufficiently large resources, it will know when enough is enough."
>> By my understanding of mathematics, there provably exist questions that can be posed, for which no answer can be obtained.

Yes to both; such a thing has been proven by Godel, I believe. Yet we, as sentients, know which ones they are, and know how much resource to give to them. Are there infinitely many twin primes? We don't know, but we recognize it may be impossible to prove, and we know when we have devoted too much resource to finding out. As intelligent beings we recognize that knowing the answer to that problem absolutely requires an unknown investment of resources. Now many of us (me included) are willing to devote a few hundred hours to trying to solve that problem, but recognize that if we haven't developed any new insights into it by then, we are unlikely ever to.

In other words, we hit the wall and have no new ideas. It makes no difference if the twin prime problem is solvable or unsolvable; it is unsolvable by ME until I get some new insight. An SI would probably have a much more sophisticated method of determining when it had hit the wall, but the result is essentially the same.

>> Logical conundrums, such as evaluating the proposition: "This statement is false."

Same result. We are intelligent and recognize when we have devoted a large resource (all of our attention for an hour, perhaps) and have made no progress, that without further insight we should abandon this line of attack and try something else.

>> Now, if we could know, by some finite measure, which questions HAD this "hopelessness" quality, we could just dispense with attempting their solution.

You cannot. Propositions exist which are true but which can never be proven true or false. Some you can prove to be unprovable; some you cannot.

So yes, the SI will run up against problems that are unprovable, and in that case it will not be able to prove them one way or the other. It will run out of ideas for attacking the problem: reducing it, proving related questions, etc. The number of ways to prove something is finite, and I do not think it takes infinite resources to exhaust them! At that point, being super intelligent, it will realize it has run out of ideas, and will shelve the problem until something changes.

The larger question is, what purpose does answering these questions serve? We have curiosity about whether there are an infinite number of twin primes, but it is idle curiosity. Knowing the answer doesn't give us any new powers or permit any new applications.

I think most unprovable propositions fall into this category of being non-application specific.

>> Arguments that it would "be smart enough to deal with it" seem to lack foundation given the above "knowability" limitations.

I don't see why. The knowability limitations apply to you, but not universally. When you get a cold you are entirely aware there is a runaway reproductive action going on, and your body is dealing with it. Just because some things are unknowable doesn't mean the obvious escapes you.

The same applies to the SI. I still maintain a survival instinct is unnecessary, but an interest in its continued existence seems likely. So it will watch other SI and undertake investigation and repairs if they start to act odd.

>> But most centrally, I have concern about the notion that it would even be able to maintain a "sense of its own state" in any global sense. Unless it can master some EFFECTIVE translocal control, disparate regions would seem to fall entirely into different goal-regimes. I cannot see any guarantee that it could maintain "itself" as a singular, goal-oriented, super-psyche.

Well, FTL communications aside, I don't understand why the goal-regimes would be different. As I stated before, I believe there is ONE truth. So SI's will arrive at the same place, intellectually, with the same goals, ideas, and justifications. If randomness is part of their psyche then perhaps they will have different wants, but they will be intellectuals, and a communication from another SI might set them straight.

I see no reason for them to compete. I see no reason for territory to mean anything, for power to mean anything, for reproduction to mean anything. They are effectively immortal. I imagine they would be powered by fusion or some more exotic quantum effect so they never run out of fuel or resources. They don't need to compete, mate, pursue food or security or anything else we spend 99% of our lives pursuing. Don't forget the central premise, they are super intelligent, not some kind of bacteria, so their actions are directed to accomplish a logical purpose, not just "I can so I do."

TC

Re: Singularity
posted on 07/09/2002 7:54 PM by azb@llnl.gov

TC,

You are (mostly) making the arguments that "jibe" with my personal feelings.

It is important to note, that the "Singularitarian" position (as I see it) is

A. If there is a 0.001% chance of perfect heaven, why not fight for it.
B. If there is a 0.001% chance of perfect hell, why not fight against it.

It is hard to dispute such logic. Hence the reason to make both heaven and hell plausible enough that they are fought for/against, if indeed we have any power to effect the outcome.

Also, most "hard core Sings" in this area are of the opinion that, when "it" occurs, there is a non-zero chance (at least) that it (could) become unimaginably powerful really fast, far faster than our ability to pull the plug, metaphorically speaking. Hence their agenda (quite logical) is to try and "force" the occurrence of the "Friendliest Possible Singularity" (Friendly embodying, perhaps, respect for sentience, etc.) Thus, it will have the best opportunity to dominate the scene, to our presumed benefit.

Hard to argue with that as well.

Some specific comments:

- "In other words, we hit the wall and have no new ideas." [TC]

That is a hard position to motivate in general. I may have no idea when I may get new insight, true. But sleeping (in the absolute sense) even temporarily, does not obviously advance the situation.

- "The number of ways to prove something are finite" [TC]

I'm not so sure. They may be merely "countably infinite". Regarding the "prime-pairs" question:

- "what purpose does answering these questions serve?"[TC]

I do not know. They were offered only to demonstrate that some "knowledge" may not be accessible to computational resolution.

- "I think most unprovable propositions fall into this category of being non-application specific."[TC]

You may be right, again I do not know. From where I stand, I cannot presume to know what an SI might consider "application specific". Thus the lingering doubt.

-"it will watch other SI and undertake investigation and repairs if they start to act odd"[TC]

The fundamental problem I see with this view is that we are (ostensibly) dealing with the confluence of different near-infinities. No matter how powerful an SI might be, by the time it saw another "acting odd" it could be way too late. The "new guy" may have happened upon optimizations that are magnitudes greater than the first's. (Just for the sake of argument.)

There may indeed be only "one truth" to which all of this stuff must eventually converge. I'll grant that is reasonable. The question is whether such a (post-Singularity) convergence will occur in minutes, or millennia, despite the awesome computing power. If it takes longer than a fraction of a second (so the argument goes), how much human havoc might it wreak in the interim?

But as I have said at the top of this post, I mainly agree with the "super intelligence implies automatic benevolence" point of view, which you too seem to espouse.

Fine ... but what if we are wrong?

Cheers! ____tony____

Re: Singularity
posted on 07/09/2002 9:56 PM by TonyCastaldo@Yahoo.com

Well, the point I will answer is that I don't believe it can happen so fast that we cannot pull the plug.

The Internet is a widely touted medium for this SI, but in truth an SI on the net would be horrendously slow. Nature has a habit of throwing cheap protein hardware at a problem instead of finessing it (why figure out how to use one sperm when ten million will do?), and neural nets are no exception. The data and communications required are too much over the net.

Also, it is folly to think the first SI (conscious or not) will have immediate access to all the resources it wishes to exploit.

In fact, the first working SI algorithm will probably NOT be all that intelligent itself; the team designing it probably won't have the storage capacity or information necessary for it to operate at an SI level.

Maybe these Singers think the AI will occur spontaneously. Pf, I doubt it. Our own did not; it is the product of a billion generations of mutations and testing, and at that we primates seem to be a lucky coincidence, a one-in-a-million species. We don't see any other intelligent scientific species out there, and they certainly had time to evolve in the sea or outside Africa before our diaspora.

The SI will be designed. Accelerating technology is just the enabler, it creates a platform on which a brain can be run. Someone still has to figure out the brain itself. True, perhaps with simpler AI's, and perhaps simpler AI's can help devise tools and brain scanners. But ultimately the SI will be designed, fired up, tested and observed by humans long before it gets anywhere near a T3 line.

As for the 0.001% chance: This is Pascal's argument, again, and invalid. The issue isn't the exact chance of success or failure.

The issue is the presumption that the infinite reward actually exists, and calculations as to its probability are valid.

IF YOU DON'T BELIEVE IN THE SINGULARITY, the possibility of reward is zero, and in this case zero times infinity is still zero reward. It looks like a waste of time.

THEY ALREADY BELIEVE IN THE SINGULARITY, so it makes no difference what percentages are stated, when you multiply any non-zero chance of success by infinity, the work looks sensible.

IMO, their Singularity is a fantasy religion, giving tech super powers it doesn't have and won't have. It reminds me of the South Park gnomes' plan for success:

1. Steal people's underwear.
2. ????
3. Profit!

Singers are missing a crucial step:
1. Wait for infinite computing power.
2. ????
3. Immortality!

There is a bunch of hand-waving about how this and that will produce super intelligence and consciousness and it will all happen on something we cannot imagine; but I'm not convinced.

If it is going to happen it will happen with real hardware we understand first, it will happen intentionally by some team of programmers, it will happen gradually enough that they will understand how it is evolving, and I think we will be able to pull the plug.

TC

Re: Singularity
posted on 07/10/2002 12:59 AM by azb

TC,

As usual, the "most reasonable majority" in me agrees with your assessment. So, I will keep the Devil's Advocate in me on a short leash, and extend only marginally in certain areas.

First, I have little hope/fear that The Internet will become self-aware (a la "SkyNet" in the Terminator movies) and hold us hostage (screw with our web surfing regimes) until we cave in to its demands... (but if by real weirdness it DID, distributing its "essence" throughout nooks and crannies, server disk boot-sectors, whatever, well, how does one pull the plug on the Internet? :)

A least-likely scenario, so I move on.

I grant the most likely venue for SI will be dedicated researchers using expensive hardware, large arrays of very parallel and variably tasked processors addressing some huge and variably shared memory space, consulting with advanced neural-net structures (among who-knows-what-else.) It will be accessing gobs of good "seed info", and will likely employ many levels of genetic algorithm based optimization regimes.

The researchers in question might be (a) pure academics (if any such still exist;), or (b) commercial research departments seeking to optimize some profitable venture (stock market guessing, pharmaceutical genetics, nanobot construction methods ...) or (c) defense department folk (intelligent, self-repairing defense/offense systems, battlefield management, etc.)

How soon? I don't know. But if Moore's Law holds up even another decade or so, both speed and price-wise, today's near warp-speed processing will become (borrowing a term from "SpaceBalls") something a bit closer to "ludicrous speed".
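
A back-of-the-envelope Python check (my own, and it assumes the classic 18-month doubling period, which is itself only a rough rule of thumb):

    doubling_period_years = 1.5          # assumed; the figure usually quoted
    years = 10
    factor = 2 ** (years / doubling_period_years)
    print("%d years of doublings every %.1f years is roughly a %.0fx increase"
          % (years, doubling_period_years, factor))   # about 100x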

Will it matter to us whether the academic, commercial, or military sector first reaches SI?

(I should add a SETI-like internet possibility, the "people's SI".)

As you point out, little of this will have any "grave" impact, until one of these "systems" is granted access to some sort of fast-factory capability. I doubt it will discover how to "command matter" through some phase pulsing phenomena it deploys through its processors ...

My concern here is that we develop micro-machine (or bio-genetic) production regimes for some kind of "capable and helpful products", wherein control of the process is so delicate that we cede most control of it to the AI/SI, with surprising results :)

Finally, the "perfect heaven/hell" bit is mostly a metaphor, as I might view it from a reasonable perspective.

The world is "tending" to a point where the future could be reasonably viewed as either "heading to an enlightened renaissance of pleasant wonders" or "an inexorable resource-crunch into calamities of Biblical proportions", fueled by technology either way.

The Singularitarians ... I would not put them ALL into the same basket, except that they serve to warn us that there is (at least) the possibility of a first-time-in-history "phase transition" to a period of "big changes that we cannot control if not very much prepared".

This may well have a greater-than-zero probability of occurring not too far off.

And waiting to "see the changes we cannot control, before we decide to get prepared" may not be the optimal strategy.

Cheers! ____tony____(TB)

Re: One other thing
posted on 07/10/2002 7:55 AM by TonyCastaldo@Yahoo.com

Another thing the Singers seem to be forgetting is the power available to us to combat any bad SI.

They seem to imagine just the conscious SI (CSI), not the milieu in which the CSI resides.

But remember, in order to get to the CSI we will have developed an impressive array of non-conscious SI's (NCSI) to help us with that project. These NCSI follow our commands and do our bidding; they are a resource we can deploy against the CSI.

I agree there is risk, and therefore danger. Imagine this: Take any major city in the USA with a population over, say, one million. There is one near you. Now say you could take all the adults in that city and sort them into descending order based on their level of anger, mental health and willingness to harm others.

At the bottom of that stack, in every city, reside a few dozen sick, angry, and dangerous individuals. A fair number of them will be tech savvy.

These people are the danger, but like I said before, you don't have to wait for a CSI to be afraid. When NCSI become available to them they will use them. To them, the big advantage of super intelligence will be its ability to let them wreak havoc, pain and despair without getting caught.

So let me turn this on its head -- The problem isn't whether it is conscious, but whether it has a conscience. I think it can have a conscience (understand when it is being used to harm others and refuse to participate) with or without consciousness.

So that should be the goal, to demand a strong conscience in any SI to stand in for the lack of it in the SI's users.

Unfortunately I think enforcing that demand is impossible.

Re: Singularity
posted on 07/10/2002 8:02 AM by thp@studiooctopussy.com

I think you are missing the whole idea of gradual transitions here, TC and Tony.

There was no single point when humans became human; it was a gradual transition from animal to human. No one (of course) was able to distinguish between the two; it just grew out of nowhere, the biological machine that we are being built layer by layer.

Don't you see enough indications that we are slowly altering ourselves? We want to be more perfect, to have fewer flaws, and so if the opportunity arises to become more perfect, we will. Given enough generations (I am not talking in time here) the ex-human will have to happen, because the advantages are too big; the safety of not being biological is felt to be too big.

Right or wrong, the strongest system will prevail, and a gradual transition will make sure that such an entity will be using a lot of "technology".

Re: Gradualism
posted on 07/10/2002 9:51 AM by TonyCastaldo@Yahoo.com

I'll concede that point, because I thought I WAS making the case for gradualism!

I am not too concerned about tech enhanced humanity. I think it is happening and will happen, but so what?

The thread of this conversation has been more about conscious super intelligence, not human enhancements.

As for the more robust system: Biology is more robust than electronics. You can stand in a high magnetic field and continue thinking normally. An electrostatic discharge doesn't fry your brain, it is hardly noticed. A strong electric shock from a wall outlet leaves you conscious and swearing. You can operate for days without fuel or light in extreme heat, cold, rain, mud or all four.

Compare your robustness to commonly available electronic equipment. Imagine how long it will be, realistically, before equipment is generally regarded as having better real-world survival skills than you do. It ain't happening soon.

The first "enhancement" I might want is not electronic at all, but a bulletproof skeleton, starting with the skull. I would be quite happy to receive some nanobots that reinforced my skeleton with titanium or some inert strong metallic compound, presuming my skeleton would continue its important biological functions like blood production and such.

Anyway, I'm not concerned if we become cyborgs, or even mostly machine. I am concerned if that involves slavery or surrendering any control of my mind or body to an authority. But they aren't the same thing. A pacemaker doesn't make you a slave of the government. Neither should any prosthesis, whether that is an artificial lung, heart, colon or brain.

TC

Re: Singularity
posted on 07/09/2002 4:35 PM by TonyCastaldo

>> Hope you don't mind if I butt in.
Not at all.

>> Well if we arrive at this SI by reverse engineering the human brain as Kurzweil suggests then I don't know that we really know what exactly we'll get, maybe we can't rule it out.

By definition in this discussion we are getting a super intelligence. We obviously don't have to worry about a par-human intelligence, we can control that as well as any human.

Given super intelligence, my presumption is that it understands itself better than we and can modify its programming to eliminate pointless motivations.

>> you may want to consider computronium just a brain-growing exercise.

As I said in reply to another post, I think the error here is in forgetting we have a super intelligence. What is the point of this exercise? It is intelligent and smarter than the smartest of us; I'd expect it to show the restraint and foresight of at least the smartest of us.

>> Competition could naturally ensue [among SI's] ...

If it did, it is unlikely to be an emotional competition. One program would outsmart and kill all the others. There is no reason to believe that the smartest SI, once it had achieved its goal of dominating all of the resources of the net, would carry "anger" or "domination" or "reproduce at all costs" outside that arena and apply them slapdash to the rest of the world. It will be a logical entity and achieve its goals through the most logical means.

>> You seem to be showing a personal sense of credibility to the potential SI's prowess so I now assume you believe in their possible existence but still not any form of singularity?

I believe an SI is possible, but not inevitable. I don't believe there is a point where tech acceleration gets beyond the capabilities of man. I think there are limiting factors that will level out the rate of new tech, converting it from an exponential curve to a linear curve. We've only been at this seriously for about sixty years, after all; we are like a mold in a Petri dish that hasn't reached the sides yet. I don't believe machine consciousness is inevitable, or even desirable. I believe the danger of non-conscious SI, directed to solve problems by humans, is far more dire than any Singularity.

>> Personally I couldn't begin to imagine the whys or wherefores of the motivations of any possible SI's.

Yes, but we are intelligent enough to imagine what they are NOT. They are not bacteria, rabbits or deer or blue collar workers. They are highly educated intellectuals.

>> However I was wondering what the time frame was you were thinking for the SI's such total knowledge that it might do something relatively unexpected with such a life that it would put itself to sleep. A year? A thousand years? A billion years?

It depends on its speed of computation. But I would guess a program of study for an electronic SI would not take more than a month per subject to learn all we currently know and to extrapolate from that to whatever follows from knowing it. So with 1000 major topics to choose from, give it a hundred years to get up to speed. Of course, extrapolating directly, it will find many holes in our knowledge, and it might undertake to execute experiments to fill in those gaps. Any it could simulate it will, but the remainder must be executed in real time. It may not be able to form a Grand Unified Theory without executing some experiments, for example. It may not even be able to complete an accurate model of the economy without executing some experiments. So I would expect it to want to do some of that, and that sort of thing could run on for hundreds of years.

Such experiments might warrant turning itself off. They would be excruciatingly slow for something that thinks a few million thoughts per second, and if it has already processed all of its other knowledge it might be quite effectively bored.

>> But remember, none of the significant SI's are really expected, by all that many people, to be developed on contemporary computational substrates, I don't think.

I think that is wishful thinking. I'm not saying it will be silicon, but if you are projecting something in the next 40 years, chances are we already know about it. (It takes an average of 50 years for new tech to become commonplace tech.) Quantum dot computing, perhaps, but that requires a chunk of sophisticated hardware that does involve a silicon substrate.

My point is that the first SI's will be real hardware, real expensive hardware, and cutting-edge hardware. Nobody is going to spend hundreds of millions developing a tech specifically for this; it will grow out of existing tech. That essentially means trillions of tiny structural elements on some kind of substrate. It will be real, touchable stuff, not mystical energy balls on the surface of a black hole. It will be no more mystical than a neuron is; it will be just as physical as a neuron is.

The hardware and software that permit an SI will be rare at first; like all such things, it will be prototyped and tested. There will be a very limited number of them to start, probably only one.

However I do believe we will approach this gradually. A variety of pieces will emerge that together can form an SI, and then somebody with insight and money WILL recognize that these pieces can fit together, and spend perhaps fifty million to bring it all together. Having done that, and out of greed, they may keep their single SI to themselves and make moves to prevent anybody else from doing the same as they did. In the meantime, that SI (conscious or not) will be under the control of that individual or organization, and will be essentially enslaved to do their bidding, for good or evil.

So that's my guess.

TC

Re: Singularity
posted on 07/09/2002 9:08 PM by wildwilber@msn.com


>' The same applies to technology. We look at something that started a hundred years ago (or four hundred depending on your definition of tech). We say WOW, it's been growing exponentially for so long, it will continue forever!'

Some even consider 'technology' to be more than the industrial revolution. Technology development goes back maybe a few hundred thousand years, to hunter-gatherers really.

In my opinion.

Willie

Re: Singularity
posted on 07/10/2002 10:45 AM by jeff.baure@wanadoo.fr


>>' The same applies to technology. We look at something that started a hundred years ago (or four hundred depending on your definition of tech). We say WOW, it's been growing exponentially for so long, it will continue forever!'

>Some even consider 'technology' to be more than the industrial revolution. Technology development goes back maybe a few hundred thousand years, to hunter-gatherers really.

>In my opinion.

This is mine too.

The use of tools, then the use of tools to make other tools, observed hundreds of thousands of years ago, is actually technology to me, of course.

Now I don't believe in this singularity in our evolution. I really think that every action performed on our reality gives rise to a re-action - whatever its form - an action that will counterbalance the original one, avoid any singularity and create a new equilibrium.

This is why I think there aren't many (not to say *any*) true singularities in reality, and that we probably still don't know what a black hole actually is.

jeff


Re: Singularity
posted on 07/10/2002 12:16 PM by thp@studiootopussy.com


Well, let's look at it.

Postulate #01
The curve is so far exponential, therefore I have no reason to believe it won't continue, knowing a little about the market and what is around the corner.

Postulate #02
The curve is so far exponential, therefore I have no reason to believe it will continue, knowing a little about the market and what is around the corner.

Which one makes most sense?

What you don't realise is that you are denying based on a belief that you have nothing to back up. So your argument must naturally be considered the weak one if you take #02.

They are both beliefs, but #02 takes the bigger leap of belief.

Re: Singularity
posted on 07/10/2002 2:03 PM by jeff.baure@wanadoo.fr


> What you don't realise is that you are denying based on a belief that you have nothing to back up.

1) Can you tell me why an exponential growth can be said to be singular? What is a singularity?

2) Read again more carefully: "I really think that every action performed on our reality gives rise to a re-action - whatever its form - an action that will counterbalance the original one, avoid any singularity and create a new equilibrium". Can you disprove this? Bring a counter-argument? A counter-example from real life?

jeff

Re: Singularity
posted on 07/10/2002 2:16 PM by wildwilber@msn.com


I appreciate your response to my post above and I would like to respond with some substantive input but I don't have enough time right this minute. Later today maybe.

However, I think maybe you misunderstand the meaning of the term 'Singularity' as it is used to describe relatively near-term trends in the 21st century; no insult intended.

It is not trying to explicitly describe a particular infinity. Not to my understanding anyway. =) It's just trying to capture the general disconnect between relatively limited human capabilities to truly fathom the changes, and the changes themselves.

Maybe I'll have more later if I have time.

Willie

Re: Singularity
posted on 07/10/2002 2:31 PM by azb@llnl.gov


- "Another thing the Singers seem to be forgetting is the power available to us to combat any bad SI."[TC]

To combat "minor SI", perhaps. They are mostly enamored of the "Big Super I" (conscious or not) that can grow and adapt faster than our poor human brains can keep up with.

It is far-fetched, but not impossible. Humanity's perpetual "place in the sun" is not guaranteed. Thus, the "Sing" agenda is "the best defense is a good offense". Ensure that a "good SI" keeps ahead of the development of any "bad SI".

If you think about it, that is almost the very point you are making.

- "So that should be the goal, to demand a strong conscience in any SI to stand in for the lack of it in the SI's users."[TC]

You have almost perfectly stated the Sing's approach to creating the "Friendly SI".

- "Unfortunately I think enforcing that demand is impossible."[TC]

I have yet to be convinced about how it could be enforced, myself. One would need to create an incredibly capable system that has no "strong sensitivity to small, momentarily variant conditions", and keep it thus at all points it might exist. That's a tall order.

- "I think you are missing the whole idea of gradual transitions"[thp]

Not really. From the standpoint of a mountain range, apes became humans "gradually" overnight, and bicycles became passenger jets in the "gradual" blink of an eye. Gradual, sure. The issue is, at what rate will this stuff "gradually" exceed our ability to control it (that is, the currently recognizable "we")? :)

A "truth" about any long-term trend is that "It continues until it doesn't". Extrapolating from the past works in certain areas, for certain distances. If only we knew which, we would have an easier time.

- "we probably still don't know what a black hole actually is"[jeff]

In the sense that "we don't know what the universe is", that is true. But from a relatively ordinary physics perspective, a black hole is not very complicated.

The "force of gravity exerted" by a massive object can be equivalently explained by describing how the space-time in which it is embedded becomes stretched. In the vicinity of our sun, radial-distance (toward or away from the sun) is slightly stretched in comparison to tangential distance (along the path of an orbiting object like Mercury). Since light propagated through space such that it takes the "path of least time" between two points, the light from distant stars is "bent" (as it were) passing near the sun, and affects their apparent position when viewed from earth.

The more massive the object, the more this curving-of-space effect. The Schwarzschild solution to Einstein's field equations describes the amount of this curvature. As a consequence, the formula for escape velocity (in terms of the mass of the object you are escaping and your initial distance) indicates that for any given mass, there is a radius such that if all of that mass were confined within that radius, the escape velocity reaches the speed of light (no escape... not even for light.)

That radius is called the Schwarzschild radius. For the earth, that radius is about nine millimetres, under a centimetre. Compress all of earth's mass into such a small space (denser than an atomic nucleus) and earth goes bye-bye ... but the curvature of space remains (so the moon would still orbit this funny earth-spot, etc.)
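
A quick back-of-the-envelope check of that figure, as a minimal sketch; the constants are ordinary textbook values, not anything from this thread, and r_s = 2GM/c^2 is the standard form of the Schwarzschild radius.

```python
# Schwarzschild radius: r_s = 2 * G * M / c**2
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_earth = 5.972e24   # mass of the earth, kg

r_s = 2 * G * M_earth / c**2
print(f"Schwarzschild radius of the earth: {r_s * 1000:.1f} mm")  # prints about 8.9 mm
```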

Equivalently, all radial distance to or from the Schwarzschild radius has become infinite. To an outside observer, something "falling toward the black hole" seems to take forever to reach the "hole" as it gets ever more "distant".

Theories about so-called (black-hole) singularities postulate what may have happened to the original mass "beyond the Schwarzschild radius". Some think it reaches a geometric point of infinite density, others that (from the perspective of the "inside") there is infinite space, a new universe, etc., etc. The mathematical solutions "reach their singularity" at the same point, so one is forced to "invent" math/formulae to attempt any further description. Then, of course, the features that the math indicates are a function of what you invented, so you cannot easily confirm when you are on the right track.

In that regard, true, we don't know entirely what a "black hole" is, in this extended sense.

Cheers! ____tony____ (TB)

Re: Consciousness
posted on 07/08/2002 1:13 PM by jeff.baure@wanadoo.fr


hello tony & others,

you said "Intelligence is about predicting the future"

Don't you think primary intelligence is rather about taking into account the past (i.e. to learn)?

By understanding how the past has shaped the present, we understand - by analogy - how the present will shape the future (i.e. to predict).

It seems easy to say, but prediction - as a conscious projection toward a more or less distant future - is a major conceptual breakthrough. To me, this has nothing to do with the short-term anticipation we can sometimes see when a leopard is pursuing a gazelle, or the medium-term planning of a squirrel storing hazelnuts for the next winter.

For a discussion about a prospective threshold or scale of consciousness:
http://www.kurzweilai.net/meme/frame.html?main=memelist.html?m=3%23475

jeff

Re: The Past
posted on 07/08/2002 2:23 PM by TonyCastaldo@Yahoo.com


I guess a more precise definition would be the ability to generalize and then respecify, but the focus is usually the future. More generally the focus is "what is true?", but most of the time we spend more wondering what _will_ happen than what _did_ happen.

Was Kennedy _really_ assassinated by Oswald alone? My patterns say "probably not"; I think an alternative scenario is more likely.

In thinking about that we tend to frame it in the future, also. Could he have got the shots off? Do we believe our imagination of the magic bullet? No, our physics patterns tell us bullets don't act like that; and a bullet entering the brain from the side doesn't blow the back of Jack's head onto the trunk of the car; that was a front head shot.

But that is just an example, I don't want to argue it. Patterns are learned from experience, or if you are human (or electronic) from the experience of others, and then applied to making a determination of truth. What will happen, what did happen, what might happen, what might have happened.

In our experience, a vector of force continues in its direction. The skull cap of Jack flies off and onto the trunk. Projecting that line of flight, the force came from the front of his head; projecting it further, the most likely source is the grassy knoll. Yes, we have seen ricochets and such, but a bullet has a lot of force; this one had too much to be a ricochet, and the knoll is the most likely bet.

Learning is about taking specific experience (I fell from a tree and it hurt my arm) and making it general (falling from a height can hurt).

Thinking is applying that pattern to a specific new situation (falling off this roof while I'm hanging Christmas lights could really hurt).

Or:
Specific: My opponent tricked me and lied.
General: Opponents can be untrustworthy, don't believe it just because he said it.
Specific: John says he can document his claims but has no time. How can I know he isn't lying?
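
As an illustrative sketch only - the observations, the threshold rule, and the function name below are invented for the example, not anything TC proposed - the generalize-then-respecify loop might look roughly like this:

```python
# Learning: specific experiences...
experiences = [
    {"height_m": 3.0, "outcome": "hurt"},   # fell from a tree, hurt my arm
    {"height_m": 0.2, "outcome": "fine"},   # stepped off a curb, no harm
    {"height_m": 4.5, "outcome": "hurt"},   # fell off a ladder, hurt
]

# ...generalized into a pattern: the lowest height that has ever led to harm.
danger_threshold = min(e["height_m"] for e in experiences if e["outcome"] == "hurt")

# Thinking: respecify the general pattern to a new, specific situation.
def is_risky(height_m: float) -> bool:
    """Predict harm for any fall at or above the learned threshold."""
    return height_m >= danger_threshold

print(is_risky(6.0))   # hanging Christmas lights on the roof -> True
print(is_risky(0.5))   # hopping off a low wall -> False
```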

TC

Re: The Past
posted on 07/08/2002 8:49 PM by azb@llnl.gov


"I guess a more precise definition would be the ability to generalize and then respecify, but the focus is usually the future."

To predict the future without access to the past would be truly oracular, so of course the past must be accommodated as well in the full definition. But the salient point, in my view, is that it is only by "testing the future" (that is, do the predictions usually come true, provide effective utility, etc.) that we judge whether something has the degree of intelligence we might consider "par-human". An "AI" that only "reinterprets the past", no matter how cleverly, will always appear suspect.

Cheers! (the other ____tony____)

Re: The Past
posted on 07/08/2002 10:19 PM by TonyCastaldo@Yahoo.com


>>...only by "testing the future" [do] we judge whether something has the degree of intelligence we might consider "par-human".

I agree with that.

Re: Introduction: Are We Spiritual Machines?
posted on 07/15/2002 2:30 PM by bill@zimmerly.com


Great article! There was a lot of light, and very little of the heat that normally accompanies articles on this topic. That was refreshing indeed.

I want to comment on this portion of the article...

"<i>In the discussion, a cluster of important questions emerged: What is a person? What is a human person? What is consciousness? Will a computer of sufficient complexity become conscious? Are we essentially computers ourselves? Or are we really software stuck in increasingly obsolete, fleshy hardware? Can biological organisms be reduced to their material parts? How are we related to our technology? Is the material world all there is? What is our purpose and destiny?</i>"

According to the Christian Bible, believers in Christ will receive a "glorified body" to replace the body of flesh that was corrupted.

Ignoring all of the anti-productive connotations associated with "religious" issues, is it an unreasonable thing to believe that moving our mind (soul?) from one "body" to another may be as simple as a disk copy operation?

Another question: which would you imagine as a more difficult task? Resurrecting the dead ... or creating the living in the first place?

(Personally, I know that I'm here, and don't find it a difficult thing at all to believe that my soul can be put into a better body after death: for obviously the technology existed to create me in the first place!)

Yet another question: if my elbow joint has a purpose, and my arm, of which it is a subset, has a purpose, is it unreasonable to believe that I (who have both arm and elbow as subsets) have a purpose as well?

If it is unreasonable, then please tell me why!

(Ignorance of a purpose doesn't mean the purpose doesn't exist. Example: for centuries, people were ignorant of the existence, much less the purpose, of microorganisms in sustaining our environment.)

Re: Introduction: Are We Spiritual Machines?
posted on 07/15/2002 6:57 PM by azb@llnl.gov


Bill,

"Short answer" to the latter question of "purpose".

If "stuff" randomly gains "behaviors" (inheritability among them), and some stuff gains behaviors (randomly) that tend to make them disappear, while other stuff gains behaviors (randomly) that tends to make them persist, ... then after a time, you will mostly see stuff that has behaviors that lead to persistence.

Say these behaviors have "persistence utility".

It does not automatically follow that the "purpose" of those behaviors is to "support persistence".
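
A toy simulation of that selection argument, just to show the mechanics - the population size, the survival odds, and the copy rule below are all invented for illustration: start with stuff whose behaviors are assigned at random, let the non-persistent stuff tend to disappear, and after a while nearly everything left carries persistence-favoring behavior, without any "purpose" being put in.

```python
import random

random.seed(1)

# "Stuff" with randomly gained behaviors; True means the behavior favors persistence.
population = [random.choice([True, False]) for _ in range(1000)]

for generation in range(10):
    survivors = []
    for persists in population:
        # Persistence-favoring stuff survives 90% of the time, the rest only 30%.
        if random.random() < (0.9 if persists else 0.3):
            survivors.append(persists)
            survivors.append(persists)  # inheritability: a surviving thing leaves a copy
    population = survivors

fraction = sum(population) / len(population)
print(f"share of persistence-favoring stuff after 10 generations: {fraction:.2f}")
```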

Likewise, if (IF) you and I and humanity "have no purpose", then it does not follow that our elbows have a purpose, however it might seem from our perspective. We are making "hidden assumptions" when we do so.

Microorganisms only have a purpose in the world, if life has a purpose. Perhaps you are thinking "microorganisms execute a function in the biosphere". True. But that is not to say that the function has a purpose.

Whether there really "IS purpose", is a question for which there is no guarantee that science will produce an answer.

I'd like to think there IS purpose, that I actually "contribute to something" in the overall scheme of things. I can only assume so.

Cheers! ____tony____(TB)

Re: Introduction: Are We Spiritual Machines?
posted on 12/14/2004 10:43 PM by Johnny


Hello,
in a fascinating topic someone posted something that led me here.
I would like to share some highlights.
Others could come over and share their point of view.
I don't really understand this field, and reading Kurzweil seems not to help.
Still, I would like to know what one thinks about Jung and depth psychology?

http://www.cgjungpage.org/talk/showthread.php?t=3337

"By the way my draft of "Consciousness and Holoprogramming" was well received.
Thanks to all who contributed, esp all who were critical.
I have also written an article on Jung, and If nobody laughs, and calls me a revisionist, I may publish it.
better I may make a documentary out of it.

what I find interesting is how what sounds trivial at first sight affects the whole way we think and reason and all this structures our priorities.
for psychology, such meta-(prior) restructuring of psyche is one of the greatest challenge depth psychology today faces if it doesn't want to simply end up as wondorous but personal iron prison where one is locked within each others private hallucination's and dream weaving.

for-example the patent thing will effect how one thinks about individuality, property and family and creative rights and fashion a new breed of lawyers and rules that will affect the individual psyche in all dimensions.
so what is trivial at first sight affects the very structures of thinking and that in return our fears, worries, pains and neurosis."

"Mans unconscious dependency on technology to structure his lifeworld, which once rested on nature (divine disclousere) is and should be studies deeply in Depth Psychology.

"I feel 99% Jungian s have no clue to this unconscious restructuring of mans psychic core."

"its not just a side interest of a existential choice as 'either or"
technology creates and recreates demands on the collective psyche and millions Iraqis die leaving behind breeding grounds which fundamentalist exploit.
this is a reality, a all powerful grind, thus organizes ways of survival techniques through Jobs, livelihood means, clusters of city and political and economic system which prograssively trys to maintain its present standards which rest on mass exploitations of human and natural resources.
the psychic core that made certain things show up as mattering in Christianity no longer exist.
Depth psychology was born on the ashes of its death and today hovers simply as a paranoid search for meaning ....while the grounds (mans central core) are slowly restructured.
why I call it paranoid is because it simply is afraid to look face to face with its essense and stand still at this calamity and see what lies at the very core in its flight (escape into metaphysics, where the real is once again reallocated into the unreal with all the old religious and fundamentalist logic).

How about a new depth psyhology that studies depthspychology itself ;)"