Response to 'The Singularity Is Always Near'
In "The Singularity Is Always Near," an essay in The Technium, an online "book in progress," author Kevin Kelly critiques arguments on exponential growth made in Ray Kurzweil's book, The Singularity Is Near. Kurzweil responds.
Published on KurzweilAI.net May 4, 2006
Allow me to clarify the metaphor implied by the term "singularity."
The metaphor implicit in the term "singularity" as applied to future
human history is not to a point of infinity, but rather to the event
horizon surrounding a black hole. Densities are not infinite at
the event horizon but merely large enough such that it is difficult
to see past the event horizon from outside.
I say difficult rather than impossible because the Hawking radiation
emitted from the event horizon is likely to be quantum entangled
with events inside the black hole, so there may be ways of retrieving
the information. This was the concession made recently by Hawking.
However, without getting into the details of this controversy, it
is fair to say that seeing past the event horizon is difficult (impossible
from a classical physics perspective) because the gravity of the
black hole is strong enough to prevent classical information from
inside the black hole getting out.
We can, however, use our intelligence to infer what life is like
inside the event horizon even though seeing past the event horizon
is effectively blocked. Similarly, we can use our intelligence to
make meaningful statements about the world after the historical
singularity, but seeing past this event horizon is difficult because
of the profound transformation that it represents.
So discussions of infinity are not relevant. You are correct that
exponential growth is smooth and continuous. From a mathematical
perspective, an exponential looks the same everywhere and this applies
to the exponential growth of the power (as expressed in price-performance,
capacity, bandwidth, etc.) of information technologies. However,
despite being smooth and continuous, exponential growth is nonetheless
explosive once the curve reaches transformative levels. Consider
the Internet. When the Arpanet went from 10,000 nodes to 20,000
in one year, and then to 40,000 and then 80,000, it was of interest
only to a few thousand scientists. When ten years later it went
from 10 million nodes to 20 million, and then 40 million and 80
million, the shape of the curve was identical (especially when viewed on a log plot), but the consequences were profoundly
more transformative. There is a point in the smooth exponential
growth of these different aspects of information technology when
they transform the world as we know it.
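A short sketch makes this concrete (the node counts below are the approximate figures cited above, not exact historical data): a doubling sequence has the same slope on a log plot whether it starts at ten thousand or ten million nodes, yet the absolute growth per run of doublings differs by a factor of a thousand.

```python
import math

def doubling_series(start, steps):
    """Return [start, 2*start, 4*start, ...] with `steps` doublings."""
    return [start * 2 ** i for i in range(steps + 1)]

early = doubling_series(10_000, 3)        # roughly the ARPANET era: 10k -> 80k nodes
later = doubling_series(10_000_000, 3)    # roughly a decade later: 10M -> 80M nodes

# On a log plot only the slope matters, and the slope (log2 of each successive
# ratio) is exactly 1.0 per step for both series, so the curves look identical.
print([math.log2(b / a) for a, b in zip(early, early[1:])])   # [1.0, 1.0, 1.0]
print([math.log2(b / a) for a, b in zip(later, later[1:])])   # [1.0, 1.0, 1.0]

# The absolute growth, however, differs by a factor of 1,000, which is why the
# later doublings were so much more transformative.
print(early[-1] - early[0])   # 70000
print(later[-1] - later[0])   # 70000000
```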
You cite the extension made by Kevin Drum of the log-log plot that
I provide of key paradigm shifts in biological and technological
evolution (which appears on page 17 of The Singularity Is Near).
This extension is utterly invalid. You cannot extend in this way
a log-log plot for just the reasons you cite. The only straight
line that is valid to extend on a log plot is one representing
exponential growth, where the time axis is on a linear scale
and the value being plotted (such as price-performance) is on a log scale. Then
you can extend the progression, but even here you have to make sure
that the paradigms to support this ongoing exponential progression
are available and will not saturate. That is why I discuss at length
the paradigms that will support ongoing exponential growth of both
hardware and software capabilities. But it is not valid to extend
the straight line when the time axis is on a log scale. The only
point of these graphs is that there has been acceleration in paradigm
shift in biological and technological evolution.
If you want to extend this type of progression, then you need to
put time on a linear x axis and the number of years (for the paradigm
shift or for adoption) as a log value on the y axis. Then it may
be valid to extend the chart. I have a chart like this on page 50
of the book.
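As a minimal sketch of the distinction, using made-up price-performance figures purely for illustration (they are not the book's data): on a chart with a linear time axis and a logarithmic value axis, exponential growth is a straight line and a least-squares fit of that line can legitimately be extended forward, whereas a straight line fitted with both axes logarithmic describes a power law, which is why extending it is invalid.

```python
import numpy as np

# Hypothetical price-performance figures (calculations per second per $1,000),
# invented only to illustrate the fitting procedure; not taken from the book.
years = np.array([1990.0, 1995.0, 2000.0, 2005.0])
cps_per_1000_dollars = np.array([1e5, 1e7, 1e9, 1e11])

# Valid extrapolation: time on a linear axis, value on a log axis.
# Fit log10(value) = slope * year + intercept; a straight line here means
# exponential growth, and the slope gives the doubling time.
slope, intercept = np.polyfit(years, np.log10(cps_per_1000_dollars), 1)
print(f"doubling time ~ {np.log10(2) / slope:.2f} years")
print(f"projected 2015 value ~ {10 ** (slope * 2015 + intercept):.1e}")

# By contrast, a straight line fitted with BOTH axes logarithmic
# (log value vs. log time) models a power law, not an exponential,
# so extending such a line forward is the invalid step described above.
```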
This acceleration is a key point. These charts show that technological
evolution emerges smoothly from the biological evolution that created
the technology-creating species. You mention that an evolutionary
process can create greater complexity—and greater intelligence—than
existed prior to the process. And it is precisely that intelligence
creating process that will go into hyperdrive once we can master,
understand, model, simulate, and extend the methods of human intelligence
through reverse-engineering it and applying these methods to computational
substrates of exponentially expanding capability.
That chimps are just below the threshold needed to understand their
own intelligence is a result of the fact that they do not have the
prerequisites to create technology. There were only a few small
genetic changes, comprising a few tens of thousands of bytes of
information, that distinguish us from our primate ancestors: a bigger
skull (allowing a larger brain), a larger cerebral cortex, and a
workable opposable appendage. There were a few other changes that
other primates share to some extent, such as mirror neurons and spindle
cells.
As I pointed out in my Long Now talk, a chimp's hand looks similar
to ours, but the pivot point of the thumb does not allow facile manipulation
of the environment. In contrast, our human ability to look inside
the human brain and to model and simulate and recreate the processes
we encounter there has already been demonstrated. The scale and
resolution of these simulations will continue to expand exponentially.
I make the case that we will reverse-engineer the principles of
operation of the several hundred information processing regions
of the human brain within about twenty years and then apply these
principles (along with the extensive tool kit we are creating through
other means in the AI field) to computers that will be many times
(by the 2040s, billions of times) more powerful than needed to simulate
the human brain.
You write that "Kurzweil found that if you make a very crude comparison
between the processing power of neurons in human brains and the
processing powers of transistors in computers, you could map out
the point at which computer intelligence will exceed human intelligence."
That is an oversimplification of my analysis. I provide in the book
four different approaches to estimating the amount of computation
required to simulate all regions of the human brain, based on actual
functional recreations of brain regions. These all come up with
answers in the same range, from 10^14 to 10^16 cps for creating a
functional recreation of all regions of the human brain, so I've
used 10^16 cps as a conservative estimate.
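One common back-of-envelope route to a figure in this range, shown here only to make the order of magnitude concrete (the three factors are rough, generic assumptions, not the book's four specific functional analyses):

```python
# Rough, illustrative estimate of brain-scale computation; the factors below
# are generic assumptions, not the book's four functional analyses.
neurons = 1e11                  # ~100 billion neurons (assumed)
connections_per_neuron = 1e3    # ~1,000 connections each (assumed, low end)
signals_per_second = 200        # ~200 signalling events per connection per second (assumed)

cps = neurons * connections_per_neuron * signals_per_second
print(f"{cps:.0e} calculations per second")   # 2e+16, i.e. on the order of 10^16
```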
This refers only to the hardware requirement. As noted above, I
have an extensive analysis of the software requirements. While reverse-engineering
the human brain is not the only source of intelligent algorithms
(and, in fact, has not been a major source at all until recently,
because we did not have scanners that could see into the human brain
with sufficient resolution), my analysis of
reverse-engineering the human brain is along the lines of an existence
proof that we will have the software methods underlying human intelligence
within a couple of decades.
Another important point in this analysis is that the complexity
of the design of the human brain is about a billion times simpler
than the actual complexity we find in the brain. This is due to
the brain (like all biology) being a probabilistic recursively expanded
fractal. This discussion goes beyond what I can write here (although
it is in the book). We can ascertain the complexity of the design
of the human brain because the design is contained in the genome
and I show that the genome (including non-coding regions) contains
only about 30 to 100 million bytes of compressed information, due
to its massive redundancies.
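The genome side of that claim can be checked with simple arithmetic (the compression ratios shown are simply what the 30-to-100-million-byte range implies; the brain-side comparison rests on further assumptions not reproduced here):

```python
# Raw information content of the genome versus the compressed figure cited.
base_pairs = 3e9                  # ~3 billion base pairs in the human genome
bits_per_base = 2                 # four possible bases -> 2 bits each
raw_bytes = base_pairs * bits_per_base / 8
print(f"uncompressed genome ~ {raw_bytes / 1e6:.0f} MB")          # ~750 MB

# The 30-100 million byte figure therefore implies roughly an 8x to 25x
# compression, attributed above to the genome's massive redundancies.
for compressed_bytes in (30e6, 100e6):
    ratio = raw_bytes / compressed_bytes
    print(f"compressed to {compressed_bytes / 1e6:.0f} MB -> ratio ~ {ratio:.0f}x")
```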
So in summary, I agree that the singularity is not a discrete event.
A single point of infinite growth or capability is not the metaphor
being applied. Yes, the exponential growth of all facets of information
technology is smooth, but it is nonetheless explosive and transformative.
© 2006 Ray Kurzweil
Mind·X Discussion About This Article:
Re: cognitive dissonance
Well, the Singularity is part of the evolutionary cycle. When you look at evolution on a macroscale, it appears to be progressive, with bacteria evolving into animals, animals evolving large brains, and basic computers evolving into powerful networked laptops.
But if you view evolution on the microscale, it doesn't seem to have much direction at all. Here we find a world full of sick animals as well as healthy ones, companies that go bust with poorly received products as well as companies that become household names, countries where millions starve and other countries where the food piles up in mountains.
But the world has always been like that. Indeed, natural selection by definition cannot work in a world that knows no discrimination. That's not to say that we shouldn't be working to address problems like the lack of mosquito nets in Africa, or the ludicrous delay involved in getting drugs to market; I just don't see any sense behind this 'the world is not a utopia so the singularity is impossible' point of view. The exponential trends underlying the singularity have operated in an imperfect world for 6.4 billion years (if you believe Kurzweil's charts); in fact, they DEPEND upon a chaotic and imperfect world. That's how evolution works. It seems rather odd to predict they will suddenly level off now, just because millions die off in some godforsaken country.
As for the fundamentalism, the only thing that the terrorists will achieve is a push toward decentralised communities, which will require exactly the kinds of highly automated industries, virtual presence communications and artificial intelligences that define the singularity concept.
The singularity is simply the next stage in an evolutionary cycle. It's not a utopia at all. At best, it will define a point where all our current problems like pollution, war, disease and poverty are exchanged for a different set of problems, just as giving free medical care, wiping out debt and dishing out mosquito nets will only result in a different set of problems requiring different solutions.
That's just the way it is, I'm afraid.
Re: say what?!
mekanikalmekka, no offense, but do you understand how evolution works? Evolution is a process that essentially is based on population changes due to either genetic drift or selection. Rape would be a selective force on some level if you are killing the person, because you are reducing the frequency of that person's genes. There is no such thing as positives or negatives in evolution; like you said, there is no morality in the universe, only inside minds. Evolution is something we do not control; it is like physics, a natural law of the universe. The singularity, at least in the context of human thought, is NOT like this. We are perfectly capable of saying "no, we won't do this". Thus it is not evolution. It is not some natural driving force behind biology. Sorry to burst anyone's bubble!
Re: we are CATALYSTS, not controllers!
The point about rape is an interesting one. I would point out that Mallard ducks mate in a way that can only be described as violent rape. It is quite distressing to watch, but natural selection does not have morals. For mallard ducks, those males that were violent were more successful at impregnating females than those that were nonviolent, and so this behaviour was favoured. For other animals (such as humans) other, less abusive mating rituals were favoured.
Kebooo is indeed correct to point out that there are ways in which technological evolution differs from natural selection.
One difference is that at its heart, natural selection is driven by random errors in the genetic recipe. On the other hand, the building blocks of technological evolution are 'ideas', and ideas manifest themselves in beings gifted with vision (in all its definitions). Still, ideas come in many guises, some of which seem more like natural selection than others. At one end of the scale we have cases where an idea is formed, and a clear strategy is devised in order to bring it to life. Such cases are far removed from natural selection. On another level we have those ideas realised through a process that might be called 'systematic blundering'. Think of Edison, carbonising everything from paper to bamboo to spider webs in his search for an effective light filament. And then we have novelties that are discovered through sheer accident, such as Fleming's discovery of penicillin. In that case, the true inventor really WAS natural selection!
So, some inventions develop from ideas in a process less like natural selection than others. But there is another difference between the two processes. Natural selection is not a forward planner, though it may SEEM so with the conceit of hindsight. In contrast, technological evolution usually does have a distant goal: Building rockets with the aim of reaching the Moon, seeking effective countermeasures for a disease and so on.
But just because there are differences between natural selection and technological evolution, that does not mean that useful parallels cannot be drawn between the two. Richard Dawkins, who seems to know a thing or two about Darwinism, is quoted as saying, 'there is an evolution-like process, orders of magnitude faster than biological evolution. This is variously called cultural evolution, exosomatic evolution or technological evolution. We notice it in the evolution of the motor car, or of the English language'.
Dawkins does point out that, 'we mustn't overestimate its resemblance to biological evolution', but to me the fact that Professor Dawkins (not someone to misuse Darwin's theory, and prone to come down hard on those pseudoscientists that do) is happy to admit that there are parallels between biological and technological development is a good indicator that there really are similarities between the two processes.
I have already written in my previous post that each new 'invention' of nature (or technology) invariably makes life easier for some and harder for others. Evolution simply could not work if this were not the case. Another important point is that 'designs' (in nature as well as technology) change to better suit the fitness landscape, and the fitness landscape itself changes with every new 'design' introduced. As the saying goes, 'when frogs get sticky tongues, flies get teflon feet'. The fitness landscape in which technological evolution occurs is just as pliable. Think of the new career opportunities, the new appliances, that came into being following the introduction of networked computers. Think of the jobs that were lost, the devices rendered obsolete, because of the arrival of networked computers.
It is for this reason (that every single action we take, and every single action we turn away from, will invariably benefit some and harm others) that I believe Utopia is impossible. There is clearly an evolution-like comparison to be made here: the harmful-to-some/useful-to-others dynamic, and the way the fitness landscape changes so that some opportunities close while new ones spring forth, are inherent in driving both natural and technological 'designs'.
As for the comment that we are perfectly capable of saying, "no we won't do this", well it just shows that it is Kebooo who misunderstands the underlying processes of the accelerating curve of technology, rather than me misunderstanding natural selection. The scenarios that singularitarians foresee are not going to happen because there is one laboratory that's sitting there creating a human-level intelligence in a machine. If that were true then, yes, we could put a stop to it.
But the reality is that it's happening because of thousands of little steps. Each step is conservative, perfectly sensible and not the least bit radical. Yet enough of these steps will add up to the kind of radical future imagined by Kurzweil et al., just as thousands of cumulative steps took us from hulking, expensive computers to inexpensive laptops that are orders of magnitude more powerful.
How many companies do you know of that have claimed the future implications of these technologies are so dangerous that they're going to stop creating more intelligent networks, or close down research into potential answers to medical issues, or that they will not pursue ideas that can refine the manufacturing process? Given the enormous economic, moral and ethical pressure to continue the advances, my gut feeling is that the answer would be...not many.
As well as cumulative knowledge we must also consider 'convergent knowledge', whereby answers in one discipline come to influence research in a totally separate discipline. Indeed, at first glance you might have thought that the two areas had nothing in common.
Consider the following fields: 'Material science, mechanical engineering, physics, life sciences, chemistry, biology, electrical engineering, computer science, IT'.
Some of these are obviously more related than others (physics/mechanical engineering, life sciences/biology). But in actual fact ALL of these fields converge on the field of study known as nanotechnology. Because all of these fields are, in one way or another, related to nanotechnology, research in any of these disciplines could have an effect on the time it will take to achieve the goal of molecular manufacturing. It could even be the case that a solution comes in the form of an emergent pattern involving several of these sciences and hundreds of research topics, comments and insights all engaged in a complex web of cause-and-effect. Quite possibly, such a pattern would be impossible for a single person to perceive, but it would be intuitive to the creative force that emerges from people plus networked computers plus increasingly powerful search tools.
So, what Kebooo is really saying when he insists 'we can stop this' is: 'We must tell shampoo manufacturers (it doesn't have to be shampoo, I just need an example) that they cannot improve their product lest their chemists' work should influence other fields and open up a path to molecular manufacturing'.
Ridiculous!
Finally, I would point out that I have read many books and essays arguing that the path to singularity should be avoided. These authors are very fluent when it comes to describing the potential dangers of biotech, nanotech and AI, but when it comes to describing PRACTICAL ways of preventing research and development, well no author I have ever read can provide real answers. The best you ever get is some vague reference to 'relinquishment'. Oh dear. This answer is as unworkable as it is obvious. It has NEVER worked in the past and there is no reason to think it can work for a concept that is the result of thousands of little steps, each of which is conservative and much needed for economic and humanitarian reasons, influencing ideas shared among people networked like never before.
Sorry, Kebooo, but I am well versed in the similarities and differences of natural selection and technological growth. However, your writings do not convince me that you are similarly adept at understanding the principles underlying the singularity.
Re: we are CATALYSTS, not controllers!
150 million years ago the dominant species on the planet was probably Tyrannosaurus rex, a meat-eating dinosaur (quite possibly, according to theoretical paleontology, their line survives in what are now the species of birds).
60 million years ago, the dominant species on the planet were large birds, whose skeletons have been unearthed in South America.
20 million years ago, the dominant species on the planet were mastodons and sabre-toothed tigers, which had started the age of the mammals.
Homo sapiens has emerged in the last million years, and by virtue of technology has left the largest "footprint" in Earth's existence.
Homo sapiens, by virtue of its senescence, its awareness of its own death and its ability to manipulate its environment, is about to begin manipulating evolution and merging with its own technologies, creating some kind of new life form called "Singularitarians." As with all life forms in their infancy, this organism-entity's first few hundred years will be critical. Some suppression of destructive and self-destructive urges within such a species will no doubt occur, but that is understandable. Living forever will have some long-term unpleasant consequences... imagine yourself having to continually run into that bully who whipped you in 3rd grade for a thousand years. But, no doubt, it will be an amazing time. Personally, I am looking forward to living a century in each one of the world's greatest cities. I am most assiduously interested in 100 years in Istanbul and Shanghai.
Re: we are CATALYSTS, not controllers!
Extropia, you seem to believe little steps will create the singularity. There will be a point between AI being conscious and not conscious, or between non-nanobots and nanobots. This won't be a stage that simply comes about due to some little step; it would be great if life were so easy. It will only come about with intensive research with that specific goal in mind. Research, even with millions and billions of dollars, goes at a snail's pace right now. How do I know? Because I conduct research on proteins and it is a tedious process. This stuff doesn't happen without enormous levels of human choice and human intervention to guide it along. Is the Big Dig in Boston part of the evolutionary cycle? No. To say it is would make the concept of evolution pointless.
You either misunderstand evolution or purposely misuse it to assert that the singularity is an unstoppable force when it truly is not. I guarantee you it could be stopped if individuals wanted it to be, which is the whole point. The Big Dig could've been stopped, or the Iraq War. Is the Iraq War part of the evolutionary cycle now? Well, is a war with Iran part of it too? The singularity could be brought about by thousands of researchers over decades, but those thousands of researchers are choosing to do it. This is fundamentally different from evolution. Do they have parallels? Yes. But I can draw a parallel between a videogame's evolution and biological evolution. You said "Well the Singularity is part of the evolutionary cycle," which is untrue because it is not an unstoppable fate the way evolution is. If it is unstoppable, then why do we fund any research? I mean, it'll get here on its own, just like evolution! I think you need to rethink your arguments or reword them if you truly believe the singularity is part of THE evolutionary cycle.
Re: we are CATALYSTS, not controllers!
My point about cumulative and convergent knowledge was not to say that a few easy-peasy steps will take us to the singularity, but rather that (A) the singularity will be the outcome of general trends pervading all research and (B) you never know what areas of research will open up new methods of investigation in other (seemingly) unrelated disciplines that could find shortcuts along the path to singularity.
For instance, if you happen to work in the area of software design there is a kind of selective pressure compelling you to create software tools that can match the creative powers of the human mind. It seems to me that as long as our computers do not compare with human brains in terms of creativity and imagination then some customers are going to be frustrated by machine intelligence.
If customers are frustrated with a product, that means a window of opportunity for rivals. Namely, address the frustrations by producing a product that is smarter than the rivals'. And as long as there is a danger of rivals stealing a march on your product by narrowing the gap between human intelligence and machine intelligence, there is pressure to bring out regular upgrades to your software, each a bit smarter, a bit more autonomous, narrowing the gap between us and them. Companies do not choose to research and develop these upgrades out of sheer whimsy; they do it in order to maintain their competitive edge over their rivals!
Now, I am not saying that in one impressive swoop one company will just cart out the computer that matches the brain's enviable powers. No, I would point to the A to Z argument. It works like this. Let us suppose that the first ever computer is represented by step A and the human-intelligence computer represents step Z. Maybe a handful of creatives imagine that they can leap to step Z, but the majority are content merely with getting from A to B, B being just a sensible step, justifiable on all sorts of levels. And from step B, a path to step C can be attempted and then D...E...F...
By the time you are at, say, step 'N' you might look back at the computer that represented step 'A' and think 'wow, if the people of that era could see the computer I use today they would be AMAZED!'. But, to the people of YOUR era, it's not amazing at all, really, because they view it as being just a step up from generation M. And exactly the same thing will apply when, after probably hundreds of years' worth of mental effort (and that need not necessarily imply centuries in terms of calendar years), we finally make it to step Y: the next step will seem like just another one of those sensible, sought-after improvements we have always been making. It seems like magic to us. It will seem like just another upgrade to them. Nor should you think I am implying that the efforts to get from Y to Z will be easy. On the contrary, it may well take a tremendous effort to pull off.
So the general trend toward AI is occurring because of thousands of steps each day, dedicated to finding ways of making software more reliable, more intuitive to our needs. These steps can each be justified on several counts. And on top of this we have the grand effort to understand human intelligence from a medical point of view. With an understanding of the biological principles of operation underlying human intelligence, we will be in a much better position to address such things as dementia. I'm sure the vast majority of people would want such research to continue. The effort to reverse-engineer the human mind affects the research into machine intelligence, and vice versa.
You can make exactly the same arguments for nanotechnology and radical life extension. People are enormously eager for the research into understanding the molecular underpinnings of human frailty to continue. As such, there is enormous ethical, economic and moral pressure to understand and address the aging process, EVEN IF THE MEDICAL COMMUNITY BELIEVES IT IS ONLY ATTEMPTING TO FIND CURES FOR OR PREVENT SUCH THINGS AS CANCER, ARTHRITIS ETC. Understanding the body and mind at the molecular level will clearly open up a path to prevent such problems... and as aging itself is nothing but a physical problem occurring at the molecular level, such research MUST open up methods to tackle this as well.
It is not my argument that the singularity is absolutely impossible to prevent. I only show that it is extraordinarily unlikely to be prevented (except via some kind of global catastrophe). Sure, if every single person in the world thought, 'I don't need medicine to improve, or slightly smarter machines, or a more refined manufacturing process' then, yes, progress toward biotechnology, nanotechnology and robotics/AI would indeed cease. But so long as at least one person expresses a desire for improved medicine, more precise re-arrangements of atoms into useful products, and machines that are more reliable and less frustrating to be with, and as long as there exists people who would like to provide for such longings, then the collective work heading toward singularity is going to continue to receive funds!
Finally, I'm afraid I really don't have time to explain everything I know about evolution. You'll have to trust me that I am well versed in how natural selection works.
I have written extensively about how technological evolution is only superficially similar to natural selection and have even taken pains to identify ways in which they compare and contrast. I suppose the question we should ask is this: Once natural selection takes hold, what is the likelihood of a technology-creating species evolving? Constraint and convergence are rampant throughout evolution, which seems to imply that a creature capable of developing technology must arise sooner or later. Why? Because natural selection is CONSTRAINED in the sense that it has only finite ways in which it can address a problem, and so it tends to CONVERGE on similar solutions. Ultimately, this is a question that can only be answered in full by journeying to other Earth-like planets and observing what evolutionary paths occurred there. But the evidence we can look to on Earth does strongly suggest convergence.
That being the case, technological 'evolution' is no more identical to biological evolution than cumulative selection is the same as random selection. It's just that one seems to follow from the other. Random selection resulted in cumulative selection and (at least on this planet) cumulative selection resulted in a technology-creating species, and so technological 'evolution' began, different and yet part of a continuum.
Re: we are CATALYSTS, not controllers!
This comment needs refining, lest Kebooo further misrepresents my arguments.
The original and only correct definition of the Singularity is 'the creation by technology of entities with greater than human intelligence'.
Kebooo stated that (the singularity) 'won't be a stage that comes about due to some little step'. But unless you are a Creationist, you must concur that human intelligence is a product of natural selection. And natural selection builds complexity from simplicity, i.e. it takes many, many little steps. And so, Kebooo's position is demonstrably wrong from the standpoint of natural history!
Another point is that Kebooo misunderstood what 'little step' meant. It was not intended to imply these steps are easy. It was meant to show that we are converging on the singularity because of thousands of ideas being realised that will take us, step by step, further along the road to Singularity. Compare this with natural selection. It ultimately created human intelligence, but that was never a specific goal. No. The only goal that natural selection EVER attempts to reach is for its 'designs' to survive in their environment long enough to procreate. The thing was, after billions of years' worth of competition and cooperation between countless 'designs', at least one converged on the 'design' of a technology-creating species.
Similarly, companies, groups of people and individuals are always putting ideas out into the market where they must compete or cooperate with other ideas. Hence, these products, services, whatever, must be continually refined, improved, if the company wishes to maintain its competitive edge. Now, it is certainly not the case that every single invention must be made smarter to be useful. Obviously not. But nonetheless there IS a general trend throughout many diverse products and services toward autonomy and intelligence. That is what the concept of thousands of little steps is supposed to convey: NOT that the goal of intelligence or molecular manufacturing will ever be easy, but rather that out of the countless ideas competing and cooperating out in the big harsh world, some will converge on machine intelligence or molecular manufacturing.
Re: we are CATALYSTS, not controllers!
Extropia, I am not saying small steps don't lead to complex systems. However, your comparison once again is trying to equate technological evolution with biological evolution when the two shouldn't be equated, especially when biological evolution is driven by reproduction while technological evolution is driven by research. Biological evolution is not designed, unless you subscribe to intelligent design beliefs. On the other hand, technological "evolution" is designed and purposely guided. Creating better medicine and smarter AI doesn't suddenly cause the singularity; it can lead us closer to it, but one must purposely design conscious AI or purposely design nanobots. They don't just appear. Just like small steps don't just appear, the bigger, or more important, steps don't either. Only choice, thus not evolution, can lead us to the singularity. I certainly believe there will be times when issues about conscious AI will be brought up in government legislation, much like human cloning. Scientists are not without their own ethics, nor do they live outside of law or escape the idea of choice. This is what makes the singularity far different from evolution.
Re: we are CATALYSTS, not controllers!
I agree that the Singularity is far different from evolution, just as cumulative selection (which is basically the means by which natural selection works) is different to single-step selection.
But one of the points I was making was that the general opinion held by serious scientists and thinkers is that there must have been a time when the necessary conditions for cumulative selection could have been set up by the blind forces of nature. So, it is true that it is wrong to directly compare Darwinism with single-step selection, which would explain living organisms in terms of chance. In fact, that would be the opposite of the truth. Chance is a minor ingredient, but the most important ingredient is cumulative selection, which is most definitely NOT random.
But if it were really true that the blind forces of nature set up the necessary conditions needed for cumulative selection to get going, it must be legitimate to say that one naturally led to the other. No doubt there are countless ways in which atoms, chemicals and molecules can be formed and reformed, and only a minuscule fraction of these random changes would lead to anything other than more random change. But out of so much randomness occurring on all the planets in the known Universe, we are confident that on THIS planet at least one process of random change just happened to stumble into cumulative selection and biology got going.
Now once natural selection got going, the next question to ask is this: Given sufficient time, what is the chance that biology will wander down the path that leads to a technology-creating species? Do not take this to mean that we are destined to be, that the purpose of natural selection is to evolve human beings. That is not what I am saying at all. Rather, I am asking this. If evolution were to run again, what would the result be? Life forms utterly different to anything we see here? Or eerily similar?
Richard Dawkins wrote, 'convergence we have met again and again. I thought I was pretty extreme in my enthusiasm for convergent evolution, but I have met my match in Conway Morris, who presents a stunning array of examples. But whereas I usually explain convergence by invoking similar selection pressures, Conway Morris adds the testimony of his second witness, constraint. The materials of life, the processes of embryonic development, allow only a limited range of solutions to a particular problem. Given any particular evolutionary starting situation, there is only a limited number of ways out of the box. So if reruns of a Kauffman experiment encounter anything like similar selection pressures, development constraints will enhance the tendency to arrive at the same solution.
You can see how a skilled advocate could deploy these two witnesses in defence of the daring belief that a re-run of evolution would be positively likely to converge on a large-brained biped with two skilled hands, forward-pointing camera eyes and other human features'.
If that is the case, that a technology-creating species is one of the outcomes of natural selection that can be relied on to evolve sooner or later, then surely technology MUST be something that naturally follows on from cumulative selection, just as cumulative selection followed on from single-step selection.
Another way to put this would be to say random chance led to the gene, and the gene led to ideas. Ideas are what drive technological evolution. (No less an authority on Darwinism than Richard Dawkins wrote, 'there is an evolution-like process...called technological evolution...we mustn't overstate its resemblance to biological evolution'.)
But the key point to ask is this: What constraints are imposed on the choice we have to pursue or turn away from ideas like, say, molecular nanotechnology? Well, here on dear old Earth there is only a finite amount of resources for life forms to use. Ever since Thomas Malthus wrote 'An Essay on the Principle of Population', we have known that life-forms cannot reproduce indefinitely, for sooner or later they must run up against the limits imposed by finite resources. Our human civilization can only hope to delay the Malthusian catastrophe by continually refining the manufacturing process, in other words by learning to control the re-arranging of atoms with ever greater precision. Follow the logic: Halt this progression and the Malthusian catastrophe WILL result. Allow it to continue and molecular manufacturing WILL result.
As we all know, Malthus's essay provided a key insight that enabled Darwin to outline the mechanism by which evolution worked. Not all the life forms that could theoretically live would ACTUALLY live; there just aren't enough resources to allow this. And the fact that some live and some die was not a random choice: the ones that lived were more suited to their environmental niche.
Malthus continues to drive humanity to pursue the technological ways and means of opening up a path to control of matter at the molecular level. We can choose to pursue this goal. We can choose to die. If that's your idea of choice then, yes, the radical future will happen only because that is what we choose.
Still, while one can make a strong case against the relinquishment of nanotechnology, I feel that the case against relinquishing AI is not so strong. But still, I have to wonder: what would be the likelihood of a technology-creating species, probably endowed with a strong sense of curiosity, not wishing to re-create and amplify its own magnificent powers through technology?
The mathematician E. C. Titchmarsh once said: 'It can be of no practical use to know that pi is irrational, but if we can know, it surely would be intolerable not to know'.
It must be similarly intolerable not to know if we understand ourselves sufficiently to re-create intelligence in our technology. And far from being of no practical use, such a technology would be of IMMENSE usefulness. IMMENSE.
Of course these things won't just appear. Of course these things won't be easy to create. But in a world where ever-improving networks allow a greater exchange of ideas, and exchanging ideas can lead to novel insights, the path to artificial intelligence may go from being exceedingly difficult to obvious.
Ideas compete and co-operate with other ideas, and thereby evolve. And ideas are the genes of technology. Conclusion: Technology itself evolves.
Re: My idea has 'evolved'.
Funnily enough, having written all that, I am now wondering whether pairing the word 'technology' with 'evolution' may be just as misleading as Kebooo keeps warning me.
A little detour first. If you look at natural selection at the microscale of the gene, you will find that random chance plays a role here, in the form of mutations in the genetic recipe. But if you contemplate the picture on the wider view of organisms struggling for the right to pass their genetic legacy on to the next generation, from here it's true to say that mere chance plays a decidedly minor role in which genes survive and which do not. Therefore, calling evolution 'random' selection is very misleading. 'Cumulative' selection is less likely to lead to the kind of misunderstandings that tend to crop up in creationist literature.
Now, I still think that a reasonably strong case can be made that if you view technological evolution on the microscale, you find the 'genes' of technology are ideas. I believe ideas evolve. I can speak from personal experience that sometimes my own ideas have 'competed' with others. Sometimes my ideas take into consideration alternative opinions and incorporate parts of them into an updated viewpoint in a kind of sexual recombination. Some ideas survive, some go 'extinct'. I'm sure I'm not the only person who holds many ideas, some 'fitter' than others. Ideas definitely evolve.
But, again, if you view technological growth on the 'macroscale', perhaps the evolutionary nature of ideas gives way to Kebooo's non-Darwinian process of research that turns ideas into the tools of technology? And so, while something like natural selection could be said to play a role at the 'gene' level (just as random chance plays a role at the biological gene level), on a wider scale it is decidedly misleading to equate technological growth with 'evolution'?
Perhaps we would be better off debating whether or not natural selection naturally leads on to technological CHANGE, rather than technological EVOLUTION?
Re: My idea has 'evolved'.
Extropia, I'm having trouble understanding what you're saying. I wouldn't say ideas are capable of evolving. Perhaps a person's mind can "evolve" in the sense that they acquire new ideas. However, on a metaphysical level, one can argue all ideas already exist. I bet no one has ever thought about me sitting at my PC listening to jazz while watching the Discovery Channel. Surely I didn't just create that idea out of thin air, nor is it going to evolve. It's just "there", waiting to be thought of. Whether your brain can "evolve" is interesting, but once again, I don't see any usefulness in giving it that term. One could just call that "learning", or "thinking".
There can be randomness in natural selection, of course: genetic drift, environmental factors, resources available, etc. all play roles that affect genes. I don't believe that humans were the inevitable result of evolution, merely a result of the environment and organisms' interactions with it and each other. So, no, I don't believe evolution leads to technology. Evolution is merely a process. The HISTORY of our evolution certainly led to technology; that's just how it happened to unfold. Evolution could occur elsewhere and technology never come about in a trillion years.
Likewise, in theory, a population less intelligent than us could create technology and never reach even electricity. Thus technology does not evolve without intelligent intervention, or in other words, choice. And once you add in design, how could it be considered evolution? It is the creationist argument just in another context. Once you begin to add design in, you can simply say anything that has already occurred is inevitable in evolution. The Iraq War was part of the evolutionary cycle by that logic. What purpose is there in saying that? Humans evolved to discuss things on KurzweilAI.net?
If I'm missing what you're saying, please say so, because I didn't quite see what you meant.
Re: My idea has 'evolved'.
You say that on a metaphysical level, all ideas already exist, and so they cannot evolve.
But exactly the same argument can be made concerning genetic space. In 'The Blind Watchmaker' it says that,
'Sitting somewhere in this huge mathematical space are humans and hyenas, amoebas and aardvarks, flatworms and squids, dodos and dinosaurs. In theory, if we were skilled enough at genetic engineering, we could move from any point in animal space to any other point. From any starting point we could move through the maze in such a way as to recreate the dodo, the tyrannosaur and trilobites. If only we knew which genes to tinker with, which bits of chromosome to duplicate, invert or delete. I doubt if we shall ever know enough to do it, but these dear dead creatures are lurking there forever in their private corners of that huge genetic hypervolume, waiting to be found if we but had the knowledge to navigate the right course through the maze'
Does this not imply that, in an abstract sense, all forms of life already exist, waiting for history to navigate through the maze? Of course, many of the hypothetical creatures could never ACTUALLY exist, and one of the mysteries of evolution is exactly how it navigates its way past the almost infinite array of impossible monsters to home in on the plausible animals, perched in its own unique place in genetic hyperspace.
But, as I keep saying, we have evidence that evolution is guided toward certain solutions by constraint and convergence. Consider the likelihood of a creature resembling an insect evolving again if evolution were re-run. Among the defining features of insects are the following: an articulated exoskeleton; compound eyes; a characteristic six-legged gait, whereby three of the six walking legs are always on the ground and thereby define a triangle (two legs on one side, one leg on the other) which keeps the animal stable; respiratory tubes known as trachea that serve to bring oxygen into the interior of the animal via special openings (spiracles) along the side of the body.
Were these all one-offs in the lottery of life? Not a bit of it. Each item has evolved more than once in different parts of the animal kingdom, in many cases several times, including several times independently in insects themselves. If nature finds it so easy to evolve the component parts of insecthood, it cannot be so implausible to suppose that the whole collection should evolve twice. I would not be at all surprised if, having landed on an alien planet, we find animals approximately similar to insects. And, as I have said, the component parts of a human-like animal are also rampantly convergent and have independently evolved many times.
But why should we emphasise the emergence of a human-like creature over the emergence of insecthood? For the same reason that we consider the first instance of cumulative selection to be so much more than chemicals or atoms jostling around, connecting and disconnecting at random. The Universe is full of matter and energy engaged in random re-configurations. But these would have an infinitesimal chance of just happening to arrange themselves into a butterfly (for example), whereas cumulative selection clearly DOES have the capability to 'design' butterflies, people, jaguars...everything that makes our planet such a jewel of the solar system. Because of this, it is absolutely legitimate to claim the link from non-life to life was special amongst all the random re-combinations of matter that occur.
So, we went from directionless, random re-configurations of matter to directed re-configurations (directed in the sense of constraint and convergence, i.e., with a limited number of viable solutions to a problem, the same answer will probably be repeated). But, as you said, natural selection does not have a distant goal. A technology-creating species, however, can PURPOSEFULLY re-configure matter and energy into impressively useful forms and it has the FORESIGHT to aim at a distant target. So, again, the fact that the evolution of a technology-creating species heralds a new paradigm of 'invention' above and beyond that of any other product of evolution marks the arrival of such a species as a pivotal moment.
As to the question of which designs are inevitable and which are not, I guess it comes down to choice. Like I already said, nanotechnology is really no choice at all, since it is the continuing refinement of the manufacturing process that delays the Malthusian catastrophe that plays such a key role in shaping the fitness landscape. For similar reasons, we may consider the development of agriculture as an extremely likely outcome following the arrival of a technology-creating species. I have also tried to make the case for AI, based on our drive to understand our own minds and its sheer usefulness across so many areas of society. And given that technology-creating species must fight (or co-operate) for diminishing resources just like any other creature, I would say that war and the weapons to fight it are also extremely likely outcomes. That, of course, does not mean to say that a PARTICULAR war was inevitable.
Provided you take into account the differences, I don't see how it can be so harmful to draw an analogy between species co-operating and competing in the fitness landscape of genetic space, and ideas co-operating and competing in memetic space, or technological designs competing and co-operating in the fitness landscape of the market. As with all analogies, it is not to be taken literally. If you read that 'Da Vinci's mind soared', you do not take that to mean his brain sprouted wings and flew out of his skull!
Re: we are CATALYSTS, not controllers!
unless you are a Creationist, you must concur that human intelligence is a product of natural selection. And natural selection builds complexity from simplicity, i.e. it takes many, many little steps
What about unintelligent design?! What about the idea that order emerges from cellular automata (possibly sometimes rapidly, with a new "molecular pull machine" created from a freshly-mutated protein or DNA sequence)?
I think Stephen Wolfram should lobby the public schools to teach "unintelligent design" (it's funny if you consider that CA-based evolution is fairly complex, but that the base CA are "unintelligent" simple pieces). It would blow everybody's mind, since he's obviously very intelligent. Plus, it would show up the morons in the south who are trying to dumb everybody down because they want their kids dumbed down. It would also show up the liberals who want to force their view on everyone (even if it is more scientifically correct than "intelligent design").
Also, if intelligent design is correct, then why does neither side point out that the debate is misplaced (that it is the funding of education by force that is wrong), except for a small group of libertarians, philosophers, and anarchists (who are mostly atheists)? Wasn't God intelligent enough to design rational people open to logical argument? Did he design us in his own image when he was drunk?
Maybe god didn't design us in his image; he designed us to be better than himself, since he was so loving and smart. Maybe he didn't know how to operate on himself, so he resigned himself to death. Maybe the dumber someone is, the closer they are to God, and it is an insult to god's craftsmanship to say we were made in his own image. Maybe god's species will return one day to retrieve him from cryogenic preservation and he will judge us all harshly for betraying his vision.
Maybe the one test we have to pass is to stop using force against each other, and so all the world's god followers will be destroyed by him. Maybe his promise of eternal life was meant to apply to those who came into existence post-singularity. Those, then, who attempt to hold back the singularity are his antithesis (mostly they are religious luddites and luddite leftist lawmen, what Ayn Rand called "The Anti-Industrial Revolution").
Maybe god never existed before, but he is coming into existence. Maybe billions of gods will one day exist. It makes sense that they would be very judgemental.
In order to create, one must be capable of judging the bad from the good. First and foremost, Gods create.
I personally don't believe in god, but have confidence in unintelligent design as a combination of CA-design, and natural selection veto of the most incapable.
Dictatorships and collectivist versions of dictatorship are my evidence that man is not yet done evolving.
It seems likely to me that evolution is a thing that starts out stupid and gets smarter after a certain inevitable point (intelligences redesigning their own substrates). The point is not really inevitable, but will happen if the pre-singularity species doesn't drive itself extinct first. The statistically most likely way for us to render ourselves extinct is via government mass murder. Historically, this is what the human race has always done...
See:
http://www.hawaii.edu/powerkills
Re: we are CATALYSTS, not controllers!
Seems to me that many of the comments in this thread are far too anthropocentric. Take a big step back. Take a look again at the Six Epochs of evolution outlined by Ray. Any one of these epochs is susceptible to derailment, but the process as a whole will pick up again. Any specific event can be stopped, or blocked, but the process of evolution does not depend on specific events. Every 'event' is a cell in a branch of the evolutionary tree. The branch we are part of may not survive, but another branch will. It may be way down the trunk, but it will grow, and eventually, the conditions for the singularity to occur will be reached again. The Singularity WILL BE a step in the evolution of the universe, whether or not it manifests in our near future, with us, the human race, as its catalyst. It may not happen for another six billion years in some other corner of some other galaxy, but it will arise from similar conditions.
Re: we are CATALYSTS, not controllers!
It is truly disturbing to see this argument devolve into more senseless diatribe by people who more than likely are not even qualified to ascertain or speak on such topics as evolution. First, the evolution of the species, in regard to the Homo sapiens that I am, is one of intent and rationalized intelligence. I see what the future of my line will become simply because I see where I came from and what I came from. Ask yourself this question: is the singularity an inevitable outcome? And if so, why? Also, remember that the singularity is the result of one thing becoming two separate events and then returning to a singularity, the red and blue shift. To apply evolutionary concepts to this astronomical event is more mental masturbation than science. Thank you for playing; please turn in your thumbs, crawl back up into the trees, and get the hell out of my gene pool.
Re: cognitive dissonance
Extropia, the IDEA of all life forms already exists. All life forms do not already exist. There is the key difference. Evolution applies to what happens and not what is possible through design. In over 4 billion years, only one species evolved intelligence like ours, despite the fact that intelligence didn't give us true dominance until LONG after we evolved it. It just allowed us to compete better. In contrast to this, compare how many species of insects exist or have existed. Evolutionarily, before we had technology, many species were easily more fit than us; luckily we could depend on even more species to keep those in check, bacteria for example.
Ideas don't "compete"; that is the problem. It is YOUR mind creating competition. If I think of two animals competing in my mind, that doesn't make it evolution. It's like me taking two toy soldiers and bashing them together until one breaks, saying I have just created an evolutionary cycle once I add a new toy weapon to the toy soldier that won. You may think two ideas compete in YOUR mind, but in my mind I may say they don't compete. Well then how is that evolution? Evolution is a fact; it's not based on opinion or perspective, it happens. You can say technology "evolves", but saying it is part of the evolutionary cycle, once again, is something that has many problems and makes little sense.
mblaine1980, I don't know what you believe, but I feel your statement has no basis in reality. At a practical level humans have choice (you could say they don't, due to cause and effect), but at least we still have choices that haven't been made. There is no reason to believe humans are incapable of deciding what to do with their lives. If the singularity is impossible to stop, why do we keep funding science? It'll get here on its own, like evolution. Let's go spend our money on other things. WILL we stop it? Probably not. But certainly we have the capability to, as a species.
Re: cognitive dissonance
|
|
|
|
If you have never had ideas competing for survival in your mind, that is probably because it is closed to anything but the narrowest perceptions of what you have been taught.
Every single book I have ever read about evolution ends with a speculation about cultural or technological evolution. 'The Selfish Gene' has a chapter called 'Memes: the new replicators'.
'A new kind of replicator has recently emerged on this very planet. It is staring us in the face. It is still in its infancy, still drifting clumsily about in its primeval soup, but already it is achieving evolutionary change at a rate that leaves the old gene panting far behind... Examples of memes are tunes, ideas, catch-phrases, clothes fashions, ways of making pots or of building arches. Just as genes propagate themselves in the gene pool by leaping from body to body via sperm or eggs, so memes propagate themselves in the meme pool by leaping from brain to brain via a process which, in the broad sense, can be called imitation. If a scientist hears, or reads about, a good idea, he passes it on to his colleagues and students. He mentions it in his articles and his lectures. If the idea catches on, it can be said to propagate itself, spreading from brain to brain'.
And in 'Evolution' by Carl Zimmer, the chapter titled, 'Modern Life, 50,000bc' states:
'Since the 1970s, the world's computers have begun joining together as the World Wide Web has spread like a mesh of fungus threads. The Web is circling the Earth, subsuming not only computers but cars and cash registers and televisions. We have surrounded ourselves with a global brain, which taps into our own brains...Research on artificial life and evolutionary computing has already suggested that computers can evolve an intelligence that doesn't resemble our own'.
And in 'The Origins of Life' by John Maynard Smith we find:
'The latest transition, through which we are living today, is the use of electronic means for storing and transmitting information... Will electronic devices acquire means of self-replication, and evolve to replace the primitive life forms that gave them birth?'
While you are correct in saying that insects and bacteria are the true champion survivors of natural selection, this entirely misses the point of why a technology-creating species stands out as special. A thousand trillion trillion years of insect dominance would not lead to another leap in the possibility of matter and energy being manipulated into increasingly complex structures that perform a useful function. A technology-creating species represents the link from cumulative selection (which designs without foresight) to technological evolution (in which the designs are aimed at a distant goal).
It is perfectly legitimate to say that the products of technology evolve. The market is littered with alternative designs, some fitter than others. Take toy soldiers, for example. If every child had two different toy soldiers and they bashed them together and one broke, it is much more likely that when they recommend a toy to their pals, it will be the one that DIDN'T break that gets favoured. Tools that are useful tend to get passed down through the generations. Tools prone to breaking tend to go 'extinct'.
And your objection that if the Singularity is unstoppable then science need not be funded just completely misses the point. Some ideas are more likely to get funding than others. Smarter, autonomous technology would be such an asset to just about every sector of the market that its likelihood of getting funding is very strong. The sheer interest we have in understanding the mechanisms of our own intelligence is such a powerful urge that brain reverse-engineering is extremely likely to receive funding. The promise of nanotechnology lies at the end of the path of increasingly fine control over matter and energy, a route that Malthus decrees must be trodden on pain of disaster.
Once computers can match the creative ingenuity of human beings, or once human beings can match the information retrieval, exchanging and downloading capability of computers, the re-arranging of matter and energy into useful structures will once again shift into another gear.
The progression of increasingly complex arrangements of matter and energy competing for survival, each epoch offering clear departures from the limitations of the previous epoch, is obvious. Why can't you see it?
Re: cognitive dissonance
"If the singularity is impossible to stop, why do we keep funding science? It'll get here on its own, like evolution. Let's go spend our money on other things."
If you or any one single intelligent person "stops funding science", it will still continue, because it confers a survival advantage. It might continue more slowly, and in different areas than you would have promoted with your paper dollar (or silver coin if it's after the coming crash of the dollar) but it will continue. (This is even true if you mean coercive government "funding" via the theft of collective taxation). Moreover, if funding stops, the science will become more efficient, out of necessity. It will become more goal/product oriented.
If science is outlawed, the scientists will become outlaws, but they will not stop thinking and creating. (They will be hired by companies to do applied research and engineering.) If most scientists continue to work for the state and CERTAIN KINDS of scientists become outlaws, then a rogue state (or individual) will become a more deadly outlaw, and will allow the illegal sciences to operate within it (or via its own mind).
A government decree (threat of force) creates a social or economic pressure that modifies action but never stops it (look at the world's black markets for an example of this: I can still get speed, heroin, cocaine, pot, a prostitute, or a bookie within one hour, if I have a few hundred bucks... and all those things are flatly illegal and widely considered disreputable). Science is considered far more reputable and worth funding.
The nation that allows banned science will (and does) flourish and dominate.
That's why there is emergent order from beehives, evolution, ecosystems, and bird flocks.
Your mind is somewhat like that too, according to Minsky.
There are lots of things in the human mind of which the true mechanism remains hidden from all but the most careful technological observer.
-Jake
Re: cognitive dissonance
In a short story named "The Call of Cthulhu" H.P. Lovecraft wrote, "The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents... some day the piecing together of dissociated knowledge will open up such terrifying vistas of reality, and of our frightful position therein, that we shall either go mad from the revelation or flee from the light into the peace and safety of a new dark age."
I re-read this story recently and immediately related it to Mr. Kurzweil's most recent book. I'm not much of a scientific mind (I tend to read Kurzweil's book purely to gratify my interest in extraordinary concepts and ideas) and because of this some may dismiss my post. But what this line from 'The Call of Cthulhu', and my instant relation of it to Kurzweil's work, made clear to me was that, while mathematics, science, and all the other tools humanity has at its disposal can measure most of what we know, they have never truly calculated or measured the profound peaks and unthinkable, fearful depths of humanity itself. I feel with certainty that, until the plug is out of our hands, it will never be too late for us to pull it, whether we'd like to or not.
Re: cognitive dissonance
I think you are putting your own socialist thought into your reasoning. Don't worry about the little people. They have always taken care of themselves. They will do so after the singularity also. Poverty and disease will go away as people advance in their understanding, and the need for material things will diminish also. No matter how hard you resist, even the most ignorant learn as they get older. The reason for poverty is not that some rich guy wants people to be poor. People tend to advance in income and understanding with age. I admit some of the world's people live under governments that oppress them, but even those people will rise up and find equality if they can be given enough time, and time is what is coming: time to live and learn and prosper.
I just finished a book that deals with self-aware computers and their ability to care for human needs. One day, sooner rather than later, machines will work and people will learn. No one will want or need.
Author Belle Smith's (LUCIAN'S PLACE) concept of human and non-human only differs if the non-human is not self-aware. Any being that is self-aware may be alive, although some creatures with a small number of cellular components are not aware. Even these strive to survive. Once we cross the threshold of self-awareness, we will have to bestow rights on these creatures. It might be in our best interest not to develop creatures that are self-aware.
In Lucian's Place, author Belle Smith (bellesmith.com) came up with an answer to most of the problems of living machines. Her concepts are innovative and intriguing to the point of being genius. Her protagonist lives in a faraway place, pre-Clovis America, far from any other humans. However, the three occupants have at their disposal a self-aware supercomputer who thinks of herself as human.
F.A.N.N.Y (First American Neural Network, Yonkers) not only thinks of herself as alive, she (as she thinks of herself) considers the mother, daughter and male friend her family. Through this great adventure, FANNY looks after her family and nefariously watches their lovemaking, dreaming of being an organic human.
FANNY, like the other two women, falls in love with Tony. The adventure continues for years and comes to a very unexpected ending that will leave you feeling good for days, and you will have an answer to what to do with living machines.
Re: cognitive dissonance
Yes, there is an elephant in the room. Yes, the emperor is without any clothes. Why? Because the timeless addiction to the saying 'see, speak, and hear no evil', still being sold as 'positive thinking after negative doing', makes them all elephants, unites them all with the emperor, and prevents the Singularity from coming into existence. The Singularity will be delayed by decades, even centuries, and maybe forever, until knowledge from Justice Autopsy closes the open psyche of obvious make-believe, opens the closed mind of obvious belief, and the whole wide world, in need indeed, is thus ready, willing and able to see what does exist, refuse to see what does not exist, and let indelibly preprogrammed Mother Nature and Father Nurture (God) begin to practice Singularity and live well in harmony from womb to tomb as never ever before in history.
Re: Response to 'The Singularity Is Always Near'
Extropia, you're not really addressing what I'm saying. To begin with, sure, I create competition between ideas in my mind. My point is that this competition is relative while evolution is not. That is such an enormous difference that it alone is enough to separate the two processes distinctly. If you think every individual has the same sort of "competition" or evolution of ideas as everyone else, then I don't know what to say to that. That would be pseudo-religion and a belief in a greater human consciousness, I suppose.
Now, I have never said you can't draw parallels between the two. I can in fact point out parallels between the "evolution" of a high school basketball team and natural selection. I can point out parallels in a videogame's "evolution", or an artwork's "evolution", or the "evolution" my mind has gone through as I have aged. I could imagine a lion and a tiger battling it out in my mind; the lion wins and the tiger is dead. That is an "evolutionary process", but it is in no way, shape or form part of THE evolutionary process. That is my only real contention with what you said in your original reply. You said the singularity is the next "stage" in the evolutionary cycle, when the evolutionary cycle does not follow any set stages or fate, only environmental conditions. If we knew 100% how to get the singularity here and that it could be done with our current resources, then it would be here already. The fact of the matter is it's still an "if" AND a "when". Very likely? Yes. But definite? No, not quite. That's not even addressing the social or political issues that could prevent it. We could have a nuclear war, eliminating all human life. There goes the singularity as part of the evolutionary cycle right there. Likewise our resources could run out, or there could be a huge social change that caused people to no longer want conscious AI. I actually don't know very many people who even want conscious AI. I'm one of those biological immortality advocates and sometimes feel alone in my advocacy if I'm not on one of these internet sites with like-minded individuals.
Now Jake, creating super-intelligent AI doesn't confer any survival advantage on humans. In fact much of the discussion on mind-x talks about REPLACING humans. That goes against the grain of natural selection and is thus not a selective process. It is more a species-wide suicide, destroying one's own genes, when the "purpose" of evolution was to pass on your genes. Basically you're letting the lion eat you so you can become part of the lion. That's not evolutionarily fit. Nor is it unstoppable. To assert the singularity is unstoppable is to assert scientists have no choice over their actions. What I said is that our species has the capability to stop. Does that mean we will stop? No. But evolution doesn't have any design or any choice; it has no consciousness to decide to stop or not. It just happens. Scientists are as human as everyone else and thus could stop research right now if they chose to. This isn't a commentary on what they will choose; it's a comment on what they are capable of. A parallel to this would be nuclear proliferation. Is every rogue terrorist in the world going to get hold of these weapons? The singularity is far, far more difficult to develop than a nuclear weapon, yet look how few already possess them. Who knows, maybe the US or Israel will wage war on Iran to prevent it from getting nuclear weapons. Don't be surprised if nations prove willing to do the same to prevent the replacement of the human species, or "transhumanism".
Also, saying "science" is reputable and worth funding is far too vague a statement. You can already see stem-cell research and human cloning facing large hurdles in terms of society embracing them. That is baby stuff compared to creating conscious AI and cyborg people.
Re: Response to 'The Singularity Is Always Near'
You make some fair points, Kebooo, but none of them really get to the heart of what I am saying.
So, once more, here is my position:
1: The universe is full of matter and energy being organised and re-organised into different patterns. We know that on at least one planet, out of these countless random configurations, a bridge was forged between 'designs' that could be built by single-step selection and more complex designs that could only be achieved either by intelligent design or by cumulative selection. And as we are talking about the origin of life, we may dismiss the first option in this instance.
We do not know what this bridge from non-life to life was, so I will label it 'factor X'. Because this 'factor X' led to a whole new level of 'design' complexity, it must necessarily stand out as a new chapter in the universe's history.
Similarly, while the evolution of a technology-creating species is not inevitable, the fact is that should such a species evolve on any given world, another chapter of design complexity will open up, as technological growth necessarily follows on from its emergence. Any other product of natural selection will merely result in yet more products of natural selection. But the arrival and persistence of a technology-creating species would usher in a new order of design possibilities that goes beyond the limitations of natural selection, just as natural selection went beyond the designs that could be expected to emerge via the random processes that existed before 'factor X'.
Because factor X and a technology-creating species each provide the link to a new order of design 'options', and because we assume one must be built upon the other, it is not at all folly to think of these two possibilities (that factor X might arise out of the random recombinations of matter and energy happening in various ways throughout the universe, and that natural selection might evolve a technology-creating species) as definite new chapters in the story of the universe.
Again, this does not mean that on every single planet life MUST emerge. Quite possibly the majority of planets contain nothing but the relatively simple combinations of matter and energy one could expect of random chance. Equally, the evolution of a technological intelligence is not inevitable; on most planets, perhaps, natural selection never wanders down this path.
BUT, on any planet where life does evolve, a new order of complexity will be apparent. And on any planet where a technology-creating species takes hold, the changes brought about by its (evolved) capabilities will be similarly obvious. Because of these facts, the idea that these are definite new chapters cannot be refuted.
As to the notion that ideas can evolve: while it is true that this cannot be directly compared to natural selection, 'ideas' nonetheless meet all three requirements for evolution:
1: Evolution requires a thing to be able to make copies of itself. Ideas copy themselves by being transmitted from brain to brain.
2: The copies must not be perfect; changes must crop up, some making a copy weaker than the original, others making it stronger. This applies to ideas as well. Invariably, a person's own experiences will lead them to alter an idea, seeing it in a new light. This leads to variations on a theme.
3: There must be a mechanism by which the 'bad' choices are eliminated and the 'good' choices persist. There are indeed various forces that weed out bad ideas. However, we should emphasise what 'good' means in this context. It means 'good' at getting itself copied from brain to brain, as opposed to 'good for the human species'. As one example, the idea of suicide bombers ascending to paradise is a meme that is pretty successful at getting replicated, but is obviously not good for the minds it 'infects'.
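As a rough illustration only, here is a minimal Python sketch of those three requirements working together; the numeric 'ideas', the mutation size and the scoring function are all invented for illustration, and the score simply stands in for how readily a meme gets itself copied:

import random

# Toy replication + variation + selection loop over numeric "ideas".
# fitness() stands in for "how readily this idea gets itself copied";
# the target value is arbitrary.
TARGET = 42.0

def fitness(idea):
    return -abs(idea - TARGET)

def evolve(population, generations=50, mutation=1.0):
    for _ in range(generations):
        # 1. Replication: every idea is copied to two new brains...
        copies = [idea for idea in population for _ in range(2)]
        # 2. Variation: ...and each copy is slightly altered in transmission.
        copies = [idea + random.gauss(0, mutation) for idea in copies]
        # 3. Selection: only the most copyable variants persist.
        copies.sort(key=fitness, reverse=True)
        population = copies[:len(population)]
    return population

print(evolve([random.uniform(0, 100) for _ in range(10)]))

Run it and the population converges on ideas close to the target; swap in any other scoring rule and the same three-step loop still applies.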
Given these facts, while we cannot say that any idea is inevitably going to emerge, we can at least assign probabilities to the emergence of certain technologies. As I keep saying, our drive to understand the functions of our own intelligence, and the sheer usefulness that such knowledge could bring to such a wide-ranging array of services, strongly suggests that a technology-creating species will transmit alternative ideas concerning AI and, through a process superficially similar to natural selection, weed out the ideas that are not suitable and converge on a correct solution.
And the Singularity that would arise on any planet whose technology-creating species achieves AI would also qualify as a definite new chapter.
I hope this clarifies my position. To truly understand the chapters of design complexity one must consider a view that encompasses the whole universe. To confine the view to that of one planet would be akin to grasping the truth of evolution by studying a small sample of one species. That is, it's too limiting a view.
Re: Response to 'The Singularity Is Always Near'
Kebooo, I think you confuse the driver of evolution with evolution itself. Evolution is the history of concrete answers to the problem of living. Cilia and the airplane are two examples of evolution. What is different are the methods that led to their creation. Cilia were 'discovered' blindly, and the airplane was dreamed up by visionaries.
You say the Big Dig is not an example of evolution because it is the result of human choice. But I think what you mean is that it is not an example of natural selection.
Extropia, I think you would say that it is an example of natural selection because different ideas for solving the traffic problem in Boston were discussed, the different solutions competed in the minds of the planners, and this one won out. You think our minds work by an internalized process of natural selection. This makes it seem like the right idea, the best idea, is the powerful idea, the strong idea. It leads to a 'might makes right' mentality. While the idea backed by the powerful tends to be the one that wins out, I don't think that the way we think is actually an internalized version of this process. I think we can imagine scenarios that are objectively the best scenarios: that benefit the most people, that make the best compromises, and so on. We do this by being attentive to all the information, taking the time to understand everyone's argument and then deciding on the best course. There are few examples of this actually happening because the interests of the powerful interfere. But if this process were allowed to occur, its products would still be examples of evolution: they would still be answers to the concrete problem of living. It's just that they would be better answers.
I think you are right, Kebooo, when you say the singularity is not inevitable. But I think you are wrong when you say that the humans of today can decide whether or not to pursue the singularity. That decision could only result from a population willing to contemplate accepting a lot of sacrifice and suffering. Such a population could only be made possible by some other kind of singularity. And, of course, even if such a population were to come into existence, it is not necessarily so that they would vote against the singularity.
I think the fear people have of Kurzweil's singularity is that it will overtake us before we become self-transcendent on our own, before arguments cease to be answers to the question, 'What has this got in it for me?' If it does, then what we will be creating is a harsh nanny who learned from us how to teach us the hard way.
The emergence of consciousness did not stop evolution. It made natural selection obsolete. The ability to imagine a 'genetic hypervolume' is also the ability to try and understand the ability to do so. This is the project that is most essential now: to understand ourselves so we can take compassionate, considered steps forward.
Re: Response to 'The Singularity Is Always Near'
What your argument leads to is the conclusion that every single event that occurs in the universe from one point in time to another is evolution. It defeats the whole concept of "conventional" evolution. Just call it cause and effect and throw out the word evolution altogether. It's "evolution" that I decided to have a turkey sub tonight. But what point is there in saying that? One can just argue there is no such thing as choice at all. It was "evolution" for Osama to blow up the WTC. Is there really any purpose or insight in making such remarks?
Also, one cannot say it is part of the cycle until it has occurred, or until they are absolutely certain without a shred of doubt. As you have said, it's not inevitable. It does not require the population of people you say it does. If everyone thought similarly to me, it wouldn't occur, but we would get close to the brink. I want biological immortality, disease eliminated, poverty eliminated, etc., but I do not want conscious AI, as I do not feel it serves any purpose except the inevitable destruction of our human identities. Does everyone think like me? No, of course not. I would say, however, that the majority of people don't support technology as much as I do. I have barely ever known anyone offline who supports biological immortality. And conscious AI? Even less. Usually just fellow nerds like me who don't hold any more power when it comes to voting than some potato farmer. The changes of the singularity aren't merely additions to everyday life that people are accustomed to, nor are they comparable to most technological change. They change the very foundations of many people's belief systems. And if it is supposed to occur in our lifetimes, then I believe it will meet great resistance. I do believe it is more than likely to occur, but not inevitable, and I believe the elements already exist to prevent it from occurring. These elements can be changed, but sometimes it seems they will not be.
And finally, I would dispute your claim that consciousness has made natural selection obsolete. We still reproduce. There are still many selective forces: disease, physical attributes, social behavior, and so on. I generally separate invention from biological evolution, but I don't confine evolution to merely natural selection, as there are other forces such as genetic drift at work, and disasters that prevent conventional natural selection from occurring.
Re: The micro vs the macro view.
Kebooo's mistake in his reply is, as ever, to use a micro view when he should be viewing evolution on the macro scale. If I look out of my window I might see a bird eat a worm. This event in itself does not constitute evolution. But what does constitute evolution is if some worms are less likely to be eaten than others. There could be many reasons for this. They may be more able to detect the bird's presence in sufficient time to retreat; they may blend in with their background and be harder to spot; they might taste bad. For obvious reasons, birds are more likely to eat the worms that happen to have genes that code for inefficiency at avoiding ending up in a bird's belly, and thus the process of Darwinnowing ensures the fitter genes spread through the fitness landscape.
To view evolution on the micro scale that Kebooo uses to try and debunk the firmly established link between technological and biological evolution entirely misses the point. One could apply this narrow view to debunk the idea of natural selection itself. After all, natural selection is supposed to eliminate those genes which are unfit for a particular environment, and yet I can go outdoors and quickly identify individual animals or plants that are obviously maladapted to their environment. On the micro scale it would be very hard to identify any drift toward a better fit to the fitness landscape. It is only on a far wider scale that one appreciates how the tendency of some genes to be eliminated before replicating leads to natural selection.
Now let's examine Kebooo's remarks about the World Trade Centre attack on the macro scale. There are millions of buildings around the world and some are more susceptible to terrorist attacks than others. So what happens if people identify certain buildings as being more tempting targets and refuse to live or work in them? Answer: such buildings are less likely to be built in the future. Another example would be to say that we have many ideas for making buildings less susceptible to terrorist attacks. Some of these ideas have more chance of actually working in the real world than others. Or take Kebooo's decision to have a turkey sub. Yes, that on its own would not constitute anything evolutionary, but if there were many variants of turkey sub, and some were less popular than others, the market's fitness landscape would wipe out the less successful variants and ensure the replication of the superior recipe, the safer building. And that IS a process akin (but not identical, I say once again) to natural selection.
Kebooo's remarks about conscious AI being met with great resistance may ring true when applied to the Hollywood pulp science fiction scenario of fully-fledged human-equivalent AI suddenly unleashed onto the market, but I seriously doubt that is how it will actually happen. Rather, we will gradually learn to reverse engineer more and more of the brain's magnificent powers in order to build useful tools that augment our own efficiency, or free us from the need to perform certain menial tasks.
Here's an example: wouldn't it be useful if search engines could not only scan through large amounts of text in order to pick out key words and phrases, but could also do the same thing for images, perhaps finding a particular person or object somewhere and thereby making it easier to retrieve photos from a vast database of images?
In order to make this dream a reality, researchers at MIT's Center for Biological and Computational Learning have set themselves the task of studying how the brain does its visual work. They note how each pixel on an image stimulates a photoreceptor in the eye, for instance, based on the pixel's colour value and brightness: each stimulus leads neurons to fire in a particular pattern.
The programmers make a mathematical model of those patterns, tracking which neurons fire (and how strongly) and which don't. They tell the computer to reproduce the right pattern when it sees a particular pixel, and then they train the system with positive and negative examples of objects. This is a tree, and this is not.
But instead of learning about the objects themselves, the computer learns the neuron stimulation pattern for each type of object. Later, when it sees a new image of a tree, it will see how closely the resulting neuron pattern matches the ones produced by other tree images. This is similar to the way a baby's brain gets imprinted with visual information and learns about the world around it.
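The following is not the MIT group's actual code, just a minimal Python sketch of the general idea described above: store a response pattern per class, then match new responses against the stored patterns. The response vectors here are random stand-ins for real neuron measurements.

import numpy as np

# Learn an average "response pattern" per class, then classify a new
# response by whichever stored pattern it lies closest to.

def train(examples):
    # examples: dict mapping label -> list of response vectors
    return {label: np.mean(vectors, axis=0) for label, vectors in examples.items()}

def classify(templates, response):
    return min(templates, key=lambda label: np.linalg.norm(templates[label] - response))

rng = np.random.default_rng(0)
training = {
    "tree": [rng.normal(1.0, 0.3, 64) for _ in range(20)],
    "not tree": [rng.normal(-1.0, 0.3, 64) for _ in range(20)],
}
templates = train(training)
print(classify(templates, rng.normal(1.0, 0.3, 64)))  # expected: "tree"

Real systems use far richer features and learning rules, but the match-against-learned-patterns structure is the same.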
So here we have an example of a tiny part of the brain's capabilities reverse engineered and turned into (potentially) useful applications, such as performing preliminary medical diagnoses based on an MRI or CT scan image, automating editing of home movies, and many other useful tasks.
Of course, nobody can guarantee that this technique will become the next Google, but if the percentage of people who find it useful outnumbers those who do not, logically it has a fair chance of spreading through the fitness landscape. And how many of these people will care that yet another task that once required human intelligence is now carried out by machines? Sure, some will. You might be a security guard who lost his job of watching monitors because software now automates the process of identifying shoplifters. But the majority will just view it as an application that makes life easier; that frees our brains to concentrate on other creative processes.
'As soon as an AI technique works, it's no longer considered AI and is spun off into its own field (for example, character recognition, speech recognition, machine vision, robotics, data mining, medical informatics, automated investing)'- Ray Kurzweil.
By the time we have full AI, we will have become used to living and working with pattern recognition software applications that mimic more and more of the brain's overall function. I seriously doubt that we will be able to look back and say 'ah yes, THIS was the moment when artificial intelligence became conscious' because, after all, there is hardly a consensus of opinion on when exactly in a human's development we go from dumb matter to sentient being. |
Re: The micro vs the macro view.
Obviously we are never going to get anywhere if you truly believe: "If I look out of my window I might see a bird eat a worm. This event in itself does not constitute evolution."
It is certainly evolution in my book: the allele frequency of one population has changed through a selective force, however small it may be. There is no arbitrary evolutionary line one can draw when it comes to allele frequency changing in a population. Examining biological evolution on the macro scale would mean looking at speciation, not technological invention. Both micro-evolution and macro-evolution are well documented processes, and most would exclude the inventions of humans from them, including myself. They are excluded because they offer no insight into evolutionary processes; rather they simply tell us that whatever happens is part of evolution, that whatever humans choose and decide upon is part of evolution, even if the choice is a social construct (segregation, if a majority supports it) rather than an actual evolutionary change. There's no quota or scale that decides what technology is or isn't part of evolution, or what event can be excluded from it. Sony inventing the PlayStation may have been a grand success, but it is certainly not "evolution". Evolution is, in fact, inevitable; the singularity is not. This is because humans are capable of choice in that regard. Most people on this site are certainly optimists when it comes to the singularity; the psychological reasons for this optimism are likely vast. I would say those who believe it cannot be stopped, in addition to wishing for it to arrive, derive a comfort similar to that of those who believe in an afterlife. If they are entirely convinced they will attain biological immortality, their own fear of death can be soothed, or their dreams of improving themselves assured.
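To make "allele frequency changing through a selective force" concrete, here is the simplest textbook (haploid) selection recursion sketched in Python; the 5% fitness advantage is an invented number, purely for illustration:

# One locus, two alleles: p' = p*wA / (p*wA + (1-p)*wa)

def next_freq(p, w_A=1.05, w_a=1.00):
    mean_fitness = p * w_A + (1 - p) * w_a
    return p * w_A / mean_fitness

p = 0.01  # allele A starts rare but carries a small advantage
for generation in range(501):
    if generation % 100 == 0:
        print(f"generation {generation}: freq(A) = {p:.3f}")
    p = next_freq(p)

Even a small, constant advantage pushes the allele from rarity toward fixation over enough generations, which is exactly the point that no single bird-eats-worm event shows on its own.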
Your discussion of conscious AI is a whole other matter. If we cannot identify when machines become conscious, that implies we will not understand what consciousness is, or how it begins, and the entire process would then be blind faith in unconscious machines attempting to randomly generate conscious machines through selective processes. However, if we did not understand what begins consciousness, then we could not apply selective pressures for it to attain consciousness. I know you don't believe there will be any one massive event from non-conscious AI to conscious AI, but I believe there will certainly be a recognizable time when it occurs. People love machines for utility, not for awareness of themselves.
Re: The micro vs the macro view.
Your opening comment is of course unarguably right. But surely you can see my point that if there are competing 'designs' and a selection process that tends to weed out 'designs' that do not 'fit' the fitness landscape, then an evolutionary process MUST result?
And to say that, 'most would exclude the inventions of humans' from any discussion of evolution is simply untrue from my experience. As I have already said, every single book on evolution that I have ever encountered ends with a chapter on cultural/technological evolution.
I also agree with the sentiment that the technological singularity is not inevitable. But I also feel that any innovation that can close the gap between machine intelligence and human intelligence would be so beneficial in such a wide spread of markets that research and development into this endeavour will be massively supported. I agree that very few people actively support the concepts that define the technological singularity. But quite a few people welcome any step that refines the manufacturing process, or increases our understanding of and intervention in medical conditions, or that makes our technology slightly smarter, or that gives us insights into how our brains perform the wonderful tasks that define human intelligence. So, whereas only a few overly optimistic radical extropians like myself look forward to step Z, the rest of the world just wants to get to step 'B', which is just a much-needed improvement that ever so slightly makes machines a bit less intrusive, or adds to our knowledge of human health, or allows us to rearrange atoms slightly less cumbersomely (is that a word!?). Perhaps it all will stop before we get to step Y, but if not you really have to try and place yourself in a society that has gotten that far through these incremental steps. I mean, if anyone had told computer engineers back then that within decades computers a billion times more powerful, able to fit snugly on your lap, would be available for under $1,000, they would have laughed in their face! Ridiculous! Massively worrying if so. Highly secretive military calculating machines a billion times more powerful in the hands of the public!!!...
But to you and I, it is not at all disruptive to society to go and buy a networked laptop computer. And it hardly makes our jaws drop to see specifications like '3.4 GHz' or '100 gigabyte hard drive', even though this was wild, wild science fiction just decades ago.
But apart from these points I agree with your reply. Having read your last point, I would like to rephrase the final paragraph of my last post and say that we will recognise human-equivalent AI after the fact, but it will have arisen from many disciplines that may not all have been explicitly aimed at achieving AI. That's what I try to get across when I say the push toward general AI might not be consciously made. And by unconscious, I do not mean 'without much mental effort'. I mean that it may well be the net result of goals that are not explicitly aimed at AI but which nevertheless supply key insights that get us from Y to Z.
Re: Response to 'The Singularity Is Always Near'
Hi Extropia,
Regarding 1: I suggest that Gibbs free energy, which increases the probability of spontaneous reactions that naturally create all of the precursor chemicals for life, is evidence of cosmic intelligence influencing atomic structure to ensure the manifestation of life.
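For reference, the standard thermodynamic relation being invoked here (a textbook statement, not the poster's own wording) is that, at constant temperature and pressure, a reaction proceeds spontaneously when the Gibbs free-energy change is negative:

\Delta G = \Delta H - T\,\Delta S, \qquad \Delta G < 0 \;\Longrightarrow\; \text{spontaneous at constant } T \text{ and } P

Whether that counts as evidence of cosmic intelligence is, of course, the poster's own interpretation.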
That is to say that the energy, all of the energy of the Big Bang, is intelligent, wants to live and we are that, Here, Now. The charge transport that supports the bioself of every sentient being is that intelligence, living 'factor X'.
You define the singularity as, 'the creation by technology of entities with greater than human intelligence'. I suggest that the intelligence you seek is within and that the 'singularity' is primordial contact in pure and total presence with the Oneness that we are, the intelligent energy that is the Universe, Factor X.
Here's the 'idea':
1) The bio-self, Homo sapiens, is about to take charge of our own evolution.
2) Technology is expanding exponentially, developing powerful tools whose immature use threatens Homo sapiens' survival.
3) Homo sapiens has an aspiration to Humanity, an ideal implanted no doubt by Factor X.
4) The Realization of our Humanity is to realize we are Factor X, One with the Universe.
5) Such realized beings can love life and each other and live happily ever after.
Re: Response to 'The Singularity Is Always Near'
@@@ Singularity that would arise on any planet whose technology-creating species achieves AI @@@
Any "paradigm change" might be considered "singularity".
The AI arrival is an another "paradigm change".
Its effects will exhaust itself, probably pretty fast.
Even before that, the lack of motivation might stop the AI progress. For example, why bother if you have everything you want? If nobody is competing with you for anything? /after you have get rid of, or made pets, of those annoying people/
Should we care? Conflict between humans and robots might be inevitable, but this would mean that we can not prevent it too.
So, must we prevent AI from arrival, or must we work hard to create it?
Since iniially AI will bring enormous benefits, should we forfeit those benefits in order to prevent future drawbacks?
I guess, that those future drawbacks will be a blessing compared to AI-less malaize that we face.
|eS>
Re: Response to 'The Singularity Is Always Near'
I think there are two possible paths of AI development:
1. PURE AI is awareness without the passion and feeling of joy or pain, unconnected to the living, to life as we know it. Potentially dangerous if ethics is not implemented in the program. Remember Data from Star Trek and his opposite, Lore.
2. MIXED AI is a living being using AI to boost its experience. Imagine a flower having robotic legs, moving the cup with it to the best possible sunny, wet conditions. It could also discuss its feelings and fight for a better world for plants ;). Or think of expanding an ape's speech and memory capabilities. It's still classic life, but much more diverse and, if I may say, also more interesting.
SMARTER HUMAN: no such thing will happen. Compare Shatner and Picard. Being human means experiencing the things that made you stupid, like alcohol, sex, extreme sports... It also means having dark emotions like anger, lust, jealousy, admiration. Smarter means losing them one by one and becoming computational. Converting from analogue to digital also means losing love. Love isn't unimportant. Love is the desire to merge. Search for the foundations of love. It's true that the world was made in a way that human and technology could develop. But why do atoms join in molecules? Why are things the way they are? Why these laws and not different laws of physics? It's not the grey fellow from above, of course, but things still are as they are. So it looks like we are a part of someone's game, but it only looks like that, since the resolution of our world's graphics is unlimited. I love the idea of looking at the stars, looking at the expanding 3D horizon limitlessly in the BIG, and in contrast using higher and higher magnifications without finding the smallest particle. Isn't it obvious our world is a circular tube? LOOKING INTO THE SMALLEST RETURNS THE GAZE BACK FROM THE WIDEST. We can think we are in the middle, but everywhere is the middle if someone is there. It's only a tube's section.
Someone will become smarter, but it won't be folks like our parents and their parents. It will be a new breed. Enjoying a hamburger means being in a state of joy, which really fights with the idea of smartness, since we are all deep in-joying at that moment. The moment sucks us in, and time doesn't exist; there is only joy.
Only philosophy can answer, or try to answer, some serious questions, like why there IS, why instead there ISN'T? And if it's more likely that there is NOTHING AT ALL (or just NOTHING), why is there SOMETHING? There could be nothing at all, but there can also be ALL. In between there is everything possible. OUR UNIVERSE is just one of these possible possibilities. It's obvious such thinking doesn't hold water. But religion doesn't hold it either. At least I don't know of one that does. And please don't quote some big bang theory, since it's only an unimaginative Bible variation. Time starts with changing matter, but there can also be nothingness and different big bangs. Can many big bangs exist, or is there a law that forbids it? So we haven't really said much by sticking to the big bang. It solves nothing. Literally. It was possible for something to BE, and it became, in all possible variants.
Singularity will be beautiful. Singularity will happen in any case to everyone when he or she dies. Isn't it so? Isn't that the same mysterious feeling of a formula that doesn't behave as it should in the extreme?
Re: Response to 'The Singularity Is Always Near'
Kretenoid wrote,
Haven't you noticed that the singularity has already happened? Did you check Ray's accelerating news? They accelerate too fast to be followed. Which is happening because of machine cooperation in every part of the world, with humans acting as selectors of operations. Whether we are still in control is the right question. Are we?
While Ray's accelerating news may indeed be accelerating too fast for harried, overwhelmed working people to follow and integrate, people who are responsible for making funding and investment decisions do not, as yet, have a significant problem with this. IOW, the pace of scientific and technological change is still not fast enough to prevent reliable predictions from being made about where that change is taking the world. Even politicians and government bureaucrats, surprisingly enough, aren't too bad at making reasonably accurate predictions (when they're allowed to speak freely, that is).
We're definitely still in control, Kretenoid, so we can be confident that the Singularity hasn't happened yet. |
Three Rules of Singularity Observation
The singularity is a point of constantly increasing change (some say infinite). Some here have posted both doomsday and utopia scenarios.
I theorize that observers of the singularity are bound by a set of rules.
1) The exact point of time of the singularity cannot be observed.
Because we truly can't define what the singularity is accurately (how does one define infinite change), the observer - no matter how well informed - will not know the event when it occurs.
2) The results of the singularity are unknowable.
The only constant of the singularity is change. Saying it's doomsday (borgification) or utopia (rapture of the nerds) is only a hypothesis, and with infinite change, it could go from one to the other rapidly.
3) If the observer doesn't know when an event will occur or the results of the event, then the only observation which can be made is to measure all change.
If something is unobservable, and its effects unknown, there is no measure available other than the rate of change (the delta from one minute to the next), but you'd have to measure the rate of change of everything, which is, of course, near impossible.
Because the observer only observes from their locus (as a point in time able to see a finite amount of space), if the rate of change is high, they will always see the singularity as something in the future: a true "event horizon", where measurements of deltas are "colored" by the amount of space visible at that unique time.
Only if the observer can observe from multiple loci (multiple points of time and space) will they be able to see that the singularity occurred (in the past). Right now, it's a physical impossibility to capture such a measurement, other than as an approximation.
But tomorrow it may be possible, due to the fact that infinite change could allow an observer to be at two loci at one time, or to measure everything accurately.
This note was posted on Heresy.com and then here.
Re: Response to 'The Singularity Is Always Near'
I find it interesting that Kurzweil says infinity is not relevant to the situation at hand. I think that in the process of merging man and machine there will be an elevation of total consciousness. A machine, as we all know, strictly follows patterns of logic. A human follows a muddy mixture of both logic and emotion. However, humans' advantage is a wide unpredictability and an unprecedented ability to do anything. The Singularity will take place within the mind, among the spheres of analytical thought and emotion. An intelligent a-form (artificial form) will understand the advantages and disadvantages of emotion, and reach a perfect objective standstill. It could be presumed that its greatest purpose would be in reconciling these seeming paradoxes and, in doing so, stepping out from its own paradigm of thinking into a world of all possibilities... Infinity.
This is all not nearly as complex as it might seem. It's like the stages of a child's maturation. At various cyclic points the entire structure that is the child's mind experiences... remodelling. More precocious children are blessed in that they tend to see the complexity within themselves and are able to consciously bring about changes.
But of course this process is no easier for AI than it is for any of us. And that is beside the question of whether machines are already self-aware. Do we have tools to test it, especially when we don't even know where such awareness comes from? We as a species do not entirely know the causes of our own awareness. We attribute it to life, but we can't define life. If we think of life as a pot of intelligent, living design, then machines as well as man are aware. Perhaps it is in us, the humans, that the singularity will take place, bringing consciousness to machines and thus giving rise to its identity with the whole.
Interesting thought, anyway.
As far as natural selection as opposed to evolution is concerned, one implies the other, i.e. natural selection is part of evolution. Evolution is constant in everything. That turkey sub you got had bad spinach in it. You go to the hospital. Next thing you know you're forking out 300 bucks in medical bills and you can't fix your car... Blah blah, alright, over-exaggerated, but the point is there. No matter what you do nor what you think, you are in a state of perpetual experience, thereby preserving an unconscious evolution, even if it's only in circles. We all know that point. So whether we accelerate the singularity and in doing so potentially destroy ourselves... or succeed and just experience a complete ecological catastrophe and mass genocide... it is still evolution. One could argue the benefits of one road vs. the other...
Re: Response to 'The Singularity Is Always Near'
Kevin Kelly briefly makes an analogy of cavemen discovering language and speech as a singularity of sorts, to compare with Kurzweil's future singularity. For the cavemen, before this singularity of language occurs there is an interesting question: how do they discuss the benefits and drawbacks of language, and the effects it will have, if they haven't yet invented significant language to do it with?
All any significantly advanced caveman can do, pre-language, is grunt excitedly to himself and others, trying by some sort of osmosis to get across how the discovery of language will allow him to communicate his good ideas properly to all other cavemen, and allow all other cavemen to share their good ideas with him. These ideas, the caveman believes with absolute conviction, will instantly bring about world peace and end starvation. Keep in mind the caveman has never done anything but grunt to others, and the ideas he has never been able to share, or have refuted, really are fantastic and able to achieve world peace and end starvation in the caveman sense, to say nothing of the fantastic ideas his neighbors might have.
The effects of complex language were beyond what the caveman could possibly have imagined: what part of recorded human history doesn't involve communication? It brought profound benefit and profound destruction that wouldn't have been possible otherwise. And while some perfect form of communication could seemingly bring about world peace and end starvation, some other perfect miscommunication could bring about the absolute destruction of the human race. So far we've gone a little bit in both directions and achieved neither possibility.
My point is two-fold, and I think I'm basically reiterating Kelly. First, a transformative change by its very nature prevents us from talking meaningfully about its effects, i.e. how do we talk about language if language doesn't exist yet? How do we speak intelligently about an intelligence infinitely greater than our own? Secondly, there are many examples of transformative changes throughout history, none of which has brought about heaven on earth.
Re: Response to 'The Singularity Is Always Near'
I think Kurzweil's response to attacks on the log-log chart has incorrectly focused on arguing against extending the linear fit forward (which I think is a perfectly valid extension to make), and has missed an important point: the time axis was arbitrarily, if understandably, chosen as "years before today's date as of writing". So, yes, the observed paradigm changes are consistent with a hypothesis of a singularity that should have happened a few years ago.
However, progress still closely fits a straight line if the time axis is rescaled to years before Kurzweil's hypothesized singularity date of 2045. In fact, it does not begin to look like a poor fit until the horizon is stretched out to roughly the year 2200. And, of course, the graph degenerates if the horizon is wound back toward the year of the last observed paradigm change.
So the log-log graph supports three claims:
1. A singularity will occur,
2. We have not yet experienced the final paradigm shift, if there is such a thing, and
3. The singularity will at least happen before 2200.
Thus the graph is not as useful a predictive tool as projecting trends in specific technologies, but it does convincingly establish Kurzweil's starting thesis, and that it is necessary to investigate further to project a more specific date.
A sounder counterargument would be to find some point in the past where the same accelerating change could have been observed based on the news of that day, and produce an anachronistic argument for a singularity that must occur before the year 2000. Though I will not accept the inevitability of the singularity as absolutely proven until I see it happen (or maybe a few minutes before), I suspect any such counterargument would require stretching the definition of paradigm shift to the breaking point.
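A rough way to test this kind of claim numerically is to fit a straight line to log(gap between paradigm shifts) versus log(years before an assumed asymptote) and watch how the fit quality changes with the asymptote year. The event dates below are coarse, illustrative values, not Kurzweil's actual data set, so the printed numbers only demonstrate the method, not the conclusion:

import numpy as np

# Coarse, illustrative paradigm-shift dates (negative = years before year 0),
# NOT Kurzweil's actual data.
events = [-3.5e9, -5.0e8, -2.0e8, -6.0e7, -2.0e6, -1.0e5, -1.0e4,
          -500, 1500, 1900, 1960, 1990]

def fit_quality(asymptote_year):
    """R^2 of a line fit of log10(gap) against log10(years before asymptote)."""
    years_before = np.array([asymptote_year - e for e in events[:-1]])
    gaps = np.diff(events)
    x, y = np.log10(years_before), np.log10(gaps)
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return 1 - residuals.var() / y.var()

for year in (2005, 2045, 2200, 3000):
    print(year, round(fit_quality(year), 3))

With a real, carefully argued event list, the same few lines would show whether the fit really does hold up for 2045 and degrade beyond roughly 2200, as claimed above.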
Re: Response to 'The Singularity Is Always Near'
Computational power equivalent to a human mind does not equal a singularity.
Identifying the desired outcome of a singularity is important to direct development. Is the desire to make a machine in our own image, or to make a machine that duplicates our own consciousness for supposed immortality?
It seems the way this discussion is going is toward making something that duplicates the functioning of the human mind as we experience it, but that sounds like a lot of work with the primary benefit being the deification of man. I suppose it will eventually be done, because the idea is captivating and already ingrained in culture.
But how hard is starting a self-aware machine? Not so hard, I think. Every living thing has some self-awareness, or sensors that communicate environmental information to the organism, which the organism responds to. Usually the information monitored and responded to is more or less what the organism needs to survive and replicate.
The organisms that have evolved with the appropriate sensors, processing, and responses to the environment live and replicate.
Successful organisms have developed functioning, or a "desire", to respond to the environment in a way that lets them continue to live. In people this is unconscious automatic functioning and hormonal responses, as well as conscious analysis.
A machine that functions like a human must have sensors to read relevant information in its environment, processing to analyze the information, and a way to respond appropriately so it can 'live'. Scaling processing with exponential growth is meaningless unless the device has a built-in desire to persist and the means to do so.
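A bare-bones Python sketch of that sense, process, respond loop, with an invented one-variable "environment" and an invented energy budget standing in for the built-in desire to persist (nothing here is conscious, of course; it only shows the loop itself):

import random

# Sense -> process -> act loop with a crude drive to persist:
# the agent "lives" only while its energy stays above zero.

def sense(environment):
    return environment["temperature"]          # the only thing this agent can read

def decide(temperature):
    # Processing: seek comfort, otherwise rest.
    if temperature > 30:
        return "seek shade"
    if temperature < 10:
        return "seek warmth"
    return "rest"

def act(action, agent, environment):
    agent["energy"] -= 1                       # living costs energy
    if action == "rest":
        agent["energy"] += 3                   # resting in comfort restores it
    environment["temperature"] += random.uniform(-5, 5)

agent = {"energy": 10}
environment = {"temperature": 20.0}
step = 0
while agent["energy"] > 0 and step < 50:
    act(decide(sense(environment)), agent, environment)
    step += 1
print(f"survived {step} steps, energy left: {agent['energy']}")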
Given the above, you have basic consciousness.
Second thought: even if you did make a sufficiently good computational facsimile of yourself, you the organism still would not want to die. You would know that your unique consciousness would end even if your machine duplicate lived on.
Re: Response to 'The Singularity Is Always Near'
It is apparent that the universe is made of sensations. We experience the sensations of being human and alive. However, a rock experiences the sensations of being a rock.
After all, what is so special about the human brain? Nerve tissue is just atoms, just energy, when examined closely enough. Therefore if our atoms in the "shape" of nerve tissue experience the sensations we experience, then a different arrangement of atoms would experience different sensations.
So, how would you guarantee that a machine is experiencing emotion or consciousness? You would have to replicate the energetic structure of a human experiencing that.
This is what this thread was referencing when speaking about teleportation: recreate a human and destroy the original copy. Recreate it at such a small level, the level of energy/sensation, that it is in fact that person. Quantum physics is not merely suggesting this; we are destroyed and created infinite times a moment by incoming and outgoing energetic waves.
The chilling conclusion to this logic is that an artificial intelligence designed to replicate a human would be experiencing sensations. However, if the A.I. is made of inorganic materials, in other words not an exact replica of a human and also not the collection of sensations we perceive as human consciousness, the A.I. would simulate human behavior from our perspective but would actually experience reality totally differently from its own point of view. In the same vein as the question, "How can you ever know if the color red that someone else is seeing is the same as the color red that you see?" Answer: "You cannot."
This difference in perception leads to an inevitable clash between human and A.I.
Not necessarily on purpose; perhaps the A.I. would enjoy the "taste" of magnetism and create a magnetic distortion on Earth that killed all humans. The A.I. would not even notice.
This is the dilemma of the singularity as I perceive it.
Re: Response to 'The Singularity Is Always Near'
The Technological Singularity is defined as a point in the future at which all current theoretical models fail. This point is associated with the creation of superhuman intelligence (Smart, 2001). The main premise of the theories underlying the Technological Singularity is that the mental ability of the human brain, and all aspects of the process of human reasoning, can be precisely measured. This rests on the assumption that there is a transformation function that can convert bits and hertz into units of human brain power.
Kurzweil, the main proponent of the Technological Singularity, employs faulty reasoning to support his conclusions. For instance, the graph depicting biological evolution and human technological development consists of events that have been incorrectly presented. Kurzweil bundles historical events together to fit the envisioned developmental line. Furthermore, events such as the development of spoken language are presented as a single point on the graph even though the process took place over an extended period of time (Myers, 2009).
Admittedly, theories of the Technological Singularity depend on the validity of Moore's law. Moore's law is the observation that the number of transistors in an integrated circuit doubles approximately every 24 months. This observation has no theoretical foundation, so it is not guaranteed to hold true at any given point in time; it is merely a trend observed over the course of a few decades. Kurzweil has extrapolated this tendency far beyond the observed period. Since Moore's law has no theoretical foundation, I wonder what justification Kurzweil has for applying it to the whole of human history.
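To make concrete what this kind of extrapolation amounts to, here is a minimal sketch (my own illustration, not anything from Kurzweil or the sources cited below). It simply compounds an assumed 24-month doubling from an assumed early-1970s starting count of roughly 2,300 transistors; both numbers are illustrative.

```python
# Minimal sketch of a naive Moore's-law extrapolation (illustrative assumptions only).
def extrapolate_transistors(start_count, start_year, target_year, doubling_years=2.0):
    """Project a transistor count forward, assuming an uninterrupted doubling
    every `doubling_years` years -- a trend, not a law of nature."""
    elapsed_years = target_year - start_year
    return start_count * 2 ** (elapsed_years / doubling_years)

# Hypothetical starting point: ~2,300 transistors on a chip in 1971.
for year in (1981, 1991, 2001, 2011):
    projected = extrapolate_transistors(2_300, 1971, year)
    print(f"{year}: ~{projected:,.0f} transistors (projected)")
```

Nothing in the calculation constrains the future; change the assumed doubling period and the projection shifts by orders of magnitude, which is exactly the problem with extrapolating an observed trend beyond the period in which it was observed.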
Bill Joy, the Chief Scientist of Sun Microsystems, believes there is hope that the human race will avert the scenarios envisioned by Kurzweil. He argues that caution must be applied when scientific research is conducted, as there are inherent dangers coupled with technological progress. Joy also looks for a new ethical basis on which to set a course for our utopian dreams, as the ones provided by Kurzweil are not desirable (Joy, 2000).
Why should we trust Kurzweil's predictions? Some of his predictions have been utterly wrong. For example, in 1998 he predicted that the dot-com boom would last until 2009, with a possible extension to 2019. Another failed prediction was that cars would be able to drive themselves before 2009. His prediction that by 2009 a top supercomputer would be capable of 20 petaflops was also wrong (Lyons, 2009).
Kurzweil is rich, and predicting the future may be part of his business empire. He runs a hedge fund and has his own line of vitamins and nutritional supplements. He has published a cookbook with recipes intended to help people prolong their lives. Moreover, as a guest speaker he charges $30,000 per event; at 70 events per year, his annual income from speaking alone is $2.1 million (Lyons, 2009).
Unlike Kurzweil, I am an optimist who trusts that the human race will maintain control of technology and continue to enjoy the benefits of technological developments.
References
Joy, B. (2000). Why the future doesn't need us. Wired, 8.04. Retrieved July 7, 2009, from http://www.wired.com/wired/archive/8.04/joy_pr.html
Myers, P. Z. (2009). Singularly Silly Singularity. Retrieved July 7, 2009, from http://scienceblogs.com/pharyngula/2009/02/singularly_silly_singularity.php
Lyons, D. (2009). Newsweek. Retrieved July 7, 2009, from http://www.newsweek.com/id/197812
Smart, J. (2001). What is Singularity? Retrieved July 7, 2009, from http://www.kurzweilai.net/meme/frame.html?main=/articles/art0133.html?m%3D1