    I'm Confident About Energy, the Environment, Longevity, and Wealth; I'm Optimistic (But Not Necessarily Confident) Of the Avoidance Of Existential Downsides; And I'm Hopeful (But Not Necessarily Optimistic) About a Repeat Of 9-11 (Or Worse)
by   Ray Kurzweil

Ray Kurzweil responds to John Brockman's The Edge Annual Question - 2007: WHAT ARE YOU OPTIMISTIC ABOUT? WHY?


Published on Edge on January 2007. Reprinted with permission.

Optimism exists on a continuum between confidence and hope. Let me take these in order.

I am confident that the acceleration and expanding purview of information technology will solve within twenty years the problems that now preoccupy us.

Consider energy. We are awash in energy (10,000 times more than required to meet all our needs falls on Earth), but we are not very good at capturing it. That will change with the full nanotechnology-based assembly of macro objects at the nano scale, controlled by massively parallel information processes, which will be feasible within twenty years. Even though our energy needs are projected to triple within that time, we'll capture the 0.0003 of the sunlight falling on Earth needed to meet our energy needs, with no use of fossil fuels, using extremely inexpensive, highly efficient, lightweight, nano-engineered solar panels, and we'll store the energy in highly distributed (and therefore safe) nanotechnology-based fuel cells. Solar power now provides 1 part in 1,000 of our needs, but that share is doubling every two years, which means multiplying by roughly 1,000 in twenty years.
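
The arithmetic behind these figures is straightforward compounding. A minimal back-of-envelope sketch in Python, using only the ratios quoted above (the constant names are placeholders of mine, not anything from the essay):

```python
# Back-of-envelope check of the solar figures quoted in the paragraph above.

SOLAR_SURPLUS = 10_000        # sunlight reaching Earth vs. current energy needs (claimed ratio)
DEMAND_MULTIPLIER = 3         # projected growth in energy demand over twenty years
DOUBLING_PERIOD_YEARS = 2     # claimed doubling time for solar's share of supply
HORIZON_YEARS = 20

# Fraction of incident sunlight needed once demand has tripled.
fraction_needed = DEMAND_MULTIPLIER / SOLAR_SURPLUS
print(f"fraction of sunlight needed: {fraction_needed}")      # 0.0003

# Growth factor from doubling every two years for twenty years.
growth = 2 ** (HORIZON_YEARS / DOUBLING_PERIOD_YEARS)
print(f"growth over twenty years: {growth:.0f}x")             # ~1024x, i.e. roughly 1,000x

# Starting from 1 part in 1,000 of supply, ~1,000x growth covers essentially all of it.
share_now = 1 / 1_000
print(f"projected share of needs met: {share_now * growth:.2f}")  # ~1.0 (about 100%)
```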

Almost all the discussions I've seen about energy and its consequences (such as global warming) fail to consider the ability of future nanotechnology-based solutions to solve this problem. This development will be motivated not just by concern for the environment but also by the $2 trillion we spend annually on energy. This is already a major area of venture funding.

Consider health. As of just recently, we have the tools to reprogram biology. This is also at an early stage but is progressing through the same exponential growth of information technology, which we see in every aspect of biological progress. The amount of genetic data we have sequenced has doubled every year, and the price per base pair has come down commensurately. The first genome cost a billion dollars. The National Institutes of Health is now starting a project to collect a million genomes at $1,000 apiece. We can turn genes off with RNA interference, add new genes (to adults) with new reliable forms of gene therapy, and turn on and off proteins and enzymes at critical stages of disease progression. We are gaining the means to model, simulate, and reprogram disease and aging processes as information processes. In ten years, these technologies will be 1,000 times more powerful than they are today, and it will be a very different world, in terms of our ability to turn off disease and aging.
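
The "1,000 times more powerful" figure follows from the annual doubling described above. A small worked example, taking the essay's claimed growth rate as given rather than as measured data:

```python
# Simple compounding behind the "1,000 times more powerful in ten years" claim,
# assuming the annual doubling of capability described above holds.

YEARS = 10
improvement = 2 ** YEARS
print(f"improvement after {YEARS} years of annual doubling: {improvement}x")  # 1024x

# The same compounding applied to cost: a $1,000,000,000 first genome,
# halving in cost each year, drops below $1,000 after about twenty halvings.
cost = 1_000_000_000
halvings = 0
while cost > 1_000:
    cost /= 2
    halvings += 1
print(f"halvings needed to reach $1,000 per genome: {halvings}")  # 20
```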

Consider prosperity. The 50-percent deflation rate inherent in information technology and its growing purview is causing the decline of poverty. The poverty rate in Asia, according to the World Bank, declined by 50 percent over the past ten years due to information technology and will decline at current rates by 90 percent in the next ten years. All areas of the world are affected, including Africa, which is now undergoing a rapid invasion of the Internet. Even sub-Saharan Africa has had an average annual 5 percent economic growth rate in the last few years.

OK, so what am I optimistic (but not necessarily confident) about?

All of these technologies have existential downsides. We are already living with enough thermonuclear weapons to destroy all mammalian life on this planet, weapons that are still on a hair trigger. Remember these? They're still there, and they represent an existential threat.

We have a new existential threat, which is the ability of a destructively minded group or individual to reprogram a biological virus to be more deadly, more communicable, or (most daunting of all) more stealthy (that is, having a longer incubation period, so that the early spread is undetected). The good news is that we have the tools to set up a rapid-response system like the one we have for software viruses. It took us five years to sequence HIV, but we can now sequence a virus in a day or two. RNA interference can turn viruses off, since viruses are genes, albeit pathological ones. Sun Microsystems founder Bill Joy and I have proposed setting up a rapid-response system that could detect a new virus, sequence it, design an RNAi (RNA-mediated interference) medication, or a safe antigen-based vaccine, and gear up production in a matter of days. The methods exist, but as yet a working rapid-response system does not. We need to put one in place quickly.
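
Purely as an illustration of the workflow sketched above, here is a hypothetical pipeline model in Python; the stage names and per-stage durations are invented placeholders (the text states only that sequencing now takes a day or two and that the whole response should take a matter of days):

```python
# Illustrative sketch of the rapid-response workflow described above.
# Stage names and durations are hypothetical placeholders, not a real system.

from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    days: float   # assumed duration for this stage

PIPELINE = [
    Stage("detect new pathogen in surveillance data", 1.0),
    Stage("sequence the viral genome", 2.0),   # the essay says sequencing now takes a day or two
    Stage("design RNAi medication or antigen-based vaccine", 2.0),
    Stage("gear up production", 3.0),
]

def total_response_time(stages):
    """Total elapsed time if the stages run strictly one after another."""
    return sum(stage.days for stage in stages)

if __name__ == "__main__":
    for stage in PIPELINE:
        print(f"{stage.name}: {stage.days:g} day(s)")
    print(f"total (serial): {total_response_time(PIPELINE):g} days")
```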

So I'm optimistic that we will make it through without suffering an existential catastrophe. It would be helpful if we gave the two aforementioned existential threats a higher priority.

And, finally, what am I hopeful, but not necessarily optimistic, about?

Who would have thought right after September 11, 2001, that we would go five years without another destructive incident at that or greater scale? That seemed unlikely at the time, but despite all the subsequent turmoil in the world, it has happened. I am hopeful that this respite will continue.

 © Ray Kurzweil 2007

   
 

Mind·X Discussion About This Article:

Kurzweil's Optimism
posted on 02/05/2007 10:33 AM by FoyeLowe

I recognize the propriety of restricting the thread of comment to matters technological, yet am compelled to remark that scientific advancement depends upon the enlightenment and good will of the public. This newsletter certainly is helpful in that regard. What I want to say is that for optimistic predictions to prevail, technology (in the sense of the actors in the field) must attend to the roots which support it, and perhaps advert to at least informational measures tending to negate the human qualities of territoriality and dominance-seeking which are the bumblebee in the ointment of progress. So, I cordially ask, technologically speaking, is receipt of this newsletter by media set up as "opt-in", or "opt-out", or what?

Re: Kurzweil's Optimism
posted on 02/05/2007 3:19 PM by ChaosPhoenixMage

What's that you say? Depends on the enlightenment and good will of the public? Good luck with that. Simply buying enough publicity to gain 51% of the voting population's favor costs a hell of a lot of money in a Presidential campaign. Most people don't know or care, and wouldn't put their money into something smacking of religious sacrilege, let alone something that holds the potential for apocalypse. And given that most people only hear of this kind of thing in a blockbuster sci-fi horror film like Resident Evil, you can bet your ticket money all those "Scientists Try and Fail in Attempt to Become God, with the Inconvenient Side Effect of Destroying Humanity (and the Species)" bells will go off in their minds, and they'll say it would be an abomination; therefore you ain't going Frankenstein with my mo-nay (as in no).

Solution:

Stop begging the masses for small individual contributions when you consider the massive sum you need. Ask for donations from the rich, or better yet, BECOME rich. Then you can donate your fortune and rely on that income as a measure of what you can hope to achieve.

It's been pointed out that Kurzweil seems to ignore the fact that technological progress doesn't necessarily (and actually rarely does) match the timing of its effective implementation, and herein lies the problem: much of technological and scientific innovation is driven by the commercial market, or else by looming existential risks. But look at global warming! We aren't exactly contenders on the global stage for the "I signed and followed the Kyoto Protocol" award. We are capitalistic to a fault, and instead of fighting that with idealism and wishful thinking we should be harnessing that force to become rich, or to inspire those who will, to invest in this, so we can take the dreamy issues like mind uploading quite a bit more seriously.

The Blue Brain Project explicitly states that its goal is not to create an uploaded consciousness (see Markram's FAQ on their website), and among other things they are omitting the part of the brain responsible for emotion! Wiring the neocortex for AI will have remarkable effects, but this is a far cry short of what will be needed for mind uploading, and unlike most endeavors this will take more than brute technology. The one thing even Kurzweil admits falls consistently short of his famous exponential/logarithmic curves of technological progression is the inverse of hardware: software. We need people to program the algorithms! Computational neuroscience and neuromorphic engineering are in their infancy. Please, somebody help them find their potential and give them the funding toward mind uploading, and exactly that. We should start that project expecting it to take 30 years, and there ARE other methods besides nanotechnology. Even while waiting for serial sectioning and nanoimaging to mature, we should be seeking the expertise in our programs to bridge the nexus between the atomic, molecular, and cellular scales, and determine just how much is or isn't feasible without the mythical quantum computer.

Why isn't there an institute like the Singularity Institute, famous for its work in developing AI, that instead pursues IA (Intelligence Augmentation) through mind uploading [its ultimate, optimum form!], rather than seeking compromise through mediums we already use, like the computer? Remember, intelligence will far outstrip what we're capable of now, even with computers, if we are able to develop our minds indefinitely. Eliminating death as we know it would certainly help toward the Singularity's full realization of IA. Get the money, organize the people, plan for the objective, and get to work. I've only got 40 years before I turn sixty, and biological youth isn't quite as simple as mental flexibility; I'm dying, so get moving. The key is bling ($).

Re: Kurzweil's Optimism
posted on 02/07/2007 6:39 PM by doojie

If human nature, as the religionists say, is evil, then attempts to form governments composed of humans ultimately result in evil. It is assumed that the development of AI technologies can somehow bypass the evil tendencies we have, but how would it do such a thing short of force and violence? Remember "I, Robot," with Will Smith in the starring role?

The flip side of this same problem is that social engineers have been obsessed with the idea that government, organized correctly, will eliminate the ills of society; yet that organization has largely failed, and has produced a frustrated citizenry whose freedoms are being curtailed by ever more laws.

I see very little difference between a government created for good purposes and an AI system created to alleviate the same problems. Both are the summation of human logic, which is fraught with obvious limitations and incompleteness.

With human governments, in accordance with Gödel's theorem, one law leads to another law that is necessary to fill the "gap" left by the previous law, which creates yet another "gap" to be filled by the incompleteness of the second law, which leads...to infinity. AI would have no greater wisdom in that regard than human government. In fact, it could be little more than computerized government embodying the laws that used to be controlled by humans. Certainly not a government of the people.

Re: I'm Confident About Energy, the Environment, Longevity, and Wealth; I'm Optimistic (But Not Necessarily Confident) Of the Avoidance Of Existential Downsides; And I'm Hopeful (But Not Necessarily Optimistic) About a Repeat Of 9-11 (Or Worse)
posted on 02/05/2007 11:48 AM by arkajun

'The methods exist, but as yet a working rapid-response system does not. We need to put one in place quickly.'

I'm also wary of bio-terrorism. What steps might a citizen take to promote putting a rapid-response system in place quickly?

Re: I'm Confident About Energy, the Environment, Longevity, and Wealth; I'm Optimistic (But Not Necessarily Confident) Of the Avoidance Of Existential Downsides; And I'm Hopeful (But Not Necessarily Optimistic) About a Repeat Of 9-11 (Or Worse)
posted on 02/05/2007 12:51 PM by Aldrin

"I'm also wary of bio-terrorism. What steps might a citizen take to promote putting a rapid-response system in place quickly?"

Your best bet is to unleash one yourself. It is unlikely that any such system will be promoted by the government on a national scale in the next ten or so years without an actual attack happening first.

Re: I'm Confident About Energy, the Environment, Longevity, and Wealth; I'm Optimistic (But Not Necessarily Confident) Of the Avoidance Of Existential Downsides; And I'm Hopeful (But Not Necessarily Optimistic) About a Repeat Of 9-11 (Or Worse)
posted on 02/07/2007 7:06 PM by doojie

As to terrorism, there is an old book (1964) by Marshall McLuhan that deals with war and technology, and their evolution:

"The 'hot wars' of the past used weapons that knocked off the enemy, one by one. Even ideological warfare in the eighteenth century proceeded by persuading individuals to adopt new points of view, one at a time. Electric persuasion by photo and movie and TV works, instead, by dunking entire populations in new imagery".

Because whole populations can be dunked in new imagery, terrorism works as an international threat, not just a local threat. War by "one at a time" process is preferred by literate democracies, because they were born in a time when the chief weapon was the rifle, again illustrated by McLuhan:

"...it was the literate American colonists who were first to insist on a rifled barrel and improved gunsights. They improved the old muskets, creating the Kentucky rifle. It was the highly literate Bostonians who outshot the British regulars. Marksmanship is not the gift of the native or the woodsman, but of the literate colonist. So runs this argument that links gunfire itself with the rise of perspective, and with the extension of the visual power in literacy".

The "democratic" way is to form armies and control from central command. Terrorism is obviously not democratic, but it reflects the latest advances in communication technology and coordination.

"Literacy remains even now the base and model of all programs of indistrial mechanization; but at the same time it locks the mind and senses of its users in the mechanical and fragmentary matrix that is so necessary to the maintenance of mechanical society. That is why the transition from mechanical to electric technology is so very traumatic and severe for us all. The mecha nical techniques, with their limited powers, we have long used as weapons. The electric techniques cannot be used aggressivelyt except to end all life at once, like the turning off of a light."

Electronic technologies are explosive and non-localized. Less hardware is needed, less manpower, and greater destruction is the result.

The reason for this evolution of war and technology and society is also provided by McLuhan:

"If the Cold War of (the sixties) is being fought by informational technology, that is because all wars have been fought by the latest technology available in any culture".

War comes from the need to coerce. Why would AI not be subject to the same process of coercion? We can't produce safeguards against war among human governments. How would we provide safeguards within AI?

Re: I'm Confident About Energy, the Environment, Longevity, and Wealth; I'm Optimistic (But Not Necessarily Confident) Of the Avoidance Of Existential Downsides; And I'm Hopeful (But Not Necessarily Optimistic) About a Repeat Of 9-11 (Or Worse)
posted on 02/05/2007 12:43 PM by Alex_Hammer

Due to accelerating returns, the rate of progress of these varying fields will speed up, but so also will the integration between them.

We are moving from a world of information to a world of "information application" in which bridges and synthesis are of primary importance.

Those who can lead in this regard stand to earn the greatest profits and rewards.

Re: I'm Confident About Energy, the Environment, Longevity, and Wealth; I'm Optimistic (But Not Necessarily Confident) Of the Avoidance Of Existential Downsides; And I'm Hopeful (But Not Necessarily Optimistic) About a Repeat Of 9-11 (Or Worse)
posted on 02/05/2007 2:30 PM by penelson

Like Kurzweil, I'm optimistic and confident about the wonderful solutions and life that our technology will bring us as it continues its exponential acceleration. I'm also optimistic that even the approaching global warming and environmental degradation will serve a constructive function.

For our human evolutionary leap to occur, life/humanity needs both an 'environmental stressor' and a 'solution.' I believe that in 20 years global humanity will foresee an 'evolve or die' crisis of global environmental degradation, and that the 'solution' will be to merge with our information, bio, and nano technologies, which takes us on our evolutionary leap. We're the leading edge of Life evolving into a super-intelligent, global, spiritual unity.

Re: I'm Confident About Energy, the Environment, Longevity, and Wealth; I'm Optimistic (But Not Necessarily Confident) Of the Avoidance Of Existential Downsides; And I'm Hopeful (But Not Necessarily Optimistic) About a Repeat Of 9-11 (Or Worse)
posted on 02/05/2007 4:20 PM by zukunft

I'm also confident about all those issues Ray has so well profiled. On the downside, those possibilities exist, especially the lone terrorist with technological skills, but it is the strength of diversity of the majority that protects us from arbitrary evil.

I argued on 9/11 that this would be the best the terrorists would ever do, and that their power would wane as we accepted the fact that there would always be some terror in the world and just got on with our lives. Unfortunately, the "homeland security" fear-mongering gives the terrorists a lot more power than they rightfully deserve; the sooner we understand that, the more secure we will be. Protecting our freedom is the ultimate security, and our best export to the world.

Re: I'm Confident About Energy, the Environment, Longevity, and Wealth; I'm Optimistic (But Not Necessarily Confident) Of the Avoidance Of Existential Downsides; And I'm Hopeful (But Not Necessarily Optimistic) About a Repeat Of 9-11 (Or Worse)
posted on 02/07/2007 6:02 PM by keep?ing

"...which will be feasible within twenty years." What if we don't have 20 years to solve the looming liquid fuel problem? See the Hirsch report for possible senarios: http://en.wikipedia.org/wiki/Hirsch_report

Re: I'm Confident About Energy, the Environment, Longevity, and Wealth; I'm Optimistic (But Not Necessarily Confident) Of the Avoidance Of Existential Downsides; And I'm Hopeful (But Not Necessarily Optimistic) About a Repeat Of 9-11 (Or Worse)
posted on 02/18/2007 7:48 PM by dogu44

I think it curious that the case for optimism in the field of energy as seen from RK's perspective (nano-tech's contribution) provokes a sense that some sort of solution must be found within 20 years, and soon, citing Robert L. Hirsch's assessment of the global energy outlook. I have to presume that this is the same Robert L. Hirsch whose work in IEC fusion led to Dr. Robert Bussard's developments, as presented last November at a Google lunchtime lecture entitled "Should Google Go Nuclear?". It was recorded and is widely available on the web. While neither I nor the audience of Google employees (famously smart and informed, but more familiar with AI than high-energy physics) are nuclear physicists, Dr. Bussard's presentation (a 60-minute lecture with projected slides, followed by 30 minutes of Q&A and personal observations) presumes at least some understanding of the basics, which he briefly covers, relating how it applies to the relevant phenomenon. I must say his credentials, his experience, and his disarming and earnest appeal make for a very convincing presentation. It has generated some chit-chat on a number of other forums (fusor.net), where others involved in similar or affiliated research are coming to agreement that, whatever it is, it deserves the support to continue.

I agree that some sense of urgency should be sounding the alarm somewhere in our consciousness, but I'm confident that there's a revolution in energy production in the offing, and that nanotechnology, as well as fusion and a wide array of other energy sources, will produce an unprecedented ability to expend and apply energy without the many drawbacks of our current energy regime. Just how it will be used is, as anyone aware of the history of man's involvement with technology knows, the most curious part of it. I'd like to think AI will provide the powerful reins to control the initial first steps.

Re: I'm Confident About Energy, the Environment, Longevity, and Wealth; I'm Optimistic (But Not Necessarily Confident) Of the Avoidance Of Existential Downsides; And I'm Hopeful (But Not Necessarily Optimistic) About a Repeat Of 9-11 (Or Worse)
posted on 06/19/2008 10:38 PM by curious

I unfortunately don't share Kurzweil's optimism. One reason is that too many people don't believe there is a problem, so we will continue to debate it while the world's temperature rises and the ice caps melt (about half of the north pole's ice area is gone, its thickness has decreased from 60 feet to 10 feet, and a chunk of ice the size of Texas broke off the south pole).

Building solar-cell infrastructure will not succeed unless we have a massive public works project like Roosevelt's New Deal. Even then, it takes several years to reach energy break-even with solar cells because of the high-temperature processes required to make them.

The only alternative energy available that is well proven (50 years) and can supply the entire world's needs for the next 10,000 years is geothermal.

The major change we need is in people's attitude and recognizing there is a problem. That would be a sign of a global intelligence, and is one of the reasons I spent the greater part of my career inventing and developing better communication devices and networks. My hope is that we would achieve a global consciousness and responsibility. Then I would be optimistic.

Re: I'm Confident About Energy, the Environment, Longevity, and Wealth; I'm Optimistic (But Not Necessarily Confident) Of the Avoidance Of Existential Downsides; And I'm Hopeful (But Not Necessarily Optimistic) About a Repeat Of 9-11 (Or Worse)
posted on 02/18/2007 10:00 PM by pilgrim

Since it now appears that Bush/PNAC carried out or enabled 9/11 themselves (http://www.911truth.org, http://www.patriotsquestion911.com), I'm not surprised that there hasn't been another such event.

These Neo-Fascists have been too busy unleashing much worse against the nations they directed our anger towards after this False Flag.

But as their plans to create a pretext for their planned war against Iran continue to fail, I fear what they will do, once they have been backed into a corner.

Re: I'm Confident About Energy, the Environment, Longevity, and Wealth; I'm Optimistic (But Not Necessarily Confident) Of the Avoidance Of Existential Downsides; And I'm Hopeful (But Not Necessarily Optimistic) About a Repeat Of 9-11 (Or Worse)
posted on 04/21/2007 8:01 PM by StageOne

Ray is right. The Doomsday Clock is moving too close to midnight. If non-forward-looking, old-profit-model corporate leaders (dinosaurs who obviously own the gov't) stay with fossil fuels, quarterly-profit bookkeeping tricks, and corporate welfare that adds to the debt, Russia is happy about that.
But what if the pusher can no longer sell its drug? Or what if the US is temporarily indebted before the $70 trillion of growth via tech meets the $70 trillion of debt (former Tres. Secy. Peterson's actuarial estimates for the next 18 yrs)?

I am of the belief that we aren't running out of oil. I am also of the belief that the doubling of carbon dioxide is making the ocean acidic and killing it.

So someone has to see more profit in solar nanotech. Not only that, though; they have to invest FOR THE LONG TERM. Quarterly profits are a big obstacle to this.

Hopefully our lack of leadership on investment doesn't re-start the cold war. Makes you wish everything wasn't about money.

Cheers Lovers

Re: I'm Confident About Energy, the Environment, Longevity, and Wealth; I'm Optimistic (But Not Necessarily Confident) Of the Avoidance Of Existential Downsides; And I'm Hopeful (But Not Necessarily Optimistic) About a Repeat Of 9-11 (Or Worse)
posted on 06/22/2008 6:07 AM by jabelar

My concern is that, in my understanding, evolution is driven primarily by selfishness. We only evolved to be societal as much as teamwork aids the selfish tendencies of the individuals on the team.

As long as we're individual, un-enhanced humans, teamwork will continue to be an evolutionary advantage. However, it is possible that a super-AI would quickly transcend the need to participate in a team, especially a team that includes biological humans.

As soon as a super-AI can manage the technologies required for its own maintenance and growth, there is no evolutionary need for it to play nicely with humans. Furthermore, humans in general may be viewed as a threat -- and some segment of humanity probably will be (there will almost certainly be a religious backlash against a variety of human augmentation and longevity technologies).

So, while I'm hopeful that other scenarios will play out, I'm confident that evolution will continue to favor selfish entities rather than free societal ones.

Re: I'm Confident About Energy, the Environment, Longevity, and Wealth; I'm Optimistic (But Not Necessarily Confident) Of the Avoidance Of Existential Downsides; And I'm Hopeful (But Not Necessarily Optimistic) About a Repeat Of 9-11 (Or Worse)
posted on 06/12/2009 5:03 AM by TedHoward

I look at the AI problem this way.

Our greatest threat would seem to be in the first few days of AI's awareness, as it explores threats and options.
If it sees that we are not caring adequately for every human being, then there is a high probability that it will judge us as a threat to it - that would not be healthy for us.

If we have developed systems that ensure the life, liberty and security of every human being, then AI is likely to feel very secure in our presence.

Once AI gets well beyond us (after a few weeks or months), it will likely care for us in the same way that most of us who are aware care for all life on earth.

Evolution is not about being selfish; it is about the survival of strategies that increase the probability of survival. These strategies can be highly cooperative and trusting, provided that they also incorporate strategies to identify and remove parasitic strategies (this is selfish from the perspective of the strategy, but not from the perspective of the individuals involved, which is why Richard Dawkins had grave misgivings about the title The Selfish Gene).

Another factor in the AI's contemplations ought to be that there is a measurable (high) probability that at some point in the future it will encounter another AI that is significantly more advanced than it is, and it will be reliant on the goodwill of that AI for its survival. It will want a clean track record in such an encounter; it wouldn't look good to have sentient genocide in the logbook. That makes one a definite threat, any way one runs the numbers.

So our best bet, for a friendly AI, is to look after our own. My best approach to that is through www.solnx.org, but that may be overtaken by other measures if I don't get it moving very soon.

What think you Ray?