Permanent link to this article: http://www.kurzweilai.net/meme/frame.html?main=/articles/art0232.html

    Excerpts from "One Half of a Manifesto"
by   Jaron Lanier

Does the optimism of technologists blur the question of quantitative improvements in hardware versus a lack of qualitative improvements in software? Does it point the way toward an eschatological cataclysm in which doom is imminent?

Read Ray Kurzweil's response here. A postscript on Ray Kurzweil can be read here. Originally published September 2000 at Edge. Published on KurzweilAI.net July 30, 2001.

For the last twenty years, I have found myself on the inside of a revolution, but on the outside of its resplendent dogma. Now that the revolution has not only hit the mainstream, but bludgeoned it into submission by taking over the economy, it's probably time for me to cry out my dissent more loudly than I have before. And so I'll here share my thoughts with the respondents of edge.org, many of whom are, as much as anyone, responsible for this revolution, one which champions the ascent of cybernetic technology as culture.

The dogma I object to is composed of a set of interlocking beliefs and doesn't have a generally accepted overarching name as yet, though I sometimes call it "cybernetic totalism." It has the potential to transform human experience more powerfully than any prior ideology, religion, or political system ever has, partly because it can be so pleasing to the mind, at least initially, but mostly because it gets a free ride on the overwhelmingly powerful technologies that happen to be created by people who are, to a large degree, true believers.

Edge readers might be surprised by my use of the word "cybernetic." I find the word problematic, so I'd like to explain why I chose it. I searched for a term that united the diverse ideas I was exploring, and also connected current thinking and culture with earlier generations of thinkers who touched on similar topics. The original usage of "cybernetic," as by Norbert Wiener, was certainly not restricted to digital computers. It was originally meant to suggest a metaphor between marine navigation and a feedback device that governs a mechanical system, such as a thermostat. Wiener certainly recognized and humanely explored the extraordinary reach of this metaphor, one of the most powerful ever expressed.

I hope no one will think I'm equating Cybernetics and what I'm calling Cybernetic Totalism. The distance between recognizing a great metaphor and treating it as the only metaphor is the same as the distance between humble science and dogmatic religion. Here is a partial roster of the component beliefs of cybernetic totalism:

1) That cybernetic patterns of information provide the ultimate and best way to understand reality.

2) That people are no more than cybernetic patterns.

3) That subjective experience either doesn't exist, or is unimportant because it is some sort of ambient or peripheral effect.

4) That what Darwin described in biology, or something like it, is in fact also the singular, superior description of all creativity and culture.

5) That qualitative as well as quantitative aspects of information systems will be inexorably accelerated by Moore's Law.

And finally, the most dramatic:

6) That biology and physics will merge with computer science (becoming biotechnology and nanotechnology), resulting in life and the physical universe becoming mercurial; achieving the supposed nature of computer software. Furthermore, all of this will happen very soon! Since computers are improving so quickly, they will overwhelm all the other cybernetic processes, like people, and will fundamentally change the nature of what's going on in the familiar neighborhood of Earth at some moment when a new "criticality" is achieved, maybe in about the year 2020. To be a human after that moment will be either impossible or something very different than we now can know.

During the last twenty years a stream of books has gradually informed the larger public about the belief structure of the inner circle of Digerati, starting softly, for instance with Gödel, Escher, Bach, and growing more harsh with recent entries such as The Age of Spiritual Machines by Ray Kurzweil.

Recently, public attention has finally been drawn to #6, the astonishing belief in an eschatological cataclysm in our lifetimes, brought about when computers become the ultra-intelligent masters of physical matter and life. So far as I can tell, a large number of my friends and colleagues believe in some version of this imminent doom.

I am quite curious who, among the eminent thinkers who largely accept some version of the first five points, are also comfortable with the sixth idea, the eschatology. In general, I find that technologists, rather than natural scientists, have tended to be vocal about the possibility of a near-term criticality. I have no idea, however, what figures like Richard Dawkins or Daniel Dennett make of it. Somehow I can't imagine these elegant theorists speculating about whether nanorobots might take over the planet in twenty years. It seems beneath their dignity. And yet, the eschatologies of Kurzweil, Moravec, and Drexler follow directly and, it would seem, inevitably, from an understanding of the world that has been most sharply articulated by none other than Dawkins and Dennett. Do Dawkins, Dennett, and others in their camp see some flaw in logic that insulates their thinking from the eschatological implications? The primary candidate for such a flaw as I see it is that cyber-armageddonists have confused ideal computers with real computers, which behave differently. My position on this point can be evaluated separately from my admittedly provocative positions on the first five points, and I hope it will be.

Why this is only "one half of a manifesto": I hope that readers will not think that I've sunk into some sort of glum rejection of digital technology. In fact, I'm more delighted than ever to be working in computer science and I find that it's rather easy to adopt a humanistic framework for designing digital tools. There is a lovely global flowering of computer culture already in place, arising for the most part independently of the technological elites, which implicitly rejects the ideas I am attacking here. A full manifesto would attempt to describe and promote this positive culture.

I will now examine the five beliefs that must precede acceptance of the new eschatology, and then consider the eschatology itself.

Here we go:

Cybernetic Totalist Belief #1: That cybernetic patterns of information provide the ultimate and best way to understand reality.

There is an undeniable rush of excitement experienced by those who first are able to perceive a phenomenon cybernetically. For example, while I believe I can imagine what a thrill it must have been to use early photographic equipment in the 19th century, I can't imagine that any outsider could comprehend the sensation of being around early computer graphics technology in the nineteen-seventies. For here was not merely a way to make and show images, but a metaframework that subsumed all possible images. Once you can understand something in a way that you can shove it into a computer, you have cracked its code, transcended any particularity it might have at a given time. It was as if we had become the Gods of vision and had effectively created all possible images, for they would merely be reshufflings of the bits in the computers we had before us, completely under our command.

The cybernetic impulse is initially driven by ego (though, as we shall see, in its end game, which has not yet arrived, it will become the enemy of ego). For instance, Cybernetic Totalists look at culture and see "memes", or autonomous mental tropes that compete for brain space in humans somewhat like viruses. In doing so they not only accomplish a triumph of "campus imperialism", placing themselves in an imagined position of superior understanding vs. the whole of the humanities, but they also avoid having to pay much attention to the particulars of culture in a given time and place. Once you have subsumed something into its cybernetic reduction, any particular reshuffling of its bits seems unimportant.

Belief #1 appeared on the stage almost immediately with the first computers. It was articulated by the first generation of computer scientists: Wiener, Shannon, and Turing. It is so fundamental that it isn't even stated anymore within the inner circle. It is so well rooted that it is difficult for me to remove myself from my all-encompassing intellectual environment long enough to articulate an alternative to it.

An alternative might be this: A cybernetic model of a phenomenon can never be the sole favored model, because we can't even build computers that conform to such models. Real computers are completely different from the ideal computers of theory. They break for reasons that are not always analyzable, and they seem to intrinsically resist many of our endeavors to improve them, in large part due to legacy and lock-in, among other problems. We imagine "pure" cybernetic systems but we can only prove we know how to build fairly dysfunctional ones. We kid ourselves when we think we understand something, even a computer, merely because we can model or digitize it.

There is also an epistemological problem that bothers me, even though my colleagues by and large are willing to ignore it. I don't think you can measure the function or even the existence of a computer without a cultural context for it. I don't think Martians would necessarily be able to distinguish a Macintosh from a space heater.

The above disputes ultimately turn on a combination of technical arguments about information theory and philosophical positions that largely arise from taste and faith.

Continued at: http://www.edge.org/3rd_culture/lanier/lanier_p4.html

Copyright © 2000 by Edge Foundation, Inc.

Mind·X Discussion About This Article:

hello jaron, from luc sala from Amsterdam
posted on 11/18/2001 6:00 PM by sala@euronet.nl

Just found out about this site and Amara's work on it. I am in the US for a couple of weeks, and I'd like to meet you sometime concerning future trends projects. I see fear-based technology growing upon us; how will the human spirit change under the influence of so much more Big Brother, connectedness, so-called security, etc.? I feel sorry for the kids who are kind of protected from real-time endeavours, dangers, risks, etc. Whoever grows up avoiding risk (or excluded from it) might tend to act like an ostrich when older, or is this exactly what we already see?

Luc Sala
Amsterdam, presently in Boulder Creek, Cal.