
The Age of Intelligent Machines: A Technology of Liberation
by George Gilder

George Gilder is the author of eight books on issues of technology and society, including Wealth and Poverty (1981) and The Spirit of Enterprise (1983), both best sellers. His most recent book is Microcosm (1989), a history and prophecy of the age of VLSI microchips. Gilder is a regular contributor to the Wall Street Journal and lives in Tyringham, Massachusetts, with his wife, four children, and four computers.

From Ray Kurzweil's revolutionary book The Age of Intelligent Machines, published in 1990.


Futurists have long seen computers as Big Brother's crucial ally on the road to 1984, George Orwell's chilling vision of technocracy. Placed on pedestals in the central computing rooms of large institutions, computers were large, expensive, and arcane. They did not understand English; to use them, you had to learn what were called their higher-level languages. As one expert predicted, "There will be a small, almost separate society of people in rapport with the advanced machines." This elite, the prediction ran, would tend to control the state and master the commanding heights of the economy.

The year 1984 came and went, and the prophecies of 1984 were fulfilled only in nightmares and totalitarian gulags. One of the prime reasons for the failure of the prophecy was the success of computer technology. Contrary to all the grim predictions, intelligent machines empower individuals rather than large institutions, spread power rather than centralize it.

Crucial to the liberating impact of computers was the very nature of computer technology. In volume, anything on a chip is cheap. But moving up the hierarchy from the chip to the circuit board to the network and to the telecommunications system, interconnections between components grow exponentially more expensive. So a first law of the technology is to concentrate components and connections, and thus computing power, on single chips. Concentrating components on a chip not only enhanced their speed and effectiveness but also vastly lowered their price. Finally, in the form of the microprocessor, the computer on a chip costs a few dollars and outperforms the computer on a pedestal. Rather than pushing control toward Big Brother at the top, as the pundits predicted, the technology, by its very nature, constantly pulled power down to the people. The ultimate beneficiary, the individual with a personal computer or workstation, gained powers of creation and communication far beyond those of the kings of old.

The individual was not only the heir to the throne of the technology; he was also its leading creator. Although made possible by hardware innovations from around the world, the move to small computers was chiefly an American revolution driven by the invention of new software. As fast as the Japanese and others could expand the capacity of computer memories, American entrepreneurs filled them with useful programs. Some 14,000 new U.S. companies, many of them led by teenagers and college hackers, launched a vast array of software packages and changed the computer from an arcane tool of elites into a popular appliance. As a result of this software, ranging from spreadsheets and word processors to databases and video games, the United States pioneered and propagated the use of small computers, and the U.S. share of the world software market rose from under two-thirds to more than three-quarters.

Analysts focusing on fifth-generation computer projects, mainframe systems, and supercomputers dismiss these personal computers as toys. So did the experts at IBM a few years ago. But at the same time that the United States moved massively into microcomputers, small systems surged far ahead in price-performance. In terms of cost per MIPS (millions of instructions per second), the new personal computers are an amazing ninety times more cost-effective than mainframes. With the ever-growing ability to interconnect these machines in networks and use them in parallel configurations that yield mainframe performance, microcomputers are rapidly gaining market share against the large machines.

Once believed to be a bastion of bureaucratic computing, IBM itself has become a prime source of the redistribution of computer power. As IBM's machines become smaller and cheaper and more available to the public, they also become more effective and more flexible. The trend will continue. According to Ralph Gomory, IBM's chief scientist, the constraints of interconnections mean that supercomputers of the future will have to be concentrated into a space of three cubic inches. The industrial revolution imposed economies of scale, but the information revolution imposes economies of microscale. Computer power continually devolves into the hands and onto the laps of individuals.

The advance into the microcosm is now accelerating. Propelling it is a convergence of three major developments in the industry, developments that once again disperse power rather than centralize it. The first is artificial intelligence, giving to computers rudimentary powers of sight, hearing, and common sense. True, some of these AI devices still do a pretty limited job. It has been said that the computer industry thrived by doing well what human beings do badly. Artificial intelligence often seems to thrive by doing badly what human beings do well. But you can understand the significance of AI advances by imagining that you are deaf, dumb, and blind. If someone gave you a device that allowed you to see and hear even poorly, you would hail him as a new Edison. Computer technology has long been essentially deaf, dumb, and blind. Reachable only through keyboards and primitive sensors and programmable only in binary mathematics, computers remained mostly immured in their digital towers. Artificial intelligence promises to vastly enhance the accessibility of computers to human language, imagery, and expertise. So the computer can continue to leave ivory towers and data-processing elites behind and open itself to the needs of untrained and handicapped individuals, even allowing the blind to read and the disabled to write.

The second major breakthrough is the silicon compiler. Just as a software compiler converts high-level languages into the bits and bytes that a computer can understand, the silicon compiler converts high-level chip designs and functions into the intersecting polygons of an actual chip layout. This technology allows the complete design of integrated circuits on a computer, from initial concept to final silicon. To understand the impact of this development, imagine that printing firms essentially wrote all the books. This was the situation in the computer industry: in order to author a chip, you essentially had to own a semiconductor manufacturing plant (a silicon printing press), which cost between $50 million and $200 million to build on a profitable scale. But with silicon compilers and related gear, any computer-literate person with a $20,000 workstation can author a major new integrated circuit precisely adapted to his needs. If mass production is needed, semiconductor companies around the globe will compete to print your design in the world's best clean rooms. In a prophetic move, a few firms are now even introducing forms of silicon desktop publishing. For $3 million, for example, Lasarray sells manufacturing modules that do all essential production steps from etching the design to assembling the chips. Dallas Semiconductor acquired an advanced new chip-making facility for $10 million. Contrary to the analyses of the critics, the industry is not becoming more capital intensive. Measured in terms of capital costs per device function (the investment needed to deliver value to the customer), the industry is becoming ever cheaper to enter. The silicon compiler and related technologies move power from large corporations to individual designers and entrepreneurs.
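The analogy to the software compiler can be made concrete. The toy sketch below stands in for no real CAD product; it simply lowers a high-level boolean description of a circuit into a netlist of primitive gates, the step before geometric layout. An actual silicon compiler carries the same mechanical lowering all the way down to the polygons of a mask. Every name in the sketch is invented for illustration.

```python
# A toy "silicon compiler" front end: lower a nested high-level logic
# description into a flat netlist of primitive gates.
from dataclasses import dataclass, field

@dataclass
class Netlist:
    gates: list = field(default_factory=list)  # (gate_type, inputs, output)
    counter: int = 0

    def new_wire(self):
        self.counter += 1
        return f"w{self.counter}"

    def emit(self, gate_type, inputs):
        out = self.new_wire()
        self.gates.append((gate_type, tuple(inputs), out))
        return out

def compile_expr(expr, netlist):
    """Recursively lower a nested ('AND'/'OR'/'NOT', ...) expression
    into primitive gates, returning the name of the output wire."""
    if isinstance(expr, str):          # a named input signal
        return expr
    op, *args = expr
    wires = [compile_expr(a, netlist) for a in args]
    return netlist.emit(op, wires)

# "Author" a chip function at a high level: out = (a AND b) OR (NOT c)
design = ("OR", ("AND", "a", "b"), ("NOT", "c"))
nl = Netlist()
out = compile_expr(design, nl)
for gate in nl.gates:
    print(gate)                        # the gate-level netlist
print("output on wire:", out)
```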

The third key breakthrough is the widespread abandonment of the long-cherished von Neumann computer architecture, with its single central processing unit, separate memory, and sequential fetch-and-execute instruction cycle. Replacing this architecture are massively parallel systems with potentially thousands of processors working at once. This change in the architecture of computers resembles the abandonment of centralized processing in most large companies. In the past users had to line up outside the central processing room, submit their work to the data-processing experts, and then wait hours or days for it to be done. Today tasks are dispersed to thousands of desktops and performed simultaneously. The new architecture of parallel processing breaks a similar bottleneck: the central processing unit at the heart of every computer. It allows the computer itself to operate like a modern corporate information system, with various operations all occurring simultaneously throughout the firm, rather than like an old corporate data-processing hierarchy, which forced people to waste time in queues while awaiting access to the company mainframe. Promising huge increases in the cost-effectiveness of computing, parallel processing will cheaply bring supercomputer performance to individuals.
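A minimal sketch can make the contrast concrete. Assuming nothing about any particular parallel machine, the fragment below splits one job across several worker processes instead of marching a single processor through it step by step; the task itself (summing squares) is an arbitrary stand-in.

```python
# Instead of one processor stepping through the whole job (the von
# Neumann bottleneck), split the work across workers running at once.
from multiprocessing import Pool

def sum_squares(bounds):
    lo, hi = bounds
    return sum(n * n for n in range(lo, hi))

if __name__ == "__main__":
    N, workers = 10_000_000, 8
    step = N // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], N)      # cover any remainder

    with Pool(workers) as pool:          # eight "processing units"
        partials = pool.map(sum_squares, chunks)

    print(sum(partials))                 # same answer, computed in parallel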

Any one of these breakthroughs alone would not bring the radical advances that are now in prospect. But all together they will increase computer efficiency by a factor of thousands. Carver Mead of Caltech, a pioneer in all three of these new fields, predicts a 10,000-fold advance in the cost effectiveness of information technology over the next ten years. The use of silicon compilers to create massively parallel chips to perform feats of artificial intelligence will transform the computer industry and the world economy.
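As a rough check on the scale of that prediction (the figure is Mead's; the arithmetic here merely unpacks it), a 10,000-fold gain over ten years implies a compound improvement of roughly two and a half times per year:

```python
# Implied annual rate behind a 10,000-fold gain over ten years.
rate = 10_000 ** (1 / 10)
print(f"about {rate:.2f}x per year")   # about 2.51x per year
```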

An exemplary product of these converging inventions is speech recognition. Discrete-speech talkwriter technology already commands available vocabularies of nearly one hundred thousand words, active vocabularies in the tens of thousands, learning algorithms that adapt to specific accents, and operating speeds of over 40 words per minute. To achieve this speed and capacity, combined with the ability to recognize continuous speech, on conventional computer architectures would require some four hundred MIPS. Yet the new speech-recognition gear will operate through personal computers and will cost only some $5,000. That is just $12.50 per MIPS. IBM mainframes charge some $150,000 per MIPS, and the most efficient general-purpose small computers charge some $3,000 per MIPS. By using parallel chips adapted specifically to process the enigmatic onrush of human speech, these machines can increase the cost-effectiveness of computing by a factor of hundreds.
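The arithmetic behind those comparisons can be laid out directly. The figures below are the essay's own; the ratios simply follow from them.

```python
# Cost per MIPS, recomputed from the figures given in the text.
specialized = 5_000 / 400   # speech gear: $5,000 for some 400 MIPS
mainframe = 150_000         # IBM mainframe: dollars per MIPS
small = 3_000               # efficient small computer: dollars per MIPS

print(f"speech gear: ${specialized:,.2f} per MIPS")                   # $12.50
print(f"advantage over mainframes: {mainframe / specialized:,.0f}x")  # 12,000x
print(f"advantage over small computers: {small / specialized:.0f}x")  # 240x
```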

The talkwriter is only one of hundreds of such products. Coming technologies will increase the efficiency of computer simulation by a factor of thousands, radically enhance the effectiveness of machine vision, create dramatically improved modes of music synthesis, provide new advances in surgical prosthesis, and open a world of information to individuals everywhere, all at prices unimaginable as recently as three years ago. As prices decline, the new information systems inevitably become personal technologies, used and extended by individuals with personal computers.

With an increasing stress on software and design rather than on hardware and materials, the computer industry symbolizes the age of information. Real power and added value in the modern era lie not in things but in thoughts. The chip is a medium, much like a floppy disk, a 35-millimeter film, a phonograph record, a videocassette, a compact disk, or even a book. All of these devices cost a few dollars to make; all sell for amounts determined by the value of their contents: the concepts and images they bear. What is important is not the medium but the message.

Microchip entrepreneur Jerry Sanders once declared that semiconductors would be "the oil of the eighties." Some analysts now fear that giant companies will conspire to cartelize chip production as OPEC once monopolized oil. They predict that by dominating advanced manufacturing technology and supplies, a few firms will gain the keys to the kingdom of artificial intelligence and other information technologies. Yet unlike oil, which is a substance extracted from sand, semiconductor technologies are written on sand, and their substance is ideas. To say that huge conglomerates will take over the information industry because they have the most efficient chip factories or the purest silicon is like saying that the Canadians will dominate world literature because they have the tallest trees.

Contrary to all the fears and prophecies, the new technologies allow entrepreneurs to economize on capital and enhance its efficiency, mixing sand and ideas to generate new wealth and opportunity for men and women anywhere in the world. The chief effect can be summed up in a simple maxim, a hoary cliché: knowledge is power. Most people agree that this statement conveys an important truth. Today, however, knowledge is not simply a source of power; it is supremely the source of power. The difference is crucial. If knowledge is power in this vital sense, it means that other things are not power. The other things that no longer confer power, or that confer radically less power than before, include all the goals and dreams of all the tyrants and despots of the centuries: power over natural resources, territory, military manpower, national taxes, trade surpluses, and national currencies.

In an age when men can inscribe new worlds on grains of sand, particular territories have lost their economic significance. Not only are the natural resources under the ground rapidly declining in relative value, but the companies and capital above the ground can easily pick up and leave. Capital markets are now global; funds can move around the world, rush down fiber-optic cables, and bounce off satellites at near the speed of light. People, whether scientists, workers, or entrepreneurs, can leave at the speed of a 747, or even a Concorde. Companies can move in weeks. Ambitious men no longer stand still to be fleeced or exploited by bureaucrats.

The computer age is the epoch of the individual and the family. Governments cannot take power by taking control or raising taxes, by mobilizing men or heaping up trade surpluses, by seizing territory or stealing technology. In the modern world even slaves are useless: they enslave their owners to systems of poverty and decline. The new source of national wealth is the freedom of creative individuals in control of information technology. This change is not merely a gain for a few advanced industrial states. All the world will benefit from the increasing impotence of imperialism, mercantilism, and statism. All the world will benefit from the replacement of the zero-sum game of territorial conflict with the rising spirals of gain from the diffusion of ideas. Ideas are not used up as they are used; they spread power as they are shared. Ideas begin as subjective events; they always arise in individual minds and ultimately repose in them. The movement toward an information economy inevitably means a movement toward a global economy of individuals and families. Collective institutions will survive only to the extent that they can serve the men and women who compose them.

All the theories of the computer as an instrument of oppression misunderstand these essential truths of the technology. In the information age, nations cannot gain strength by coercing and taxing their citizens. To increase their power, governments must reduce their powers and emancipate their people on the frontiers of the age of intelligent machines.
