The Age of Intelligent Machines, Chapter Eight: The Search for Knowledge
by Raymond Kurzweil

From Ray Kurzweil's revolutionary book The Age of Intelligent Machines, published in 1990.


A man is sent to prison for a ten-year term. The dining hall also serves as an auditorium, and there is a stage at one end. After supper, one of the prisoners runs up onto the stage and hollers, "Four hundred and eighty-seven." Everyone starts laughing. The next day, it's the same thing: After supper someone jumps onto the stage, yells, "Two thousand six hundred and twenty-two," and all the prisoners crack up. This goes on for a couple of weeks, and finally the man asks his cellmate what's going on. "Well," says the cellmate, "It's like this. The prison library has a big fat book called The Ten Thousand Best Jokes, and we've all been here so long that we know the book by heart. If somebody wants to tell a joke, they just shout out the number of the joke in The Ten Thousand Best Jokes, and if it's a funny one, everybody laughs."
At dinner that night, the man decides to show the other prisoners that he's a good guy. Before anyone else can get to the stage, he dashes up there and shouts, "Five thousand nine hundred and eighty-six!" But to his horror, nobody cracks a smile. There are even a few groans. He slinks back to his cell to consult with his cellmate.
"Nobody laughed! Isn't the five thousand nine hundred and eighty-sixth joke a good one? "
"Sure it's a good one," says the cellmate. "Old five thousand nine hundred eighty-six is one of the best."
"So why didn't anyone laugh?"
"You didn't tell it right."
An old joke as retold in Mind Tools by Rudy Rucker

Knowledge and Expert Systems

Knowledge is not the same as information. Knowledge is information that has been pared, shaped, interpreted, selected, and transformed; the artist in each of us daily picks up the raw material and makes of it a small artifact - and at the same time, a small human glory. Now we have invented machines to do that, just as we invented machines to extend our muscles and our other organs. In typical human fashion, we intend our new machines for all the usual purposes, from enhancing our lives to filling our purses. If they scourge our enemies, we wouldn't mind that either ....

The reasoning animal has, perhaps inevitably, fashioned the reasoning machine. With all the risks apparent in such an audacious, some say reckless, embarkation onto sacred ground, we have gone ahead anyway, holding tenaciously to what the wise in every culture at every time have taught: the shadows, however dark and menacing, must not deter us from reaching the light.
Edward A. Feigenbaum and Pamela McCorduck, The Fifth Generation

What is knowledge?

Facts alone do not constitute knowledge. For information to become knowledge, it must incorporate the relationships between ideas. And for the knowledge to be useful, the links describing how concepts interact must be easily accessed, updated, and manipulated. Human intelligence is remarkable in its ability to perform these tasks. It is, however, perhaps even more remarkably weak at reliably storing the information on which knowledge is based.1 The natural strengths of computers are roughly the opposite. They have, therefore, become powerful allies of the human intellect in their ability to reliably store and rapidly retrieve vast quantities of information, but conversely, they have been slow to master true knowledge. The design of computer data structures that can represent the complex web of relationships both within and among ideas has been a quest of artificial intelligence from its inception. Many competing approaches have been proposed. The following example illustrates features of several approaches, including the methodology of frames, first proposed by Minsky in the mid-1970s.2

Each box represents a concept, sometimes referred to as an object or frame. One important relationship, "is a class of," refers to a group of entities that constitutes a proper subset of a broader set. The principle of inheritance tells us that the characteristics of a set apply to all of its subsets (and all of their subsets, etc.) unless there is a specific indication to the contrary. Thus, we can conclude that mammals, being a subclass of animals, ingest proteins, even though this fact is not explicitly stated. We can also conclude that mammals, being ultimately a subclass of objects, are visible or tangible. The relationship "is an example of" refers to a subset with a single member. Thus, even though little is explicitly revealed in the Ray Kurzweil frame, the principle of inheritance lets us infer a great deal. We can conclude that Ray shares with the rest of the human species a superior intellect (although after reading this book, one might wish to enter an exception here). Since he is a male, we cannot conclude that he nourishes his young with milk, but we can determine that he is warm blooded, is usually mobile, and persists in time. We note that Fluffy, although of the cat species, does not inherit the characteristic of a tail, but is nonetheless domesticated, carnivorous, and so on.
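
The frame idea is easy to make concrete. What follows is a minimal sketch in Python, offered purely as an illustration: the particular frames, slot names, and values are stand-ins suggested by the discussion above, not taken from the original figure. Properties not stated in a frame are filled in by inheritance, and a locally stated value (Fluffy's missing tail) overrides the inherited default.

```python
# A toy frame system: each frame names a parent ("is a class of" / "is an example of")
# and some local slots. Slots not found locally are inherited from ancestors,
# and a local value overrides the inherited default (the "exception" mechanism).

FRAMES = {
    "object":       {"parent": None,     "slots": {"visible or tangible": True}},
    "animal":       {"parent": "object", "slots": {"ingests proteins": True, "mobile": True}},
    "mammal":       {"parent": "animal", "slots": {"warm blooded": True, "nurses young": True}},
    "cat":          {"parent": "mammal", "slots": {"has tail": True, "carnivorous": True}},
    "human":        {"parent": "mammal", "slots": {"superior intellect": True}},
    # Instances ("is an example of") are simply frames whose parent is a class.
    "Fluffy":       {"parent": "cat",    "slots": {"has tail": False, "domesticated": True}},
    "Ray Kurzweil": {"parent": "human",  "slots": {}},
}

def lookup(frame, slot):
    """Return the value of a slot, walking up the inheritance chain if needed."""
    while frame is not None:
        entry = FRAMES[frame]
        if slot in entry["slots"]:
            return entry["slots"][slot]
        frame = entry["parent"]
    return None  # nothing is known either way

print(lookup("Ray Kurzweil", "ingests proteins"))  # True, inherited from "animal"
print(lookup("Fluffy", "has tail"))                 # False, the local exception wins
print(lookup("Fluffy", "warm blooded"))             # True, inherited from "mammal"
```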

Other types of relationships, types that do not imply the inheritance of characteristics, are also shown on the chart. "Belongs to" and "was eaten by" are examples of binary relationships that two singular objects or frames may have with one another. These relationships can also be described using hierarchical structures. For example, "loves," "hates," and "is jealous of" could all be examples of "has feelings for."

Some areas of knowledge are more amenable to this type of hierarchical analysis than others. Taxonomy, the study of the classification of organisms, is among the most amenable. Biologists have been largely successful in constructing an elaborate tree structure that incorporates virtually all of the many millions of known earth species (including nearly a quarter million species of beetles alone).3 The system requires a minimum of seven levels:

  • Animalia kingdom
  • Chordata phylum
  • Mammalia class
  • Primates order
  • Hominidae family
  • Homo genus
  • Homo sapiens species

More complex classifications are also possible:

  • Animalia kingdom
  • Metazoa subkingdom
  • Chordata phylum
  • Vertebrata subphylum
  • Tetrapoda superclass
  • Mammalia class
  • Theria subclass
  • Eutheria infraclass
  • Ferungulata cohort
  • Ferae superorder
  • Carnivora order
  • Fissipeda suborder
  • Canoidea superfamily
  • Canidae family
  • Caninae subfamily
  • Canis genus
  • Canis lupus (wolf) species

Though some controversies persist (including, ironically, the exact placement of our own species), the system has proved a remarkably useful tool for organizing biological knowledge.4 Other areas have more difficulty with hierarchical classification. Consider the well-known Dewey Decimal system for classifying books. Through three levels of ten-fold expansion, Melvil Dewey (1851-1931) devised 999 categories for the Amherst College library in 1873.5 His system has since been refined to provide a virtually unlimited number of subdivisions.

There is no question that the system is useful. Organizing libraries would be immensely difficult without it or some similar system (say, the Library of Congress system). Yet books do not always fall into one neat category, particularly with the rapid development of new interdisciplinary topics. It will be interesting to see how this book is classified: computer science, philosophy, history of technology, humor? Since books often deal with complex subjects that may span multiple areas of knowledge, a single position in a hierarchy is often an unsatisfactory compromise. This is not to say that hierarchical classification of organisms works perfectly: it does not. But it is interesting to note that every organism, even the lowest, has one or more "parents," which in turn had one or more parents, and so on. If we ignore interbreeding, the entire collection of all individual organisms that have ever lived forms one or more gigantic trees with trillions of leaves. There is obviously no such equally fundamental tree organization for all books.

An ambitious attempt to organize all human knowledge in a single hierarchy is contained in the Propaedia section of the fifteenth edition of the Encyclopaedia Britannica, published in 1980. The Propaedia, which describes itself as an "outline of knowledge," is an 800-page attempt to codify all knowledge, at least that contained in the remaining 30,000 pages of the encyclopedia. For example, money is found under 534D1:

  • 5 Human society
  • 53 The production, distribution, and utilization of wealth
  • 534 The organization of production and distribution
  • 534D Institutional arrangements that facilitate production and output
  • 534D1 The nature and characteristics of money




Hierarchical relationships among lateral relationships.

The Propaedia does allow for multiple classifications. Each entry, such as 534D1, will provide a number of references into the main portion of the encyclopedia. Conversely, any section in the encyclopedia is likely to have multiple references to it from the Propaedia. The Propaedia takes time to understand, but it is surprisingly successful in view of the vast scope of the material it covers.

Hierarchical classification schemes have provided a powerful organizing tool for most of the sciences. It is clear, however, that we need to go beyond treelike structures to represent most concepts. Key to the design of data structures intended to represent concepts are the cross-links (i.e., nonhierarchical relationships). Consider the structures, called semantic networks, depicted in the figures.6 The vertical lines continue to represent such hierarchical relationships as "part" and "is a." The horizontal links, however, give the concepts their distinguishing shapes. Here we see the same type of structure (with different shapes, of course) representing two very different ideas: the concept of an arch and that of a musical scale. An arch or scale can be implemented in a virtually unlimited number of ways, but all such manifestations can share the same conceptual representation.

The horizontal links are themselves concepts and can obviously be quite varied. They can also be represented by networks. The boxes, sometimes called objects, can also be either simple entities or networks. Thus, each network may refer to other networks to represent both the cross-links and the objects. The vertical lines continue to represent simple hierarchical relationships.
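
One simple way such a network can be stored is as a set of subject-relation-object triples, with hierarchical links and cross-links held in the same structure and queried in the same way. The sketch below is illustrative only; its relation names are loosely modeled on the arch example rather than copied from the figure.

```python
# A semantic network as a set of (node, relation, node) triples.
# The vertical relations ("is a", "part of") carry the hierarchy; the cross-links
# ("supports", "must not touch") give the concept of an arch its distinctive shape.

TRIPLES = {
    ("lintel",     "is a",           "block"),
    ("left post",  "is a",           "block"),
    ("right post", "is a",           "block"),
    ("lintel",     "part of",        "arch"),
    ("left post",  "part of",        "arch"),
    ("right post", "part of",        "arch"),
    ("left post",  "supports",       "lintel"),
    ("right post", "supports",       "lintel"),
    ("left post",  "must not touch", "right post"),
}

def related(node, relation):
    """All nodes reachable from `node` by a single link of the given type."""
    return {obj for subj, rel, obj in TRIPLES if subj == node and rel == relation}

def parts_of(whole):
    return {subj for subj, rel, obj in TRIPLES if rel == "part of" and obj == whole}

print(parts_of("arch"))                  # the three components of the concept
print(related("left post", "supports"))  # {'lintel'}: a cross-link, not a hierarchy
```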

Semantic networks and other similar systems are reasonably successful in representing the shape and content of abstract ideas for use in computer knowledge bases. Creating such networks is not easy, however, and this has proved to be a major bottleneck. A major focus of current AI research is to create software that can automatically build such structures from examples of a concept.7

Such data structures as semantic networks provide a formal methodology for representing a broad class of knowledge. As they are easily stored and manipulated in a computer, they are a powerful and practical tool for capturing and harnessing the patterns inherent in at least some types of ideas. Since humans routinely deal with abstract concepts in a supremely subtle way, we can infer that the data structures in our own minds must be at least as powerful as these networks (they are, in fact, far more powerful). Though little is directly known about the data structures we use, we can draw a few hints from observations of our own behavior and thought patterns.

First, it is clear that we rarely, if ever, model the relationships between entities with single links.8 Every time we experience or come into contact with a concept, we add links that reinforce the structures inherent in a concept. For a common concept we may have millions of links expressing the same or similar associations. Indeed, the key relationships may not be explicitly coded at all but rather implied by the general pattern of the data structures. This redundancy has several implications. First, these knowledge structures are not subject to catastrophic failure if parts of the hardware fail. One estimate puts at 50,000 the number of neurons that die each day in an adult human brain (and this process accelerates with age), yet our concepts and ideas do not necessarily deteriorate with the hardware. The massive number of links also helps us to appreciate both the unity and the diversity of a concept. We probably have millions of links indicating or implying that a chair generally has four legs. The link between a chair and four legs is thus strongly established. The links refer to (or evoke) experiences we have had with chairs and so relate to memories of particular chairs we have known. The diversity that is possible within the concept of a chair is thus also captured. The massive redundancy also accounts for our ability (or inability) to deal with what is called cognitive dissonance (a piece of information that appears to contradict a well established belief or understanding).9 If we suddenly experience evidence, even strong evidence, that an idea or concept that we have is invalid, we do not immediately update all of our knowledge structures to reflect this new insight. We are capable of storing this apparently contradictory idea right alongside the concepts we already had. Unless the new idea is reinforced, it will eventually die out, overwhelmed by the large number of links representing our previous conceptions. There is no evidence of a mechanism in our brains to avoid or eliminate contradictory concepts, and we are sometimes quite comfortable with ideas that appear to be incompatible. Hence, ideas that are presented repeatedly early in our life by our parents or culture are not easily modified, even if our adult experiences are apparently inconsistent. In general, it takes repeated exposure to a new idea to change our minds. This is one reason that the media are so powerful. They have the ability to reach large audiences on a repeated basis and are thus capable of having a measurable effect on our data structures.10 Presenting an idea once, even if the idea is powerful and true, does not generally have a significant impact.

There is a strong link between our emotions and our knowledge. If information is presented in a way that elicits an emotional response, we are far more likely to change our knowledge structures (and hence our minds). For this reason, television, which has far greater potential than the print media to reach most people emotionally, has a correspondingly greater impact. Television commercials are often minidramas that attempt to engage us emotionally so that the underlying message will have its desired impact.11

Our knowledge is also closely tied into our pattern-recognition capabilities. People deal with the concept of a particular person's face as easily as with the concept of an arch or a musical scale. But we have yet to devise computer-based data structures that can successfully represent the unique features of a face, although for simpler visual concepts (such as the shape of an industrial part, or the shape of the letter A) representations similar to semantic networks are actively used.

One indication that we use structures similar to semantic networks is our ability to jump from concept to concept via the crosslinks. The thought of taxonomy may lead us to thoughts of primates, which may lead to monkeys, which may lead to bananas, which may lead to nutrition, which may lead to dieting, which may lead to the upcoming Thanksgiving party, which may lead to a particular family member, and so on. We clearly have all of our semantic information, with its massively redundant links, organized in a single vast network. Individual concepts are not identified as such. It requires a great deal of disciplined thought and study to explicitly call out (and describe) individual ideas. The ability to translate our mental networks into coherent language is a difficult skill, one that we continue to struggle with even as adults.

An interesting question concerns how much knowledge we are capable of mastering. It is estimated that the human brain contains on the order of 100 billion neurons.12 We now realize that individual neurons are each capable of remembering far more than the one or several bits of data originally thought possible. One method of storing information is in the strength of each synaptic connection. A neuron can have thousands of such connections, each potentially storing an analog number. There is also speculation that certain long-term memories are chemically coded in the neuron cell bodies. If we estimate the capacity of each neuron at about 1,000 bits (and this is probably low by several orders of magnitude), that gives the brain a capacity of 100 trillion (10^14) bits. A typical computer-based semantic network requires only a few thousand bits to represent a concept. Because of the redundancy, however, our human semantic networks need much greater amounts of storage. If, as a rough guess, we assume an average redundancy factor of several tens of thousands, this gives us about 100 million bits per concept, and this yields a total capacity of 1 million concepts per human brain. It has been estimated that a "master" of a particular domain of knowledge (chess, medicine, etc.) has mastered about 50,000 concepts, which is about 5 percent of the total capacity, according to the above estimate.
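
The chain of estimates in the preceding paragraph can be written out explicitly. Every figure below is the rough order-of-magnitude guess quoted in the text, not a measurement.

```python
neurons         = 100e9    # ~10^11 neurons
bits_per_neuron = 1_000    # probably low by several orders of magnitude
brain_bits      = neurons * bits_per_neuron             # ~10^14 bits (100 trillion)

machine_bits_per_concept = 3_000    # a few thousand bits in a machine semantic network
redundancy               = 30_000   # "several tens of thousands" of redundant links
human_bits_per_concept   = machine_bits_per_concept * redundancy   # ~10^8 bits

concept_capacity = brain_bits / human_bits_per_concept   # ~10^6 concepts per brain
expert_chunks    = 50_000
print(concept_capacity, expert_chunks / concept_capacity)  # ~1 million, ~5 percent
```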

Human intelligence is not, however, a function of the number of concepts we can store, but rather of the coherency of our concepts, our ability to create meaningful concepts from the information we are exposed to, the levels of abstraction we are capable of dealing with, our ability to articulate these concepts, and perhaps most importantly, our ability to apply concepts in ways that go beyond the original information that created them. This last trait is often regarded as a key component of creativity.13 As Roger Schank and Christopher Owens point out in their article in this book, we may be able to model this essential component of creativity using AI techniques. We are beginning to understand some of the mechanisms that enable us to apply a mental concept outside of its original domain and may ultimately be able to teach computers to do the same. And while humans do excel at recognizing and applying concepts in creative ways, we are far from consistent in our ability to do so. Computers may ultimately prove far more thorough in their attempts to search all possibly relevant conceptual knowledge in the solution of a problem.14

The knowledge-search trade-off

The human brain uses a type of circuitry that is very slow. Our neurons can perform an analog computation in about 5 milliseconds, which is at least 10,000 times slower than a digital computer. On the other hand, the degree of parallelism vastly outstrips any computer architecture we have yet to design. The brain has about 100 billion neurons each with about 1,000 connections to other neurons, or about 100 trillion connections, each capable of a computation. If 1 percent of these are active, that produces 1 trillion computations in 5 milliseconds, or about 200 trillion computations per second. From this analysis Raj Reddy concludes that for such tasks as vision, language, and motor control, the brain is more powerful than 1,000 supercomputers, yet for certain simple tasks such as multiplying digital numbers, it is less powerful than the 4-bit microprocessor found in a ten-dollar calculator.15
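
Reddy's back-of-the-envelope comparison is easy to reproduce; again, every number is simply the rough estimate quoted above.

```python
neurons          = 100e9                          # ~10^11 neurons
connections_each = 1_000
connections      = neurons * connections_each     # ~10^14 connections in all
active_fraction  = 0.01                            # assume 1 percent active at a time
cycle_time       = 0.005                           # ~5 ms per analog computation

per_cycle  = connections * active_fraction         # ~10^12 computations per cycle
per_second = per_cycle / cycle_time                # ~2 x 10^14: about 200 trillion/sec
print(per_second)
```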

It is clear that the human brain is not sequentially fast enough to perform lengthy searches on the implications of its knowledge base. Yet with such a large number of neurons and an even larger number of connections, it is capable of storing a vast amount of highly organized knowledge and accessing this knowledge in parallel. Thus, a typical strategy of the human brain is to access its memory of previously analyzed situations, since it is not capable of performing a great deal of analysis on a problem in real-time. The strategy is quite different for computers using the conventional serial (that is, nonparallel) architecture. There is sufficient sequential speed to perform extensive recursive search in the problem space, but often insufficient knowledge about a particular domain to rely too heavily on previously analyzed situations.

A dramatic example of this difference is found in the game of chess, our quintessential laboratory problem for exploring approaches to intelligence. Because of the slow sequential speed of human thought, the chess master has time only to consider perhaps 100 board positions, although he is able to take advantage of a large memory of previously analyzed situations.16

The knowledge-search trade-off in chess

                                                     Human chess master     Computer chess master
Number of rules or memorized situations              30,000-100,000         200-400*
Number of board positions considered for each move   50-200                 1,000,000-100,000,000
*Many computer chess programs do have extensive libraries of starting positions, so there can be an appreciable amount of knowledge used in the early game. The figure of 200-400 rules refers to the mid and end games.

Because of his powerful pattern-recognition capabilities, the chess master can recognize similarities in these previous situations even if the match is not perfect. It is estimated that the chess master has mastered 30,000 to 100,000 such board positions. The leading machine players, while performing at comparable levels, have traditionally used very little knowledge of chess beyond the rules. HiTech, for example, uses only a few hundred heuristic rules to assist its search but is capable of examining millions of board positions for each move.

We thus find that there is a trade-off between knowledge and the computation required for recursive search, with the human and machine approaches to chess being at the opposite extreme ends of the curve.17 As Raj Reddy says, "When in doubt, sprout!" meaning that the sprouting of a recursive search tree by machine intelligence can often compensate for its relative lack of knowledge. Conversely, our human knowledge compensates for the brain's inability to conduct extensive sequential search in real-time. The figure illustrates the knowledge-search trade-off, again with the human and machine approaches to chess at opposite ends.18
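
The two ends of the trade-off can be caricatured with a game far simpler than chess: a stone-taking game in which the players alternately remove one, two, or three stones and whoever takes the last stone wins. The sketch below is a toy illustration, not HiTech or any real chess program; it plays the same position both ways, by sprouting a recursive search tree and by consulting a small table of "memorized" losing positions.

```python
# Knowledge versus search on a toy game (remove 1-3 stones; taking the last stone wins).
# The "machine" approach sprouts a recursive search tree; the "master" approach
# consults a table of memorized positions (here, the losing positions happen to be
# exactly the multiples of four).

def wins(stones):
    """True if the player to move can force a win with `stones` remaining."""
    if stones == 0:
        return False                   # the previous player took the last stone
    return any(not wins(stones - take) for take in (1, 2, 3) if take <= stones)

def search_move(stones):
    """Pure search: try every move, recursively evaluating the opponent's replies."""
    for take in (1, 2, 3):
        if take <= stones and not wins(stones - take):
            return take                # leave the opponent in a losing position
    return 1                           # no winning move exists; take one and hope

MEMORIZED_LOSING = {n for n in range(0, 101, 4)}    # the "knowledge" table

def knowledge_move(stones):
    """Knowledge: pick the move that leaves a position known to be lost."""
    for take in (1, 2, 3):
        if take <= stones and (stones - take) in MEMORIZED_LOSING:
            return take
    return 1

print(search_move(10), knowledge_move(10))   # both answer 2, by very different routes
```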

This brings up the issue of how much knowledge is enough. Interestingly, in areas of knowledge as diverse as the size of an expert's vocabulary, the number of symptom-illness correspondences known by a medical specialist, and the number of board positions memorized by a chess master, the number of "chunks" of knowledge possessed by a human master of any particular discipline appears to be in the range of 30,000 to 100,000.19 We are also finding that a comparable number of production rules are needed in our machine-based expert systems in order to provide for sufficient depth of coverage and avoid the fragility that was evident in the first generation of such systems. This realization about the long-term memory of a human expert contrasts with observations of human short-term memory, which appears to be about 10,000 times smaller.

These insights are encouraging with regard to the long-term outlook for machine intelligence. Machine intelligence is inherently superior to human intelligence in its ability to perform high-speed sequential search. As our machine methods for representing, learning, and retrieving chunks of knowledge improve, there is no reason why our computers cannot eventually exceed the approximate 100,000 chunk limit of human knowledge within specific domains. Another advantage of machine intelligence is the relative ease of sharing knowledge bases. Humans are capable of sharing knowledge, but this requires a slow process of human communication and learning. Computers can quickly and efficiently pool their knowledge bases.

The so-called chunking of knowledge has become a major issue in AI research. A recent system called SOAR, created by Allen Newell and his colleagues, is able to automatically create its own chunks of knowledge and thereby learn from experience.20 It is capable of performing recursive search but also takes advantage of knowledge derived from a number of sources, including its own analysis of its previous searches, information provided directly by its human teachers, and corrections of its own judgements. As it gains more experience in a particular problem area, it is indeed able to answer questions more quickly and with greater accuracy.21
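
SOAR's chunking mechanism is considerably more elaborate, but the underlying economy can be suggested with ordinary memoization: once a search has settled a subproblem, the result is stored as a "chunk" and never searched for again. The sketch below applies the idea to the toy stone-taking game used earlier; it is offered only as an analogy, not as a description of SOAR's architecture.

```python
# Chunking in miniature: cache the value of every position the search settles,
# so that a later problem reuses the cached "chunks" instead of re-sprouting
# the same subtree. (Ordinary memoization; SOAR's real chunks are learned rules.)

CHUNKS = {}      # position -> True/False, built up from experience
examined = 0     # how many positions have had to be worked out from scratch

def wins(stones):
    """Can the player to move force a win in the 1-2-3 stone-taking game?"""
    global examined
    if stones in CHUNKS:
        return CHUNKS[stones]          # answered from a previously learned chunk
    examined += 1
    result = stones > 0 and any(not wins(stones - t) for t in (1, 2, 3) if t <= stones)
    CHUNKS[stones] = result            # learn the chunk for next time
    return result

wins(30)
first = examined                       # first problem: every position worked out
wins(40)
print(first, examined - first)         # second problem: only the new positions
```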

The organization of an expert system

Expert systems are intended to replicate the decision making of a human expert within narrowly defined domains. Such systems have three primary components: a knowledge base, decision rules, and an inference engine.22

The knowledge base is intended to capture the ideas and concepts inherent in the domain in question. As mentioned above, creating such knowledge bases is a difficult and painstaking process. Although creating such knowledge bases automatically (or semiautomatically) is a focus of AI research, the knowledge bases of most expert systems in use today have been created link by link by human "knowledge engineers." Knowledge bases often incorporate structures similar to those discussed above to represent the concepts and relationships between concepts that are important to the domain. They may also include data bases of information with a more uniform structure.

The decision rules describe the methods to make decisions. In XCON, a system which configures computers for Digital Equipment Corporation, a typical rule may state that if a particular computer model needs more than six expansion boards, it requires an expansion chassis.23 If there is an expansion chassis, then there must also be certain expansion cables, and so on. As of 1987 XCON incorporated over 10,000 such rules. It reportedly is doing the work of over 300 human experts with substantially higher accuracy. XCON was developed as an expert system only after several earlier attempts using more conventional programming methodologies had failed.24
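
XCON's actual rule base is proprietary and vastly larger, but the flavor of such configuration rules can be suggested with a tiny forward-chaining interpreter. The rules below are invented for illustration, loosely following the expansion-chassis example, and the cable part name they mention is hypothetical.

```python
# A miniature forward-chaining rule interpreter. Each rule pairs a condition on the
# current facts with the facts it adds; the engine keeps firing rules until nothing
# new can be concluded. (Invented rules in the spirit of the example in the text.)

RULES = [
    (lambda f: f.get("expansion boards", 0) > 6, {"needs expansion chassis": True}),
    (lambda f: f.get("needs expansion chassis"), {"needs expansion cables": True}),
    (lambda f: f.get("needs expansion cables"),  {"add cable kit CK-1 (hypothetical part)": True}),
]

def configure(facts):
    facts = dict(facts)
    changed = True
    while changed:                     # keep firing rules until quiescence
        changed = False
        for condition, conclusions in RULES:
            if condition(facts) and not all(facts.get(k) == v for k, v in conclusions.items()):
                facts.update(conclusions)
                changed = True
    return facts

print(configure({"expansion boards": 8}))   # chassis, cables, and cable kit all follow
```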

The first generation of expert systems has used hard decision rules with firm antecedents and certain consequences. Most human decision making, on the other hand, is based on uncertain and fragmentary observations and less than certain implications. If, as humans, we failed to consider information unless it was relatively certain, we would be left with very little basis on which to make most decisions. A branch of mathematics called fuzzy logic has emerged to provide an optimal basis for using probabilistic rules and observations.25
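
One small illustration of the contrast with hard rules is a fuzzy rule whose antecedents are matters of degree rather than yes-or-no tests. The membership function, thresholds, and medical details below are arbitrary choices made for the example, not drawn from any real system.

```python
# A hard rule either fires or it does not; a fuzzy rule fires to a degree.
# Here "high fever" is a matter of degree, and the conclusion inherits the degree
# of the weakest antecedent (a common fuzzy-logic convention).

def high_fever(temp_c):
    """Membership in 'high fever': 0 below 37.5 C, 1 above 39.5 C, linear in between."""
    return min(1.0, max(0.0, (temp_c - 37.5) / 2.0))

def severe_cough(rating):
    """The patient's own 0-10 cough rating, rescaled to a 0-1 degree."""
    return min(1.0, max(0.0, rating / 10.0))

def flu_suspected(temp_c, cough_rating):
    # IF fever is high AND cough is severe THEN flu is suspected,
    # to the degree of the weaker of the two conditions.
    return min(high_fever(temp_c), severe_cough(cough_rating))

print(flu_suspected(38.6, 7))   # a partial match: fires with degree of about 0.55
print(flu_suspected(36.8, 9))   # no fever at all: degree 0.0
```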

The third component of the expert system, the inference engine, is a system for applying the rules to the knowledge base to make decisions. Many expert systems in use today use standard serial computers (mainframes, minicomputers, and personal computers) with special software to perform the deductive and inductive reasoning required by the decision rules. For the more sophisticated expert systems now being created, this approach will not be fast enough in many cases. The number of inferences that must be considered (that is, the number of rules that must be applied to each concept or datum in the knowledge base) explodes exponentially as both the knowledge base and the number of rules expand. Systems created in the early 1980s typically included several hundred to several thousand rules with similarly modest knowledge bases. For systems now being created with tens of thousands of rules and far more extensive knowledge bases, serial computers are often too slow to provide acceptable response times. Specialized inference engines incorporating substantial (and eventually massive) parallelism are being constructed to provide the requisite computing power. As noted below, the Japanese fifth-generation computer project foresees the personal computer of the 1990s as containing extremely high-speed, highly parallel inference engines capable of rapidly manipulating abstract concepts.26

Photo by Lou Jones www.fotojones.com
Toshi Doi, pioneer of the compact disc. Doi is developing an intelligent workstation called NES.

Putting Knowledge to Work

Zeppo: We've got to think!
Chico (with a dismissive hand gesture): Nah, we already tried dat.
The Marx brothers, as quoted in Mind Tools by Rudy Rucker.
Knowledge has an important property. When you give it away, you don't lose it.
Raj Reddy, Foundations and Grand Challenges of Artificial Intelligence (1988)

The actual codifying of scientific and professional knowledge in terms that a computer could understand began in the mid 1960s and became a major focus of AI research in the 1970s. Edward Feigenbaum, Bruce Buchanan, and their colleagues at Stanford University were early pioneers in the effort to establish the field now known as knowledge engineering.27 Their first effort, called DENDRAL, was one of the first expert systems. Begun in 1965 and developed throughout the 1970s, it embodied extensive knowledge of molecular-structure analysis, which was selected simply as an illustration of the ability of a computer-based system to master an area of scientific knowledge. A primary goal of the project was to investigate questions in the philosophy of science, including the construction and validity of hypotheses. It emerged nonetheless as a tool of practical value in university and industrial laboratories. Much of the interest in, and methodology of, expert systems, which spawned a major industry during the 1980s, can be traced to DENDRAL.28

A follow-on project, Meta-DENDRAL, was an early attempt to break the knowledge-learning bottleneck. Presented with data about new chemical compounds, Meta-DENDRAL was capable of automatically devising new rules for DENDRAL. Problems that the Meta-DENDRAL team faced included dealing with partial and often inaccurate data and the fact that multiple concepts were often intertwined in the same set of data. The problem of learning was by no means solved by the Meta-DENDRAL workers, but they did ease the job of the human knowledge-engineer, and many of Meta-DENDRAL's techniques are still the focus of learning research today.29

Expert systems in medicine

With the success of DENDRAL and Meta-DENDRAL by the early 1970s, Feigenbaum, Buchanan, and others became interested in applying similar techniques to a broader set of applications and formed the Heuristic Programming Project, which today goes by the name of the Knowledge Systems Laboratory. Perhaps their best-known effort, MYCIN, an expert system to diagnose and recommend remedial treatment for infectious diseases, was developed throughout the 1970s. In 1979 nine researchers reported in the Journal of the American Medical Association a comparison of MYCIN's ability to evaluate complex cases involving meningitis to that of human doctors.30 In what has become a landmark study, ten cases of infection were selected, including three viral, one tubercular, one fungal, and one bacterial. For each of these cases, diagnoses and treatment recommendations were obtained from MYCIN, a Stanford Infectious Disease faculty member, a resident, and a medical student. A team of evaluators compared the diagnoses and recommendations for all of the cases, without knowledge of who (or what) had written them, against the actual course of the disease and the actual therapy followed. According to the evaluators, MYCIN did as well as or better than any of the human doctors. Although the domain of this evaluation was limited, the conclusion received a great deal of attention from both the medical and computer communities, as well as the general media. If a computer could match human intelligence in medical diagnosis and treatment recommendation, albeit within a limited area of specialization, there appeared to be no reason why the domains could not be substantially broadened. It was also evident that such systems could eventually improve on human judgement in terms of the consistent application of the vast (and rapidly increasing) quantity of medical knowledge.

MYCIN's success resulted in a high level of interest and confidence in expert systems.31 A sizeable industry was created in the decade that followed.32 According to DM Data, the market research firm, the expert-system industry grew from $4 million in 1981 to $400 million in 1988 and an estimated $800 million in 1990.33

Beyond the attention it focused on the discipline of knowledge engineering, MYCIN was significant in other ways. It introduced the now-standard methodology of a separate knowledge base and inference engine, as well as the recursive, goal-directed algorithms of the inference engine.34 Further, MYCIN did not just give diagnostic conclusions; it could also explain its reasoning and cite sources in the medical literature. Of major significance was MYCIN's use of its own version of fuzzy logic, that is, reasoning based on uncertain evidence and rules, as shown in the following rule, which also includes justification and reference:

MYCIN Rule 280

IF:

The infection which requires therapy is meningitis, and

The type of the infection is fungal, and

Organisms were not seen on the stain of the culture, and

The patient is not a compromised host, and

The patient has been to an area that is endemic for coccidiomycoses, and

The race of the patient is one of: Black, Asian, Indian, and

The cryptococcal antigen in the csf was not positive

THEN:

There is suggestive evidence (.5) that cryptococcus is not one of the organisms (other than those seen on cultures or smears) which might be causing the infection.

AUTHOR:

YU

JUSTIFICATION:

Dark-skinned races, especially Filipino, Asian and Black (in that order) have an increased susceptibility to coccidiomycoses meningitis.

LITERATURE:

Stevens et al. Miconazole in Coccidiomycosis. Am. J. Med. 60:191-202, Feb 1976.
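
The rule above attaches a certainty of .5 to its conclusion; MYCIN's inference engine chained such rules backward from a diagnostic goal toward the available evidence. The sketch below suggests that goal-directed control structure with invented rules and a deliberately naive way of combining certainties; it is not MYCIN's actual algorithm or medical content.

```python
# Goal-directed (backward-chaining) inference with certainty factors.
# To establish a goal, find rules that conclude it, recursively establish their
# premises, and scale each rule's certainty by its weakest premise.
# (Toy rules and a simplistic combination scheme, for illustration only.)

RULES = [
    # (premises, conclusion, certainty attached to the rule)
    (["fever", "stiff neck"],          "meningitis suspected", 0.7),
    (["meningitis suspected", "rash"], "bacterial cause",      0.5),
]

EVIDENCE = {"fever": 1.0, "stiff neck": 0.8, "rash": 0.6}   # observations, with certainties

def certainty(goal):
    """Certainty of `goal`, from direct evidence or by chaining backward through rules."""
    if goal in EVIDENCE:
        return EVIDENCE[goal]
    best = 0.0
    for premises, conclusion, cf in RULES:
        if conclusion == goal:
            support = min(certainty(p) for p in premises)   # weakest premise limits the rule
            best = max(best, cf * support)
    return best

print(certainty("bacterial cause"))   # 0.5 * min(0.7 * 0.8, 0.6) = 0.28
```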

In the mid to late 1970s a variety of enhancements for MYCIN were created, including TEIRESIAS, a knowledge-building tool, and EMYCIN (for Essential MYCIN), a portable version (that is, applicable to other applications) of MYCIN's inference engine.35 These tools were applied to other tasks, including a system called PUFF for interpreting pulmonary tests.36 During the 1980s the techniques established by MYCIN were applied to more sophisticated systems, including NeoMYCIN, a consultation system with a broader domain of medical knowledge than MYCIN, and ONCOCIN, a system for the design of oncology (cancer therapy) chemotherapy protocols.37 Innovations introduced by these later systems include hierarchical knowledge bases that go beyond decision rules and the ability to engage in a dialog with the user to elicit symptoms, call for additional tests and input, and clarify reasoning.38

A particularly ambitious system called CADUCEUS (formerly Internist) was developed at the University of Pittsburgh for diagnosis within the broad domain of internal medicine. As of 1982 CADUCEUS contained over 100,000 associations between symptoms and diseases and covered an estimated 70 percent of the relevant knowledge in the field.39 The performance of CADUCEUS was compared to that of human doctors on cases drawn from the New England Journal of Medicine. In a wide range of diagnostic categories CADUCEUS was actually more accurate than the human physicians.40

A typical dialog with the EMYCIN leukemia diagnosis system. (Courtesy of the Imperial Cancer Research Fund Laboratories, London)

October 4, 1984 4:15PM in (EMYCIN) EMYCIN.EXE.20515 by J-Fox

4-Oct-84 16:14:42

PERSON-1

(1) What is the person called

** ANNIE BODDIE

(2) What sex is Annie Boddie

** FEMALE

(3) How old is Annie Boddie

** 3

The first sample of Annie Boddie will be referred to as:

SAMPLE-1

(4) What was SAMPLE-1 (blood, marrow, etc)

** MARROW

(5) What is the blast count in SAMPLE-1

** UNK

(6) What was the Tdt result

** 20

(7) What is the C-ALL result

** 0

(8) What was the clinical diagnosis for Annie Boddie

** UNK

(9) What is the Sm lg result

** 50

(10) What was the mouse rosette result

** UNK

(11) Is there a mediastinal mass

** UNK

(12) What was the Sheep rosette result

** UNK

(13) What was the HLA-DR result

** UNK

(14) What was the Kappa result

** WHY [I.e., why is it important to determine the percentage of Kappa-positive cells?]

[1.0] This will aid in determining the leukemic cell type. It has already been established that

[1.1] the leukemic cell type is b-cell

Therefore, if

[1.2] (1) The percentage of Kappa positive cells divided by the percentage lambda positive cells

is greater than or equal to 6, or

(2) The percentage lambda positive cells divided by the percentage of Kappa positive cells

is greater than or equal to 3

then there is weakly suggestive evidence (.4) that the leukemic cell type is monoclonal-b-cell

[RULE 032]

[Back to question 14.]

** WHY [I.e., why is it important to determine the leukemic cell type?]

[2.0] The leukemic cell type is one of the 2 key parameters associated with a SAMPLE. The value

of this parameter is always determined as soon as a SAMPLE is mentioned.

[2.1] The leukemic cell type is very important in determining the diagnosis [as, for example, in

RULE 055].

[Back to question 14.]

** 62

(15) What was the Lambda result

** 62

(16) Are there any other samples of Annie Boddie

** NO

I was unable to make any conclusion about the diagnosis.

A dialog with TEIRESIAS, a computerized investment counselor. (Taken from Jean-Louis Lauriere, "Knowledge Representation and Use. Part 1, Expert Systems," Technology and Science of Informatics 1 [1983], no. 1:15)

TEIRESIAS: The following investments seem to be the most suitable at the moment: Varian Corporation and American Telephone & Telegraph (AT&T). Is that correct?

Expert: No.

TEIRESIAS: Are there any missing?

Expert: Yes, Digital Equipment and Data General.

TEIRESIAS: Are there any which should not be present?

Expert: Yes, AT&T.

TEIRESIAS: Would you like me to guide the error search?

Expert: Yes.

TEIRESIAS: Let's begin then with AT&T. I used rule 27 to reach that conclusion: As (1) the investment field was not defined, (2) expected profits are above 10%, (3) the investment must be long term, so AT&T is a plausible choice (0.4). Is the rule correct?

Expert: Yes.

TEIRESIAS: Is there a premiss which should not have been evaluated as true?

Expert: Yes, number 1.

TEIRESIAS: I came to no conclusion on the issue of the field of investment. Should I have come up with something precise?

Expert: Yes. Advanced technology

TEIRESIAS: Tests on all rules which might have concluded in favor of the advanced technology field all failed because marked premisses were false....Were these two rules correct?

Expert: Yes.

TEIRESIAS: Should either of them not have been applied?

Expert: No.

TEIRESIAS: Is there a rule missing?

Expert: Yes.

TEIRESIAS: OK, so I need a rule which will allow me to deduce that the field should have been advanced technology. Would you like to give me a rule of that kind?

Expert: If the client's tax bracket is 80%, if the client is following up on market trend carefully, then it is obvious (0.8) that the investment should be made in high technology.

TEIRESIAS: The new rule is called rule 383. This is how I understand it: If (1) the client's tax payments are 50%, if (2) the market has followed an upward trend recently, if (3) the client manages his assets carefully, then the investment should be made in high technology (0.8). Is that right?

Expert: No.

TEIRESIAS: Should the premisses be changed?

Expert: Yes, change (2).

TEIRESIAS: What if (2) were changed into '(2) the client follows the market carefully', would that do?

Expert: Yes.

TEIRESIAS: Then you will doubtless want to eliminate (3). Right?

Expert: Yes.

TEIRESIAS: (Repetition of R383, then:) I don't like to criticize, Randy, but do you realize that usually, rules affecting investment fields which mention the client's tax bracket and the way he follows the market also mention the client's experience in making investments? Should I attempt to formulate a third premiss?

Expert: Yes.

TEIRESIAS: What do you think of: the client has moderate degree of experience in investment?

Expert: OK.

TEIRESIAS: Most of those rules also give a conclusion on the desired level of risk for the investment. Should I attempt to give that kind of conclusion here?

Expert: No.

TEIRESIAS: (Repetition of corrected R383, then:) Please tell me briefly why you have added this rule.

Expert: The system had not understood that it could specify the field of investment, which led to rule 27 being applied incorrectly.

TEIRESIAS: R383 has been added to the knowledge base. The consultation will start again to test whether this rule is in fact well founded or not. Make yourself comfortable, this could take some time...Results, suggested investments: Varian Corporation, Data General, Digital Equipment. Good, it looks like everything has been corrected, doesn't it?

Expert: Yes.

It is not surprising that medicine would be the first professional domain to attract significant knowledge-engineering attention. Medical knowledge is highly organized, and the linkage of symptoms and test results to diagnoses and remedial action is described in the medical literature in great detail. Although the knowledge-engineering process remains painstaking, the knowledge is sufficiently well organized for expert-system techniques. The systems described above are among dozens of medical expert systems in existence or under development. Some of these systems are already contributing to medical care.41 PUFF, for example, is routinely used at the Pacific Medical Center in San Francisco. Its reports are screened by an attending pulmonary physiologist, who modifies no more than 5 percent of PUFF's reports. Many patients have had their cancer-treatment programs recommended by ONCOCIN, which are also carefully reviewed. Although a large amount of development of medical expert systems has been done and their actual use as advisors is beginning, these systems have still had relatively little impact on medical practice in general. There are four reasons for this. First, the medical community is understandably conservative with regard to any new technology, let alone an entirely new approach to diagnosis. Even as consultants, these cybernetic diagnosticians will have to prove themselves repeatedly before they are given major roles. Second, the hardware these systems require has been expensive up until recently. This factor is rapidly vanishing: the latest generation of personal computers has sufficient power to run many of these applications. Third, many of these systems are not sufficiently developed for practical use. They are often "brittle," falling apart when cases fall outside of their narrow areas of expertise. In order to provide a consistent and predictable level of service, the knowledge bases require enormous refinement.42 The most advanced systems, such as CADUCEUS, are now beginning to reach sufficient depths of medical knowledge. Finally, the issue of competition from a new type of medical expert (one without a medical degree) is a real issue. Since the medical profession has effective control over the delivery of medical services, resistance to any new type of competition is not surprising. To succeed, the marketers of such services will need to position them clearly as tools that will make their users more competitive with their human peers.

Expert systems in other fields

Considering the gravity of many medical decisions, the slow pace of acceptance of expert systems in medicine is understandable. In the financial field, the situation is quite different. Here expert-system technology is often enthusiastically adopted. This too is perhaps not surprising. Financial professionals are used to imperfect decision making. They expect a certain fraction of decisions to be wrong. Practical demonstrations that a financial expert system can match and often exceed the quality of human decision making within a well defined area of expertise are more likely to be readily accepted.43

For example, Syntelligence has created a system for the insurance industry called Underwriting Advisor, which provides advice on risk analysis and pricing for various lines of insurance, including workers' compensation, property, liability, and other types. A particularly successful system is the Authorizer's Assistant, which reduces the transaction time of an American Express credit authorizer by 20 to 30 percent and has reportedly reduced expensive bad judgements by about 75 percent.

Applications are proliferating in a broad variety of fields. A system developed at SRI International called Prospector received significant attention when it was reported to have identified the location of a porphyry molybdenum deposit from readily available predrilling exploration data.44 Although this particular discovery had more scientific than economic significance, the potential of expert systems to improve the odds for the very expensive gambles involved in natural-resource discovery has fueled substantial investment in knowledge engineering by the energy and minerals industries.

A conversation with Prospector. (Human responses start with a double asterisk.) (Courtesy of Edward Feigenbaum)

I am considering the possibility of a (Type-A porphyry copper deposit-PCDA) in the target area.

The following questions are intended to establish the nature of the petrotectonic setting.

1-To what degree do you believe that: (the target area is in a continental margin mobile belt)?

** -2

The following questions are intended to determine the nature of the regional environment.

2-To what degree do you believe that: (there are granitic intrusives in the region)?

**5

3-To what degree do you believe that: (the region contains an abundance of small stocks)?

**-3

....

8-To what degree do you believe that: (igneous rocks in the region have porphyritic texture)?

**Why

I am trying to establish whether some of the intrusive rocks in your area have textures suggestive of a hypabyssal to subvolcanic environment. Porphyry textures resulting from the rapid cooling and crystallization of a silicate melt which contains large mineral grains are suggestive of relatively shallow levels of emplacement.

8-To what degree do you believe that: (igneous rocks in the region have porphyritic texture)?

**5

My certainty in (the texture and morphology of the intrusive system suggest a subvolcanic (hypabyssal) regional environment) is now: 4.924.

Do you want to pursue this further?

**No

Teknowledge is creating a system for General Motors that will assist garage mechanics in quickly diagnosing and repairing engine-control problems in GM's fuel-injected engines. With the increasing complexity of modern automobile designs, keeping the skills and knowledge of mechanics up to date has become a major problem. A computer-based system is an efficient way to provide continually updated expertise to thousands of locations around the world. A similar system called the Service Bay Diagnostic System, being developed by the Carnegie Group for Ford, is described in the article by Jeff Pepper in this book.

Digital Equipment Corporation's success with XCON has resulted in an extensive development program of expert systems encompassing most aspects of DEC's business, including XSEL (expert selling tool), an expert system that matches DEC products to customer needs; ISA (Intelligent Scheduling Assistant), which schedules manufacturing and shop floor activity; IDT (Intelligent Diagnostic Tool), which assists human technicians in diagnosing manufactured products that fail verification tests; and NTC (Network Troubleshooting Consultant), which diagnoses computer network problems.45

A survey conducted by the Applied Artificial Intelligence Reporter showed that the number of working expert systems swelled from 700 at the end of 1986 to 1,900 by the end of 1987. There were 7,000 systems under development at the end of 1986 and 13,000 at the end of 1987. This has swelled to tens of thousands of systems in 1990. The most popular application area is finance, with manufacturing control second and fault diagnosis third. With the number of expert systems rapidly expanding, a significant portion of the revenue of the expert-system industry comes from selling the tools to create expert systems. These include specialized languages like PROLOG and knowledge-engineering environments, sometimes called shells, such as Knowledge Craft from the Carnegie Group, ART (Automated Reasoning Tool) from Inference Corporation, and KEE (Knowledge Engineering Environment) from IntelliCorp.46

Some of the more advanced applications are being created by the U.S. Defense Department as part of its Strategic Computing Initiative (SCI)-not to be confused with the more controversial Strategic Defense Initiative (SDI), although both SCI and SDI involve extensive application of pattern-recognition and knowledge-engineering technologies.47 SCI envisions the creation of four advanced prototypes. The Autonomous Land Vehicle is an unmanned robotic vehicle that can avoid obstacles, conduct tactical maneuvers and carry out attack and defense plans while traveling at speeds of up to 40 miles per hour. Its on-board expert system is expected to carry out 7,000 inferences (applications of rules) per second. This will clearly require parallel processing, as typical expert systems implemented on serial computers rarely achieve speeds of greater than 100 inferences per second. SCI's Vision System will provide real-time analysis of imaging data from intelligent weapons and reconnaissance aircraft. It is expected to use computers providing 10 to 100 billion instructions per second, which will require massive parallel processing. The Pilot's Associate will be an integrated set of nine expert systems designed to provide a wide range of services to pilots in combat situations, including the planning of mission tactics, monitoring the status of key systems, navigation, and targeting.48 The system contemplates communication with the pilot via speech recognition and advanced visual displays. Finally, SCI's Battle Management System will assist the commanders of naval aircraft in coordinating resources and tactics during conflicts.

One issue confronting defense applications is the complexity of the software systems required and the inherent difficulty of testing such systems under realistic conditions. The core avionics of a typical fighter aircraft in the late 1980s used about 200,000 lines of software; the fighter aircraft of the early 1990s is expected to require about a million lines. Altogether, it is estimated that the U.S. military in 1990 uses about 100 million lines of code, which is expected to double within five years. Assuring the quality of such extensive software systems is an urgent problem that planners are only beginning to address.

As we approach the next generation of expert systems, three bottlenecks have been recognized and are being attacked by researchers in the United States, Europe, and Japan. Inference engines that use parallel processing are being developed to process the substantially larger knowledge and rule bases now being built. The number of inferences per second that need to be handled often grows exponentially with the size of the knowledge base. With expert systems being applied to such real-time situations as monitoring nuclear plants and coordinating weapons systems, the need for rapid response is obvious.49

Photo by Lou Jones www.fotojones.com
Intelligent chip design. At an NEC research lab in Kawasaki, Japan, a computer-aided design (CAD) system generates a blueprint for a VLSI chip.

Perhaps the most important issue being addressed is learning.50 Building knowledge bases by hand is extremely painstaking. Methods for automating this process have become a major focus of both academic and commercial development. EURISKO, a system developed at Stanford's Knowledge Systems Laboratory and now being refined at the Microelectronics and Computer Technology Corporation, has demonstrated an ability to generate its own rules as it solves problems and receives feedback from users.51 Several new expert systems are capable of growing their own knowledge bases simply by observing human decision makers doing their jobs. This is particularly feasible in financial applications, since in most cases financial decisions, as well as the data they are based on, are already easily accessed by computer communications.

Finally, another major issue is the proper use of uncertain and fragmentary information. It is well known that effective use of such information is vital to most human decision making, but recent research by Robert Hink and David Woods at Expert Systems Design has cast some doubt on just how skilled humans are at it.52 Hink and Woods found that for most people, including human experts within their domains, the ability to balance risks and make optimal decisions involving probabilistic outcomes is surprisingly low.53 For example, in some instances subjects would choose an opportunity for a large reward with a small probability over a smaller but certain reward even though the expected value (the reward multiplied by the probability of receiving it) of the larger reward was substantially lower. One might ascribe this behavior to an enjoyment of risk taking and a general desire to reach for large rewards. In other instances these same subjects would select the more conservative option even though in these cases its expected value was less. The decision-making patterns were neither optimal nor consistent. Often the way in which questions were worded had more effect on the choices than the underlying facts. These findings are consistent with the commonsense observation that some people display consistently better performance at certain types of tasks, even when considerable uncertainty is involved. Classic examples are card games such as poker, in which skilled players almost always come out on top, even though a significant element of chance is involved. Such players are obviously able to apply probabilistic rules in a more consistent and methodologically sound manner than their opponents. Hink and Woods found that the validity and coherency of most people's decision making, even within their own areas of professional competence, was dramatically lower than they had expected. The significance of this observation for the knowledge-engineering community is mixed. The bad news is that devising sound knowledge bases and rules will remain difficult, since even the human domain experts are not very effective at applying them and are frequently unaware of their own decision rules. The good news is that there is considerable opportunity for computer-based expert systems to improve on human decision-making in those domains where the knowledge and decision-making processes can be captured and applied.54
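
The kind of inconsistency Hink and Woods describe is easy to make concrete; the dollar figures below are invented.

```python
p, prize = 0.10, 400.0        # a 10 percent chance at $400
certain  = 100.0              # versus a guaranteed $100
expected = p * prize          # expected value of the gamble: $40
print(expected < certain)     # True: the gamble is worth less on average,
                              # yet subjects in such studies often choose it anyway
```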

Language: The Expression of Knowledge

Gracie: A truck hit Willy.
George: What truck?
Gracie: The truck that didn't have its lights on.
George: Why didn't it have its lights on?
Gracie: It didn't have to. It was daytime.
George: Why didn't the truck driver see Willy?
Gracie: He didn't know it was Willy.
A comedy routine of Burns and Allen, quoted by Roger Schank to illustrate problems of language comprehension
This is the cheese that the rat that the cat that the dog chased bit ate.
Marvin Minsky, The Society of Mind (citing a valid sentence that humans have difficulty parsing)
Squad Helps Dog Bite Victim
Book of Newspaper Headline Gaffes
Birds can fly, unless they are penguins and ostriches, or if they happen to be dead, or have broken wings, or are confined to cages, or have their feet stuck in cement, or have undergone experiences so dreadful as to render them psychologically incapable of flight.
Marvin Minsky, The Society of Mind (illustrating the difficulty of accurately expressing knowledge)
"Okay, where did you hide it?"
"Hide what?"
"You know."
"Where do you think?"
"Oh."
A "married conversation" cited by Nicholas Negroponte of the MIT Media Lab to illustrate the importance of shared knowledge in interpreting natural language
No knowledge is entirely reducible to words, and no knowledge is entirely ineffable.
Seymour Papert, Mindstorms

Students of human thought and the thinking process have always paid special attention to human language.55 Our ability to express and understand language is often cited as a principal differentiating characteristic of our species. Language is the means by which we share our knowledge. Though we have only very limited access to the actual circuits and algorithms embedded in our brains, language itself is quite visible. Studying the structures and methods of language gives us a readily accessible laboratory for studying the structure of human knowledge and the thinking process behind it. Work in this laboratory shows, not surprisingly, that language is no less complex or subtle a phenomenon than the knowledge it seeks to transmit.

There are several levels of structure in human language. The first to be actively studied was syntax, the rules governing the ways in which words are arranged and the roles words play. Syntactic rules govern the placement of nouns, verbs, and other word types in a sentence and also control verb conjugation and other elements of sentence structure. Although human language is far more complex, similarities in the syntactic structures of human and computer languages were noticed by AI researchers in the 1960s. An early AI goal was to give a computer the ability to parse natural language sentences into the type of sentence diagrams that grade-school children learn. One of the first such systems, developed in 1963 by Susumu Kuno of Harvard, was interesting in its revelation of the depth of ambiguity in the English language. Kuno asked his computerized parser what the sentence "Time flies like an arrow" means. In what has become a famous response, the computer replied that it was not quite sure. It might mean (1) that time passes as quickly as an arrow passes. Or maybe (2) it is a command telling us to time the flies the same way that an arrow times flies; that is, "Time flies like an arrow would." Or (3) it could be a command telling us to time only those flies that are similar to arrows; that is, "Time flies that are like an arrow." Or perhaps (4) it means that the type of flies known as "time flies" have a fondness for arrows: "Time-flies like arrows."56
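
The kind of ambiguity Kuno's parser uncovered is easy to reproduce with a small context-free grammar. The sketch below uses a deliberately ambiguous toy grammar of my own, written for the freely available NLTK toolkit rather than Kuno's system, and yields three distinct parse trees for the same five words, corresponding to the declarative, imperative, and "time-flies" readings:

    import nltk

    # A deliberately ambiguous toy grammar: "time", "flies", and "like" can each
    # play more than one grammatical role, just as they do in English.
    grammar = nltk.CFG.fromstring("""
        S  -> NP VP | VP
        NP -> N | N N | Det N
        VP -> V PP | V NP PP | V NP
        PP -> P NP
        Det -> 'an'
        N  -> 'time' | 'flies' | 'arrow'
        V  -> 'time' | 'flies' | 'like'
        P  -> 'like'
    """)

    parser = nltk.ChartParser(grammar)
    for tree in parser.parse("time flies like an arrow".split()):
        print(tree)   # one tree per syntactically valid reading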

It became clear from this and other syntactical ambiguities that understanding language requires both an understanding of the relationships between words and knowledge of the concepts underlying words.57 It is impossible to understand the sentence about time (or even to understand that the sentence is indeed talking about time and not flies) without a mastery of the knowledge structures that represent what we know about time, flies, arrows, and how these concepts relate to one another. An AI technology called semantic analysis attempts to apply knowledge of the concepts associated with words to the problem of language understanding. A system armed with this type of information would note that flies are not similar to arrows (which would thus knock out the third interpretation above). (Often there is more than one way to resolve language ambiguities. The third interpretation could also have been syntactically resolved by noting that "like" in the sense of "similar to" ordinarily requires number agreement between the two objects that are compared.) It would also note that there is no type of flies known as time flies (which would probably knock out the fourth interpretation). Another type of analysis, known as pragmatic analysis, attempts to apply the vast wealth of practical knowledge about the world to further resolve ambiguities. In applying this technique to our previous results, we use such tidbits of knowledge as that flies have never shown a fondness for arrows, and that arrows cannot and do not time anything, much less flies, to select the first interpretation as the only plausible one.

Photo by Lou Jones www.fotojones.com
A Japanese display at the Tokyo Institute of Technology.
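
A rough sense of how such semantic and pragmatic checks prune candidate readings can be conveyed in a few lines of code. The sketch below relies on a toy knowledge base of my own invention, not on any system described in this chapter, to rule out the interpretations that conflict with what we know about flies, arrows, and timing:

    # A toy "knowledge base": none of the facts needed to support readings 2-4
    # of "Time flies like an arrow" are present.
    similar_things = set()                            # no record that flies resemble arrows
    known_fly_species = {"fruit fly", "house fly"}    # no species called "time fly"
    things_that_can_time = {"person", "stopwatch"}    # arrows cannot time anything

    interpretations = {
        1: "time passes as quickly as an arrow",
        2: "time the flies the way an arrow would",
        3: "time only those flies that are like an arrow",
        4: "time-flies are fond of arrows",
    }

    def plausible(n):
        if n == 2:
            return "arrow" in things_that_can_time       # pragmatic check
        if n == 3:
            return ("fly", "arrow") in similar_things    # semantic check
        if n == 4:
            return "time fly" in known_fly_species       # semantic check
        return True

    print({n: text for n, text in interpretations.items() if plausible(n)})
    # Only interpretation 1 survives.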

The ambiguity of language is far greater than may be apparent. During a language parsing project at the MIT Speech Lab, Ken Church found a sentence published in a technical journal with over one million syntactically correct interpretations!

It is clear that a vast amount of knowledge is needed to interpret even the simplest sentence. Indeed, some of the largest knowledge-based systems have been developed in the area of language understanding and translation. Translation is clearly impossible without understanding. Ever since Bar-Hillel's famous paper of 1960, "A Demonstration of the Nonfeasibility of Fully Automatic High-Quality Translation," researchers have understood the necessity that the computer understand both the syntax and semantics of the text in a language before attempting a translation into another language.58 The Logos computer-assisted translation system, for example, uses about 20,000 understanding and translation rules to translate German technical texts into English and still provides results that are only about 80 percent as accurate as a human translator. Logos researchers estimate that it would require about 100,000 rules to achieve human performance levels, even in the restricted domain of technical texts.59

Understanding human language in a relatively unrestricted domain remains too difficult for today's computers. Beyond the resolution of syntactic and semantic ambiguities, there are many issues regarding unspoken assumptions and appropriateness. If I say, "The bread is ready to eat," one can assume that it is not the bread that will do the eating; however, if I say, "The chicken is ready to eat," it is less clear who intends to eat whom. In this case, further contextual information is required. Resolving such ambiguities in a general way requires extensive knowledge and the ability to readily access the most relevant parts of it. If I ask someone I have just met, "What do you do?" I am probably not interested in hearing about things I obviously already know about, such as eating, breathing, sleeping, and thinking (although if someone wanted to be evasive, these might very well be appropriate responses). If I ask, "Do you want something to eat or not?" it would be technically correct to answer "yes" even if you did not want something to eat.60 Again, avoiding overly literal interpretations of language requires vast collections of knowledge that no one has yet bothered to collect. We are, in fact, only now beginning to know what knowledge to collect and how to collect it.

Humans' own ability to resolve syntactic ambiguities quickly and accurately is not perfect, however, and a great deal of humor is based on this type of confusion. Skona Brittain cites the following example:

John: I want to go to bed with Marilyn Monroe again tonight.

Jane: Again?

John: Yes, I've had this desire before.

A further complication in understanding language is the ubiquitous use of idiomatic expressions. Consider the following: "She broke the ice by bringing up their latest advertising client. 'We're skating on thin ice with this one,' she said. 'The problem in promoting their new product is just the tip of the iceberg.'" Although there are three references to ice, these sentences obviously have nothing to do with ice. Beyond the three ice-related idioms, the phrase "bring up" is also idiomatic, meaning to introduce a topic by speaking about it. Each idiom introduces a story or analogy that must somehow be integrated with the rest of the subject matter. Understanding language can thus require a knowledge of history, myths, literary allusions and references, and many other categories of shared human experience.61
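
One small but necessary step in handling such passages is simply recognizing multiword idioms before attempting a literal analysis. The fragment below is a toy sketch of my own; a real system would need a far larger idiom lexicon plus the background stories and analogies the text describes:

    # A toy idiom lexicon; each entry maps a figurative phrase to its intended sense.
    IDIOMS = {
        "broke the ice": "eased the initial awkwardness",
        "bringing up": "introducing as a topic",
        "skating on thin ice": "in a risky position",
        "tip of the iceberg": "small visible part of a larger problem",
    }

    def gloss_idioms(sentence):
        # Replace each idiom with a bracketed gloss so later stages are not
        # misled into reasoning about literal ice.
        for phrase, sense in IDIOMS.items():
            sentence = sentence.replace(phrase, f"[{sense}]")
        return sentence

    print(gloss_idioms("She broke the ice by bringing up their latest advertising client."))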

When we talk to other human beings, we assume that they share with us an enormous core of knowledge and understanding about both the world and the subjects we plan to talk about.62 These assumptions vary according to who we are talking to and how much we know we have in common. Talking to coworkers or good friends obviously allows us to make more assumptions than talking to a stranger on the street, although even here we often assume shared knowledge of sports teams, current events, history, and many other topics. But in talking (or typing) to computers, no understanding of human commonsense knowledge can yet be assumed. While the slow and arduous process of teaching basic world knowledge to computers has begun, the most successful strategy in creating viable computer-based natural-language applications has been to limit the domain of discourse to an area where virtually all of the relevant knowledge can be captured. One of the first examples of this approach was Terry Winograd's 1970 MIT thesis SHRDLU.63 SHRDLU understood commands in natural English, as long as the commands pertained only to an artificial world composed of different colored blocks. While the limitations of the toy worlds of SHRDLU and other similar systems were criticized in the 1960s and 1970s, it turns out that such sharply constrained domains do have practical value.64 For example, the world of computer data bases is no more complicated than the world of colored blocks, but it happens to be one that many business people do interact with daily. One of the more successful natural-language-understanding programs, Intellect, from Artificial Intelligence Corporation, allows users to ask questions in natural English concerning information in their data bases.65 Because the domain is sufficiently limited, the Intellect system can rely primarily on syntactic rules, although semantic and pragmatic knowledge has also been incorporated. Competitive systems developed by Cognitive Systems, a company founded by Roger Schank, rely more heavily on explicit representations of semantic and pragmatic knowledge. By means of Schank's script methodology, similar knowledge bases can be used for both language understanding and decision making.66 For example, an expert system called Courtier, created by Cognitive Systems for a Belgian bank, can provide portfolio advice in response to natural-language commands.67
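
The practical force of the restricted-domain strategy is easy to illustrate. The toy sketch below, my own construction rather than a description of Intellect's actual design, answers English questions about a tiny sales database with nothing more than a handful of syntactic patterns, which is workable only because the domain is so narrow:

    import re

    # A tiny, hypothetical sales "database" standing in for the kind that
    # business users actually query.
    sales = {"Smith": 42000, "Jones": 38500, "Garcia": 51200}

    def answer(question):
        q = question.lower().rstrip("?")
        m = re.match(r"what were (\w+)'s sales", q)
        if m:                                        # e.g. "What were Garcia's sales?"
            name = m.group(1).capitalize()
            return f"{name}: ${sales[name]:,}" if name in sales else "No such person."
        if re.match(r"who had the highest sales", q):
            best = max(sales, key=sales.get)
            return f"{best}, with ${sales[best]:,}"
        return "Sorry, that question is outside my domain."

    print(answer("What were Garcia's sales?"))
    print(answer("Who had the highest sales?"))
    print(answer("Will it rain tomorrow?"))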

Perhaps the largest market for natural-language systems is translation.68 Translating (mostly technical) texts by means of traditional techniques is today a multibillion-dollar business. While computerized translation systems are not yet sufficiently accurate to run unassisted, they can significantly increase the productivity of a human translator. One of the challenges in developing computerized translation systems is that each pair of languages represents a different translation problem, or rather, each pair of languages represents a pair of problems. An interesting approach to simplifying this difficulty is being pursued by DLT, a Dutch firm that is developing translators for six languages to and from a standard root language. They use a modified form of Esperanto, a century-old language originally proposed as a universal language, as an intermediate representation. In their system, a translation from English to French would be accomplished in two steps, English to Esperanto and then Esperanto to French. Esperanto was selected because it is particularly good at representing concepts in an unambiguous way, thus making it ideal as a root conceptual language. Translating among 6 different languages would ordinarily require 30 different translators (6 languages, each to be translated into 5 other languages), but with the DLT approach, only 12 are required (6 translators from the 6 languages into Esperanto and 6 more from Esperanto back to the 6 languages). Furthermore, as DLT points out, the translating systems that go from Esperanto to another language are relatively simple.69

Photo by Lou Jones www.fotojones.com
Larry Harris, president of Artificial Intelligence Corp. (AIC) and developer of natural language systems. AIC's Intellect program enables executives to interact with their database systems using ordinary English.
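
The arithmetic behind the DLT design is worth making explicit: translating directly among n languages requires a translator for every ordered pair of languages, while routing everything through one intermediate language requires only two translators per language. A few lines of Python confirm the figures quoted above:

    # Direct translation among n languages needs one translator for every
    # ordered pair; routing through a single pivot language needs only 2n.
    def translators_needed(n):
        direct = n * (n - 1)     # every language into every other language
        via_pivot = 2 * n        # n into the pivot, n out of the pivot
        return direct, via_pivot

    for n in (6, 10, 20):
        direct, pivot = translators_needed(n)
        print(f"{n} languages: {direct} direct translators vs. {pivot} via a pivot")
    # 6 languages: 30 vs. 12, matching the DLT figures quoted above.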

Perhaps the most intensive work on automatic language translation is being pursued at a number of Japanese research centers, including Kyoto University and the Tokyo Institute of Technology. Japan's Ministry of International Trade and Industry has cited Japanese-English and English-Japanese translation systems as vital to Japan's economy.

One of the primary barriers to more widespread use of computer technology is communication between human and machine. Most persons, even those with technical training in fields other than computer science, find the specialized syntax required by most computer applications to be intimidating and confusing. There is considerable agreement that the optimal way to interact with computers would be the same way we interact with our human colleagues and assistants: by talking things over in a natural language. Providing this capability will require integrating large-vocabulary speech recognition (recognizing what the words are from speech) with a high level of natural-language understanding (understanding what the words mean). Though these capabilities are not yet sufficiently well developed to replace either the keyboard or the specialized computer syntaxes now used for most computer applications, the natural-language market is beginning to take hold for a variety of applications. DM Data estimates the natural-language market (not including speech recognition) at $80 million in 1987 and projects a $300 million market in 1990.70

Once again, we find that the relative strengths of machine and human intelligence are quite different. Humans first learn to listen to and understand spoken language. Later on we learn to speak. Computers, on the other hand, have been generating printed natural-language output since their inception and have only recently begun to understand it. Also, the ability of computers to speak predated their ability to recognize spoken language. Another example: humans gain mastery of written language at a much later age than verbal language. Here too computer history has reversed this sequence. As a final example, computers today can understand natural-language sentences dealing with "complex" financial inquiries yet still stumble on the "simple" sentences that children can understand. This phenomenon is widely misunderstood. The popular robot character R2D2 (of Star Wars fame) is supposed to understand many human languages yet is unable to speak (other than with "computerlike" squeaks and other noises), which gives the mistaken impression that generating human language (and speech) is far more difficult than understanding it.

Photo by Lou Jones www.fotojones.com
Language translation at present involves bilingual humans making copious use of a dictionary.

Photo by Lou Jones www.fotojones.com
Jun-Ichi Tsujii of Kyoto University, an authority on the automatic translation of languages.

Photo by Lou Jones www.fotojones.com
A tale of two countries. The keyboards on this computer display both English and Japanese characters. The upper board allows direct input of Japanese ideographs. Developed at the Tokyo Institute of Technology, this machine translation system can translate text from either language almost instantaneously.

Photo by Lou Jones www.fotojones.com

Putting It All Together: The Age of Robots

I am incapable of making an error.
HAL, in Stanley Kubrick and Arthur C. Clarke's 1968 film 2001: A Space Odyssey
1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Isaac Asimov's three laws of robotics

In R.U.R., a play written in 1921, the Czech dramatist Karel Capek (1890-1938) describes the invention of intelligent mechanical machines intended as servants for their human creators. Called robots, they end up disliking their masters and take matters into their own "hands." After taking over the world, they decide to tear down all symbols of human civilization. By the end of the play they have destroyed all of mankind. Although Capek first used the word "robot" in his 1917 short story "Opilec," creating the term from the Czech words "robota," meaning obligatory work, and "robotnik," meaning serf, R.U.R. (for "Rossum's Universal Robots") introduced the word into popular usage. Capek intended his intelligent machines to be evil in their perfection, their ultimate rationality scornful of human frailty. Although a mediocre play, it struck a chord by articulating the uneasy relationship between man and machine and achieved wide success on two continents.71 The spectre of machine intelligence enslaving its creators, or at least competing with human intelligence for employment and other privileges, has continued to impress itself on the public consciousness.

Although lacking human charm and good will, Capek's robots brought together all of the elements of machine intelligence: vision, auditory perception, touch sensitivity, pattern recognition, decision making, judgement, extensive world knowledge, fine motor coordination for manipulation and locomotion, and even a bit of common sense. The robot as an imitation or substitute for a human being has remained the popular conception. The first generation of modern robots was, however, a far cry from this anthropomorphic vision.72 The Unimation 2000, the most popular of the early "robots," was capable only of moving its arm in several directions and opening and closing its gripper. It had no senses and could move its arm with only two or three degrees of freedom (directions of movement) of the six possible in three-dimensional space. Typical applications of these early robots, introduced during the 1970s, involved moving objects from one place to another (a capability called pick and place).

More sophisticated devices, such as the American company Cincinnati Milacron's T3 (The Tomorrow Tool), the German KUKA robots, and the Japanese Hitachi robots, were introduced in the early 1980s. These second-generation robots can move with five or six degrees of freedom, can effect more precise movements, are faster, and have more delicate grippers. The motions of these robots can be programmed, but they still have no provision for conditional execution, that is, operations conditioned on some external event. Since these robots still have no way of sensing their environment, there are no inputs on which to base any decision making. These second-generation robots became well known for welding and spray painting, primarily in the automotive industry.73

The third generation, introduced in the mid 1980s, began to display a modicum of intelligence. Robots of this generation, among them Unimation's PUMA, IBM's 7535 and 7565, and Automatix's RAIL series, contain general-purpose computers integrated with vision and/or tactile sensing systems. By 1987 robotic vision systems alone had developed into a $300 million industry, with estimates of $800 million for 1990.74 Specialized programming languages, such as Unimation's VAL and IBM's AML, allow these robots to make decisions based on changes in their environment.75 Such systems can, for example, find industrial parts regardless of their orientation and identify and use them appropriately in complex assembly tasks.76
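
What conditional execution buys a robot can be suggested with a short sketch. The code below is plain Python pseudocode of my own, not actual VAL or AML syntax, and its vision routine is simulated; it simply shows a sense-decide-act loop in which the robot's motions depend on what its camera reports:

    import random

    FIXTURE = (0.0, 0.0)   # hypothetical drop-off position on the workbench

    def find_part():
        # Simulated vision system: returns (x, y, angle) of a part, or None.
        if random.random() < 0.3:
            return None                                  # nothing on the conveyor
        return (random.uniform(0, 50), random.uniform(0, 50), random.uniform(0, 180))

    def pick_and_place():
        part = find_part()
        if part is None:
            print("no part in view; waiting")            # earlier robots could not branch on sensor input
            return
        x, y, angle = part
        print(f"rotate gripper {-angle:.0f} degrees to match the part's orientation")
        print(f"move to ({x:.1f}, {y:.1f}) and close gripper")
        print(f"place part at fixture {FIXTURE} and open gripper")

    for _ in range(3):
        pick_and_place()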

With the flexibility and sophistication of robots improving each year, the population of industrial robots has increased from a few hundred in 1970 to several hundred thousand by the late 1980s. Some are used in factories that are virtually fully automated, such as Allen-Bradley's facility for manufacturing electric motor starters (see the companion film to this book).77 The only human beings in this factory monitor the process from a glass booth, while computers control the entire flow of work from electronically dispatched purchase orders to shipped products.78 Though the era of workerless factories has begun, the most significant short-term impact of this latest generation of robots is in settings where they work alongside human coworkers. Increasingly, new factories are designed to incorporate both human and machine assemblers, with the flow of materials monitored and controlled by computers.79

With the arrival of the third generation, the diversity of tasks being accomplished by robots has broadened considerably.80 A robot named Oracle is shearing sheep in Western Australia. One called RM3 is washing, debarnacling, and painting the hulls of ships in France. Several dozen brain operations have been performed at Long Beach Memorial Hospital in California with the help of a robot arm for precision drilling of the skull. In 1986 police in Dallas used a robot to break into an apartment in which a suspect had barricaded himself. The frightened fugitive ran out of the apartment and surrendered.81 The U.S. Defense Department is using undersea robots built by Honeywell to disarm mines in the Persian Gulf and other locations. Thousands of robots are routinely used in bioengineering laboratories to perform the extremely delicate operations required to snip and connect minute pieces of DNA. And walking robots are used in nuclear power plants to perform operations in areas too dangerous for humans. One such robot, Odetics's Odex, looks like a giant spider with its six legs.82

The next generation of robots will take several more steps in replicating the subtlety of human perceptual ability and movement, while retaining a machine's inherent advantages in speed, memory, precision, repeatability, and tireless operation. Specialized chips are being developed that will provide the massively parallel computations required for a substantially higher level of visual perception. Equally sophisticated tactile sensors are being designed into robot hands. Manipulators with dozens of degrees of freedom will combine the ability to lift both very heavy objects and delicate ones without breaking the latter. These robots' "local" intelligence will be fully integrated into the computerized control systems of a modern factory.83

Forerunners of these robots of the 1990s are beginning to compete with human dexterity and intelligence on many fronts. A robot developed by Russell Anderson of Bell Labs can defeat most human opponents at Ping Pong.84 Two other Ping Pong-playing robots, one English and one Japanese, recently met each other in San Francisco for a match. A robot hand developed at the University of Utah can crack an egg, drop the contents into a mixing bowl, and then whip up an omelet mixture, all at several times the speed of a master chef.85 The Stanford/JPL Hand, designed by Kenneth Salisbury and other MIT researchers, is a three-fingered robot that can perform such intricate tasks as turning a wing nut. A collaborative effort now underway between the University of Utah Center for Biomedical Design and the MIT Artificial Intelligence Laboratory aims at constructing a hand that will "exhibit performance levels roughly equivalent to the natural human hand," according to Stephen Jacobsen, the chief designer of the project. A voice-activated robot to provide quadriplegic patients such personal services as shaving, brushing teeth, feeding, and retrieving food and drinks is being developed by Larry Leifer and Stefan Michalowski under a grant from the Veterans Administration (see the companion film to this book).86

Photo by Lou Jones www.fotojones.com
It's harder than it looks. A robot arm at the Boston Museum of Science stacking blocks.

Photo by Lou Jones www.fotojones.com
Tsuneo Yoshikawa, director of Robotics Engineering at Kyoto University, with his students.

A particularly impressive robot called Wabot-2 was developed in the mid-to-late 1980s by Waseda University in Tokyo and refined by Sumitomo Electric.87 This human-size (and humanlike) 200-pound robot is capable of reading sheet music through its camera eye and then, with its ten fingers and two feet, playing the music on an organ or synthesizer keyboard. It has a total of 50 joints and can strike keys at the rate of 15 per second, comparable to a skilled keyboard player. Its camera eye provides relatively high resolution for a robot. Using a charge-coupled device (CCD) sensing array, it has a resolution of 2,000 by 3,000 pixels (by comparison, the eyes of a person with good eyesight can resolve about 10,000 by 10,000 points). Wabot-2 also has a sense of hearing: it can track the pitch of a human singer it is accompanying and adjust the tempo of its playing accordingly. Finally, the robot has rudimentary speech-recognition and synthesis capabilities and can engage in simple conversations. There are severe limitations on the complexity of the musical score that it can read, and the music must be precisely placed by a human assistant. Nonetheless, Wabot-2 is an impressive demonstration of the state of the robotic art in the late 1980s.

Photo by Lou Jones www.fotojones.com
Haruhiko Asada, a robotics expert at Kyoto University. Asada pioneered the application of the direct-drive motor to improve robots' fine motor coordination.

Photo by Lou Jones www.fotojones.com
Ken Salisbury of the MIT Artificial Intelligence Lab fine-tunes the dexterous Stanford-JPL robot hand.

Another state-of-the-art robot developed around the same time was a half-scale model of Quetzalcoatlus northropi (better known as a pterodactyl, a winged reptile that lived 65 million years ago). The replica, developed by human-powered flight pioneer Paul MacCready, could fly by flapping its robotic wings, much like its reptile forebear. Unfortunately, in a demonstration for the press, MacCready's invention crashed, which caused a loss of public interest in it. It has flown successfully, however, and it represents a particularly sophisticated integration of sensors and actuators with real-time decision-making by on-board computers.88

Not surprisingly, the world's largest supporter of robotic research is the U.S. Defense Department, which foresees a wide range of roles for robotic fighters in the 1990s and early twenty-first century. A walking truck with fat bent legs is being developed by the U.S. Army for roadless terrains.89 The U.S. Air Force is developing a number of pilotless aircraft, or flying robots, that can perform a variety of reconnaissance and attack missions.90 Early versions of such robot craft played a vital role in the Israeli destruction of 29 Soviet-built surface-to-air missile (SAM) sites in the Bekaa Valley in a single hour during Israel's invasion of Lebanon in 1982.

The field of robotics is where all of the AI technologies meet: vision, pattern recognition, knowledge engineering, decision-making, natural-language understanding, and others. As the underlying technologies mature and as the growing corps of robot designers gets better at integrating these diverse technologies, robots will become increasingly ubiquitous.91 They will tend our fields and livestock, build our products, assist our surgeons; eventually they will even help us clean our houses. This last task has turned out to be one of the most difficult. As we have seen with other AI problems, machine intelligence has first been successfully deployed in situations where unpredictable events are held to a minimum. It was not surprising, therefore, that manufacturing was the first successful application for robotic technology, since factories can be designed to provide predictable and relatively well-organized environments for robots to work in. In contrast, the environments of our homes change rapidly and present many unpredictable obstacles.92 So, effective robotic servants in the home will probably not appear until early in the next century. By that time, however, robotic technology will have dramatically transformed the production and service sectors of society.93

Photo by Lou Jones www.fotojones.com
Anita Flynn of the MIT Artificial Intelligence Laboratory with a robot friend.

An International Affair

In 1981 Japan's powerful Ministry of International Trade and Industry (MITI) announced plans to develop a new kind of computer. This new computer would be at least a thousand times more powerful than the models of 1981, would have the intelligence to converse with its human users in natural spoken language, would be programmed with vast arrays of knowledge in all domains, would have human-level decision-making capabilities, and would sit on a desktop. They called this new type of machine a fifth-generation computer. The first four generations were characterized by the type of electronic components they used, the first being vacuum tubes, the second transistors, the third integrated circuits, and the fourth VLSI (very large scale integrated) chips. The fifth generation of computers, on the other hand, would be characterized by something different, by its intelligence.94 With MITI's track record of having led Japanese industry to dominance in consumer electronics and a broad range of other high-tech fields, the announcement was a blockbuster. MITI formed the Institute for New Generation Computer Technology (ICOT) to carry out its project. ICOT began active development in 1982 with funding of approximately $1 billion (half from MITI and half from industry) for ten years.95

In the United States and Europe the spectre of the loss of the strategically important computer industry led to urgent consultations at the highest levels of government and industry.96 A few months after ICOT began development, a major response by American industry had been initiated. Twenty-one leading American computer and electronics companies, among them Control Data, Digital Equipment, Honeywell, NCR, Sperry, and Bell Communications Research, had formed a new company called Microelectronics and Computer Technology Corporation (MCC). This collaboration was intended to pool money and research talent to overcome several bottlenecks in advanced computer research that no single member of the consortium had the resources to solve alone.97 IBM was not invited to join because of concerns regarding antitrust laws. Special legislation was still required and was signed by President Reagan in 1984.98 MCC's research budget of about $65 million per year is targeted at a wide variety of AI, human-interface, and computer-architecture problems. Primary goals of MCC research are the development of integrated-circuit packaging techniques and computer-assisted design tools that will give American companies a practical edge in the worldwide electronics marketplace. MCC hopes to develop a chip-design station that will enable a small group of engineers to completely design an advanced custom VLSI chip in under a month, whereas one to two years is now required. MCC has also identified parallel processing as the most effective way to achieve the massive increases in processing power required for future AI applications.

In addition to MCC, the Defense Advanced Research Projects Agency (DARPA), which has historically funded a major portion of American university-based AI research, has increased its AI funding. It is now investing over $100 million per year.

The British response to Japan's challenge was a $500 million program called the Alvey Program, after John Alvey, who had proposed it and became its chairman.99 Unlike MCC, which conducts 97 percent of its research in-house, Alvey has no research laboratory of its own. With funding primarily from the government, Alvey has provided money for over 100 colleges, companies, and laboratories to conduct research in a wide variety of AI, VLSI, software engineering, man-machine interfaces, and other advanced computer technologies. Alvey has planned a number of "demonstrators" to integrate the results of its research, but its primary emphasis is to encourage advanced research laboratories in Britain to put high priority on information technologies and to train a new generation of knowledge engineers and computer scientists.100

In 1984 the European Economic Community (EEC) formed the European Strategic Program for Research in Information Technology (ESPRIT).101 This $1.5 billion program has funded companies, universities, and government laboratories throughout Europe in virtually all areas of computer technology, including office automation, robotics, and computer-aided manufacturing.102

In many ways the original MITI announcement was perfectly timed to create an intense response. In addition to growing concern in the United States and Europe over Japanese competition for trade markets, there was a growing awareness that AI technology, which had been a focus of largely academic research for over 25 years, was now poised to radically transform the way computers are used and to have a far-reaching impact on society. The MCC, DARPA, Alvey, and ESPRIT responses to the MITI challenge were just the tip of the iceberg. The enormous publicity that Japan's fifth-generation computer project received and the ensuing conferences, books, congressional inquiries, and other forms of debate helped to set the priorities of industry and university research and development throughout the world.103

The timing turned out to be right. According to DM Data, commercial revenue in the U.S. from AI-related technologies (not including robotics) was only $52 million in 1981, grew to $1.4 billion by 1987, and is projected to be $4 billion in 1990. The AI industry as a whole is expected to hit tens of billions of dollars per year during the 1990s. Very likely the worldwide information and computer industry will be over one trillion dollars per year in the year 2000 and will be largely intelligent by today's standards. Indeed, the ability to effectively harness information and knowledge is expected to be the key to wealth and power in the decades ahead.
