Chapter 12: The Business of Discovery
Originally published by Henry Holt and Company 1999. Published on KurzweilAI.net May 15, 2003.
There used to be quite a demand for psychic mediums who would go
into "spirit cabinets" and channel spirits to contact lost souls
who would communicate by making sounds. In my lab, once we had developed
the techniques to induce and measure weak electric fields around
the human body, we realized that we could make a modern-day spirit
cabinet (well, everything but the lost souls part). So of course
we had to do a magic trick, which we were soon working on with the
magicians Penn & Teller, who were planning a small opera with
Tod Machover. We expected to have fun; we didn't expect to save
the lives of infants, or learn something about how industry and
academia can remove technological barriers by removing organizational
barriers.
The hardware for our spirit cabinet was developed by Joe Paradiso,
a former experimental particle physicist who in his spare time builds
electronic music synthesizers that look just like enormous particle
detectors. Joe designed a seat that radiates a field out through
the medium's body, and from that can invisibly measure the smallest
gestures. Tod shaped a composition for it, and Penn & Teller
toured with it.
The trick worked technically, but it ran into an unexpected artistic
problem: audiences had a hard time believing that the performers
created rather than responded to the sounds. Science fiction had
prepared them to believe in hyperspace travel but not in a working
spirit cabinet. It was only by sitting people in the chair that
we could convince them that it did work; at that point we had a
hard time getting them back out of it.
I saw this project as an entertaining exercise for the physical
models and instrumentation that we were developing to look at fields
around bodies, not as anything particularly useful. The last thing
I expected was to see it turn into an automotive safety product.
But after we showed it, I found a Media Lab sponsor, Phil Rittmueller
from NEC, in my lab telling me about child seats. NEC makes the
controllers for airbags, and they were all too familiar with a story
that was about to hit the popular press: infants in rear-facing
child seats were being injured and killed by airbags. Deaths were
going to result from leaving the airbags on, and also from switching
them off. Finding a way for a smart airbag to recognize and respond
appropriately to the seat occupant was a life-and-death question
for the industry.
No one knew how to do it. To decide when to fire, the seat needed
to distinguish among a child facing forward, a child facing reverse,
a small adult, and a bag of groceries. Under pressure to act, the
National Highway Traffic Safety Administration was about to mandate
a standard for disabling the airbag based on the weight of the occupant,
an arbitrary threshold that would be sure to make mistakes. The
only working alternative imposed the unrealistic requirement that
all child seats had to have special sensor tags installed in them.
Phil wondered if our fancy seat could recognize infants as well
as magicians. My student Josh Smith veered off from what he was
doing for a few weeks to put together a prototype, and to our very
pleasant surprise it not only worked, it appeared to outperform
anything being considered by the auto industry. After NEC showed
the prototype to some car companies, the carmakers wanted to know
when they could buy it. NEC has since announced the product, the
Passenger Sensing System.
It looks like an ordinary automobile seat. Flexible electrodes
in the seat emit and detect very weak electric fields, much like
the weak fields given off from the cord of a stereo headphone. A
controller interprets these signals to effectively see the
three-dimensional configuration of the seat occupant, and uses
these data to determine
how to fire the airbag. The beauty of the system is that it's invisible
and requires no attention; all the passenger has to do is sit in
the seat. Although the immediate application is to disable the airbag
for rear-facing infant seats, in the future the system will be used
to control the inflation of an airbag based on the location of the
occupant, and more generally help the car respond to the state of
the occupant.
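To make the controller's job concrete, here is a minimal sketch
in Python of the final classification step. Everything in it, the
electrode channels, the signal values, and the thresholds, is
invented for illustration; the real system infers the occupant's
three-dimensional configuration from a physical model of the
electrode geometry rather than from hand-set rules like these.

    # Minimal sketch of occupant classification from electric-field
    # sensing. All channel names and thresholds are hypothetical.

    def classify_occupant(couplings):
        """couplings: normalized field loading (0.0-1.0) on three
        hypothetical electrode channels built into the seat."""
        pan, back, upper = (couplings["pan"], couplings["back"],
                            couplings["upper"])
        if pan + back + upper < 0.3:
            return "empty_or_object"     # groceries barely load the field
        if pan > 0.5 and back < 0.2 and upper < 0.2:
            return "rear_facing_infant"  # loading concentrated on the pan
        if upper > 0.4:
            return "adult"               # torso reaches the upper electrode
        return "forward_facing_child"

    SAFE_TO_FIRE = {"adult"}

    reading = {"pan": 0.7, "back": 0.1, "upper": 0.05}
    label = classify_occupant(reading)
    print(label, "->",
          "enable airbag" if label in SAFE_TO_FIRE else "suppress airbag")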
Now I would never have taken funding for automobile seat safety
development—it's too remote from what I do, and too narrow.
NEC would never have supported magic tricks internally—it's
too remote from what they do, and too crazy. Yet by putting these
pieces together, the result was something unambiguously useful that
we probably could not have gotten to any other way. This is one
of the secrets of how the Media Lab works with industrial sponsors:
maximize contact, not constraints.
That's what much of academia and industry carefully prevent. The
organization of research and development in the United States can
be directly traced to an influential report that Vannevar Bush wrote
for Franklin Roosevelt in 1945.
Two technologies developed during World War II arguably ended the
conflict, first radar and then nuclear bombs. These were created
under the auspices of the then-secret Office of Scientific Research
and Development, directed by Vannevar Bush. As the war was drawing
to a close, President Roosevelt asked him to figure out how to
sustain that pace of development for peacetime goals, including
combating disease and creating jobs in new industries.
The resulting report, Science—The Endless Frontier,
argued that the key was government support of basic research. Both
nuclear weapons and radar were largely a consequence of fundamental
studies of the subatomic structure of matter, the former directly
from nuclear physics, and the latter through the instruments that
had been developed for nuclear magnetic resonance experiments. Just
when science and technology emerged as a key to economic competitiveness
after the war, the country was facing a shortage of scientists and
engineers who could do the work. Attracting and training them would
be essential.
Vannevar Bush proposed that the government's postwar role should
be to fund curiosity-driven basic research studies. These should
be done externally, at universities and research institutes, and
be evaluated solely with regard to their scientific merit without
any concern for practical applications. Since the direction of true
basic research can never be predicted, the government would provide
long-term grants, awarded based on the promise and record of a research
project rather than claims of expected outcomes of the work. Researchers
doing solid work could expect steady funding.
The useful fruits of the basic research would then be picked up
by applied research laboratories. Here the government would have
a stronger role, both through the work of its own agencies, and
through fostering a regulatory climate that encouraged industry
to do the same. Finally, the applied results would move to industry
where product development would be done.
He proposed the creation of a new agency to support all government
research, the National Research Foundation. By the time the enabling
legislation was passed in 1950, the title had changed to the National
Science Foundation (NSF), since too many entrenched government
interests were unwilling to cede control in areas other than basic
science.
Vannevar Bush thought that a staff of about fifty people and a budget
of $20 million a year should be sufficient to do the job.
In 1997 the NSF had twelve hundred employees and a budget of $3
billion a year. Attracting new scientists is no longer a problem;
finding research jobs for them is. The NSF gets so many proposals
from so many people that simply doing great work is no longer sufficient
to ensure adequate, stable, long-term funding. Vannevar Bush's system
is straining under the weight of its own successful creation of
an enormous academic research establishment.
It's also struggling to cope with the consequences of the growth
of the rest of the government. The original proposal for the National
Research Foundation recognized that research is too unlike any other
kind of government function to be managed in the same way; grantees
would need the freedom to organize and administer themselves as
they saw fit. The usual government bidding and accounting rules
would have to be relaxed to encourage basic research.
Not that the government has been particularly successful at overseeing
itself. The shocking inefficiency of the federal bureaucracy led
to the passage in 1993 of the Government Performance and Results
Act. GPRA sought to bring industrial standards of accountability
to bear on the government, forcing agencies to develop clear strategic
plans with measurable performance targets, and basing future budgetary
obligations on the value delivered to the agencies' customers.
This means that the U.S. government is now in the business of producing
five-year plans with production targets, just as the former Soviet
Union gave up on the idea as hopelessly unrealistic. GPRA applies
to the NSF and its grantees, and so they must do the same. Therefore
a basic research proposal now has to include annual milestones and
measurable performance targets for the research.
My industrial sponsors would never ask me to do that. If I could
tell them exactly what I was going to deliver each year for the
next five years then I wouldn't be doing research, I would be doing
development that they could better perform internally.
Vannevar Bush set out to protect research from the pressure to
deliver practical applications, freeing it to go wherever the quest
for knowledge leads. GPRA seeks to nail research to applications,
asking to be told up front what a project is useful for. Both are
good-faith attempts to derive practical benefits from research funding,
and both miss the way that so much innovation really happens.
Vannevar Bush's legacy was felt at Bell Labs when I was there before
the breakup of the phone system in the 1980s. The building was split
into two wings, with a neutral zone in the middle. Building 1 was
where the basic research happened. This was where the best and the
brightest went, the past and future Nobel laureates. The relevant
indicator of performance was the number of articles in the prestigious
physics journal that publishes short letters, a scientific kind
of sound bite. Building 2 did development, an activity that was
seen as less pure from over in Building 1. This didn't lead to Nobel
prizes or research letters. Conversely, Building 1, viewed from
Building 2, was seen as arrogant and out of touch with the world.
The cafeteria
was between the buildings, but the two camps generally stuck to
their sides of the lunchroom. Meanwhile, the business units of the
telephone company saw much of the whole enterprise as being remote
from their concerns, and set up their own development organizations.
Very few ideas ever made it from Building 1 to Building 2 to a product.
There were still plenty of successes to point to, the most famous
being the invention of the transistor. But this happened in exactly
the reverse direction. The impetus behind the discovery of the transistor
was not curiosity-driven basic research; it was the need to replace
unreliable vacuum tubes in amplifiers in the phone system. It was
only after the transistor had been found that explaining how it
worked presented a range of fundamental research questions that
kept Building 1 employed for the next few decades.
Few scientific discoveries ever spring from free inquiry alone.
Take the history of one of my favorite concepts, entropy. In the
mid-1800s steam power was increasingly important as the engine of
industrial progress, but there was very little understanding of
the performance limits on a steam engine that could guide the development
of more efficient ones. This practical problem led Rudolf Clausius
in 1854 to introduce entropy, defining its change as the heat flowing
into or out of a system divided by the temperature. He found that
it always increases, with the increase approaching zero in a perfect
engine. Here now was a useful test to see how much a particular
engine might be improved. This study has grown into the modern subject
of thermodynamics.
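In modern notation, Clausius's definition and his discovery read

    dS = \delta Q / T,        \Delta S_{total} \ge 0,

with the equality approached only by a perfect, reversible engine;
run between a hot reservoir at temperature T_h and a cold one at
T_c, no engine can beat the Carnot efficiency \eta = 1 - T_c / T_h.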
The utility of entropy prompted a quest to find a microscopic explanation
for the macroscopic definition. This was provided by Ludwig Boltzmann
in 1877 as the logarithm of the number of different configurations
that a system can be in. By analyzing the possible ways to arrange
the molecules in a gas, he was able to show that his definition
matched Clausius's. This theory provided the needed connection between
the performance of a heat engine and the properties of the gas,
but the kinetic picture also contained a disturbing loophole that
James Clerk Maxwell had already pointed out.
An intelligent and devious little creature (which he called a demon)
could judiciously open and close a tiny door between two sides of
a box, thereby separating the hot from the cold gas molecules, which
could then run an engine. Repeating this procedure would provide
a perpetual-motion machine, a desirable if impossible outcome. Many
people spent many years unsuccessfully trying to exorcise Maxwell's
demon. An important step came in 1929 when Leo Szilard reduced the
problem to its essence with a single molecule that could be on either
side of a partition. While he wasn't able to resolve the paradox,
his analysis introduced the notion of a "bit" of information.
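In symbols, Boltzmann's definition and the currency of Szilard's
one-molecule engine are

    S = k_B \ln \Omega        (\Omega = number of configurations)
    W = k_B T \ln 2           (work obtainable per bit the demon learns)

where k_B is Boltzmann's constant and T is the temperature.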
Szilard's one-bit analysis of Maxwell's demon provided the inspiration
for Claude Shannon's theory of information in 1948. Just as the
steam engine powered the Industrial Revolution, electronic communications
was powering an information revolution. And just as finding the
capacity of a steam engine was a matter of some industrial import,
the growing demand for communications links required an understanding
of how many messages could be sent through a wire. Thanks to Szilard,
Shannon realized that entropy could measure the capacity of a telephone
wire as well as an engine. He created a theory of information that
could find the performance limit of a communications channel. His
theory led to a remarkable conclusion: as long as the data rate
is below this capacity, it is possible to communicate without any
errors. This result more than any other is responsible for the perfect
fidelity that we now take for granted in digital systems.
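The key formulas are compact. A source that chooses among messages
with probabilities p_i carries an entropy of

    H = -\sum_i p_i \log_2 p_i   bits per message,

and Shannon's coding theorem guarantees essentially error-free
communication at any rate below the channel capacity; for the
textbook case of a wire of bandwidth B with Gaussian noise,

    C = B \log_2 (1 + S/N)

for signal power S and noise power N.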
Maxwell's demon survived until the 1960s, when Rolf Landauer at
IBM related information theory back to its roots in thermodynamics
and showed that the demon ceases to be a perpetual-motion machine
when its brain is included in the accounting. As long as the demon
never forgets its actions they can be undone; erasing its memory
acts very much like the cycling of a piston in a steam engine. Rolf's
colleague Charles Bennett was able to extend this intriguing connection
between thought and energy to computing, showing that it is possible
to build a computer that in theory uses as little energy as you
wish, as long as you are willing to wait long enough to get a correct
answer.
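Landauer's accounting puts an exact price on forgetting: erasing
one bit must dissipate at least

    E = k_B T \ln 2 \approx 3 \times 10^{-21} joules

at room temperature. A computer that never erases, like Bennett's
reversible machine, has no such floor on its energy per step.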
Rolf and Charles's work was seen as beautiful theory but remote
from application until this decade, when power consumption suddenly
became a very serious limit on computing. Supercomputers use so
much power that it's hard to keep them from melting; so many PCs
are going into office buildings that the electrical systems cannot
handle the load; and portable computers need to be recharged much
too often. As with steam engines and communication links, entropy
provided a way to measure the energy efficiency of a computer and
guide optimizations. Lower-power chips are now being designed based
on these principles.
This history touches on fundamental theory (the microscopic explanation
of macroscopic behavior), practical applications (efficient engines),
far-reaching implications (digital communications), and even the
nature of human experience (the physical limits on thinking). Yet
which part is basic, and which is applied? Which led to which? The
question can't be answered, and isn't particularly relevant. The
evolution of a good idea rarely honors these distinctions.
The presumed pathway from basic research to applications operates
at least as often in the opposite direction. I found out how to
make molecular quantum computers only by trying to solve the short-term
challenge of making cheap chips. If I had been protected to pursue
my basic research agenda I never would have gotten there. And the
converse of the common academic assumption that short-term applications
should be kept at a distance from basic research is the industrial
expectation that long-term research is remote from short-term applications.
A newly minted product manager once told me that he had no need
for research because he was only making consumer electronics, even
though his group wanted to build a Global Positioning System (GPS)
receiver into their product. GPS lets a portable device know exactly
where it is in the world, so that for example a camera can keep
track of where each picture was taken. It works by measuring the
time it takes a signal to reach the receiver from a constellation
of satellites. This requires synchronizing the clocks on all of
the satellites to a billionth of a second. To measure time that
precisely, each satellite uses an atomic clock that tells time by
detecting the oscillation of an atom in a magnetic field. According
to Einstein's theory of relativity, time slows down as things move
faster, and as gravity strengthens. These effects are usually far
too small to see in ordinary experience, but GPS measures time so
finely that the relativistic corrections are essential to its operation.
So what looks like a simple consumer appliance depends on our knowledge
of the laws governing both the very smallest and largest parts of
the universe.
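The sizes of those corrections are easy to check. Here is a short
Python sketch of the arithmetic; the constants are standard textbook
values, and the first-order approximations are mine rather than the
chapter's.

    # Back-of-the-envelope relativistic corrections for a GPS clock.
    GM      = 3.986e14   # Earth's gravitational parameter, m^3/s^2
    C_LIGHT = 2.998e8    # speed of light, m/s
    R_EARTH = 6.371e6    # mean Earth radius, m
    R_ORBIT = 2.656e7    # GPS orbital radius (~20,200 km up), m
    DAY     = 86400.0    # seconds per day

    v = (GM / R_ORBIT) ** 0.5              # orbital speed, ~3.9 km/s
    # Special relativity: the moving clock runs slow by v^2/(2c^2).
    special = -v**2 / (2 * C_LIGHT**2) * DAY
    # General relativity: the clock higher in the field runs fast.
    general = GM * (1/R_EARTH - 1/R_ORBIT) / C_LIGHT**2 * DAY
    net = special + general

    print(f"velocity effect: {special * 1e6:+.1f} microseconds/day")
    print(f"gravity effect:  {general * 1e6:+.1f} microseconds/day")
    print(f"net clock drift: {net * 1e6:+.1f} microseconds/day")
    print(f"uncorrected ranging error: {C_LIGHT * net / 1000:.1f} km/day")

The two effects pull in opposite directions but do not cancel: the
satellite clocks would gain about 38 microseconds a day, and since
light covers roughly 300 meters in a microsecond, uncorrected fixes
would drift by kilometers within hours.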
Across this divide between the short and long term, a battle is
now being fought over the desirability of basic versus applied research.
The doers of research feel that they've honored their part of the
bargain, producing a steady stream of solid results from transistor
radios to nuclear bombs, and cannot understand why the continuing
funding is not forthcoming. The users of research results find that
the research community is pursuing increasingly obscure topics removed
from their needs, and question the value of further support.
As a result both academic and industrial research are now struggling
for funding. Many of the problems that inspired the creation of
the postwar research establishment have gone away. The most fundamental
physics experiments have been the studies at giant particle accelerators
of the smallest structure of matter; these were funded by the Department
of Energy rather than the National Science Foundation for an interesting
historical reason.
After World War II the researchers who developed the nuclear bomb
were clearly of great strategic value for the country, but few wanted
to keep working for the military. So Oppenheimer made a deal for
the government to fund particle physics research to keep the community
together and the skills active, with the understanding that it might
be tapped for future military needs. The Department of Energy then
grew out of the Atomic Energy Commission, the postwar entity set up
to deal with all things nuclear. Now, with the end of the
cold war the demand for nuclear weapons is not what it used to be.
At the same time, the particle experiments have reached a scale at
which new accelerators can no longer be afforded by a single country,
perhaps not even by the whole planet. Further scientific progress
cannot come easily the way it used to, by moving to ever-higher
energies to reach ever-finer scales. So there are now a lot of
high-energy physicists looking for work.
Similarly, the original success of the transistor posed many research
questions about how to make transistors smaller, faster, and
quieter. These
have largely been solved; an inexpensive child's toy can now contain
a chip with a million perfect transistors. There is still an army
of physicists studying them, however, even though we understand
just about everything there is to know about a transistor, and can
make them do most everything we want them to.
What's happened is that disciplines have come to be defined by
their domain of application, rather than their mode of inquiry.
The same equations govern how an electron moves in a transistor
and in a person. Studying them requires the same instruments. Yet
when my lab started investigating the latter, to make furniture
that can see and shoes that can communicate, along with all of the
interest I was also asked whether it was real physics. This is a
strange question. I can answer it, dissecting out the physics piece
from the broader project. But I don't want to. I'd much rather be
asked about what I learned, how it relates to what is known, and
what its implications are.
Vannevar Bush's research model dates back to an era of unlimited
faith in industrial automation, and it is just as dated. Factories
were once designed around an assembly line, starting from raw materials
and sequentially adding components until the product was complete.
More recently it's been rediscovered that people, and machines,
work better in flexible groups that can adapt to problems and opportunities
without having to disrupt the work flow. Similarly, there is a sense
in which a conveyor belt is thought to carry ideas from basic research
to applied research to development to productization to manufacturing
to marketing. The collision between bits and atoms is a disruption
in this assembly line that cries out for a new way to organize inquiry.
Joe Paradiso turning an experiment into a magic trick, and Phil
Rittmueller turning that into an auto safety product, provide a
clue for where to look. A secret of how the Media Lab operates is
traffic: there are three or four groups in the building every day.
We're doing demos incessantly. This never happens in academia, because
it is seen as an inappropriate intrusion into free inquiry. And
it's not done in industry, where knowledge of internal projects
is carefully limited to those with a need to know. Both sides divide
a whole into pieces that add up to much less.
The companies can come in and tell us when we've done something
useful, like the car seat. They can do that far better than we can.
And, they can pose what they think are hard problems, which we may
in fact know how to solve. Instead of writing grant proposals I
see visitors, something that I would much rather do since I learn
so much from them. After helping them with what I know now, I can
use their support to work on problems that would be hard to justify
to any kind of funding agency. Ted Adelson nicely captures this
in a simple matrix:
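                    is easy                    is hard
    looks easy      no research needed         machine common sense
    looks hard      demoable short-term wins   grand challenges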
Looks-easy-is-easy questions do not need research. Looks-hard-is-hard
is the domain of grand challenges such as the space program that
require enormous resources and very long-term commitments. Looks-hard-is-easy
are the short-term problems that are easy to convey in a demo and
to relate to an application. Looks-easy-is-hard are the elusive
problems that really motivate us, like figuring out how machines
can have common sense. It's difficult to explain why and how we
do them, and hence hard to show them in a demo. The lower-left
corner pays the bills for the upper right, openly stealing from
the rich projects to pay for the poor ones.
The third trick that makes this work is the treatment of intellectual
property, the patents and copyrights that result from research.
Academia and industry both usually seek to control them to wring
out the maximum revenue. But instead of giving one sponsor sole
rights to the result of one project, our sponsors trade exclusivity
for royalty-free rights to everything we do. This lets us and them
work together without worrying about who owns what. It's less of
a sacrifice for us than it might sound, because very few inventions
ever make much money from licensing intellectual property, but fights
over intellectual property regularly make many people miserable.
And it's less of a sacrifice for the sponsors than it might sound,
because they leverage their investment in any one area with all
of the other work going on. When they first arrive they're very
concerned about protecting all of their secrets; once they notice
that many of their secrets have preceded them and are already familiar
to us, and that we can solve many of their internal problems, they
relax and find that they get much more in return by being open.
The cost of their sponsorship is typically about the same
as adding one new employee; there are very few people who can match
the productivity of a building full of enthusiastic, unconstrained
students, and it's hard for the companies to find and hire those
people. Any one thing we do the sponsors can do better if they really
want to, because they can throw much greater resources at a problem.
The difficult thing is figuring out where to put those resources.
We can happily learn from failing on nine projects, something that
would bankrupt a company, and then hand off the tenth that succeeds.
This relationship in turn depends on another trick. We have a floor
of people who wear suits and write memos and generally act corporate.
This is essential to translate between the two cultures. Most every
day I see industrial visitors asking to fund physics research, even
though they don't realize that's what they are saying. They're posing
problems without knowing where to look for solutions. It's unrealistic
to expect them to come in understanding what disciplines are relevant,
and how to work with them.
Sponsors as well as students need training. I realized this when
a company visited, was excited by what they saw and became a sponsor,
and just as quickly terminated their support, explaining that we
were not delivering products in a timely fashion. The real fault
lay not with them for so completely missing what a research lab
is for, but with us for not recognizing and correcting their misunderstanding.
Over time, I've found that I work equally well with
Nobel laureate scientists, virtuosic musicians, and good product
managers. They share a remarkably similar sensibility. They have
a great passion to create, bring enormous discipline to the task,
and have a finely honed sense of what is good and what is bad.
While their approaches overlap, their training and professional
experience do not. Mastery of one helps little with another. Yet
this is exactly what companies presume when they look to their researchers
to find applications for their work. I was visiting a large technology
company once that was trying to direct its research division to
be more useful. They did this by setting up a meeting that was more
like a religious revival, where researchers were asked to come forward
and testify with product plans. Not surprisingly, the researchers
didn't have a clue. It takes an unusual combination of life experience
and disposition to excel at perceiving market opportunities and
relating them to research capabilities.
In between the successful scientist, artist, and manager is a broad
middle ground where work does not answer to a research discipline,
or a critical community, or the marketplace. Nicholas Negroponte
and I spent a morning attending a research presentation at an industrial
lab along with the company's chain of command. After a laborious
discussion of their method, exactly how they went about designing
and evaluating their project, they finally showed what they did.
Nicholas was livid—an undergraduate could (and in fact did)
accomplish the same thing in a few afternoons. He couldn't believe
the waste of everyone's time. Amid all their process they had forgotten
to actually do something. Because the same group of people
were posing the question, doing the research, and evaluating the
results, that message never reached them.
This is one more reason why demos are so important. They provide
a grounding that is taken for granted in a mature field but that
otherwise would be lacking in an emerging area. A steady stream
of visitors offers a reality check that can help catch both bad
and good ideas. This is a stretch for corporate cultures based around
carefully controlling access, a protective reaction that can cause
more harm than good in rapidly changing fields. I've seen corporate
research labs that require management approval even for Internet
access, limiting it to those employees who can show a job-related
need to communicate. This distinction makes about as much sense
as asking which employees have a job-related need to breathe; not
surprisingly it keeps out much more information than it lets in.
Demos serve one more essential function. Our students spend so
much time telling people what they're doing that they get good at
communicating it to varied audiences, an essential skill lacking
in most of science. And then eventually, after doing it long enough,
they start to figure out for themselves what they're really doing.
One of my former students told me that he found this to be the single
most important part of his education, giving him an unfair advantage
in his new lab because none of his colleagues had experience talking
to nonscientists about their work.
Too much research organization is a matter of constraints, trying
to decide who should do what. The sensible alternative that is regularly
overlooked is contact, trying to bring together problems and solutions.
Since the obvious connections have long since been found, this entails
finding the non-obvious ones, like the car seat, that would not show
up on a project list. The only successful way I've ever seen to
do this is through regular visits that allow each side to meet the
other.
It is a simple message with enormous implications. The result in
the Media Lab is something that is not quite either traditional
academia or industry, but that draws on the best of both. This model
is not universally applicable; some problems simply take lots of
time and money to solve. But it sure is fun, and it lets us pursue
ideas without trying to distinguish between hardware and software,
content and representation, research and application. Surprisingly
often, working this way lets all sides get exactly what they want
while doing just what they wish.
The inconvenient technology that we live with reflects the inconvenient
institutional divisions that we live with. To get rid of the former,
we need to eliminate the latter.
WHEN THINGS START TO THINK by Neil Gershenfeld. ©1998 by
Neil A. Gershenfeld. Reprinted by arrangement with Henry Holt and
Company, LLC.