Chapter 4: Wear Ware Where?
Originally published by Henry Holt and Company 1999. Published on KurzweilAI.net May 15, 2003.
Steve Mann was one of the Media Lab's best-dressed students. It's
easy to recognize Steve because you don't see him; you see his computer.
His eyes are hidden behind a visor containing small display screens.
He looks out through a pair of cameras, which are connected to his
displays through a fanny pack full of electronics strapped around
his waist. Most of the time the cameras are mounted in front of
his eyes, but when he rides a bicycle he finds that it's helpful
to have one of his electronic eyes looking backward so he can see
approaching traffic, or in a crowd he likes to move an eye down
to his feet to help him see where he's walking.
Because the path an image takes to reach his eyes passes through
a computer, he's able to transform what he sees. He can crank up
the brightness to help him see in the dark; some days he likes to
see like a bug and have the background fade out so that he perceives
only moving objects; and over time he's found that he prefers to
have the world appear rotated on its side. The reality that he perceives
becomes something that can be adjusted under software control; once
he's had enough of seeing like a bug, he can try out any other scheme
for viewing the world that he can think of.
The rest of Steve's body is also wired. A small handheld keyboard
lets him read and write without looking away from whatever else
he is doing. It has one key for each finger; by pressing the keys
down in combinations he is able to type as fast as he could on a
conventional keyboard. Beyond his hands, his clothing is full of
sensors connected to other parts of his body. Measurements of his
heart rate, perspiration, and footsteps let him add a dashboard
for his body to the display in his glasses, warning him when he's
running hot and needs to relax.
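A quick check shows why one key per finger is enough for a full keyboard: pressing the five keys in combination gives 2^5 - 1 = 31 distinct chords per hand, more than the 26 letters of the alphabet. (The tally below is illustrative; the actual chord-to-character mapping on Steve's keyboard may differ.)

```python
from itertools import combinations

# Five keys, one per finger; a chord is any nonempty subset pressed at once.
FINGERS = ["thumb", "index", "middle", "ring", "pinky"]

chords = [combo for r in range(1, len(FINGERS) + 1)
          for combo in combinations(FINGERS, r)]

print(len(chords))  # 2**5 - 1 = 31 distinct chords, enough for a-z
```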
Since a computer displays and records Steve's world, he can enhance
more than just his senses. His memory is augmented by the accumulated
data in his system. By storing and analyzing what he writes, and
sees, and hears, the computer can help him recall when he last met
someone, what they spoke about, and what he knows about them.
Or, if he's fixing one of his ham radios, he can see a circuit diagram
at the same time he sees a circuit board.
Steve is not an island; an antenna on his head links him into the
network. His body is both a licensed video transmitter and an Internet
node. This lets him send and receive information, and access faster
computers and bigger databases than he can carry around. Not only
can he exchange e-mail with friends during the course of a day,
he can exchange his senses. The data that goes to his display screens
can just as easily be sent around the world; he maintains a Web
page with images he's recently seen. Steve's wife can look out through
his eyes when he's shopping and help him select fruit, something
he doesn't do well. If he's walking down a dark street late at night,
he can have some friends look over his virtual shoulder to help
keep a remote eye on him.
Working in the Media Lab over the last few years has been like
living through a B movie, The Invasion of the Cyborgs. One
by one our conventionally equipped students have been disappearing,
replaced by a new generation that, like Steve, is augmented by wearable
computers. At first I found it disconcerting to teach people
who were seeing things that I wasn't, but I soon found that the
enhancements they were applying to me helped me better communicate
with them. When one of Steve's colleagues, Thad Starner, took a
seminar from me, he insisted that I make my lecture notes available
on the network for him. He wore glasses that projected a display
that allowed him to see my text and me at the same time, so that
he took notes on a copy of the lecture that appeared to him to be
floating next to me. Because of the apparently intrusive technology
intervening between us, I actually had more of his attention than
the students flipping through paper copies.
When Thad, Steve, and their peers started wearing computers, people
would cross the street to get away from them. Now people cross the
street to get to them, to find out where they can buy one for themselves.
Wearable computers are a revolution that I'm certain will happen,
because it already is happening. Three forces are driving this transition:
people's desire to augment their innate capabilities, emerging technological
insight into how to embed computing into clothing, and industrial
demand to move information away from where the computers are and
to where the people are.
FedEx runs one of the world's largest airlines. Unlike an ordinary
airline that usually gets a few weeks' notice between the time a
ticket is sold and a flight departs, FedEx has just a few hours
after it receives a package to make sure that planes are in the
right place, at the right time, with room for the package. Each
step in this nocturnal logistical miracle is invariably a race against
time, as packages are sorted and packed into planes preparing to
depart. Literally seconds count, as delays at any point can ripple
through the system. Ideally, the planning of the fleet's routes
and manifests should begin the instant that a driver picks up a
package. To be useful, this must be done without encumbering the
drivers, who need to be able to speed through crowded offices with
awkward loads. This is why FedEx has been a pioneer in the industrial
application of wearable computers. Their package scanners and data
networks are moving from the central hubs, to the trucks, to the
drivers, so that as soon as an item is collected its data can be
racing ahead of it on a wireless link to prepare its itinerary.
Maintaining a fleet of airplanes like FedEx's presents another
industrial demand for wearable computers. The owner's manual for
a 747 weighs a few tons, and because of the need to control access
for safety and liability, it can't even be kept in the hangar where
it is needed. It certainly doesn't fit on a plane that must maximize
revenue per pound. Keeping the manual up-to-date is a full-time
job, with life-and-death implications that necessitate recording
a careful paper trail of changes. Complex maintenance procedures
need to be followed by mechanics working in confined spaces, once
again keeping a record of each operation they perform. For these
reasons aviation mechanics are beginning to wear computers with
display glasses that let them follow instructions without looking
away from their work, disk drives that can store a mountain of manuals
(one CD-ROM holds about a million pages of text), and network connections
that automate the record keeping.
While an airplane mechanic is willing to strap on pounds of computer
gear in order to get access to necessary information, and a Media
Lab grad student will do the same as a lifestyle choice, most everyone
else is unlikely to be willing to walk around looking as if
they were wearing flypaper when a hurricane hit an electronics store.
Fortunately, we're learning how to provide the functionality of
a wearable computer without the inconvenience of the crude early
models.
It's a safe assumption that the clothes you wear are near a body
(yours). One of my former grad students, Tom Zimmerman (now at IBM),
realized that this trivial observation has a remarkable implication.
Tom had come back to grad school after inventing the Data Glove,
a glove full of wires that measures the location and configuration
of a hand. The company he cofounded, VPL, developed this glove into
the interface that helped launch the concept of virtual reality,
letting people use physical interactions to manipulate virtual worlds.
But Tom was dissatisfied with the cumbersome wires and wanted to
find new ways to build interfaces that are as natural as the physical
world.
Tom arrived at the Media Lab around the time that we were working
on a collaboration with Tod Machover and the violinist Ani Kavafian.
After the project with Yo-Yo I thought that I understood everything
there was to know about measuring the position of the bow of a stringed
instrument. Moments after putting the sensors on Ani's violin it
was clear that something was very wrong: we were finding the location
of her hand, not the bow. Yo-Yo plays his Strad sitting down, with
his hand above the bridge; Ani plays hers standing up, with her
hand beyond the bridge. Something about this change in geometry
was leading to an interaction between her hand and the electric
field that was supposed to detect the bow.
I was puzzled by this artifact because the signal went in the wrong
direction. I expected that by placing her hand between the transmitting
and receiving electrodes she would improve the coupling and make
the signal stronger, but her hand had exactly the opposite effect
and made the signal weaker. I tried unsuccessfully to solve the
complex equations that are needed to predict this behavior; then
I did something very smart: I went on a trip to Europe and left
Tom alone in the lab. While I was away, he walked down to the corner
deli and had them fill a glove with hamburger. While they were left
wondering about Tom's odd taste, he brought this armless hand back
to the lab and confirmed that it caused the signals to move in the
direction that I had expected—up. When he connected the hand
to a wire arm, the interaction then flipped to make the signal go
down. The source of our problem was immediately clear: part of Ani's
body was in the field and part was out; she was guiding the field
away from the receiver and out to the room.
Tom then realized that we should be able to detect the part of
the field that was passing through her body. This creates a tiny
current (about 0.0000001 percent of what illuminates a lightbulb).
To find such a small signal buried in all of the other possible
sources of electrical interference, we used a circuit that was matched
to the pattern that was being transmitted. If we could detect that
pattern, we could detect others, and by choosing which pattern to
send we could transmit data through a body. The bug could become quite a
feature.
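The matched-filter trick can be sketched in a few lines: the receiver correlates what it measures against the known transmitted pattern, and the sign of the correlation recovers each bit even when the pattern is buried in much larger noise. The amplitudes and pattern length below are illustrative, not the actual hardware's.

```python
import random

random.seed(0)

# A known spreading pattern of +/-1 chips, shared by transmitter and receiver.
PATTERN = [random.choice((-1, 1)) for _ in range(20_000)]

def transmit(bit, amplitude=0.05, noise_std=1.0):
    """Send the pattern (bit=1) or its negation (bit=0), buried in
    noise twenty times larger than the signal itself."""
    sign = 1 if bit else -1
    return [amplitude * sign * p + random.gauss(0, noise_std) for p in PATTERN]

def receive(samples):
    """Matched filter: correlate against the known pattern;
    the sign of the correlation recovers the bit."""
    corr = sum(s * p for s, p in zip(samples, PATTERN))
    return 1 if corr > 0 else 0

bits = [1, 0, 1, 1, 0, 0, 1, 0]
decoded = [receive(transmit(b)) for b in bits]
print(decoded == bits)  # True: the bits survive despite the noise
```

Per sample the pattern here is twenty times weaker than the noise, yet correlating over 20,000 samples pulls it out cleanly; the circuit described in the text does the equivalent in analog hardware.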
We soon had a prototype working. A transmitting unit, about the
size of a deck of cards, had a pair of electrodes on opposite sides
that created an electric field. Placed near a body, for example
in a shoe, the field changed the body's average voltage by a tiny
amount. A similar receiving unit could measure this voltage change
at another part of the body. By varying the tiny voltage, a message
could be exchanged. This means that a computer in a shoe could send
data to a display in a wristwatch without needing wires. There are
Wide Area Networks (WANs) to link up cities, and Local Area Networks
(LANs) to link up buildings; we had created a Personal Area Network
(PAN) to connect parts of a body.
Unlike a conventional wireless radio, PAN keeps personal data in
the body, where it belongs. The radio spectrum is one of the scarcest
of modern resources, as more and more applications vie for the available
channels. One way to use the spectrum efficiently is to divide it
up geographically, such as the roughly one-kilometer cells used
for mobile telephones. PAN shrinks the cells down to one body. A
roomful of people could be using it without interfering with each
other, or worrying about someone surreptitiously eavesdropping—unless
they approach close enough to pick the bits out of their pockets.
Within one body, PAN provides a means to get rid of the wires in
a wearable computer. Between bodies, it can do much more. Two people
using it could exchange an electronic business card just by shaking
hands. A hand on a doorknob could send the data to unlock a door,
or the act of picking up an airport telephone could download the
day's messages. The contents of a package could be read automatically
as it is picked up.
Right now, gestures are either physical (exchanging a printed card)
or logical (exchanging e-mail). With PAN, physical gestures can
take on logical meaning. We've spent millennia as a species learning
how to manage our physical interactions with people and things.
We have a comfortable protocol for when and where to shake hands.
Rather than decouple such familiar gestures from the digital world,
PAN helps create a reality that merges the logical and physical
components. The World Wide Web helps you to think globally but not
to act locally; a Person Wide Web is needed for that.
The biggest surprise about PAN is that it has been such a surprise.
There's very little precedent for this idea, which is so obvious
in retrospect. In the postlude to 3001: The Final Odyssey, Arthur
C. Clarke complains that he thought he had invented the idea of
exchanging data through a handshake as a capability for the next
millennium, and was mortified to later find out that we were already
doing it.
Dick Tracy might have had his wristwatch TV, but the real implications
of wearing computers have been too unexpected to be anticipated
by fiction. Once a computer becomes continuously available it literally
becomes part of the fabric of our lives rather than just an appliance
for work or play. It is a very different conception of computation.
The closest fictional predecessor to PAN was Maxwell Smart's shoe
phone, which he had to take off (invariably at the most inopportune
times) if he wanted to call the Chief. PAN now lets Max keep his
shoes on, and shoes are in fact an ideal platform for computing.
You almost always have your shoes with you; you don't need to remember
to carry them along when you leave the house in the morning. There's
plenty of space available in a shoe for circuitry—no companies
are yet fighting over access to your shoes, but they will. When
you walk, watts of power pass through your feet and get dissipated
in abrading your shoes and impacting the ground; it's possible to
put materials in a shoe that produce a voltage when they are flexed,
which can be used as a power source. With realistic generation efficiencies
of a few percent there's plenty of energy available for a low-power
computer. Instead of lugging around a laptop's power supply and
adapters, and feeding it batteries, you just need to go for a walk
every now and then, and feed yourself. Finally, PAN lets a shoe
computer energize accessories around your body, communicating with
a wristwatch or eyeglasses. Running down this list of specifications,
it's apparent that a foot bottom makes much more sense than a lap
top. Shoe computers are no joke.
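The energy budget is easy to check with back-of-envelope arithmetic. The wattage and efficiency figures below are illustrative assumptions consistent with the text's "watts" and "a few percent," not measured values.

```python
# Back-of-envelope: power available from walking.
# All numbers are illustrative assumptions, not measurements.
walking_power_w = 5.0   # rough mechanical power dissipated through the feet
conversion_eff = 0.03   # "a few percent" conversion efficiency

harvested_mw = walking_power_w * conversion_eff * 1000
print(f"{harvested_mw:.0f} mW harvested")  # 150 mW

# A low-power embedded processor can run on a few milliwatts,
# so the harvest comfortably covers it.
low_power_cpu_mw = 5.0
print(harvested_mw > low_power_cpu_mw)  # True
```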
Wearable computers are really just a natural consequence of the
personalization of computation. The original impersonal computers
were mainframes, locked away in remote rooms. Then came minicomputers,
which could be shared by one workgroup. From there came the PC,
a computer used by a single person. This allowed much more personal
expression, as long as you equate computing with sitting in front
of a beige box with a cathode ray tube, keyboard, and mouse. Wearables
finally let a computer come to its user rather than vice versa.
Joel Birnbaum, the thoughtful senior vice president for research
and development at Hewlett Packard, notes that in the computer industry
"peripherals are central." Most of the attention in the business,
and press, has been focused on the components inside a computer
(processor speed, memory size, and so on). But as PCs increasingly
become commodities like toasters, which are more-or-less interchangeable
and sold based on price, the emphasis is shifting to the external
peripherals that people use to interact with the computer. From
there it's a small step to focus on your accessories instead of
the computer's: the eyeglasses that can serve as displays for your
eyes, the earrings that can whisper a message into your ears. Right
now the demand for such wearable components is being imperfectly
met by home-brew parts and small startup firms. Since high-tech
industries abhor a product vacuum, this is sure to be filled by
computer companies seeking to grow beyond their current saturated
markets.
If computers have become like toasters, then software has become
like toast. Interacting with a computer is all too often about as
satisfying as interacting with a piece of toast, filling without
being particularly stimulating, frequently overdone, and with a
nasty habit of leaving messy crumbs of information everywhere. Just
as wearable computers replace the homogeneity of uniform hardware
with the ultimate in personalization, their intimate connection
with their wearer helps do the same for software.
Computers currently have no idea whether you're happy or sad, tense
or relaxed, bored or excited. These affective states have enormous
implications for your relationship with the world; if you're in
a good mood I'll tell you different things than if you're in a bad
one. A person who cannot perceive affect is emotionally handicapped
and cannot function normally in society, and the same is true of
a computer.
Because they have no idea what state you're in, machines are uniformly
cheerful, or bland, which is just as annoying in a computer
as it is in a person. Affect has very immediate implications. Studies
have shown that if you're stressed, information delivered quickly
helps you relax; if you're relaxed, information delivered quickly
makes you stressed. Something as simple as the pace at which a computer
delivers information should depend on your mood.
Steve Mann's thesis advisor Roz Picard has been finding that the
kind of sensor data that a wearable can collect about its wearer
(temperature, perspiration, muscle tension, and so forth), coupled
with the techniques that have been developed in the last decade
to let computers recognize patterns such as faces in images, enables
a wearable computer to do a surprisingly good job of perceiving
affect by analyzing the data that it records. But this shouldn't
be very surprising: you routinely do the same with other people
just by looking and listening.
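The kind of pattern recognition involved can be illustrated with a toy nearest-centroid classifier over invented sensor readings. This is a sketch of the general idea only; the features, values, and method here are assumptions for illustration, not the Media Lab's actual system.

```python
# Toy sketch: classify affect from wearable sensor readings with a
# nearest-centroid rule. Training values are invented for illustration.
import math

# (heart_rate_bpm, skin_conductance_uS) examples, labeled by state
training = {
    "relaxed":  [(62, 2.1), (58, 1.8), (65, 2.4)],
    "stressed": [(95, 7.9), (102, 8.6), (88, 7.2)],
}

def centroid(points):
    """Mean of a set of 2-D feature vectors."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

centroids = {label: centroid(pts) for label, pts in training.items()}

def classify(sample):
    """Label a new reading by its nearest class centroid."""
    return min(centroids,
               key=lambda lbl: math.dist(sample, centroids[lbl]))

print(classify((60, 2.0)))   # relaxed
print(classify((98, 8.0)))   # stressed
```

A real system would use many more features and a trained statistical model, but the principle is the same: sensor data clusters by affective state, and new readings can be matched against those clusters.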
As unsettling as it might be to contemplate a computer that is
aware of your mood, the alternative is to be forced to tolerate one
that almost always responds to it inappropriately. Byron
Reeves and Clifford Nass at Stanford have shown that most of what
we know about the psychology of human interaction carries over to
how people interact with computers; whatever our intentions, we
already relate to computers as we do to people. The Stanford group
has observed that most of the behavior that you can read about in
psychology textbooks also occurs between people and computers: aggressive
people prefer aggressive instructions from the machine, while more
withdrawn people prefer instructions that are phrased more gently;
people are more critical in evaluating a computer if they fill out
a survey on a second computer than if they fill it out on the one
being evaluated; and so forth. Because we subconsciously expect
computers to be able to understand us, we must give them the means
to do so.
Even more disturbing than the prospect of a computer that knows
how you feel is one that can share this most personal of information
with other people, but here, too, the benefits can outweigh the
risks. My colleague Mike Hawley worked with Harry Winston Jewelers
to create what must be the world's most expensive computer display,
a $500,000 diamond brooch with a heart-rate monitor that lets it
pulse red in synchrony with its wearer's heartbeat. While you may
not want to wear your heart on your sleeve, how about letting your
jewelry tell a romantic interest when your heart skips a beat?
This fanciful brooch was part of the world's first wearable fashion
show, held at the Media Lab in the fall of 1997. I began to realize
that wearables were turning from geek-chic to high fashion when
two of our grad students, Rehmi Post and Maggie Orth, figured out
how to use conducting threads to embroider circuitry on fabric.
Using filaments of Kevlar and stainless steel, they were able to
program a computer-controlled sewing machine to stitch traces that
could carry data and power, as well as sense the press of a finger.
They call this e-broidery. It allowed them to get rid of
the last of the apparent wires in a wearable, marrying the form
and function into the garment. One of the first applications was
a Levi's denim jacket that doubles as a musical instrument: a keypad
sewn onto the front of the jacket lets the wearer call up rhythmic
backing and play accompanying melodic fills. I suspect that Josh
Strickon, the student who programmed the jacket's musical interface,
is the first person from MIT to publish research in Vogue, appearing
as a model to display the jacket.
Freed from the need to strap on external electronics, it was natural
to start wondering what wearables could look like. This was a question
for designers, not technologists, and so we turned to the fashion
schools of the world. Guided by Thad's advisor Sandy Pentland, students
from the Bunka Fashion College in Tokyo, Creapole in Paris, Domus
in Milan, and the Parsons School of Design in New York submitted
designs; the best were invited to come to the Media Lab to build
them. Among the lively creations were a dress that played music based
on the proximity of nearby bodies, dancing shoes full of sensors
that controlled the sights and sounds of a performance, and a jacket
made with an electronic ink that could change back and forth between
a solid color and pinstripes under software (softwear?) control.
We weren't prepared to handle the enormous press interest in the
event. Because clothing is so very personal, the reporting crossed
over from the technical journalists usually seen around MIT to mainstream
media. The many ensuing interviews conveyed a recurring sense of
wonder, and fear, at the arrival of wearables. The reality of the
show made clear that ordinary people may be wearing computers in
the not-too-distant future, raising endless questions about the
implications for how we're all going to live with them.
Privacy was a recurring concern. If it's already hard to sneeze
without someone noticing it and selling your name in a database
of cold-sufferers, what's going to happen if you carry a computer
everywhere? Here I think that wearables are part of the solution,
not the problem. Right now you're digitally unprotected; most of
your communications and transactions are available to anyone who
legally or illegally has the right equipment to detect them. From
pulling charge slips out of a trash can to scanning cellular phone
frequencies, it doesn't take much ambition to peer into people's
private lives. Every time you call a toll-free number, or use a
credit card, you're releasing quite a few bits of personal information
to relative strangers.
There are cryptographic protocols that can ensure that you have
complete control over who gets access to what information and when;
the problem is using them. If you have to boot up your laptop each
time you want to make a secure phone call, you're unlikely to use
it. By wearing a computer that is routinely accessible, you can
erect a kind of digital shield to help protect you by sending and
receiving only what you intend. This won't be impermeable; privacy
is not an absolute good, but a complex and personal tradeoff. If
you go into a store and make your electronic identity accessible,
the store can give you better service because they learn about your
preferences, and a discount because you've given them demographic
information, but you've lost some privacy. If you turn your identity
off, you can cryptographically perform a secure transaction that
lets you make a purchase without the store learning anything about
you, but it comes at a higher price and with worse service.
Another reason to consider sharing private information is for security.
A cop walking the beat can see and hear only part of one street;
wearables let many people see many places at the same time. While
this certainly does raise questions about the spread of surveillance
(students in the Media Lab made Steve Mann put a red light on his
head to indicate when his eyes were on-line), a mugging is also
quite an invasion of privacy. The balance between openness and privacy
can't be settled in advance for all people in all situations; the
technology needs to enable a continuous balance that can be adjusted
as needed. On a lonely street late at night I want as many friends
as possible to know what's happening around me; in a complex business
negotiation I might want to make sure that some people elsewhere
know everything and some know nothing; when I'm at home I may choose
to let no one else have any access to me.
Along with loss of privacy, another fear is loss of autonomy. What
happens if we come to rely on these machines? Thad, Steve, and their
peers have confounded conventional wisdom about how long it's possible
to be immersed in a computer environment. Early studies had suggested
that after a few hours people start becoming disoriented and suffering
ill effects; Thad and Steve have been wearing their systems all
day long for years. What the studies missed was the extent to which
their wearable computers have become an essential part of how these
people function. Instead of performing an assigned task in a laboratory
setting, they depend on their computers to see, and hear, and think.
They are incomplete without them.
That's a less radical step than it might appear to be, because
it's one we've all made. We're surrounded by technologies that we
depend on to function in a modern society. Without the power for
heat and light, the fabrics for protection from the environment,
the printing for newspapers and books, the fuel for transportation,
we could still survive, but it would be a much more difficult existence.
It can be satisfying to throw off all of those things for a time
and go camping in the woods, but even that is done by most people
by relying on a great deal of portable technology for shelter and
convenience. We are happy to consult a library to provide knowledge
that we can't obtain by introspection; Steve and Thad go just a
bit further to take advantage of a vast library of information that
is continuously available to them.
Glasses are a familiar technology that we rely on to correct vision.
Some people cannot function without the lenses that let them see
near, or far, and some have vision problems that cannot be corrected
by ordinary glasses. A wearable computer is equally useful for both,
blurring the distinction between them. Whether or not you can read
large or small print, you can't see in the dark, or see around the
world, or look backward and forward at the same time. Wearables
can certainly help with disabilities, but the gap between what we
consider abled and disabled is quite small compared to what anyone
can do by enhancing their senses with active
processing. Many people are unwilling to give up these kinds of
capabilities once they experience them. There's nothing new about
that; the same is true of eyeglasses wearers.
A final concern with wearables is loss of community. A quick glance
around an office where people who used to eat lunch together now
sit alone surfing the Web makes clear that the expansion in global
communication has come at the cost of local communication. The members
of a wired family are already more likely than not to be found at
home in separate rooms, typing away at their computers, even if
they're just answering each other's e-mail. What's going to happen
to interpersonal relationships if the computer becomes continuously
accessible?
The responsibility for this loss of personal contact lies more
with the computer on the desktop than with the person sitting in
front of it. Finding out what's happening around the world is interesting,
as is finding out what's happening around you. Right now the technology
forces an unacceptable decision between doing one or the other.
As wearable computing wraps the Net around people this is going
to change. I'm confident that for quite some time people will remain
more engaging than computers, but finding the right person at the
right time might require looking next door or on the next continent.
Computers currently facilitate interactions that would not otherwise
have occurred, with tools such as groupware that permit multiple
people to work on a document, or videoconferencing that lets meetings
happen among people at many sites. These functions require special
software, hardware, and even rooms, and are consequently used primarily
for narrow business purposes. Once wearables make it possible to
see, or hear, anyone at anytime, then interacting with people physically
far away can become as routine as talking to someone in front of
you. A person having a polite conversation might equally well be
talking to their neighbor, to a friend around the world, or even
to their shoes; the whole notion of neighborhood becomes a logical
rather than a physical concept.
"Networking" then becomes something much more compelling than passing
around business cards at a cocktail party. If a group of people
share their resources, they can accomplish many things that they
could not do individually. For example, the performance of a portable
radio is limited by the range of frequencies that are available
(decided by the telecommunications regulatory agencies), the amount
of noise in the receiver (which is ultimately determined by the
temperature of the system), and the strength of the signal (a function
of the size of the antenna). There isn't much that one person can
do about these things, short of wearing cryogenic underwear or an
inverted umbrella on his or her head. But radio astronomers have
long understood how to combine the signals from multiple antennas
to make an observation that could not be done by a single one. Processing
the signals from many small antennas to effectively create one large
one requires knowing precisely where each antenna is. This might
not appear to work with people, who have an annoying habit of moving
around, but we're learning how to design radios that can automatically
synchronize and share their signals with other radios in their vicinity.
If you want to exchange data with a wireless network from a wearable
computer, and a few nearby people agree to let you temporarily run
your signals through their radios, you can send and receive data
twice as fast as you could alone. Since network traffic usually
happens in bursts, when you're not using your radio they in turn
can take advantage of it. The same thing can be done with computing,
temporarily borrowing unused processor time from the people around
you to speed up computationally intensive tasks.
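The radio astronomers' trick can be sketched by averaging the aligned outputs of several receivers: the shared signal adds coherently while each receiver's independent noise averages away, so the residual noise falls roughly as 1/sqrt(N). The signal and noise levels below are illustrative.

```python
import random
import statistics

random.seed(1)

signal = [1.0] * 10_000   # the shared signal every receiver hears

def receiver():
    """One independent receiver: the signal plus its own noise."""
    return [s + random.gauss(0, 1.0) for s in signal]

def combine(receivers):
    """Average the aligned samples from several receivers."""
    return [sum(samples) / len(samples) for samples in zip(*receivers)]

single = receiver()
combined = combine([receiver() for _ in range(16)])

# Residual noise (deviation from the true signal) shrinks ~1/sqrt(N):
noise_single = statistics.pstdev(s - 1.0 for s in single)
noise_combined = statistics.pstdev(s - 1.0 for s in combined)
ratio = noise_single / noise_combined
print(round(ratio, 1))  # roughly 4 for N = 16
```

The hard part in practice, as the text notes, is the alignment itself: combining only works if you know precisely where each antenna is, which is why radios that synchronize automatically are the enabling step.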
Al Gore has observed that the U.S. Constitution can be viewed as
a sophisticated program written for an enormous parallel computer,
the populace. By following its instructions, many people together
can do what they could not do separately. When computation becomes
continuously available, the analogy becomes literal. Polling, voting,
and legislating can become part of our ongoing collective communications
rather than isolated events. Instead of using a few monitors to
oversee a controversial election, an entire nation could bear witness
to itself if enough people were wired with wearables.
It's not too far from there to see wearable computers as a new
step in our evolution as a species. The organization of life has
been defined by communications. It was a big advance for molecules
to work together to form cells, for cells to work together to form
animals, for animals to work together to form families, and for
families to work together to form communities. Each of these steps,
clearly part of the evolution of life, conferred important benefits
that were of value to the species. Moving computing into clothing
opens a new era in how we interact with each other, the defining
characteristic of what it means to be human.
WHEN THINGS START TO THINK by Neil Gershenfeld. ©1998 by
Neil A. Gershenfeld. Reprinted by arrangement with Henry Holt and
Company, LLC.