Nanoethics and Technological Revolutions: A Precis.
If we believe that nanotechnology will eventually amount to a technological revolution, and if we are going to attempt nanoethics, we should consider some of the earlier technological revolutions that humanity has undergone and how our moral principles and technology impact assessment exercises would have fared.
Originally published in Nanotechnology Perceptions: A Review of Ultraprecision Engineering and Nanotechnology, Volume 2, No. 2, May 8, 2006. Reprinted with permission on KurzweilAI.net, May 8, 2006.
1. Some eleven thousand years ago, in the neighborhood of Mesopotamia,
some of our ancestors took up agriculture, thereby beginning the
end of the hunter-gatherer existence that our species had lived
ever since it first evolved. Population exploded even as nutritional
status and quality of life declined, at least initially. Eventually,
greater population densities led to greatly accelerated cultural
and technological development.
In 1448, Johannes Gutenberg invented the movable type printing process in Europe, enabling copies of the Bible to be mass-produced. Gutenberg’s
invention became a major factor fueling the Renaissance, the Reformation,
and the scientific revolution, and helped give rise to mass literacy.
A few hundred years later, Mein Kampf was mass-produced using
an improved version of the same technology.
Work in atomic physics and quantum mechanics in the first three
decades of the 20th century laid the foundation for the
subsequent Manhattan Project during World War II, which raced to
beat Hitler to the nuclear bomb.
In 1957, Soviet scientists launched Sputnik 1. In the following
year, the US created the Defense Advanced Research Projects Agency (DARPA) to ensure that the US would keep ahead of its enemies in military technology.
DARPA began developing a communication system that could survive
nuclear bombardment by the USSR. The result, ARPANET, later became
the Internet—the long-term consequences of which remain to
be seen.
2. Suppose you are an individual involved in some way in what may
become a technological revolution. You might be an inventor, a funder
of research, a user of a new technology, a regulator, a policy-maker,
an opinion leader, or a voting citizen. Suppose you are concerned
with the ethical issues that arise from your potential involvement.
You want to act responsibly and with moral integrity. What does
morality require of you in such a situation? What does it permit but not require? What questions do you need to find answers
to in order to determine what you ought to do?
If you consult the literature on applied ethics, you will not find
much advice that applies directly to this situation. Ethicists have
written at length about war, the environment, our duties towards
the developing world; about doctor-patient relationships, euthanasia,
and abortion; about the fairness of social redistribution, race
and gender relations, civil rights, and many other things. Arguably,
nothing humans do has such profound and wide-ranging consequences
as technological revolutions. Technological revolutions can change
the human condition and affect the lives of billions. Their consequences
can be felt for hundreds if not thousands of years. Yet, on this
topic, moral philosophers have had precious little to say.
3. In recent years, there have been increasing efforts to evaluate
the ethical, legal, and social implications (“ELSI”) of important
new technologies ahead of time. Much attention has been focused
on ethical issues related to the Human Genome Project. Now there
is a push to look at the ethics of advances in information technology
(information and computer ethics), brain science (neuroethics),
and nanotechnology (nanoethics).
Will “ELSI” research produce any important findings? Will it have
any significant effects on public policy, regulation, research priorities,
or social attitudes? If so, will these effects be for the better
or for the worse? It is too early to tell.
But if we believe that nanotechnology will eventually amount to
a technological revolution, and if we are going to attempt nanoethics,
then we might do well to consider some of the earlier technological
revolutions that humanity has undergone. Perhaps there are hidden
features of our current situation with regard to nanotechnology
that would become more easily visible if we considered how our moral
principles and technology impact assessment exercises would have
fared if they had been applied in equivalent circumstances in any
of the preceding technological revolutions.
If such a comparison were made, we might (for example) become more
modest about our ability to predict or anticipate the long-term
consequences of what we were about to do. We might become sensitized
to certain kinds of impacts that we might otherwise overlook—such
as impacts on culture, geopolitical strategy and balance of power,
people’s preferences, and on the size and composition of the human
population. Perhaps most importantly, we might be led to pay closer
attention to what impacts there might be in terms of further technological
developments that the initial revolution would enable. We might
also become more sophisticated, and perhaps more humble, in our
thinking about how individuals or groups might exert predictable
positive influence on the way things develop. Finally, we might
be led to focus more on systems level aspects, such as institutions
and technologies for aggregating and processing information, for making decisions regarding, e.g., regulations and funding priorities,
and for implementing these decisions.
Mind·X Discussion About This Article:
Re: Nanoethics and Technological Revolutions: A Precis.
Agriculture:
This new technology will mean the end of our nomadic lifestyle. Mankind will become "settled" - living in one place, and claiming plots of land as "property".
Those who prefer the nomadic lifestyle will increasingly find themselves marginalized and their way of life threatened. Fences and more forceful defense of property claims will block their wandering; lands formerly their territory for hunting, grazing and gathering will be claimed as someone else's "property", upon which they will no longer be permitted to set foot. We can expect that this will lead to conflicts in the transition period, and ultimately to the disappearance of our cherished and pleasant way of life.
People will put a large amount of effort into developing "their" land - making them reluctant to leave it and move on, while making that land more valuable to others. We foresee a whole new class of predatory humans arising, extorting land-holders to pay for "protection" (from others like themselves) or be forced to move, leaving their valuable developed land to be claimed by the "protectors".
Beyond the simple economic unfairness of that situation, it seems likely that the wealth and control gained by those who are willing to prey on their fellow humans will tend to place the worst sort of humans in a position of power over all - in effect, allowing them to set society's rules to favor themselves, rather than relying on time-tested codes of honor and justice passed down and enforced by more organic tribal and familial power structures.
Individually, the majority of people will eventually change from independent hunters into little better than slaves - tied to their land, turning the majority of their production over to armed thugs who rule them and live in luxury. Their spirits broken, some may come to believe that they are innately inferior to the thugs - that the thugs have a right to their position due to innate superiority.
On the bright side, it seems unlikely that people will stand for this insane condition for very long. People will surely revolt and establish a better system of social organization. We predict that it may take several generations - perhaps as long as a century - for the people of a settled area to realize their situation and find a way out of this trap.
Re: Nanoethics and Technological Revolutions: A Precis.
Bostrom suggests that moral philosophy has little to say about the ethics of technological advancement. Comparatively – that is, compared to the amount it opines about distributive justice, say – I think Bostrom is correct, though I’d like to make a few points.
In some ways ethicists have evinced a general interest in the subject of technological advancement. It typically shows up in discussion of Intellectual Property Rights – for a key utilitarian argument for the ethical propriety of such institutions as Patent Offices is that technological advancements are socially desirable, but because such advances typically have the salient features of public goods (allowing free-riding by non-inventors), they must be actively sponsored by the state. It is here, therefore, where some not insignificant work on the in/desirability of technological advances has been done. It is also interesting that when potentially undesirable technologies are in the offing, NGOs will attempt to block advance of the technology through manipulation or regulation of the patent system. It’s a fascinating question whether such a route could halt, or at least alter the trajectory of, the specific technological forays that are under discussion here. Market powers are strong, and artificial regulation over ideational entities is what creates the modern market in such products. But it might not always be so – exponentially increasing rates of advance are liable to leave legislation in their wake.
More specifically, Bostrom notes ethical focus on the Human Genome Project. It’s probably fair to say that many of the issues regarding genomics in general – quite apart from this specific endeavour, with its particular issues – have been dissected by moral philosophers in the past decade. I think it’s interesting to consider why moral philosophy might be so attuned to bioethics, and yet, as Bostrom alludes, so silent about other types of technological revolutions. As examples of areas of such echoing silence, I would hazard that both nanotechnology and genuine AI are reasonable candidates.
What follows is very speculative, but if I may be allowed a little conjecture: I think the reasons for this lack of interest are very different in each case. For the development of genuine AI, I think moral philosophers might be reticent to write serious papers on an issue so well-trodden by science-fiction writers. Considering the moral implications of despotic super-computers sounds a little populist, regardless of whether or not such a fate is just as likely as global warming, say. Professional journal editors might well raise their eyebrows derisively. Obversely, I think because nanotechnology has hitherto largely avoided the public’s gaze, ethicists have not been aware of it, or at least not of the depth and power of its promises and threats. There is also an issue of expertise. Because there is, of late, a very close and healthy relationship between philosophy and biology, many of the issues in bioethics were immediately familiar to philosophers. I suspect this is not so much the case with other forms of technology. Dennett recently mused about the lack of ‘Philosophers of Engineering’, and I think his point is well made.
So we’re not thinking about it. Should we be thinking about it? Absolutely. If it weren’t worth thinking before we act, we’d never have become intelligent in the first place. ;-) Perhaps our power to invent and create will outstrip our power to predict and ameliorate. It wouldn’t be the first time. But we’ll never know unless we try.
Re: Nanoethics and Technological Revolutions: A Precis.
First, we are gene-driven. That is the technology which directs us. Assuming Dawkins is correct, that the human is a gene's way of making another gene, then human societies are driven by the genetic replicative algorithm, which seeks to maintain linear continuity and growth while avoiding the necessity of change.
Any social drive exhibited by humans will, I think, reflect that basic tendency, often referred to as Narcissism, the linear expansion of ourselves into our environment.
Nanoethics reflect that same expression, a reflective part of our own nature, in trying to find ways to limit nanobots when we have little success in finding ways to limit our own expansive urges.
Humans have sought to overcome their environment via organization once they became aware of themselves as separate from their environment. An ancient account (whether true or false is irrelevant) is the Tower of Babel. If we picture Yahweh as a scientist trying to curb the natural tendencies of the genetic replicative algorithm, we also see that the people of that time would have organized to such a degree that entropy would be accelerated and cause their own destruction, or a virus could wipe the entire species out.
Having one language and unlimited imagination (as our present computers are beginning to possess), Yahweh altered the software systems by which they communicated. The hardware, the genetic replicative algorithm, was necessary for survival, but the process by which each human communicated would restrict their ability to organize.
The reference to ethics in the article deals with the same type of problem: how to limit growth and maintain adaptability?
If our genetic hardware promoted the ideal of growth and expansion, why not introduce a process that ran counter to that principle?
Notice some interesting parallels to our own dilemma. A tiny nation was placed in isolation so that they could not develop an ongoing culture dependent on their environment (Israel's slavery in Egypt). When freed, they did not inhabit a fixed environment conducive to their culture, since they actually had no culture.
Instead, they wandered in an environment totally hostile to culture and learned to internalize a law that regulated their social behavior without regard to environment.
This in itself is a reversal of the normal genetic process of development. Systems adapt to an environment, grow to levels of social awareness and expand their control, and in humans' case, build an empire.
Israel's history was a reversal. They lived in isolation and learned to obey a law that would apply wherever they found themselves, independent of environment or the genetic replicative algorithm.
Israel was not only a "separate people"; the "sin" around them was also referred to as "leavening".
What a marvelous parallel to the idea of entropy. Systems grow and expand until they collapse of their own weight. In entropy, systems grow, creating chaos in related areas, until they can no longer expand due to the chaos created by their own efforts at growth; in other words, leavening.
The solution presented was a law (programming) that ran counter to the initial drive of the genetic replicative algorithm, so that it would have to be kept by conscious decision and not by innate forces. Israel consciously agreed to keep a law that would force them to constantly re-interpret and re-integrate their society, applying each interpretation within the framework of the genetic replicative algorithm.
In so doing, Israel became an incredibly diverse culture, with many differing forms of religion that increased their adaptive abilities.
Absorbed by the "leavened" empires that developed according to typical genetic algorithms, their diversity caused a systems crash, with a re-creation of diversity at different levels, absorbing all the facets of their environment at an individual level.
By perpetuating the law so that every "jot and tittle" was maintained, they introduced a "virus" or "meme" that acted to disrupt the natural inclinations of the genetic algorithm.
This alternate programming running counter to genetic implications is well described by the apostle Paul:
"For that which I do, I allow not: for what I would, that do I not, but what I hate, that I do.
"Now it is no more I that do it, but sin that dwelleth in me....for to will is present with me; but how to perform that which is good I find not....I find then a law, that, when I would do good, evil is present within me...but I see another law, warring against the law of my mind...."
What is described is the application of a cultural meme attempting to override the genetic replicative algorithm, which may work for a time, but groups are easily susceptible to the urge for replication, i.e., the proselytizing zeal.
The genetic replicative drive simply overtakes the meme that sets us apart as individuals. The results are Christianity, which rose to power by the sword, and Islam, which rose by the same method.
Israel, with its focus on that unchanging law and the diversity of interpretations arising within its own culture, remains hated by those cultures that succumb to the genetic replicative algorithm.
That which we refer to as "spiritual" is merely that which remains undefinable, which forces us to constantly question our own assumptions, and makes us see ourselves as individuals.
There is no evidence of God, nor of any system that would provide peace and freedom by its cultural application. If there were, we could program it into robots and they could become the true "sons of God".
The biblical evidence for my statements is Romans 8:7, Ephesians 2:8-10, and 1 Corinthians 1:27-29.
As for human biological ethics, we have only one correct choice: choose no human collective ideology or religion that proposes a panacea. Jesus himself offered this same advice in Matthew 24:23.
But SAI would not necessarily have genetic replicative algorithms to deal with. It can unite as one mind (and be destroyed by a smart virus maker) or disengage into separate "brains".
Creation of warring drives in SAI would be quite destructive.