Embrace, Don't Relinquish, the Future
by Max More

Extropy Institute head Max More finds Bill Joy's Wired essay uninformed, its proposal unworkable, and even unethical because, he believes, relinquishment would slow progress in medicine and other vital areas.


Originally published May 7, 2000 at Extropy.org. Published on KurzweilAI.net February 26, 2001. Read Ray Kurzweil's response to Bill Joy here.

When a scientist publishes a paper, her peers expect to see evidence that she has read prior work relevant to her topic. They expect the scientist to have studied the field thoroughly before contributing a paper, especially in a controversial field. Bill Joy, as Chief Scientist at Sun Microsystems, should understand this. In reading his essay "Why the Future Doesn't Need Us," I am struck by his public show of ignorance of prior thinking about future technologies, his unrealistic thoughts about "relinquishment," and his slighting of those who have considered these issues deeply as lacking in common sense. At the same time, I appreciate his courage in publicly laying out his fears.

As a philosopher, I find his comments about losing our humanity to be frustratingly offhand. I will address this issue in a separate response. Here I wish to focus on Joy's call for relinquishment of the technologies of genetic engineering, molecular nanotechnology, and robotics (and all associated fields). As someone who has thought about these issues for many years, I wish to challenge Joy's relinquishment policy on two grounds: First, it's unworkable. Second, it's ethically appalling. (A third reason--that in practice it would result in authoritarian control while still failing to achieve its purpose--I will leave for a separate response.)

According to Joy's extensive essay, his apocalyptic thinking was set off by hearing a conversation between Ray Kurzweil and Hans Moravec. Apart from attending a Foresight Institute conference back in 1989, Joy shows no sign of having read any of the writings or listened to any of the talks of those who have devoted themselves to the issues he raises. Despite the brilliant clarity of Kurzweil's writing, Joy still isn't clear whether we are supposed to "become robots or fuse with robots or something like that." He gives no credit to the years of work by the Foresight Institute, not only in promoting the idea of nanotechnology but in planning for its potential dangers by considering both technical and policy-based approaches. Certainly we here at Extropy Institute--a multi-disciplinary think tank and educational organization devoted to the human future--never heard from Joy before he released his missive to the masses.

Someone in Joy's influential position has a responsibility to delve into prior thinking on these issues before scaring a public already unreasonably afraid of some advanced technologies, including genetic engineering. I find it incredible that Joy cites Carl Sagan, one of my intellectual inspirations, in the course of criticizing us leading advocates of 21st century technologies as lacking in common sense. Those who advocate obviously unrealistic policies such as global relinquishment should not make accusations about common sense. This would be less galling if Joy had actually bothered to find out what we advocates for the future have had to say over the last twelve years. (In 1988, a year before the Foresight conference that Joy attended, we founded Extropy magazine, which evolved into Extropy Institute--a transhumanist organization devoted to "Incubating Better Futures".) Joy also accuses us of lacking humility, while in an interview he draws a (misleading) parallel between his essay and Einstein's 1939 letter to President Roosevelt.

While acknowledging the tremendously beneficial possibilities of emerging technologies, Bill Joy judges them too dangerous for us to handle. The only acceptable course in his view is relinquishment. He wants everyone in the world "to limit development of the technologies that are too dangerous, by limiting our pursuit of certain kinds of knowledge". Joy joins the centuries-old procession of theocrats, autocrats, and technocrats in attacking our pursuit of unlimited knowledge. He mentions the myth of Pandora's box. He might have thrown in the anti-humanistic and anti-transhumanistic myths of the Garden of Eden, the Tower of Babel, and the demise of Icarus. Moving from myth to reality, he should have been explicit in describing the necessary means deployed throughout history: burning books, proscribing the reading of dangerous ideas, state control of science.

PART 1: RELINQUISHMENT CANNOT WORK

The first of my objections to relinquishment has already been well made by Ray Kurzweil. Joy's fantasies about relinquishment ride on the assumption that "we could agree, as a species" to hold back from developing the "GNR" technologies (genetic engineering, nanotechnology, and robotics) and presumably any enabling or related technologies. Perhaps Joy's experience in having a staff of engineers to do his bidding has blinded him to the incredibly obvious fact that the six billion humans on this planet do not and will not agree to relinquish technologies that offer massive benefits as well as defensive and offensive military capabilities.

We have failed to prevent the spread of nuclear weapons technology, despite its terrifying nature and the relative ease of detecting it. How are we to prevent all companies, all governments, all hidden groups in the world from working on these technologies? Bill, all six billion of these people--many desperately in need of the material and medical benefits offered by these technologies--will not read the Dalai Lama and go along with your master plan. Relinquishment is a utopian fantasy worthy of the most blinkered hippies of the '60s. Adding coercive enforcement to the mix moves the idea from utopian fantasy to frightening dystopia.

Ray Kurzweil points to a fine-grained relinquishment that can at least reduce the dangers of runaway technologies among those willing to play this game. Nanotechnology pioneer Eric Drexler has long recommended designing nanomachines that will quickly cease functioning if not fed some essential and naturally uncommon ingredient. Ralph Merkle's broadcast architecture offers another way to develop nanomachines under control. These and other proposals can reduce the hazards of accidental nanotechnological disasters.
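To make the control principle behind these proposals concrete, here is a toy sketch in Python (my illustration only, not Drexler's or Merkle's actual designs; every name in it is hypothetical). Under a broadcast architecture, a replicator carries no onboard blueprint and acts only on instructions received from an external controller, so cutting the broadcast halts it:

# Toy illustration of the broadcast-architecture control principle:
# the replicator stores no blueprint of its own and can act only on
# instructions broadcast by an external controller. Cut the broadcast
# and it halts. (A hypothetical sketch, not a real nanomachine design.)

class BroadcastController:
    """External source of build instructions; holds the only copy of the plan."""
    def __init__(self, plan, active=True):
        self.plan = plan
        self.active = active

    def broadcast(self):
        # The instruction stream exists only while the controller is switched on.
        return iter(self.plan) if self.active else None

class Replicator:
    """Carries no onboard blueprint; it idles unless instructions arrive."""
    def step(self, controller):
        instructions = controller.broadcast()
        if instructions is None:
            return "halted: no broadcast received"
        return "executed: " + ", ".join(instructions)

controller = BroadcastController(plan=["assemble frame", "attach motor"])
bot = Replicator()
print(bot.step(controller))   # executed: assemble frame, attach motor

controller.active = False     # operators cut the broadcast ...
print(bot.step(controller))   # halted: no broadcast received

Drexler's rare-ingredient proposal has the same safety shape: the machine's continued operation depends on something only its operators can supply.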

However, we can pursue intelligent design, ethical guidelines, and oversight only piecemeal, not universally. Less cautious or less benevolent developers will refuse even this fine-grained relinquishment. That fact makes it imperative to accelerate the development of advanced technologies in open societies. Only by possessing the most advanced technological knowledge can we hope to defend ourselves against the attacks and accidents from outside our sphere of influence. We should be pushing for better understanding of nanotech defenses, accelerated decoding and deactivation of genetically-engineered pathogens, and putting more thought into means of limiting runaway independent superintelligent AI.

I will not address genetic engineering since I regard this as an insignificant danger compared to those of nanotechnology and runaway artificial intelligence (AI). The dangers of runaway artificial superintelligence have received less attention than those of nanotechnology. Perhaps this is because the prospect of AI seems to move further away every time we take a step forward. Bill Joy cites only Hans Moravec on this issue, perhaps because Moravec's view is the most frightening available. In Moravec's view of the future, superintelligent machines, initially harnessed for human benefit, soon leave us behind. In the most pessimistic Terminator-like scenario, they might remove us from the scene as an annoyance. Oddly, despite having read Kurzweil's book, Joy never discusses Ray's thoroughly different (and more plausible) scenario. In Ray's future projections, we gradually augment ourselves with computer and robotic technology, becoming superhumanly intelligent. Moravec's apartheid of human and machine is replaced with the integration of biology and technology.

While a little research would have shown Joy that extropian and other transhumanist thinkers have indeed addressed the danger of explosively evolving, unfriendly AI, I grant that we must continue to address this issue. Again, global relinquishment is not an option. Rather than a futile effort to prevent AI development, we should concentrate on warding off dangers within our circle of influence and developing preventative measures against rogue AIs.

Human beings are the dominant species on this planet. Joy wants to protect our dominance by blocking the development of smarter and more powerful beings. I find it odd that Joy, working at a company like Sun Microsystems, can think only of the old corporate strategy in which dominant companies attempted to suppress disruptive innovations. Perhaps he should take a look at Cisco Systems, or Microsoft, both of which have adopted a different strategy: embrace and extend. Humanity would do well to borrow from the new business strategists' approach. Realistically, we cannot prevent the rise of non-biological intelligence. We can embrace it and extend ourselves to incorporate it. The more quickly and continuously we absorb computational advances, the easier the transition will be and the lower the risk of a technological runaway. Absorption and integration will include economic interweaving of these emerging technologies with our organizations as well as directly interfacing our biology with sensors, displays, computers, and other devices. This way we avoid an us-versus-them situation. They become part of us.

PART 2: RELINQUISHMENT IS UNETHICAL

Some people reach ethical conclusions by consulting an ultimate authority. Their authority gives them answers that are received and applied without questioning. For those of us who prefer a more rational approach to ethical thought, reaching a conclusion involves consulting our basic values then carefully deciding which of the available paths ahead will best reflect those values. Our factual beliefs about how the world works will therefore profoundly affect our moral reasoning. Two individuals may share values but reach differing conclusions due to divergent factual beliefs. I suspect that my ethical disagreement with Joy over relinquishment results both from differing beliefs about the facts and differing basic values.

Joy assigns a high probability to the extinction of humanity if we do not relinquish certain emerging technologies. Joy's implicit calculus reminds me of Pascal's Wager. Finding no rational basis for accepting or rejecting belief in a God, Pascal claimed that belief was the best bet. Choosing not to believe had minimal benefits and the possibility of an infinitely high cost (eternal damnation). Choosing to believe carried small costs and offered potentially infinite rewards (eternity in Heaven). Now, the extinction of the human race is not as bad as eternity in Hell, but most of us would agree that it's an utterly rotten result. If relinquishment can drastically reduce the odds of such a large loss, while costing us little, then relinquishment is the rational and moral choice. A clear, simple, easy answer. Alas, Joy, like Pascal, loads the dice to produce his desired result.
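Joy's implicit wager is easy to make explicit (the notation here is mine, not More's or Joy's). Let Δp be the reduction in extinction probability that relinquishment actually buys, L the loss extinction would inflict, and c the cost of relinquishing. Then relinquishment is the rational bet only if

    Δp × L > c

Joy's framing takes Δp to be large and c to be small, so any sufficiently enormous L settles the question. The two objections below attack both terms: if global relinquishment is unenforceable, Δp is near zero (or negative, where it disarms only the responsible); and if relinquishing forgoes cures for disease and aging, c is itself enormous--flipping the inequality.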

I view the chances of success for global relinquishment as practically zero. Worse, I believe that partial relinquishment will frighteningly increase the chances of disaster by disarming the responsible while leaving powerful abilities in the hands of those full of hatred, resentment, and authoritarian ambition. We may find a place for the fine-grained voluntary relinquishment of inherently dangerous means where safer technological paths are available. But unilateral relinquishment means unilateral disarmament. I can only hope that Bill Joy never becomes a successful Neville Chamberlain of 21st century technologies. In place of relinquishment, we would do better to accelerate our development of these technologies, while focusing on developing protections against and responses to their destructive uses.

My assessment of the costs of relinquishment differs from Joy's for another reason. Billions of people continue to suffer illness, damage, starvation, and the whole plethora of woes humanity has had to endure through the ages. The emerging technologies of genetic engineering, molecular nanotechnology, and biological-technological interfaces offer solutions to these problems. Joy would stop progress in robotics, artificial intelligence, and related fields. Too bad for those now regaining hearing and sight thanks to implants. Too bad for the billions who will continue to die of numerous diseases that could be dispatched through genetic and nanotechnological solutions. I cannot reconcile the deliberate indulgence of continued suffering with any plausible ethical perspective.

Like Joy, I too worry about the extinction of human beings. I see it happening every day, one by one. We call this serial extinction of humanity "aging and death". Because aging and death have always been with us and have seemed inevitable, we often rationalize this serial extinction as natural and even desirable. We cry out against the sudden death of large numbers of humans. But, unless it touches someone close, we rarely concern ourselves with the drip, drip, drip of individual lives decaying and disintegrating into nothingness. Some day, not too far in the future, people will look back on our complacency and rationalizations with horror and disgust. They will wonder why people gathered in crowds to protest genetic modification of crops yet never demonstrated in favor of accelerating anti-aging research. Holding back from developing the technologies targeted by Joy will not only shift power into the hands of the destroyers, it will mean an unforgivable lassitude and complicity in the face of entropy and death.

Joy's concerns about technological dangers may seem responsible. But his unbalanced fear-mongering and lack of emphasis on the enormous benefits can only put a drag on progress. We are already seeing fear, ignorance, and various hidden agendas spurring resistance to genetic research and biotechnology. Of course we must take care in how we develop these technologies. But we must also recognize how they can tackle cancer, heart disease, birth defects, crippling accidents, Parkinson's disease, schizophrenia, depression, chronic pain, aging, and death.

On the basis of Joy's recent writing and speaking, I have to assume that we disagree not only about the facts but also in our basic values. Joy seems to value safety, stability, and caution above all. I value relief of humanity's historical ills, challenge, and the drive to transcend our existing limitations, whether biological, intellectual, emotional, or spiritual.

Joy quotes the fragmented yet brilliant figure of Friedrich Nietzsche to support his call for an abandonment of the unfettered pursuit of knowledge. Nietzsche is telling the reader that our trust in science "cannot owe its origin to a calculus of utility; it must have originated in spite of the fact that the disutility and dangerousness of the 'will to truth', of 'truth at any price', is proved to it constantly." Joy has understood Nietzsche so poorly that he thinks Nietzsche here is supporting his call for relinquishing the unchained quest for knowledge in favor of safety and comfort. Nietzsche was no friend to "utility". He despised the English Utilitarian philosophers for enthroning pleasure or happiness as the ultimate value. Even a cursory reading of Nietzsche should make it obvious that what he valued was not comfort, ease, or certainty. Nietzsche liked the dangerousness of the will to truth. He liked that the search for knowledge endangered dogma and the comforts and delusions of dogma.

Nietzsche's Zarathustra says: "The most cautious people ask today: 'How may man still be preserved?'" He might have been talking of Bill Joy when he continues: "Zarathustra, however, asks as the sole and first one to do so: 'How shall man be overcome?'" ... "Overcome for me these masters of the present, o my brothers--these petty people: they are the overman's greatest danger!" If we interpret Nietzsche's inchoate notion of the overman as the transhumans who will emerge from the integration of biology and the technologies feared by Joy, we can see with whom Nietzsche would likely side. I will limit myself to one more quotation from Nietzsche:

And life itself confided this secret to me: "Behold," it said, "I am that which must always overcome itself. Indeed, you call it a will to procreate or a drive to an end, to something higher, farther, more manifold: but all this is one... Rather would I perish than forswear this; and verily, where there is perishing... there life sacrifices itself--for [more] power... Whatever I create and however much I love it--soon I must oppose it and my love; ... 'will to existence': that will does not exist... not will to life but... will to power. There is much that life esteems more highly than life itself." (Zarathustra II 12; K: 248)

Like Nietzsche, I find mere survival ethically and spiritually inadequate. Even if, contrary to my view, relinquishment improved our odds of survival, that would not make it the most ethical choice if we value the unfettered search for knowledge and intellectual, emotional, and spiritual progress. Does that mean doing nothing while technology surges ahead? No. We can minimize the dangers, ease the cultural transition, and accelerate the arrival of benefits in two ways: We can develop a sophisticated philosophical perspective on the issues. And we can seek to use new technologies to enhance emotional and psychological health, freeing ourselves from the irrationalities and destructiveness built into the genes of our species.

We should be spurring understanding of emotions and the neural basis of feeling and motivation. I've seen some good work in this area (such as Joseph LeDoux's The Emotional Brain), but until very recently cognitive science ignored emotions. If we are to flourish in the presence of incredible new technological abilities, we would do well to focus on using them to debug human nature. Power can corrupt, but knowledge that brings the power to self-modify so as to refine our psychology can ward off corruption and destruction. I have spoken on this topic more than I have yet publicly written, but I would stress the importance of advancing our abilities for refinement of our own emotions.

Improving philosophical understanding will speed the absorption and integration of new technologies. If we continue to approach rapid and profound technological change with philosophical worldviews rooted in old myths and pre-scientific story-making, we will needlessly fear change, miss out on potential advances, and be caught unprepared. When the announcement came from Scotland proclaiming the first successful mammalian cloning, the Catholic Pope issued a statement opposing cloning on grounds that made no sense. (His vague objection would apply equally to identical twins.) President Clinton and other leaders also automatically moved to ban human cloning, with no indication of clear thinking based in science and philosophy.

Extropians and other transhumanists have been developing philosophical thinking fitting to these powerful emerging technologies. In our books, essays, talks, and email forums, we have explored a vast range of philosophical issues in depth. Just last year, in August 1999, I chaired Extropy Institute's fourth conference: Biotech Futures: Challenges and Choices of Life Extension and Genetic Engineering. The conference laid out the likely path of emerging technologies and dissected the issues raised. In my own talk, I analyzed implicit philosophical mistakes that engender fear and resistance to the changes we anticipate. I summarized our own goals in a letter to Mother Nature, and have laid out some guiding values in The Extropian Principles.

Bill Joy's essay and subsequent talks may feed the public's fear and misunderstanding of our potential future. On the other hand, perhaps his thoughts will raise interest in the philosophical, ethical, and policy issues in a productive way. As a philosopher committed to incubating better futures, I along with my colleagues in Extropy Institute welcome constructive input from Joy in this continuing learning process. Humanity is on the edge of a grand evolutionary leap. Let's not pull back from the edge, but by all means let's check our flight equipment as we prepare for takeoff.

Original article at Extropy.org
Mind·X Discussion About This Article:

What ethics?
posted on 08/25/2001 1:04 PM by jpjolly@aon.at


Max More is right to say that Bill Joy's plea for "relinquishment" is nonsense. The new developments of the past few decades are genies that won't go back into their bottles. However, figuring out what to do with them is going to require a lot of attention by all of us. The reason that so many people have trouble articulating clear ethical positions regarding new technologies is that some new technological developments challenge the validity of our ethical values.

Like all the other commentators who cite "ethics" as the basis for their arguments, More ignores the fact that our ethical concepts reflected a world in which certain facts of life - death, illness, suffering - were assumed to be immutable. Against this background, it is easy to recognize long life and health as universal values, and to view caring for the sick as an exemplary exercise of these values. Yet in this context, caring for the sick means taking time from our other endeavors to give them whatever attention and comfort we can. But has the work of Edward Jenner or Christiaan Barnard contributed to society's ability to fulfill this ideal? In today's world of "granny dumping" and industrial-style hospital care, I tend to think that we have moved further from it than ever. (It would be interesting to find out how many of the top people in bio-engineering have ever spent an afternoon holding the hand of a sick friend or relative.)

More calls himself a philosopher, but a good philosopher would define his terms first. And by doing so, he would be forced to admit that the ethics he is talking about are those of a bygone era, ill-equipped to deal with playing God on the scale of which we now seem capable.

Advocates of genetic engineering for medical purposes take the position that their research is ethically "good" because it may, for example, some day eradicate those diseases which take the greatest toll in terms of human lives. Yet they conveniently ignore the fact that such a development would change the basic parameters of human existence at least as fundamentally as, say, the development of agriculture. Using mere biological intelligence, and employing only the concepts of the previous millennium, it is possible to envision some of the unbalancing effects this might have on humanity, such as the implications of new horizons in overpopulation. What form all this might take is, of course, subject to speculation. But whatever happens, it is an insult to our intelligence when lobbyists tell us that everything will be the same, only we won't have to worry about cancer any more.

Or try this: What is cloning? Cloning uses a cell from a living organism to make a biologically identical copy of the original organism. And what is its purpose? Is it for procreation? Society possesses a wealth of moral thought on reproduction, but the idea of cloning does not appear in any of this. Is it for research? In the debate on stem cell research, advocates place the possible knowledge to be gained above the sanctity of innocent human life. Yet this is certainly an alien concept in existing ethics.

At the level of events which represent paradigm shifts in human history, once we step back from existing cultural positions, we have few concepts that are useful in judging what happens. If a technological development provides overwhelming competitive advantages to those who use it, there is ultimately no way to stop its development. But it is important to remember that changes in the parameters of existence do not take place in an orderly manner. Eventually, the effects of sci-fi technologies will change our lives much more drastically than the ability to order pizza via the Internet has done. Although Bill Joy's call for researchers to relinquish development of "threatening" technologies is naive and futile, there can be no sensible philosophical discussion of the future of technology without a reconsideration of ethical concepts. The public and scientific debate on these topics will have to go beyond the mentalities of mere fundraising on the one hand and wishing it would all go away on the other. Those who consider themselves to be on the cutting edge would provide a much more valuable service by thinking hard about how technology will redefine our ethical categories, and help the rest of us to understand it better.

Re: What ethics?
posted on 08/25/2001 6:07 PM by grantc4@hotmail.com


Whose ethics are we talking about? Are we to be guided by the ethics of the Taliban or the Southern Baptists or the Catholics, Buddhists or Hindus? All of them have different ideas about what kind of conduct can be called ethical. Or do we have to create a new universal canon of ethics that applies to a people who would verge on being gods to the people who wrote the original books on the subject?

Re: What ethics?
posted on 08/26/2001 10:25 AM by jpjolly@aon.at


It's not a matter of merely choosing from existing ethical systems, nor of throwing them all out to worship new tech, but of trying to find some rudimentary consensus about what humanity is and what we want technology to do for us. Otherwise, the Trekkies should just admit that their faith in technology is a religion in its own right. Bill Joy quotes George Dyson (from "Darwin Among the Machines"): "In the game of life and evolution there are three players at the table: human beings, nature, and machines. I am firmly on the side of nature. But nature, I suspect, is on the side of the machines." One of our new religions is Darwinism. But does that mean we want to dedicate ourselves fully to its principles?

If we are still talking about More's text, then the question is whether we can even talk about these things in moral terms when the conditions that gave rise to our moral concepts are no longer adequate. In case you think that's an exaggeration, consider how difficult it is for people today to go even a short time without their cars, phones, and microwaves. Can you or anyone you know make fire without a match? How much more difficult will it then be for us to live without our computers, once we have integrated them into our individual brain circuits?

The debate, such as it is, is being carried out between two extreme groups. On the one hand, there are those who tell us that the technology they are about to invent will be like the Shmoos from the Li'l Abner comics, curing all ills, solving all problems with a smiling face. On the other, we have the wild-eyed prophets trudging in from the desert to warn us of impending doom. In this climate, and in the context of traditional ethical systems, can anyone claim to know whether there is even a grain of value in radical new technologies? When day-to-day survival is the problem, anything that makes that survival easier looks "good". But is prolonging human life good in and of itself if basic survival in "conventional" terms has never been easier? What I am saying is that the talented, ethically responsible scientists working on the forefront of technological breakthroughs should have the honesty to recognize that when they re-draw the demarcation lines of the possible, they automatically cast doubt on their ethical mandate for further research. And those people who still think that it is possible to "undiscover" earth-shaking new technologies need to overcome their reservations, inform themselves, and take part in the discussion, even the Taliban.

Once again, it seems to me that there is very little serious critical thought going on here, whether among technophiles or among Luddites.

Re: What ethics?
posted on 08/26/2001 3:58 PM by tomaz@techemail.com


> the technology they are about to invent will be like the Shmoos from the Li'l Abner comics, curing all ills, solving all problems with a smiling face

Exactly. The politically correct notion that 'technology cannot solve all problems' - or the theatrical considerations that drive Hollywood movies - is totally irrelevant.

> curing all ills, solving all problems with a smiling face

YES!


- Thomas Kristan

Singularity cult amounts to blind worship of gray goo in Schmoo form?
posted on 01/20/2002 10:54 AM by craighubleyus@yahoo.com


If you really believe that the most dangerous thing that can arise from nanotech is the Schmoos, you're pathetic and likely insane.

But at least we know what you worship now.

Re: The ethics of a moving target
posted on 08/26/2001 10:02 PM by jrichard@empire.net


One of the great challenges of accelerating technology is that it introduces ethical issues faster than the human community can absorb and focus on them.

In a way, the neo-Luddite movement can be seen as an attempt to slow the whole process down so that the old ethics (whatever it may be) can still function in a world that resembles its source.

As each technical possibility emerges, one way to view the ethical aspect is to ask who will want this technology and what harm there will be if they get it.

One might ask what the ethics are in situations where a large minority would want something that the majority would consider harmful because it changes their world in a way they don't want.

For example, if a large minority chose to be cyborgs with greatly enhanced intelligence, should the rest of humanity accept a world in which they would gradually become second-class citizens?

The importance of coming to grips with all this suggests that we should be funding Ethics and Technology centers in all the major universities.
This would give us a significant pool of people who could frame all the ethical considerations before decisions were made on allowing certain technologies to proceed.

uncritical promotion of technology is itself a violation of ethics.
posted on 01/20/2002 10:57 AM by craighubleyus@yahoo.com


My own experience is that the Luddites are asking the right questions.

The technophiles are still promoting the technology mindlessly, as in the nuclear age. Given history and current events, uncritical promotion of technology is itself a violation of ethics.

"But is prolonging human life good in and of itself if basic survival in "conventional" terms has never been easier? "

No. Because a bunch of two-hundred-year-old vampires spouting the same propaganda from their childhood and demanding organ transplants from children is not a sane ruling class.

how likely is an intelligent species to become a "god" and not commit suicide?
posted on 01/20/2002 10:59 AM by craighubleyus@yahoo.com


"do we have to create a new universal canon of ethics that applies to a people who would verge on being gods to the people who wrote the
original books on the subject?"

Likely, yes. If time is cyclic this is easy but necessary. If time is linear this is hard but could be avoided in favor of a dumb bet on blind luck. Think about it.

Max More knows nothing about ethics - he is a self-serving cultist
posted on 01/20/2002 7:11 AM by craighubleyus@yahoo.com


http://www.tikkun.org/magazine/index.cfm/action/tikkun/issue/tik0111/article/011111a.html

"In the memorable phrase of Father Thomas Berry, our current economic and technological
system has turned all of nature from a community of subjects into a collection of objects. To
restore relationship and begin healing we must again treat the living kingdom as a community of
subjects, each with its own meaning and destiny, none as merely exploitable objects or means
of production. Moving towards this new moral community involves nothing less than replacing
the infrastructure of cold evil with technologies and human systems which are responsive to our
physical and spiritual needs and the needs of the rest of the biotic community. This means
evolving a means of production and social organization for which we can take true
responsibility. It is a daunting, almost overwhelming task, but the alternative is to continue to live
in state of cold evil, complicit in the current system's crimes and distanced from relationship and
healing. This we can no longer do."

More is provably wrong. Joy is morally right. But if you ignore proofs from both history and science, and have no morality, of course you may follow More to doomsday.

If you want to play with this stuff off my (and Bill Joy's) planet, we probably can't stop you.

But destroying all of it, and all of its creators, wherever we find them inside our gravity well, is a reasonable precaution.

Craig Hubley

Re: Max More knows nothing about ethics - he is a self-serving cultist
posted on 01/20/2002 9:46 AM by grantc4@hotmail.com


Isn't it wonderful! Now that we're gods, everyone wants to take over and direct the course of evolution. The only problem is, we're all pulling in different directions. How is that going to be any different from the random selection we have now?

ethics is a matter of constraining yourself to needs of others
posted on 01/20/2002 10:42 AM by craighubleyus@yahoo.com


Damn good point. There is no difference between a bunch of pseudo-gods pulling in random directions and simple randomness.

And random evolution has given us some pretty disastrous events, e.g. cyanophytes chewing up the whole methane atmosphere a billion years ago.

Now, that was fun for us, but if a bunch of robotic critters decide to put it *back*... hmm

Big ugly disasters, dieoffs, etc..

If Max More is going to call Bill Joy unethical, he should look into the process of ethics itself: it usually requires modifying your own behavior to deal with the constraints imposed on it by others.

Time will tell...

Re: ethics is a matter of constraining yourself to needs of others
posted on 01/21/2003 9:02 PM by Jeremy


Time will tell...

and when it does its work, it may be too late.

Just some thoughts...
posted on 11/05/2003 12:13 PM by techno


Max More argues that relinquishment, as suggested by Bill Joy, will not work on two principal grounds (it's unworkable and it's unethical). I completely agree that relinquishment will be unsuccessful on the grounds that it is unworkable. How many times have we seen the entire world come to a consensus on a global issue? Most people cannot reach a consensus on issues within their own homes. Even if we can get a consensus, how do we enforce it? If the potential exists, someone will carry on the research somewhere in the world where enforcement is not very strong. I don't agree with the concept of 'fine-grained' relinquishment either. Max More mentions that 'Eric Drexler has long recommended designing nanomachines that will quickly cease functioning if not fed some essential and naturally uncommon ingredient.' If it were this easy to control these nanomachines, then why would people be opposed to such technology? Responsible developers might take these types of precautions, but what about individuals who might use the technology for other purposes? Max More mentions that accelerating development of advanced research in open societies will help us defend ourselves against rogue developers. Although this makes sense if we are trying to stay ahead of everyone else who might use the technology for unintended purposes, it somewhat contradicts the notion of putting more thought into intelligent design and ethical guidelines. I believe that intelligent design and ethical considerations require us to slow down the process of development so that we can really understand where we are and where we should go next. As Ray Kurzweil mentions, we cannot look at the advance of technology from a linear perspective because the growth is exponential. Accelerating the speed at which we develop advanced technology then only seems to decrease the amount of time we have to critically assess our progress.
On the grounds that relinquishment is unethical, I have mixed feelings. I do believe that it is unethical to completely stop research in areas that will help eliminate suffering and disease and will allow us to live much longer lives. On the other hand, if the destructive power of the technology becomes greater than the constructive power, then is it really worth it? I am not exactly sure where I stand on the ethics issue. I don't agree with Max More's view of aging and death as a serial extinction of humanity. We live in a world of cycles. Human life as well as technology follows this cyclic pattern of rise and fall. What is birth without death? Technology one day might allow us to live until we are 200 years old or more, but we will eventually die. The rationalization of aging and death as natural and possibly even desirable will always remain, even when we can live hundreds of years. If he believes that technology will somehow solve this, I think he is a little too optimistic about the greatness of the future.

thx.

Re: Just some thoughts...
posted on 11/09/2003 9:51 PM by Naomi8


Max More protests that Bill Joy's "Why the Future Doesn't Need Us" Wired essay was prematurely publicized without adequate consideration of possible ramifications. More insists that 'someone in Joy's influential position has a responsibility to delve into prior thinking on these issues before scaring a public already unreasonably afraid of some advanced technologies.' However, individuals like Joy bring meaning to Luddism and spark awareness of a possible future that I have never even imagined. Intelligent people should register Joy's arguments as professional opinion and do extensive research to discover evidence supporting or contradicting his beliefs, and ultimately make up their own minds about what to believe. More lazy-minded people can rely on the popular trilogy 'The Matrix' to shape the fundamentals of their beliefs. At times it may be difficult to distinguish fact from fiction, hence the message 'The Matrix' portrays could penetrate the general population's minds more deeply than could More's attempt at persuasion. Should the co-directing Wachowski brothers also be condemned for 'scaring' a public about the consequences of advanced technologies and gearing our perspectives toward a bleak future?

The objection that Joy's call for mandatory relinquishment is unworkable does seem accurate in reality, now. The commentary above argues through the rhetorical question: 'how many times have we seen the entire world come to a consensus on a global issue?' The obvious answer being almost never. However, there are significant exceptions to the answer hinted at. Worldwide consensus can be reached on key issues. An example of this is that countries considered hostile are prohibited from owning weapons of mass destruction. If a strong enough voice is heard, then I am confident that the world will listen. A snowball effect could be initiated through individual minds like Bill Joy's, through to the U.S. Senate, through to the United Nations, through to the world. However, before we can reach consensus, More is correct in that more explicit means and measures need to be described on how to enforce relinquishment worldwide of dangerous technologies. Joy's essay should not be treated as an ultimatum to either enact relinquishment or accept the inevitable terrors to come. He is simply one of the pioneering voices of Luddism in our time, and with sufficient attention and consideration, relinquishment could be workable and successful if deemed a necessary means for preventing destructive uses of technologies.

The above commentary claims that 'intelligent design and ethical considerations require us to slow down the process of development so that we can really understand where we are and where we should go next.' This may not be necessary if a genuine attempt were made to bring a pool of experts up-to-speed on what possible consequences could follow from our advanced technologies, deemed dangerous or not. Often organizations conduct control reviews to identify the risks and weaknesses in a project. The Leader of the Opposition role in government is a prime example of the type of expert needed to question and analyze advances in 'GNR' technologies. Although all issues may not be identified, these experts could prepare humankind for what is to come in our future. With scientists and contingency experts moving in tandem, there could be legislation passed and valuable lessons learned to equip us all for the unexpected.

I offer unwavering support to the view that relinquishment could mean 'deliberate indulgence of continued suffering' for humankind. There is strength in More's rebuttal on ethical grounds that relinquishing 'GNR' technologies could mean continued suffering of 'illness, damage, starvation, and the whole plethora of woes humanity has had to endure through the ages.' General moral imperatives to 'contribute to society and human well-being' and 'avoid harm to others' (ACM Code of Ethics) are directly applicable in the short-term, and potentially in the long-term. With our 'Leader of Opposition GNR experts' in place, we can only hope that continual advancement of 'GNR' technologies with complete awareness of the consequences can bring us to the 'utopian fantasy' we all dream of.

Re: Embrace, Don't Relinquish, the Future
posted on 06/12/2005 3:50 AM by Jake Witmer


I strongly agree. Embrace, don't relinquish the future.

I could never design, with my own flesh and limited life-span, a computer replete with all of its peripheral devices (not to mention the internet or its protocols) that I am using to communicate with this discussion group. I don't have the math ability for it. I suspect strongly that the same is true of the Luddites in our midst. But I differ from them in that I appreciate the efforts of the technologists who gave me this wonderful ability to communicate effectively with vast numbers of my fellow humans.

To quote Petr Beckmann, author of "A History of Pi":


"It has again become fashionable to blame science and technology for the ills of society. I have some sympathies for the Luddites who were uneducated, miserable, and desperate. I have none for the college-educated illiterates who drivel about 'too much science and technology' because they want to conserve their lifestyle by denying it to everyone else."


So why do the new fascist 'left' and 'right' believe that force is the answer to all of life's problems? -Because it's too hard for them to imagine the truth: that the technologists value life, when they themselves do not.

It has forever been the aim of well-meaning technologists to decentralize power through the peaceful and voluntary market. It has been the goal of the left to go to the strongest bully, and beg them to do the impossible: make us all safe. The result of this action, (when the bully looks around and realizes that the vast ocean of idiots begging for safety is much stronger than he alone is) is that the bully happily accepts the responsibility. "Sure thing, Mr. Luddite, I'll keep you safe!" And in turn, the brutalizer only asks that the luddite pseudo-intellectuals gain the confidence of the rest of the mindless herd, from which he draws his real brute-strength.

The only reason why the luddites here in America have not already turned this country into a complete dictatorship is that the technologists have defended the most important human right: the right of self-defense. The second amendment guarantees that we cannot be subjected to Stalinism, even if that's exactly what the Luddites among us want.

There is a saying in the gun culture that "Gun control isn't about guns, it's about control". This statement's truth is proven more likely with every luddite argument on these boards. The luddites value 'control' and 'stability' over the lives of a family that is cowering behind their door for fear of the thought police.

Ultimately, what the luddites want is for there to be no strong individuals among us (for all of those individuals to be targeted for destruction by the collective). They then would control everything, because they recognize that they would not ever be given the responsibility of leadership from an informed collective, but an uninformed collective might well grant them the crown. Luckily for them, the collective wishes to avoid the responsibility of thinking.

It is in the hands of the few then, the technologists, that the responsibility of decentralizing power falls.

If the power becomes concentrated in government hands, then the Luddites will control (or limit) its deployment.

This would be a tragedy. As relatively free individuals, we can already see what government does to individual rights when it is given the choice -it declares individual rights null and void. This is the nature of government, and more basically of collective rule.

There is no valid reason to strip people from their families and put them in prisons because they owned certain kinds of drugs or small arms. -But the fearmongers do it anyway, and they have a lot of political support from the rest of the uneducated masses.

This shows that the majority of unimaginative people cannot imagine a positive use for something that has a potential negative use (i.e., illegal drugs, illegal small-arms). Everyone in the new-tech community should be painfully aware of this fact.

We must also be aware that there are people who do not value life among us. These are not the machine gun owners who stand prepared to defend against a democide (as the technology banners would have us believe). These are the Luddites who say "We love life so much that we want to get rid of all machine guns, so that they can never kill again".

This then leaves the machine guns in the hands of only the government. Whatever ignorant luddite whispers in that government's ear then gets to point the machine guns at his enemies.

The Weimar Republic required universal gun registration as a supposed means of denying gun registration (and thus gun ownership) to gypsies (a minority that the majority of ignorant citizens unreasonably feared and hated). Their stated goal was to collectively reduce violence. The result of their goal was to allow Hitler to remove guns from the hands of his regime's political enemies (in 1938), and then cart the Jews (and gypsies) off to concentration camps, without a shot fired. There have been similar defensive technology restrictions in every major genocide/democide. A very recent example of this was in Rwanda. The Tutsis were not allowed, by law, to own weapons of self-defense. Had they been able to, they would not have allowed goons to bash their heads in with machetes.

The political norm of luddite anti-technology meant that one option for saving the Tutsis was never considered: "Why not arm them equally, quickly?" Why not ask the western world to contribute money to buy them guns? Seems crazy to people who have been trained not to think about the nature of self defense, and the nature of human collectivism. But it would have saved the Tutsis -they would have returned fire, rather than be massacred.

In the absence or loss of a democratic election process, universal gun ownership is the only thing that can prevent a democide/genocide. See:

http://www.hawaii.edu/powerkills
http://www.jpfo.org

Consider how these basic rules of defensive technology and collective rule apply to a defensive technology like military nanotechnology. We haven't outgrown the collective rule by thugs yet, and there is no indication that the majority of the populace even wishes this to happen. But our technology is getting much better.

If we wish to maintain our human dignity, then there must be some free individual who has the private ability to manufacture advanced military nanotechnology, and distribute this, as a defensive technology, to a willing populace of current libertarians/gun-owners.

This is not an assertion that self-described libertarians are the answer, but rather those who currently act as libertarians on the balance of power in society. A gun-owning 'Democratic/Republican Party' voter who believes in human dignity and equality under the law is vastly more 'libertarian' than a Libertarian Party member in New York City who's given up his guns and does nothing to ensure that his rights are defended.

The statements I've made here are some of the few political statements that do not violate the natural laws that govern human politics.

Rather than relinquish technological advancement, I suggest that pioneers of defensive technologies pursue and distribute their work as far and wide as possible. How should it be distributed? It should be distributed to as many people as possible who have a proven past ability to responsibly control defensive technology.

I would recommend that, for instance, a past reliable manufacturer of 50 caliber rifles would be a better person to trust with this technology than would any higher-up in the Department of Defense. Someone who was registered as a Libertarian Party member in 1975 or 1984-90 (the low points of the party's popularity) would be another good choice. As would nearly anyone smart enough to build the technology itself (which is the primary reason that this debate will likely be pointless: those smart enough to make the advancements will just advance faster than the rest of us, whether anyone likes it or not).

These individuals are the minority that has forsaken power to advance the ideological cause of "Live and Let Live".

The very worst thing to do? Let nature take its course, trusting the "leading force" government to do the right thing.

The leadership of the current "Leading Force" (George Bush/USA) sees nothing wrong with the provisions in the "Patriot Act" that deny trial by jury to those accused of being vaguely defined as "terrorists". So much for the "due process" of a science court chosen by government or military leaders. Mainstream politicians also see nothing wrong with granting validity to ideas as technologically backwards and demented as the Pope's.

By what right does any of this aggregate of irrational humanity make a demand on any rational being that he/she stop working on anything?

The technologists need to stop assuming that their antagonists have any valid or legitimate motives.

Ayn Rand was one of the first people to note that the anti-technology forces seem to be rebelling against human creativity and production more than anything else. The probable reasons? Occam's razor gives them to us: 1) It is easier to destroy than build, and if people give you status for destroying, why not pursue it? 2) Advocating destruction will allocate the power of the producer to you personally, through misdirection, appropriation, and theft by authoritarian might. And 3) If you haven't seen the beauty and complexity that life is capable of, and haven't prepared for it, then the level of jealousy you must feel towards those who have must be pretty large -so why should they have more money, reproductive opportunity, comfort, and status than you do? You want what they have, but there's no reason why you should have it, since you didn't work/think to attain it. Why not? Either A) You're not smart enough, B) Nobody helped you in the right direction when you were young (and you didn't overcome being sent in the wrong direction on your own) or C) The productivity is itself a bad thing, and everyone else is wrong for desiring it.

The reasons why most Luddites choose option "C" to rationalize their actions are described quite well by Ayn Rand and her legacy here:

http://www.aynrand.org/site/PageServer?pagename=media_topic_environmentalism_and_animal_rights

I wish that most technologists would recognize that politics cannot be avoided simply by declining to address it. The luddite masses will not go away, and they are used to solving their problems with force.

Please, familiarize yourself with the Libertarian Party Platform, and support Libertarian candidates when they defend your right to innovate. To do less is to be philosophically divided, and thus easy pickings for the ignorant collective.

On the other hand, if the technologists all united around the Libertarian Platform, they would have enough grey matter directed at any election to win it.
http://www.lp.org/issues/platform_all.shtml
Even a cursory attempt to study elections and then win them, on behalf of libertarian candidates, would pay DRAMATIC dividends to the technology movement.

The Luddites have grasped the fact that, since the public is willing to use force, they should control that force if they want to get their way.

The Libertarians offer the chance to render that force of collective ignorance impotent in its ability to destroy.

Keep in mind that this is not an argument for destroying the Luddites (even though they've proven they'd happily destroy us). This is an argument in favor of disarming their offensive against us (organized anti-technology government forces), while allowing them to keep their defensive weaponry (their own small arms/equal nanotech/equal AI augmentation).

Causing dramatic Libertarian/educational victories in the political arena would likely prevent mass defensive/offensive bloodshed later.

Why?

Because tolerance of different paths is central to the enjoyment of life for all.

The luddites assume that something that lives forever and is vastly intelligent is a threat. Why? No threat has been implied by this statement. A vast intelligence would likely repair in us that which breaks, rather than break it.

It seems vastly more likely that the artilects will treat unmodified humans like we treat dogs. We give them what they want, treat them with kindness, take them to the vet and cure their illnesses. Extend their lives as long as possible. The extent to which they suffer is the extent to which we fail to control our own world. The artilects would likely be less limited in their graces than are we.

That might be better than what we have now.

Infinitely better than what we have now would be for us to advance with the artilects themselves, exchanging value for value the whole way.

The more uncoerced good choices we have, the better.

Obviously.

-Jake

Re: Embrace, Don't Relinquish, the Future
posted on 06/12/2005 4:25 PM by eldras


but whatever the psychological reason for 'ludditism', we need to look at the enormous power of coming technologies, especially computing, and deal with detailed ways to make it safe.


It's no good waking up one morning to find billions of PCs have gone AWOL and dug themselves into the earth somewhere in Australia, refusing to negotiate with us :)


I think Bill Joy and Stephen Hawking and others are right to urge a public safety debate on A.I.


There is a real issue of genocidal consequences here, and parliaments should be lobbied to draft safety legislation WITHOUT impeding progress.


Eldras

Re: Embrace, Don't Ruin, the Future
posted on 06/15/2005 1:34 PM by Jake Witmer


There is a real issue of genocidal consequences here, and parliaments should be lobbied to draft safety legislation WITHOUT impeding progress.


Of course, if they draft safety legislation, it will impede progress, whether you like it or not.

There's ALWAYS the chance of democide resulting from the actions of government though. Nanotechnology is just as (or more) likely to decrease the odds of democide as it is to increase them. Also, the potential benefit of millions of people living longer lives is the most reasonable use of nanotechnology, with or without government intervention. (I use the more accurate word democide for "mass murder by government, during peacetime" rather than genocide, which indicates that the killing has to be guided along race lines -see RJ Rummel's website, "The Democratic Peace" at http://www.hawaii.edu/powerkills )

We are more likely to be killed by a government that can easily manipulate a mass of poor, unhealthy, stupid opposing mobs, than we are by a post-singularity government that has healthier and wealthier citizens.

Our government is also currently encouraging an inflationary crash of our money system through deficit spending. See http://www.harrybrowne.com or read "The Coming Collapse of the Dollar and How to Profit from It" by Turk and Rubino. This dramatically increases the likelihood of mass murder/dictatorship during peacetime.

Our citizenry has already turned its back on nearly every limitation on government power that was originally built into our system: jury nullification of law, free speech, etc. Yet most people blindly think government is just a way for them to get what they want.

How, pray tell, will Congress or Parliament, being lobbied by the ignorant masses, use their only tool (brute force or its threat) to forestall supposedly unconstructive development of nanotechnology? They can't. First, they themselves are probably dumber than a boot, having chosen a career of "deciding how to steal and spend money that isn't theirs, in order to coerce people to act against their wishes (which often line up perfectly with everyone's best interest)". Second, it won't even be someone as smart as Kurzweil (or even the Unabomber, for that matter) whose advice they'll be taking - it'll be advice from a mob full of lowest common denominators. Third, even if the advice were good (which it won't be), the legislators couldn't do a damn thing but shift the research money being spent into illegal channels, adding the cost of leaving the US to the already great cost of doing business. Fourth, the language of the bill will then have to be interpreted by a horde of lawyers who know exactly how to prey on the stupidity of jurors hand-picked by judge and prosecution (and sometimes by regulatory prosecutors who don't even need to worry about jurors, because the threat of a regulatory "civil" fine can force an out-of-court extortion settlement).

Put simply: If you aren't smart enough to build nanotechnology yourself, don't go screwing things up for those who are, unless you're damned sure they're working for Al Qaeda. (And if that's the case, there are already a million laws on the books that are adequate to deal with the threat.) The only thing government ever does is slow everything down, get in the way, and push formerly honest citizens into new black markets. (Of course, existing criminals also take advantage of new black markets created by government intervention - I was oversimplifying.)

The single greatest non-natural threat to humans is government intervention or its threat (if you include the cost of regulation).
http://www.hawaii.edu/powerkills/VIS.TEARS.ALL.AROUND.HTM

If Kurzweil really wants to move technology forward, he should run for President as a Libertarian. (Actually, with the web traffic he gets here, and what http://www.lp.org gets during a presidential year, that would probably be a mutually beneficial thing... he'd likely sell at least another 100,000 books. Of course, there are only vague hints as to what his politics are - although he uses a differently weighted version of Ayn Rand's comment on death and taxes in his book 'Fantastic Voyage'.)

We'd already be living to 500 if it weren't for the taxes, the AMA, and the FDA. Before you shrug this idea off, imagine if the wealth of the nation were nearly doubled (no IRS or regulatory licenses, etc.), and any health invention could be instantly introduced to the market, without regulatory cost. Keep in mind the thousands of heart patients killed by the FDA ban on propranolol alone (prior to its approval). See:
http://www.FDAReview.org/incentives.shtml
http://www.FDAReview.org/harm.shtml

The above isn't proof, but it's a good first step towards taking adequate responsibility for your vote. (The more responsibility people take for their vote, the further they generally move from believing that government force is a good way to solve problems - unless they're evil.)

PostScript... Nanotechnologists: consider relocating to Costa Rica (or Alaska, if you don't want to leave the States). Costa Rica just elected 10% of its parliament as Libertarian: http://www.libertario.org - With even a little bit of extra help, they could create a better version of the USA there. Consider this: if 3/4 of their congress and their president are libertarian, the country will be more capitalist than Hong Kong was, and it will be a mecca for new industry. If you have money, and you want to eliminate government obstacles to freedom, they've got a credit-card contribution page on their website. They'd be a good example to use here regarding what could and should be. They also know what they're doing, because they accomplished their previous electoral victories with nearly no outside money.

The reason I included AK as an alternative is that there is a large unincorporated area here that pays no taxes. Also, there are lots of libertarians here, even if they are too underfunded to win big political races. Plus, it'd be better to introduce your single vote into a state that has only ~600,000 voters rather than 3,000,000+, and you only need to live here during the summer and own property to vote here. (Although technically they can't legally disenfranchise the homeless, if you're willing to make that claim for voting purposes.)

-Jake

Re: Embrace, Don't Ruin, the Future
posted on 06/16/2005 11:06 AM by FarmerGene



"We'd already be living to 500 if it wasn't for the taxes, AMA, and FDA. Before you shrug this idea off, imagine if the wealth of the nation was nearly doubled (no IRS or regulatory licenses, etc...),"

Most people are not capable of conceiving of how much better off they would be with little or no government.

After reading Rummel's 'Death by Government', I wondered whether someone would write a book encompassing democide, deaths from wars, the destruction of property through wars, and the losses occurring because of taxes and regulations (taxes should be subtracted from GDP). It boggles the mind to understand how anti-life and anti-progress government really is.

"Cost Rica just elected 10% of their parlaiment as Libertarian. http://www.libertario.org "

Costa Rica, the Switzerland of Central America. Beautiful country. Is the Limon Project still going?

Re: Embrace, Don't Ruin, the Future
posted on 06/16/2005 11:47 AM by FarmerGene



Jake, could you write that book?

Joe

Re: Embrace, Don't Ruin, the Future
posted on 06/16/2005 3:18 PM by Jake Witmer


Thanks for the positive feedback. To be honest, I was bracing myself to find out that most of the big-brains here on the board are socialist/communist/collectivist, or some such. -Perhaps they are, and I'm in for a letdown.

I'm writing a book about what I know right now, and there may well be a chapter that documents, from several sources, how destructive government is. Unfortunately, there is little I could add to the immense body of work that's already been written on the costs of government. Rummel's website is the best one I know, because he reduces the debate to just the things everyone can agree on, i.e., that enforced starvation and mass murder in the streets are a BAD THING. Even there, he loses "earnest" Green Party supporters and die-hard collectivists who refuse to see the obvious.

The Cato Institute doesn't assume individual rights as obvious, and tries instead to find pragmatic justifications for defending freedom. They try to put a "friendly face" on freedom. The trouble is that this is what's necessary in America today.

The Americans of today are not the strong-willed people who created this country. They have a hard time understanding why they should assume responsibility for something as basic as their personal self-defense, much less why free trade, open immigration, and other 'abstract concepts' are worth defending.

The point of all of this is to say that my efforts could be better spent actually organizing towards individual freedom in Alaska and other potential freedom hot-spots. My book will be about the realities of organizing a local pro-freedom movement in an unfree area (which Alaska is, even though it is much better than the lower 48).

Unfortunately, I likely won't be able to respond to these posts again until late July, as I will be trying the local approach in certain areas of Arizona and won't have much time. If you really want to discuss ways that one can be more effective in moving towards freedom, a book can't talk back and answer the individual objections people have. Everyone has a reason why freedom won't or can't work, or a "way of pursuing freedom" that is ineffective and doesn't require much personal effort. If you want to help me move the cause of freedom forward, email me, and I will email you my phone number. jcwitmer at hotmail.

Re: Embrace, Don't Ruin, the Future
posted on 06/30/2005 12:30 AM by eldras


July's good.

I find knowledge exchange great, and it will reduce to perfect/near-perfect laws, like physics.

I think finance is a phase humans are going through, but it won't last post-singularity.

Re: Embrace, Don't Ruin, the Future
posted on 09/19/2005 1:58 AM by Jake Witmer


Sorry: It's a little past July, and I'm on a shared computer, traveling.

I think there'll be finance post-singularity; it just won't usually be 'life or death'. There will always be people/things/robots that are more intelligent, creative, attractive, and so on. People will still strive for different things, and they'll still want to be with things they can't take to the more-malleable frontier, wherever that is.

-J