Permanent link to this article: http://www.kurzweilai.net/meme/frame.html?main=/articles/art0358.html
Are We Becoming an Endangered Species? Technology and Ethics in the Twenty First Century
A Panel Discussion at Washington National Cathedral
Ray Kurzweil addresses questions presented at Are We Becoming an Endangered Species? Technology and Ethics in the 21st Century, a conference on technology and ethics sponsored by Washington National Cathedral. Other panelists are Anne Foerst, Bill Joy and Bill McKibben.
Originally presented on November 19, 2001 at Washington National Cathedral. Published on KurzweilAI.net November 19, 2001. See the briefing paper, which contains questions posed to all panelists. Also see news item.
Bill McKibben, Ray Kurzweil, Judy Woodruff, Bill Joy, and Anne Foerst discuss the dangers of genetic engineering, nanotechnology and robotics.
Ray Kurzweil: Questions and Answers

Ray Kurzweil, how do you respond to Mr. Joy's concerns? Do scientific and technological advances pose a real threat to humanity, or do they promise to enhance life?

The answer is both, and we don't have to look further than today to see what I call the deeply intertwined promise and peril of technology.
Imagine going back in time, let's say a couple hundred years, and describing the dangers that lay ahead, perils such as weapons capable of destroying all mammalian life on Earth. People in the eighteenth century listening to this litany of dangers, assuming they believed you, would probably think it mad to take such risks.
And then you could go on to describe the actual suffering that lay ahead, for example the 100 million people killed in the two great twentieth-century world wars, made possible by technology, and so on. Suppose further that we gave these eighteenth-century listeners a choice to relinquish these then-future technologies; they just might choose to do so, particularly if we were to emphasize the painful side of the equation.
Our eighteenth-century forebears, if provided with the visions of a reliable futurist of that day, and if given a choice, might very well have embraced the view of my fellow panelist Bill McKibben, who says today that we "must now grapple squarely with the idea of a world that has enough wealth and enough technological capability, and should not pursue more."
Judy Woodruff interviews Ray Kurzweil at Washington National Cathedral.
Now I believe that implementing such a choice would require a Brave New World-style totalitarian government, in which the government uses technology to ban the further development of technology. But let's put that perspective aside for a moment and pursue this scenario further. What if our forefathers and foremothers had made such a decision? Would that have been so bad?
Well, for starters, most of us here today would not be here today, because life expectancy would have remained what it was back then, which was about 35 years of age. Furthermore, you would have been busy with the extraordinary toil and labor of everyday life with many hours required just to prepare the evening meal. The vast majority of humanity pursued lives that were labor-intensive, poverty-stricken, disease-ridden, and disaster-prone.
This basic equation has not changed. Technology has to a great extent liberated at least many of us from the enormous difficulty and fragility that characterized human life up until recent times. But there is still a great deal of affliction and distress that needs to be conquered, and that indeed can be overcome by technological advances that are close at hand. We are on the verge of multiple revolutions in biotechnology -- genomics, proteomics, rational drug design, therapeutic cloning of cells, tissues, and organs, and others -- that will save tens of millions of lives and alleviate enormous suffering. Ultimately, nanotechnology will provide the ability to create any physical product. Combined with other emerging technologies, we have the potential to largely eliminate poverty, which also causes enormous misery.
And yes, as Bill Joy, and others, including myself, have pointed out, these same technologies can be applied in destructive ways, and invariably they will be. However, we have to be mindful of the fact that our defensive technologies and protective measures will evolve along with the offensive potentials. If we take the future dangers such as Bill and others have described, and imagine them foisted on today's unprepared world, then it does sound like we're doomed. But that's not the delicate balance that we're facing. The defense will evolve along with the offense. And I don't agree with Bill that defense is necessarily weaker than offense. The reality is more complex.
We do have one contemporary example from which we can take a measure of comfort. Bill Joy talks about the dangers of self-replication, and we do have today a new form of fully nonbiological self-replicating entity that didn't exist just a few decades ago: the computer virus. When this form of destructive intruder first appeared, strong concerns were voiced that as they became more sophisticated, software pathogens had the potential to overwhelm, even destroy, the computer network medium they live in. Yet the immune system that has evolved in response to this challenge has been largely effective. The injury is but a small fraction of the benefit we receive from computer technology. That would not be the case if one imagines today's sophisticated software viruses foisted on the unprepared world of six or seven years ago.
One might counter that computer viruses do not have the lethal potential of biological viruses or self-replicating nanotechnology. Although true, this only strengthens my observation. The fact that computer viruses are usually not deadly to humans means that our response to the danger is that much less intense. Conversely, when it comes to self-replicating entities that are potentially lethal, our response on all levels will be vastly more serious.
Having said all this, I do have a specific proposal that I would like to share, which I will introduce a little later in our discussion.

Mr. Kurzweil, given humanity's track record with chemical and biological weapons, are we not guaranteed that terrorists and/or malevolent governments will abuse GNR (Genetic, Nanotechnology, Robotics) technologies? If so, how do we address this problem without an outright ban on the technologies?

Yes, these technologies will be abused. However, an outright ban, in my view, would be destructive, morally indefensible, and in any event would not address the dangers.
Nanotechnology, for example, is not a specific well-defined field. It is simply the inevitable end-result of the trend toward miniaturization which permeates virtually all technology. We've all seen pervasive miniaturization in our lifetimes. Technology in all forms -- electronic, mechanical, biological, and others -- is shrinking, currently by a factor of about 5.6 per linear dimension per decade. The inescapable result will be nanotechnology.
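The stated rate implies a straightforward exponential projection. Here is a minimal sketch of the arithmetic; the 100 nm starting feature size and the decade counts are illustrative assumptions, not figures from the talk:

```python
# Kurzweil's claimed miniaturization rate: technology shrinks by a factor
# of ~5.6 per linear dimension per decade.
SHRINK_FACTOR_PER_DECADE = 5.6  # figure quoted in the talk

def projected_feature_size(start_size_nm: float, decades: float) -> float:
    """Linear feature size after `decades` of shrinking at the stated rate."""
    return start_size_nm / (SHRINK_FACTOR_PER_DECADE ** decades)

# Hypothetical example: start from a 100 nm feature (an assumed circa-2001
# chip-scale dimension) and project three decades forward.
for decades in range(4):
    size = projected_feature_size(100.0, decades)
    print(f"after {decades} decade(s): {size:.2f} nm")
```

At this rate a linear dimension falls below one nanometer within three decades, which is the sense in which miniaturization leads "inescapably" to the nanotechnology scale.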
With regard to more intelligent computers and software, it's an inescapable economic imperative affecting every company from large firms like Sun and Microsoft to small emerging companies.
With regard to biotechnology, are we going to tell the many millions of cancer sufferers around the world that although we are on the verge of new treatments that may save their lives, we're nonetheless canceling all of this research?
Banning these new technologies would condemn not just millions, but billions of people to the anguish of disease and poverty that we would otherwise be able to alleviate. And attempting to ban these technologies won't even eliminate the danger because it will only push these technologies underground where development would continue unimpeded by ethics and regulation.
We often go through three stages in examining the impact of future technology: awe and wonderment at its potential to overcome age-old problems, then a sense of dread at a new set of grave dangers that accompany these new technologies, followed by the realization that the only viable and responsible path is to set a careful course that can realize the promise while managing the peril.
The only viable approach is a combination of strong ethical standards, technology-enhanced law enforcement, and, most importantly, the development of both technical safeguards and technological immune systems to combat specific dangers.
And along those lines, I have a specific proposal. I do believe that we need to increase the priority of developing defensive technologies, not just for the vulnerabilities that society has identified since September 11, which are manifold, but for the new ones attendant to the emerging technologies we're discussing this evening. We spend hundreds of billions of dollars a year on defense, and the danger from abuse of GNR technologies should be a primary target of these expenditures.

Specifically, I am proposing that we set up a major program to be administered by the National Science Foundation and the National Institutes of Health. This new program would have a budget equaling the current budget for NSF and NIH. It would be devoted to developing defensive strategies, technologies, and ethical standards addressed at specific identified dangers associated with the new technologies funded by the conventional NSF and NIH budgets. There are other things we need to do as well, but this would be a practical way of significantly increasing the priority of addressing the dangers of emerging technologies.

If humans are going to play God, perhaps we should look at who is in the game. Mr. Kurzweil, isn't it true that both the technological and scientific fields lack broad participation by women, lower socioeconomic classes, and sexual and ethnic minorities? If so, shouldn't we be concerned about the missing voices? What impact does the narrowly defined demographic have on technology and science?

I think it would be great to have more women in science, and it would lead to better decision making at all levels. To take an extreme example of the impact of not having sufficient participation by women, the Taliban have had no women in decision-making roles, and look at the quality of their decision-making.
To return to our own society, there are more women today in computer science, life sciences, and other scientific fields compared to 20 years ago, but clearly more progress is needed. With regard to ethnic groups such as Afro-Americans, the progress has been even less satisfactory, and I agree that addressing this is an urgent problem.
However, the real issue goes beyond direct participation in science and engineering. It has been said that war is too important to leave to the generals. It is also the case that science and engineering are too important to leave to the scientists and engineers. The advancement of technology from both the public and private sectors has a profound impact on every facet of our lives, from the nature of sexuality to the meaning of life and death.
To the extent that technology is shaped by market forces, then we all play a role as consumers. To the extent that science policy is shaped by government, then the political process is influential. But in order for everyone to play a role in playing God, there does need to be a meaningful dialog. And this in turn requires building bridges from the often incomprehensible world of scientific terminology to the everyday world that the educated lay public can understand.
Your work, Anne (Foerst), is unique and important in this regard, in that you've been building a bridge from the world of theology to the world of artificial intelligence, two seemingly disparate but surprisingly related fields. And Judy (Woodruff), journalism is certainly critical in that most people get their understanding of science and technology from the news.
We have many grave vulnerabilities in our society already. We can make a long list of exposures, and the press has been quite active in reporting on these since September 11. This does, incidentally, represent somewhat of a dilemma. On the one hand, reporting on these dangers is the way in which a democratic society generates the political will to address problems. On the other hand, if I were a terrorist, I would be reading the New York Times, and watching CNN, to get ideas and suggestions on the myriad ways in which society is susceptible to attack.
However, with regard to the GNR dangers, I believe this dilemma is somewhat alleviated because the dangers are further in the future. Now is the ideal time to be debating these emerging risks. It is also the right time to begin laying the scientific groundwork to develop the actual safeguards and defenses. We urgently need to increase the priority of this effort. That's why I've proposed a specific action item: for every dollar we spend on new technologies that can improve our lives, we spend another dollar to protect ourselves from the downsides of those same technologies.

How do you view the intrinsic worth of a "post-biological" world?

We've heard some discussion this evening on the dangers of ethnic and gender chauvinism. Along these lines, I would argue against human chauvinism and even biological chauvinism. On the other hand, I also feel that we need to revere and protect our biological heritage. And I do believe that these two positions are not incompatible.
We are in the early stages of a deep merger between the biological and nonbiological world. We already have replacement parts and augmentations for most of the organs and systems in our bodies. There is a broad variety of neural implants already in use. I have a deaf friend who I can now speak to on the telephone because of his cochlear implant. And he plans to have it upgraded to a new version that will provide a resolution of over a thousand frequencies that may restore his ability to appreciate music. There are Parkinson's patients who have had their ability to move restored through an implant that replaces the biological cells destroyed by that disease.
By 2030, this merger of biological and nonbiological intelligence will be in high gear, and there will be many ways in which the two forms of intelligence work intimately together. So it won't be possible to come into a room and say, humans on the left, and machines on the right. There just won't be a clear distinction.
Since we're in a beautiful house of worship, let me relate this impending biological -- nonbiological merger to a view of spiritual values.
I regard the freeing of the human mind from its severe physical limitations of scope and duration as the necessary next step in evolution. Evolution, in my view, represents the purpose of life. That is, the purpose of life -- and of our lives -- is to evolve.
What does it mean to evolve? Evolution moves toward greater complexity, greater elegance, greater knowledge, greater intelligence, greater beauty, greater creativity, and more of other abstract and subtle attributes such as love. And God has been called all these things, only without any limitation: all knowing, unbounded intelligence, infinite beauty, unlimited creativity, infinite love, and so on. Of course, even the accelerating growth of evolution never quite achieves an infinite level, but as it explodes exponentially, it certainly moves rapidly in that direction. So evolution moves inexorably closer to our conception of God, albeit never quite reaching this ideal. Thus the freeing of our thinking from the severe limitations of its biological form may be regarded as an essential spiritual quest.
One of the ways in which this universe of evolving patterns of matter and energy that we live in expresses its transcendent nature is in the exponential growth of the spiritual values we attribute in abundance to God: knowledge, intelligence, creativity, beauty, and love.
Join the discussion about this article on Mind·X!
Mind·X Discussion About This Article:
Re: Are We Becoming an Endangered Species? Technology and Ethics in the Twenty First Century
Technology is growing exponentially and our ability to handle these new tools safely and with maturity is a real question. Many of the debates here center around safety, so I will mention maturity.
Today, most technology is enjoyed by maybe 25% of the earth's population. There is still much suffering despite technology. For example, many medical advances allow more people to be born and live longer. In much of the world, societies are not prepared for this, and thus there is rampant suffering. So the existence of technology is not a complete answer. If during the last 100 years, while we gave vaccines and antibiotics to the poorest people, we had also included an equal amount of advice on family planning, education, etc., then perhaps today we would not be facing many of the problems we face. The breeding ground for terrorists is, after all, places where there are lots of hungry mouths (Somalia, Sudan, the Middle East). We often provide solutions for problems in an incomplete way, thus making the technology less effective.
So we make new vaccines and treatments that allow 10 billion hungry, envious mouths on earth.
I think 3 billion well fed, educated, and content mouths are better, and that is not technology, that would reflect our ability to use technology constructively.
Maybe real maturity means spreading technology via a socialist system (sorry, I know it is wrong to question capitalism!), as capitalism may not be optimal as we build super technologies. Perhaps we need to relinquish capitalism in favor of technological socialism to better share the wealth and the values that accompany it.
Re: Are We Becoming an Endangered Species? Technology and Ethics in the Twenty First Century
The pursuit of technology and ideological beliefs are two independent things. We can't say that certain political or organizational groups will be any better or worse, if they possessed certain levels of technological sophistication. That will only affect the symptoms of our concerns. Terrorists will always try to inflict harm on others because that is what their will is focused on. If they're not using jet airliners or GNR to inflict harm, then they'll resort to low-tech oppressive regimes, such as the Taliban (though the Taliban aren't technically terrorists, I consider their ideologies to be similar enough with those of Al-Qaeda). So, similar to earlier arguments, it should be education that alleviates such discontent, not the banishment of scientific research.
However, I am also against the idea that advances in technology in a socialistic world will enhance the quality of life for the same reason. Socialism will always fail, regardless of the amount of technology that the governing body possesses. The Soviet Union was on the cutting edge of technology, directly competing with the United States. Yet, their nuclear ambitions were more focused on creating bombs, rather than using the technology to benefit the people. With all their effort put into creating nuclear weapons, they still didn't have adequate safety standards and practices to prevent the Chernobyl disaster.
To the question of 'who is to play God?', it is true that it is up to the people to make themselves educated in order to actively participate in the decision making process. However, it is too easy to simply dismiss these concerns by saying 'they need to get themselves educated'. Many people don't have the means to get educated on such matters, let alone be interested. If we are to be serious in making a democratic process in which everyone can actively participate in deciding which direction the future of technology will go, then we must first establish a strong enough infrastructure that is capable of educating the masses, to raise the level of interest and education.
Even though I find Kurzweil's ideas regarding the 'post-biological' world a little unsettling, I do think it's better than the alternative. After all, art and science are connected. We wouldn't be able to relinquish one without relinquishing the other. If we find ourselves coming to that, then I don't see what the point of existence would be. The pursuit of technology is essential to further bring meaning to our lives.
Re: Are We Becoming an Endangered Species? Technology and Ethics in the Twenty First Century
As Ray Kurzweil said, technology is neither the solution nor the problem. It is only a tool, which when used properly can help society, or if used irresponsibly can harm it. However, to think that we are at a point where we can make a choice about choosing to pursue technology or not is utterly idiotic. As a society we made this choice long ago, possibly when we elevated our status from mere animals to sentient beings. Technology at that time may have consisted of merely using animal fur as protection against the elements, but that too was technology. Having fed our desire for knowledge and understanding for so long, how can we even question the possibility of closing the lid on 'Pandora's box'?
Our basic instincts may say survival at all costs, but when we became sentient beings, we began to deny our basic instincts, and now we take pride in our ability to control them. Our instincts for survival take second place to our desire for knowledge, because it is only through knowledge that we come ever closer to our ultimate desire, and that desire is GOD.
Knowledge is power and pursuit of this power can not be stopped, nor can it be controlled. This is especially true when you consider the information that can be found on the Internet.
So what options do we have?
Only one, and that is to allow and encourage the pursuit of knowledge to take place in an environment which provides guidelines on appropriate behaviour and ethical decision making. As Mr. Kurzweil stated, we need to set up an organization to monitor and govern this pursuit of knowledge. However, the political arena that exists today is not one which is capable of supporting such an organization.
Although we live in a global society, many nations are still thinking on a national scope rather than an international one. For instance, the US may create and enforce laws on the appropriate pursuit of technology but this does not stop someone from pursuing this technology in another country where there are no such restrictions. Hackers and other computer criminals are already using this to their advantage. As long as we continue to think nationally rather than internationally in terms of technology, we will not be able to avoid the perils that can come with it.
Re: Are We Becoming an Endangered Species? Technology and Ethics in the Twenty First Century
According to this article, Bill McKibben stated that we should accept what we have and not try to increase technology. Whether it is good or bad, technology has been the past, it is the present, and it will be the future. Every chip developer wants to make their chipset faster, with larger capacities and efficiencies. From Intel to AMD, there is, and always will be, a battle for the best, which is fueled by the benefit$. These short-term benefits blind the industry from the destructive possibilities of long-term technological behavior.
It has been stated that within 20 years, the computer will have as much power as 600 human brains, and there will be virtual professors, lovers and friends, just to name a few of the foreseen events to take place. In my opinion, living in a world where a computer is 600 times as smart as myself is scary. Ideas found in movies such as The Terminator and I, Robot come to mind, where humans and robots live side by side, which could eventually cause many glitches in society. If these computers can easily outsmart us, what's stopping them from overtaking mankind? The Ghost in the Shell theory comes into play in predictions like post-biological societies, and so these robots will eventually be uncontrollable. They will be smart enough to conceal, or even destroy, the 'Turn Me Off' button, and what's even more scary is that their life expectancies will be limitless with rechargeable batteries. How will we explain these robots to our kids, and our kids' kids?
Kurzweil states that the defensive side to this will evolve as well.
And so any destructive technology will be vulnerable to technology made to keep these problems minor. Kurzweil gives a good example of the computer virus and how anti-virus programs have helped to keep these problems small. However, when viruses first emerged, there was no anti-virus software to deal with them, and many computers were infected. The problem was in the fact that the world was unprepared and unaware of such ideas and threats. Society was far more accepting of the virus since it didn't cause physical harm to any human. When the first robot kills, how will we justify it? What if the technologically sound professionals are unable to create an 'anti-kill' robot or patch in time for the next kill? What do we do then?
The fact that the world's ultimate defense lies in software and machines to stop destructive machines should open everyone's eyes, since we know for certain that software is not perfect. If software were perfect, there would be no need for defense in the first place. Should we rely on software that may have bit conversions that overflow and cause rocket ships to blow up? I'm not trying to point any fingers; I'm a Computer Science student in my final semester, and I realize more and more each day that a software program is never really finished.
Then there's an issue that must not be overlooked as well: technology and the military. I've been researching information on this topic for a report that I must complete, and have found that the US Government is doing a great deal of military testing and research where the final result is to be a robot that can go anywhere a human can go. The final product is not anywhere close to being done (as per the articles I've read), but the fact that there could be robot wars instead of human wars is very likely in the future. These 'Humanoids', as they are called, will change the way governments view war, as they will be less hesitant about going to war. The public will be more lenient as well, since their sons and daughters will be at home instead of at the battle front. The problem I see with this is that this robot army will be fully trained, while mankind becomes less tactical and less trained to deal with battle situations, battles caused by a for-loop in the robot's software whose body says 'destroy' and which somehow becomes an infinite loop due to a 'glitch'.
In terms of the discussion on bans and technological regulations, I think they will not really work. Technology has come so far that further technological advancements are inevitable. Technology saved my father's life when he had to replace a major artery. Too many possibilities for mankind are made reality with technology, and this is the main reason it continues to exist. However, with the thoughts of increasing human life also comes the fear of sheer global genocide. How do we ensure human survival? I think this problem is highly affected by the people who act on destructive ideas. Many feel that people from the East, who include Osama Bin Laden, play a great role in destruction today. As stated in an earlier post, terrorists from around the world all have one main similarity: they all come from underdeveloped countries as compared to the West. Perhaps technology should first figure out how to increase the quality of life in these areas before we continue. This could possibly solve at least some of the problems the underdeveloped countries face, and may lower the numbers of terrorists. In my opinion, any person who purposely hurts another human is a terrorist. I think both the East and West severely fit this description, and so it's not just the East that we should be afraid of. People in our own backyards, the rulers of the Western countries, have intent to kill, and so technological advances should also be aware of this.
The local media, which is owned by the local governments, will obviously be a little biased when it comes to portraying the news. I'm certain that the West's news is totally different from the East's, and so it's hard to determine who is telling the truth, and thus who the real 'terrorist' is. So educating the masses is also very biased, and thus my only real thought on how to avoid the ultimate destruction of mankind is by first creating equality throughout the world in terms of life, and then by declaring world peace. I understand that this is a long shot, but so is an overweight, untrained man/woman trying to defeat a trained and fully efficient killing machine.
Re: Are We Becoming an Endangered Species? Technology and Ethics in the Twenty First Century
There are two main arguments presented in this intriguing article written by Mr. Kurzweil, and in the lengthy commentary which follows. One is that technology has become a double-edged sword in terms of its ethical and moral use. The other is that in the not-too-distant future (twenty to thirty years from our present time), man and machine will be inseparable personalities. (That is, the distinction between the two will no longer be clear.) I would like to personally state my reaction to each of these two viewpoints in turn.
First of all, with regard to the claim of the positive and negative consequences that technology has on humanity, I am in full agreement. This tradeoff is obviously a very well known fact, as there are so many examples in our recent and not-so-recent history that demonstrate how technology has been abused and misused. Events such as the two world wars, the brutal killing methods used during the Holocaust, not to mention 9/11, are just a few instances, and humanity was (and still is) affected in adverse ways as a result. Yet direct physical injury of humans is not the only negative consequence. There exist all the privacy and security issues as well: computer viruses, spyware, adware, malicious programs, hacking, software piracy, electronic identity theft, and so forth. On the other hand, as much as technology has hurt humanity, there exist so many advantages and benefits that it has offered (and continues to offer) us. One of the commentators mentioned, for example, how technology saved his father's life when he had to undergo serious surgery. Technology has also transformed and diversified the methods in which we communicate with others all around the world through e-mail and instant messaging, which are accessible via computers and cellular telephones. It has made our lives easier and more convenient.
I am not in agreement with the second claim. I feel it's an extreme exaggeration to state that there will come a time when it will no longer be possible to separate man from machine, despite the advances made in artificial intelligence. It cannot be denied that the procreation of human beings will always continue to be just that! But besides that, there is no doubt that human beings are indeed a unique creation, with the most sophisticated form of language, communication, and learning capacity of all species on earth. This learning capacity is encompassed in the enormous complexity of the human brain, which in my opinion can never be duplicated. I have taken two psychology courses during my university career. Through many videos I have seen in those classes, and through the psychology textbooks I have read, noted and respected psychologists often make the point that, although we have come so far in attempting to understand human behaviour, we will never completely comprehend the human brain and why it works the way it does. This is a philosophy I share. This is not to say that I don't believe artificial life forms (such as 'Data' on Star Trek: The Next Generation) will ever be built. In fact, I do believe this. But what I'm attempting to convey is that such life forms will never achieve all the unique abilities that humans possess; that we can never build a brain that acts in the exact same manner as the human brain. Therefore, I am forced to conclude that man and machine will always be separable. Always!
Thank you for taking the time to read this commentary.
Re: Are We Becoming an Endangered Species? Technology and Ethics in the Twenty First Century
Again, very old form lol :P.
As I see it, a lot of people don't really understand technology. Maybe that's why we have so many negative responses. Why are we afraid of technology? We have already mixed in with what we created, and there's no stopping it. We will continue to mix with our creations. We took on the form of god ever since we first discovered DNA.
Who doesn't want to live forever? No one is going to be forced to live forever. If you choose not to live forever, why are you even worried about what's going to happen in the future?
Ray Kurzweil mentioned numerous times living on in a computer that has billions of times more power than the human brain. What makes everyone think it's going to be like prison? What about wireless technology? Aren't we going to be able to run our minds on multiple robots from within the computer? What about virtual reality? With a computer brain running a few thousand times more powerfully than a biological human brain, isn't it possible to live in a virtual world?
Robots also will never be able to outperform humans, because our biological brains will not be used forever. We might one day start using computer brains instead. Our brains are so weak that we find it hard to do two things at once. Who wouldn't want a brain that runs as fast as a whole civilization combined?
What is the meaning of life? If we stop here, we will die wondering what the meaning of life is. I'd rather die knowing that we are continuing onwards with technology than die believing that we have stopped technology and are safe from it.
I'd rather live in a world full of people, robots, and spaceships, with the risks involved, than in the world of today. We have unlimited space outside of the earth; what makes people believe we won't use it, or that we aren't technologically advanced enough to do it?
There is nothing to fear from technology; the only fear I have is dying before I can live forever. Or just dying.
Anyone who's against technology is either ignorant, a suicidal terrorist, a person who lacks knowledge of technology, or too full of themselves. Please, before you go against technology or say it's impossible, go and research a bit more. Or you'll just sound as ignorant as the people before you who said the things of today were impossible.
Aldune