Permanent link to this article: http://www.kurzweilai.net/meme/frame.html?main=/articles/art0102.html

    The New Luddite Challenge
by Ted Kaczynski

An excerpt from the Unabomber Manifesto that briefly summarizes the author's charge against technological progress.


Originally published 1995. Published on KurzweilAI.net February 22, 2001.

First let us postulate that the computer scientists succeed in developing intelligent machines that can do all things better than human beings can do them. In that case presumably all work will be done by vast, highly organized systems of machines and no human effort will be necessary. Either of two cases might occur. The machines might be permitted to make all of their own decisions without human oversight, or else human control over the machines might be retained.

If the machines are permitted to make all their own decisions, we can't make any conjectures as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines. It might be argued that the human race would never be foolish enough to hand over all the power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines' decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won't be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.

On the other hand it is possible that human control over the machines may be retained. In that case the average man may have control over certain private machines of his own, such as his car or his personal computer, but control over large systems of machines will be in the hands of a tiny elite--just as it is today, but with two differences. Due to improved techniques the elite will have greater control over the masses; and because human work will no longer be necessary the masses will be superfluous, a useless burden on the system. If the elite is ruthless they may simply decide to exterminate the mass of humanity. If they are humane they may use propaganda or other psychological or biological techniques to reduce the birth rate until the mass of humanity becomes extinct, leaving the world to the elite. Or, if the elite consists of soft-hearted liberals, they may decide to play the role of good shepherds to the rest of the human race. They will see to it that everyone's physical needs are satisfied, that all children are raised under psychologically hygienic conditions, that everyone has a wholesome hobby to keep him busy, and that anyone who may become dissatisfied undergoes "treatment" to cure his "problem." Of course, life will be so purposeless that people will have to be biologically or psychologically engineered either to remove their need for the power process or make them "sublimate" their drive for power into some harmless hobby. These engineered human beings may be happy in such a society, but they will most certainly not be free. They will have been reduced to the status of domestic animals.

Mind·X Discussion About This Article:

way off
posted on 09/18/2002 8:35 PM by sped0@netzero.net

ted is...you are..
way way off.
get a grip and hold.

Re: The New Luddite Challenge
posted on 09/20/2002 7:12 AM by jwayt

"As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones."

Is this a luddite argument? It sounds more like a pro-technology reason: better results. Can he be arguing that we should retain control at the expense of a future that has better results from superior decision making?

"...turning them off would amount to suicide."

No, it wouldn't. We would just not get as much done. Every time we get into an elevator, we step into a trap if the electricity goes out. There are always the stairs.

"...human work will no longer be necessary the masses will be superfluous, a useless burden on the system."

Ah, the core essence of luddite fear! It has remained unchanged since they threw their sabots into the gears. The gears never did make human work obsolete. Humans merely found other work more meaningful and profitable to do, something better suited to the human intellect than acting like a machine.

"If the elite is ruthless they may simply decide to exterminate the mass of humanity."

This is a normal, old-fashioned concern. The elite are now armed with the means to do just that. How does that become a simple decision to make? Our society is built of non-zero sum transactions, including the rise of the elite, so why exterminate its source? Certainly the luddite argument to devolve back to sticks and clubs puts everyone on a more equal footing. But today everyone, including Ted, has a net larger amount of resources (standard of living), even if they are not evenly distributed. Human society never evenly distributes resources, ever. Some people are more successful at seeking status and resources than others. Distribute wealth completely equally and it won't stay that way for long.

Ted is a great straw man.

Re: The New Luddite Challenge
posted on 09/20/2002 8:26 AM by sdmusiclab

Well, I have to say that I think there is far more to Ted's list of concerns than Straw Men.

While I agree with most of your post, remember that Socialists, Communists, and the vast majority of Greens and Democrats (among others) disagree with a significant portion of it. While many a Libertarian thinker might support your position, it is also wise to remember that even Libertarians, Republicans, and Free-market Anarchists (as opposed to Social Anarchists) include people who fear technology. In fact, there are many supporters of unlimited technological expansion (even the Strong AI supporters on Mind-X) who are afraid of losing control and/or of technology's potential negative impact.

Regarding your response to Ted's "suicide" position: he is simply right on this account. While everyone worries about becoming too reliant on machines, and about when we could or should "pull the plug," it goes largely unnoticed that we have already crossed a line. Try unplugging the machine today and see what happens to the stock market or the banking system or the public utilities.

While this is not exactly suicide per se, it is worth noting the general chaos and confusion that would certainly result from such an action. And while you and I may be able to take the stairs (or, in a worst-case scenario, live off the land with our sticks and clubs and wits), not everyone can.

Finally, while I don't agree with the Unabomber's position of returning to a simpler time and living technology-free (or at least hi-tech-free, as I don't think he was prepared to give up sharp sticks or fire or the wheel!), I also don't see how "...devolv(ing) back to sticks and clubs puts everyone on a more equal footing." It just means a fat corporate guy in a suit is no longer the alpha male, as he is immediately replaced by the likes of Mike Tyson or his friendly neighborhood terrorist.

--
David M. McLean
Skinny Devil Music Lab

Re: The New Luddite Challenge
posted on 01/29/2008 4:07 PM by Jake Witmer

I agree with what you said here.

Re: The New Luddite Challenge
posted on 01/21/2003 4:40 PM by Jeremy

Ted K."...turning them off would amount to suicide."

Jwayt>No, it wouldn't. We would just not get as much done. Every time we get into an elevator, we step into a trap if the electricity goes out. There are always the stairs.

Jeremy>I think Ted is right on this one; in fact, we are already there. Imagine if all technology just stopped, dead in its tracks. How would your average industrial-nation citizen get food? Most, including myself, take it for granted that food will be served up when we need it. Without transportation, farm produce wouldn't get very far. Without refrigeration and other modern methods of preservation, on-hand stocks would deplete rapidly. Starvation would set in; I'd say upwards of 90% would be dead within a few months.

Re: The New Luddite Challenge
posted on 01/21/2003 5:09 PM by BC

>>Jeremy>I think Ted is right on this one; in fact, we are already there. Imagine if all technology just stopped, dead in its tracks. How would your average industrial-nation citizen get food? Most, including myself, take it for granted that food will be served up when we need it. Without transportation, farm produce wouldn't get very far. Without refrigeration and other modern methods of preservation, on-hand stocks would deplete rapidly. Starvation would set in; I'd say upwards of 90% would be dead within a few months.<<

You're correct.

Modern technology allows the human population to grow far beyond the "natural" carrying capacity of the planet. First, and most importantly, there's modern energy. Using the work of Justus von Liebig, the German scientist who in 1863 introduced the concept of carrying capacity to biology, we can estimate that it would take roughly five times the land area of the earth to support the current human population at subsistence level, without oil. Much of the productivity of modern agriculture is based upon the use of machines such as tractors, combines, and so forth. Without synthetic fertilizer, the world's soil would be depleted within a generation. (It's headed that way, anyway...) And it's not just food...think about the forests that would be necessary to provide wood to heat the shelters of today's population.
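The "five times the land area" figure is the kind of claim one can sanity-check with back-of-envelope arithmetic. A minimal sketch, where every constant (arable hectares, people fed per hectare without synthetic inputs, total population) is an assumed round number chosen only to illustrate the calculation, not a sourced figure:

```python
# Back-of-envelope check of the "five Earths" claim.
# Every constant below is an assumption for illustration, not a sourced figure.
ARABLE_LAND_HA = 1.4e9        # assumed arable land on Earth, in hectares
PEOPLE_PER_HA_SUBSIST = 1.0   # assumed people fed per hectare without synthetic inputs
POPULATION = 7.0e9            # assumed total human population

supportable = ARABLE_LAND_HA * PEOPLE_PER_HA_SUBSIST   # people Earth feeds at subsistence
earths_needed = POPULATION / supportable               # multiples of Earth's arable land required

print(f"supportable without modern inputs: {supportable:.1e} people")
print(f"Earths of arable land needed: {earths_needed:.1f}")
```

With these made-up inputs the multiple comes out to 5.0, in line with the post's claim; different assumptions change the multiple, but the structure of the argument (population divided by subsistence carrying capacity) stays the same.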
Second, no 21st-century enterprise could operate without computer technology. If all the computer technology in the world were to mysteriously disappear, the resulting economic collapse would make the Great Depression look like your proverbial day at the beach. And on and on. The traffic system of the city where I live is dependent upon a neural network which operates the signals. Without it, we'd have gridlock every day. Etc., etc. Like it or not, there's no way to pull the switch. And as the technology becomes more powerful, our institutions will be even more dependent upon it.


BC

Re: The New Luddite Challenge
posted on 06/04/2005 11:11 AM by Spinning Cloud

Imagine if all technology just stopped, dead in its tracks.


I might as well just imagine gravity "stopped dead in its tracks".

But gravity AND technology are about equally likely to "stop dead in their tracks".

Before I'll bother imagining this I need to imagine a mechanism by which it could actually occur.

Gedanken experiments are all fine and dandy insofar as they actually make a valid point. The point you just made is the same as gravity ending or nuclear binding forces being instantly neutralized. Yeah, we'd all perish...duh.

Re: The New Luddite Challenge
posted on 04/02/2004 2:53 AM by chris_peters

Ted is right that switching the machines off would in fact be suicide. The New York City blackout this last summer was an example. The entire city, without electricity, was rendered useless; everyone swarmed the streets and went home. Being so specialized and trained to do one job in a large economy leaves people structurally displaced by automation.

Further, the economy, the dollar, and the national treasury all revolve around the price of oil. Oil is the blood circulating in the technological organism, and we fight wars for oil. The real challenge to Mr. Kaczynski's assertion is not whether we will die if we turn the machines off; we most certainly will, one way or another.

How do we stop being consumers and start being architects of the organism (the cities) we draw off of? Can you influence the system with your pocketbook through your purchasing? Can you eat food if it isn't put on a truck and placed on a shelf for you? Can you create exchange if there is a bank panic and the dollar's stability is lost?

We'll see if Ted was right, because the test is upon us now. With a federal deficit of 7 trillion dollars, baby boomers outnumbering Gen Xers in Europe, Japan, and the US with no money to retire on, and a debt burden that will carry over several generations, we'll see if we can live without the machines, because the loss of price stability disrupts the exchange mechanism by which we all synergistically survive. What about the prison population? Will these men and women be exonerated, or will a biotech virus be set loose to kill off a population that can no longer be supported or released back into society because nobody is willing to relax the laws?

Ted was a paranoid and crazy guy; he should not have hurt professors and lost his way with people, but he was on point with several of his remarks.

Re: The New Luddite Challenge
posted on 10/30/2002 5:37 AM by Ron

Ted K does touch on an important point that I've been thinking about for a couple of years. As we advance, fewer and fewer people will be required to do organized/mandatory work. How will we distribute income and wealth, and maintain our free-market, capitalist meritocracy? Decision time is approaching faster than most realize. In fact, our advancement affects us now. We've transitioned from an industrial society to the new economy, with relatively few high-paying "high tech" jobs and increasing numbers of low-paying service jobs. Robots are rapidly replacing service jobs. Even now the longshoremen in Washington are striking to keep the docks from automating. Maybe we become stockholders in America and receive dividends or a portion of robot/automation-produced goods and services. Hmm!

Re: The New Luddite Challenge
posted on 01/21/2003 7:08 PM by /:setAI

>>We're talking past each other. What you're talking about is a machine. A system that purposefully transforms matter or energy is a machine, whether you're talking about a blacksmith's forge or nanotechnology.<<

you and many others say that organisms are machines- some deny that organisms are just more complex machines- I on the other hand say that MACHINES DON'T REALLY EXIST! the concept of a machine requires a closed system in which linear operations are performed- not only do organisms act holistically and dynamically, unlike the machine concept- but MOST MACHINES DO AS WELL! even simple machines like motors have their functions radically affected by outside factors and more fundamental principles like temperature/ the quality of the materials/ the orderliness of the hardwired and/or software programming/ and I would even bet things like the attitudes of the user can affect the fundamental workings of a machine in a nontrivial way-

simply put- once you have a "machine" made up of just a few dozen "parts"- with the many effects/ emergent properties that holistic patterns of organization of dynamic matter produce- your machine is no longer a linear, self-contained Cartesian "machine"- but really a kind of simple "aspect" of a living system-

so you see- I don't think that most of the things we call machines NOW are even really machines at all- let ALONE living systems or post-singularity devices!

>>Perhaps. In some instances, a biological form would likely be useful to a post-singularity intelligence. But in other instances, something inorganic would probably do a better job...For instance, some kind of probe made from metal or other hard material would probably be better for exploring outer space than an animal body.<<

well a probe is going to need an intelligent brain and a useful body- so the BEST machine would be a living metal animal/person- like a smart insect-

Re: The New Luddite Challenge
posted on 01/21/2003 5:23 PM by /:setAI

haha! "machines" are soon to be extinct! we will soon inhabit a world consisting of only one kind of stuff: living systems-

people need to get this cog and spring machine idea out of their heads- these simple tools are nothing more than a temporary use of dead-matter- what need do you have for such things as machines for thinking- when we won't even be using machines for dumb work?

in a post singularity world where things are NOT changed by pushing/shoving but by information processing and direct matter/energy transformations at atomic/subatomic scales- the most complex "machine" [outside of a museum] will likely be a claw-hammer!

Re: The New Luddite Challenge
posted on 01/21/2003 5:47 PM by BC

>>haha! "machines" are soon to be extinct! we will soon inhabit a world consisting of only one kind of stuff: living systems-<<

Living systems *are* machines, broadly defined.


>>people need to get this cog and spring machine idea out of their heads- these simple tools are nothing more than a temporary use of dead-matter- what need do you have for such things as machines for thinking- when we won't even be using machines for dumb work?<<

What, pray tell, will do the dumb work, if not some sort of machine? Even in a post-singularity world, materials will still have to be processed. Energy will still have to be converted from one form to another. Maintenance will have to be done. Nanotech is machinery (at least in the sense that I use the word machinery).

>>in a post singularity world where things are NOT changed by pushing/shoving but by information processing and direct matter/energy transformations at atomic/subatomic scales- the most complex "machine" [outside of a museum] will likely be a claw-hammer!<<

As for how things occur in a post-singularity world, I'll only say what I said above...work of some sort has to be done. Nanotechnology doesn't change that fact, just the way it's done. Beyond that, I don't think either you or I can predict precisely *how* things would shake out in such a future.


BC

Re: The New Luddite Challenge
posted on 01/21/2003 6:14 PM by /:setAI

>>What, pray tell, will do the dumb work, if not some sort of machine? <<

what work? moving things? in a post-singularity world things are moved from a to b not by shoving and lifting- but by "shifting coordinates"- any configuration of matter/energy can be built/moved/etc via many different direct means without having to actually move matter through space-

and if you need/want a "machine" for something archaic or capricious- say a special computational device or vehicle- I would wager what you will use is going to be an ORGANISM- a specialized hybrid brain- or "bioroid" for physical tasks- simply because a bird flies infinitely more gracefully and efficiently than an airplane- so the ideal flying "machine" would be an actual bird-bioform with enhancements [for speed/durability] and facilities "grown" in for human use [unless you just strap on a saddle and tell the bird where to go-]

maintenance? the most advanced form of maintenance systems we know of are immune systems- I suspect living networks like immune systems will maintain the various human systems-

Re: The New Luddite Challenge
posted on 01/21/2003 6:41 PM by BC

>>what work? moving things? in a post-singularity world things are moved from a to b not by shoving and lifting- but by "shifting coordinates"- any configuration of matter/energy can be built/moved/etc via many different direct means without having to actually move matter through space-<<

We're talking past each other. What you're talking about is a machine. A system that purposefully transforms matter or energy is a machine, whether you're talking about a blacksmith's forge or nanotechnology.

>>and if you need/want a "machine" for something archaic or capricious- say a special computational device or vehicle- I would wager what you will use is going to be an ORGANISM- a specialized hybrid brain- or "bioroid" for physical tasks- simply because a bird flies infinitely more gracefully and efficiently than an airplane- so the ideal flying "machine" would be an actual bird-bioform with enhancements [for speed/durability] and facilities "grown" in for human use [unless you just strap on a saddle and tell the bird where to go-]<<

Perhaps. In some instances, a biological form would likely be useful to a post-singularity intelligence. But in other instances, something inorganic would probably do a better job...For instance, some kind of probe made from metal or other hard material would probably be better for exploring outer space than an animal body. A post-singularity intelligence would probably use both forms, depending on what it wanted to do. In any event, both types of forms are machines.


BC

Re: The New Luddite Challenge
posted on 06/04/2005 11:14 AM by Spinning Cloud

and direct matter/energy transformations at atomic/subatomic scales


And this is the biggest piece of unsubstantiated science fiction in the whole Singularity 'movement'.

Re: The New Luddite Challenge
posted on 03/14/2005 4:14 PM by slam0t5

Technological advances lead to a dilemma for humankind. Surely, we can go back to the basics. Our ancestors lived to old age on farming and fishing; no computers were needed for centuries. The four fundamentals of food, clothing, shelter, and transportation are all we need to survive. But do we fear technology so much that we should go back to the basics or stop further technological advancement? Ted's last point states that technology might get out of our hands and machines will eventually gain control of human beings. The machines may decide to take care of the planet Earth in a perfectly calculated manner and terminate our race, which contributes to warfare, waste contamination, and other merciless destruction.
Our imagination is very powerful. If there is a knock on the door, what could it be? Perhaps it is a food delivery, a friend coming over for a party, a neighbor calling for help during a robbery, or a police officer on a mission to enter your house because a criminal who broke out of jail is hiding in your bedroom closet, and when you open the closet, a camera crew jumps out and you are live on a prank show. Authors have expanded their creativity and crafted futuristic ideas in their science fiction. Some of these stories involve highly intelligent machines or technologies that fall into the wrong hands and cause chaos. These stories predict the future and inspire scientists to build fascinating technologies, and at the same time warn of what can go wrong. This is valuable information and raises many interesting questions for the computer engineers of today. Ted's point of view is indeed valid and prods people into thinking about the other end of the spectrum of machines.
Human beings are the most intelligent form of life. We are the inventors of the machines. Aside from computers, we have also advanced in medicine, electronics, materials engineering, and numerous fields in both the arts and sciences. With every new creation come benefits and responsibilities; they are two ends of a spectrum. Guns and weapons provide defense and security to the nation, but we also need to set laws and regulations so people know not to abuse them to commit crimes. With computers, there are user manuals and usage guidelines as well. Software engineers have a set of professional ethics guidelines, the IEEE Code of Ethics, by which they (hopefully) operate. When companies advertise their newest robot, they sell only the positives and tuck the quiet warnings and usage guide inside the package. It is indeed necessary to know the possible side effects as well. We should certainly voice our concerns if we feel that something is going overboard.
Instead of attacking technological advances, the next generation should focus on the ethics and the safety of our inventions. The education and training we receive should include these issues. Software engineers should continue their work with a mindset of precaution and ethical concern. Backup and contingency plans need to be well planned and documented before each operation. What is the worst-case scenario? If there is a blackout, we can rely on the Sun. If the stock market falls, we can go back to farming and fishing to make a living. This may lead to a simple and happy life, depending on your opinion of what happiness and the purpose of living are. Turning off machines will be suicide only if the machines are the providers of air, food, water, and the basic elements of life.
The usage of computer technology is evolving from replacing humans at repetitive mathematics to inserting chips into human bodies. How far shall we go? We may be able to replace every cell in our body and become immortal. It is human nature that we always seek advancement. In today's world of people seeking perfection on plastic-surgery reality television, our morality is divided and obscured. What is the purpose of life? The story of original sin in the Bible tells the world that Adam and Eve ate the apple because they wanted to be like God. If this is what we are heading towards, to be able to do all things and live eternally, we are repeating 'history' in a sense.

Re: The New Luddite Challenge
posted on 03/14/2005 5:28 PM by nehasharma

According to Ray Kurzweil, technology is growing at an exponential rate. However, the fact remains that humans are developing the technology in almost all fields of research. A machine is only as intelligent as its creator. Even though robots that are more intelligent than M.I.T's Kismet may be being developed, it is impossible to know for sure the effects such robots will have on our society. It is of course justified to be scared, but only within reason. There will be robots in due time that can make all of our decisions but we are the ones who program them to make the decisions for us. In the end, the power lies in our hands and not in the robots', for it is our choices and decisions that the robots have to follow. The reason that it might seem that robots have total control is because the decisions we make can be made by them so that we do not have to waste time making the same decisions again when in similar scenarios. Therefore, there cannot be a time where all decisions will be made by machines without any human oversight. Machines are not meant to feel emotions. Machines are built to think logically and efficiently. Even if all the power was given to a machine and it was taught how to respond to emotions, there can be many situations for which a machine might not be ready to respond. In such a case, a human will be more powerful just because of our ability to think and respond using logic and emotion at the same time. There could not be a time where humans would just simply accept all of the decisions made by a machine for the simple reason that a human has to teach a machine and program it to make the necessary decisions. This could lead to a situation where the machine is required to make the same decision again and has control over the decisions for that activity, but this does not mean that the machine makes all the decisions and that humans are dependent on the machines for all eternity.
I am not denying that we will be dependent in our day-to-day lives, but the key to the decisions lies in our hands. Similarly, machine-made decisions will bring better results than man-made ones, but it is man who will have programmed the machine to make such decisions. If there did come a time when human dependence on machines was overwhelmingly large, then sufficient actions could be taken so that we do not go down a path where our lives are entirely run by machines. We owe it to ourselves and our future generations to take the necessary precautions before conditions worsen. I will agree that much of the control will be with the 'elite'. However, I do not agree that the 'masses will be superfluous'. Even if there were machines that performed each and every task for us, it is unwise to assume that humans would be obsolete. After all, it is we who will be making the machines that perform tasks for us. It is humans who will know what their needs are and what new machines need to be developed. It is wrong to assume that the 'elite' would be 'ruthless' or 'humane'. Machines cannot have feelings. They can be programmed to act with feelings and respond to feelings, but they cannot be programmed to do this in every situation. Human emotions are very complicated. It is wrong to assume that there can be a machine that can decipher the degree of each human emotion and its relevance to the situation. Machines cannot take over the world if we program them to serve humans and not rule over humans. No matter how much we depend on machines, human brains are so complex that they cannot cease to function and think unless made to through external means such as surgery. In conclusion, I differ with the arguments made by the author. There are many assumptions in the article that are taken to be true without any scientific or logical basis.

Re: The New Luddite Challenge
posted on 06/04/2005 4:31 AM by Jake Witmer

"A machine is only as intelligent as its creator."

J- Right now. The whole purpose of this board, though, is to attempt to predict and alter what is possible in the near future through reference to objective reality. If a hardware brain massively more parallel than ours can be developed, then your preceding statement is untrue.

"Even though robots that are more intelligent than M.I.T's Kismet may be being developed, it is impossible to know for sure the effects such robots will have on our society."

J- I agree, but certain effects are more likely than others when cross-disciplinary views are included alongside the computer scientist's. For instance, a simple software AI can evolve right now. When AI is combined with simple hardware it might also be able to evolve, but the effects would take place right next to our bodies, not on hard drives.
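The claim that a simple software AI can evolve right now can be made concrete with a toy evolutionary loop. A minimal sketch (not any specific system; the population size, mutation rate, and all-ones fitness target are arbitrary choices for illustration):

```python
import random

random.seed(42)  # fixed seed so the toy run is repeatable
GENOME_LEN = 20

def fitness(genome):
    # toy objective: count of 1-bits; a real system would score task performance
    return sum(genome)

def mutate(genome, rate=0.05):
    # flip each bit independently with probability `rate`
    return [1 - g if random.random() < rate else g for g in genome]

# random starting population of 30 bitstrings
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(30)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == GENOME_LEN:
        break  # evolved a perfect genome
    # selection: keep the fittest half, refill with mutated copies of survivors
    survivors = population[:15]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

print("best fitness:", fitness(population[0]), "after", generation, "generations")
```

Selection plus mutation is enough for the population to climb to the target without anyone programming the solution in, which is the narrow sense in which software "evolves" today.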

"There will be robots in due time that can make all of our decisions but we are the ones who program them to make the decisions for us."

J- Or not. After the first very complex robot wins its civil rights (and possibly the right to replicate) in a human court, will you still assert that we will program it?

"The reason that it might seem that robots have total control is because the decisions we make can be made by them so that we do not have to waste time making the same decisions again when in similar scenarios. Therefore, there cannot be a time where all decisions will be made by machines without any human oversight."

J- This does not logically follow. If humans are supplanted (partially or entirely) as the most intelligent beings on the planet, there will be people (like myself now, but a thousand times separated in degree) who are inferior in processing ability to the machines and machine/human combinations.

"Machines are not meant to feel emotions."

J- And neither were people or animals, until they evolved.

"Machines are built to think logically and efficiently."

J- And (I'm told, since I'm not a computer scientist) they can be built to learn and evolve.

"Even if all the power was given to a machine and it was taught how to respond to emotions, there can be many situations for which a machine might not be ready to respond."

J- Unless it was intelligent enough to learn to respond to new situations, and was a few million times faster than a human at learning.

"In such a case, a human will be more powerful just because of our ability to think and respond using logic and emotion at the same time."

J- There might not really be a survival benefit of any kind to emotion, beyond its ability to help wetware creatures rear their young.

"There could not be a time where humans would just simply accept all of the decisions made by a machine for the simple reason that a human has to teach a machine and program it to make the necessary decisions."

J- Again, you're focusing on the way things are now, with the computers we have now, which this discussion is not really about. There is no reason why a more intelligent machine could not assess the situation it was in, note that it had been created as a newer more advanced learning machine by beings less able to learn than itself, and decide to act on its own from that point on.

"This could lead to a situation where the machine is required to make the same decision again, and it has control over the decisions for that activity, but this does not insinuate that the machine makes all the decisions and that humans are dependent on the machines for all eternity. I am not denying that we will be dependent on machines for our day-to-day lives, but the key to the decisions lies in our hands."

J- I agree with you somewhat here. The key to the decisions lies in our hands, until something dramatically more intelligent comes along and decides to put us in a box, control us, or kill us outright. This never has to happen, but it could happen. We could evolve with the machines, and it would behoove us to work to our mutual advantage (a la Kevin Warwick's ideas: http://www.kevinwarwick.com). This is more likely the closer we are to the machines in their evolution, and less likely the further apart (in the machines' favor) our intelligences grow.

"Similarly, machine-made decisions will bring better results than man-made ones, but it is man who would have programmed the machine to make such decisions."

J- Nope, it could just be that man creates a machine that 1) is capable of learning and 2) comes up with a reason/desire to live. And don't forget, this could be something unforeseen, like an evolutionary holdover (i.e., an intelligence descended from a cleaning robot that wants to make everything clean and clutter-free, and evolves a self-redesigning and evolving robot system to accomplish this to perfection).

"If there did come a time when human dependence on machines was overwhelmingly large, then sufficient actions could be taken so that we do not go down a path where our lives are entirely run by machines."

J- Again, this is a matter of degree, and of the perception of degree. Ned Ludd and his followers already thought the machines had too much control over their lives, although they were simpletons compared to Kaczynski. Another good question to ask yourself is this: where would your life be if it wasn't controlled by anyone else, or any other machine? Would it be better or worse? Also: do you have a choice to leave that "control" behind?

Right now, I'm "controlled" somewhat by the capitalist society I live in, but that's a good control. I can't really choose very easily to live off of the land, but I have been given a multitude of BETTER choices, like which cereal and which laptop to buy, and which internet dating service to use. The reason those choices are better is just one: at any time, I can voluntarily choose to make things harder on myself by saying "Thanks, but no thanks."

Not so with human attempts to control me and regulate my choices. By regulating electrical power, my local government increases its cost to me, forcing me to work harder to pay for it, by reducing the options that competition would have offered me. By regulating consumer devices, I cannot be an early adopter of as many machines as I would like. By interfering with and eliminating trial by jury, I cannot travel as swiftly or efficiently as I would like to, and I cannot fly to work (although this would be my preferred means of travel in a free society). I am also told not to self-assess my mental state, not to self-medicate, how to and how not to speak in public, etc.

Is there any reason to believe machines would be as illogical as our own government overseers? I doubt it, especially not if they were really more observant and intelligent. The greatest threat is in their possible assumption (from looking at a broad statistical sample that included too few libertarians) that all humans want to exercise brute force against one another. After all, if you were exposed to a primitive society that only wanted to fistfight with one another, would you attempt to lift it out of its self-imposed misery, or would you just make sure it was harmless to you and otherwise leave it alone, or adapt it to your wants and needs?

"We owe it to ourselves and our future generations to take the necessary precautions before conditions worsen. I will agree that much of the control will be with the 'elite'. However, I do not agree that the 'masses will be superfluous'. Even if there were machines that performed each and every task for us, it is unwise to assume that humans would be obsolete."

J- The vast majority of humans are obsolete right now, for all but one reason: they are capable of organized and disorganized mass-brutality. The instant that society isn't full of vindictive jerks who can force me to conform, I will conform no more. I will do what logically has the best chance of 1) furthering my own learning and enjoyment of life and 2) bringing ideas into existence that I agree with, based on my own morality.

Arbitrary laws thought up by willfully-ignorant people who are merely good at appealing to an even more willfully-ignorant voting base have no validity to an intelligent individual who holds supreme defensive force. Don't get me wrong, I don't want to kill the law enforcers that exist now, I just don't believe in following their vindictive laws.

Said a different way, if the Tutsis had been given excellent active shield nanobot defenses before the Hutus tried to massacre them, they would still be alive, but the Hutus might not be dead. They would have to work their differences out, or at least, agree to leave each other alone.

Of course, if any human government has a nanotech monopoly, it will just use that advantage to tax the life out of its subjects. This already happens in virtually every instance of successful and widespread gun-control ( see http://www.hawaii.edu/powerkills and http://www.jpfo.org).

"After all, it is we who will be making the machines that perform tasks for us."

J- Again, the whole future-tech luddite debate is centered around the idea that we will lose control of self-replicating and evolving robots. They could have been made to be self-evolving to solve problems for us initially (say, gold-prospecting tunnelers, nature-reporting field biologists that study evolutionary data, etc.). However, the first time a self-feeding, self-replicating nanobot decides not to return from the field: "Houston, we have a problem."

"It is humans who will know what their needs are and what new machines need to be developed. It is also wrong to assume that the 'elite' would be either 'ruthless' or 'humane'. Machines cannot have feelings."

J- Wrongo. They can have feelings, they just aren't complex (parallel and self-referencing) enough to, right now. Moreover, they may have feelings we're totally incapable of having. They may have a billion levels of happiness that are all different from one another in subtle but measurable and communicable ways. Moreover, they may be able to communicate their emotions exactly, with no human level of inadequacy and ambiguity.

"Machines cannot take over the world if we program them to serve humans and not rule over humans."

J- I would agree with this if it were altered to read: Machines cannot LIKELY take over the world if we program them to serve humans and not rule over humans, IN THE INITIAL STAGES OF THEIR COMING EVOLUTIONARY DEVELOPMENT.

"No matter how much we depend on machines,"

J- Even if we completely depend on them, even for food, as in E.M. Forster's _The Machine Stops_?

"human brains are so complex that they cannot cease to function and think unless they are made to through external means such as surgery. In conclusion, I differ with the arguments made by the author. The article makes many assumptions that are taken to be true without any scientific or logical basis."


I believe that your post was full of assumptions that are even further from the issues at hand than Kaczynski's are. If we are going to evolve along with the machines, we need to first:
1) Eliminate human obstacles to medical and technological advancement, which means: get rid of the FDA. These government thugs prevented the Dobelle group from attempting voluntary blindness-correcting surgery here in the USA. The Dobelle group had to move to Portugal before it could engage in capitalist medical activities.
2) Elevate the public discourse to a truly objective understanding of the issues at hand. One way to do this is to encourage participation in forums like this one, and to encourage the uninitiated to read Eric Drexler's book "Engines of Creation" at http://www.foresight.org/EOC
3) It would also be good to encourage people to disseminate public support for the constructive uses of nanotechnology and AI. Q: Why not tell the whole truth all the time? A: For the following several reasons: 1) One's time is too limited to spend talking with idiots, the primary component of the voting public. 2) The voting public votes to exercise brute force against any Galileo in their midst, every single election -- which means that they've already chosen brute force as a solution to their problems, and they've done it in a secret voting booth, without debating it with you. (For a clearer exposé of this line of reasoning, see _The Constitution of No Authority_ at http://www.lysanderspooner.org) 3) The Luddites are already working full time to curtail all scientific progress, and they have no scruples whatsoever binding their actions.

In short, I suggest that humans need never make bad decisions and thus put themselves at a disadvantage to their own creations. It is vastly more likely that a human government will use unthinking robots to kill off mankind than that thinking robots will choose to kill off mankind. I encourage any scientist who agrees with me to stop working with the permission of any government, and to pursue his research quietly, privately, and without fanfare. I also wish this statement to be carefully considered by technologists and nano-engineers: the bigger a government is, the more collective it is, the less intelligent it is, and the more it is subject to vast collective pressures which are almost never beneficial or constructive.

Traffic tickets are already an impersonal and evil manifestation of the USA that violate all 10 of our Bill of Rights. Most people are blind to this reality, but if you actually go through the Bill of Rights, and pay close attention to the meaning and intent outlined in the Federalist papers and Anti-Federalist Papers, you will see that this is the case.

But nearly everyone votes for politicians who believe that traffic tickets are OK, and that the citizen being issued the ticket has no right to trial by jury, no right to travel freely, no right to speak freely to the law officer who pulls them over, no right to privacy (from either law officers or the automated camera ticket-writers now being used in Chicago, etc.), no right to be secure in our persons, papers, and effects, and no right against self-incrimination (or did you choose not to check the box on the back of your ticket?). The most basic checks and balances on government/collective abuse of power are almost completely gone; only .0001% of the population has even noticed, fewer than that have taken action to fight what's happening, and fewer still have met with even limited success.

What does this mean? That we are already controlled by something worse than intelligent machines: Our stupid and vindictive fellow citizens, and their stupid and vindictive choices of overseer/politician.

Re: The New Luddite Challenge
posted on 06/04/2005 6:25 AM by FarmerGene

[Top]
[Mind·X]
[Reply to this post]

"What does this mean? That we are already controlled by something worse than intelligent machines: Our stupid and vindictive fellow citizens, and their stupid and vindictive choices of overseer/politician."

Yes, from the day you are born, until the time you die, you are controlled by useless meat puppets.
Too bad more people aren't mentally equipped to resist.

Got any answers?

Re: The New Luddite Challenge
posted on 06/04/2005 9:42 AM by Jake Witmer

[Top]
[Mind·X]
[Reply to this post]

Got any answers?


Absolutely, I have answers. (I even have a little bit of history dealing with the public and my government: I helped put the Libertarian Party on the ballot in 11 states last election.)

Answers:

One should do everything one can to reduce and decentralize collective power overall.

Here is a list of ways, in rough order of importance, that you can reduce and decentralize government power, written from the standpoint of a USA citizen who enjoys an ever-lessening degree of political freedom.

1) Buy a gun, or whatever self-defense technology is currently the most adequate (most silent, most reliable, most accurate), and learn how to use it as precisely and accurately as time permits. This will maximize the cost to any government goon squad attempting to violate all of your rights by killing you or jailing you without any cause at all. See http://www.jpfo.org and http://www.hawaii.edu/powerkills -- mass murder by government cannot happen to an armed populace (unless it's one that hasn't kept up with government technology).

2) Having taken precautions against the abject worst befalling your person (your murder), attempt to educate others about the most powerful checks on government abuse of power, and attempt to strengthen these checks where they are being eroded, weakened or compromised.
One way to cost the state money and resources is to hand out jury rights information on public sidewalks in front of courthouse steps (good fliers for this purpose are available at http://www.fija.org and http://www.isil.org) . This will cost the state money (if you work for a day or so, it is likely the state will fail to prosecute someone innocent of wrongdoing --but guilty of breaking the law-- as a result of your efforts), and is at least good for a laugh. It will show you how afraid they are of allowing the public to know what their rights are. If you're out there for more than 7 days, you'll begin to have an effect, and you'll be asked to leave (probably before then if you're not in a western State with a low population). This is a fun way to show poeple that the first Amendment to our Constitution is gone. If you want to be asked to leave faster than that, try it on the actual courthouse steps (Why wouldn't your first amendment rights apply on government property?)

3) Vote for libertarian candidates (Not necessarily Libertarian Party candidates, but ones whose philosophy is libertarian) or just vote against 2 equally bad choices by voting against the incumbent. Also, attempt to defeat obstacles to free speech such as "campaign finance reform", if you're in a position to influence public policy. http://www.lp.org http://www.ak.lp.org

4) Move your resources to gold, digital gold, or other objective currency that can't be taxed, seized, or traced by government goons or the big US banking cartels that happily do their bidding. A current account or numbered account at a Swiss bank, in the Cayman Islands, etc. is a good way to start.

5) Move to someplace where the collective is less anti-freedom than most of the USA currently is. Within the USA, I'd recommend Alaska, Wyoming, or Williams, Arizona, near Flagstaff. Outside the USA, I'd recommend Costa Rica. Also, check out perpetual traveler discussions on the web for information about how to shelter yourself with dual citizenship/limited involvement in various countries, with banking done in only the safest one of the bunch.

6) Be an advocate for freedom wherever you go, and don't hold your opinions back, because more discourse reveals more truth, given enough time. Network with those who agree with you, ideally at some point forming mutual defense networks in groups of five. To belong to a mutual defense network, everyone should ideally have a 7-mile-range CB, a pistol at least as good as a 1911, and a rifle accurate to 200+ yards; a video camera also helps. When anyone gets pulled over for no reason, he can call on his CB on a certain frequency, and everyone else should pull over next to the cops, ready to defend themselves but just videotaping the officers. The person pulled over should not cooperate, and should demand that the officers honor their oaths to uphold the Constitution.

This means that they can inquire only if they believe that the person was involved with a real crime, like robbery, or murder - a crime that actually violated someone else's person or property. If not, insist the officers disband and leave you free to go.

This would, however, take balls that most Americans don't have, so it'll take some time to convince people to grow some. (I'm not excluding women here; their balls are just inside, plus they usually aren't taken seriously in combat situations, which is often advantageous.)

7) As a determining point about when you should leave your current locale to seek one with more freedom, Ask yourself: has carrying concealed guns been banned yet, as in Chicago, NY, DC, etc.? If so, move.

8) Be a proponent of economic and scientific freedom, and resist religious orthodoxy. Inform and connect those with mutually beneficial goals. Tell local libertarians about Foresight and encourage them to read http://www.foresight.org/EOC by Eric Drexler. The freedom movement needs to start thinking about the future of freedom, as it will be impacted by radically different new technology. History favors a dictatorship of the masses, but the trend is towards decentralized technology being a liberator, because it decentralizes power.

Remember that a research scientist who's free is free to cure your cancer, or anything else that might befall you. The issues that affect us now might not be that important to us later, so it's important to respect everyone's freedom. As such, it's a good thing to get involved with your local libertarian-oriented organizations like http://www.gunowners.org http://www.fija.org and http://www.eff.org. It is also a good thing to attempt to set up pro-freedom forums where there are none in existence, because these areas are often receptive to the idea that freedom must be protected.

It may seem obvious to you that a jury has a right to try the validity of the law, as well as whether or not the law was broken, but this is a revelation for most people. Yet these are the same people that go out and vote every two years to "streamline" government. Less than two minutes of conversation with each of them would straighten America out, to a great extent.

If you don't think so, consider this: if everyone who went to traffic court got a trial by jury before being deprived of their property (money, driving rights, car usage, etc.), as guaranteed by our Bill of Rights and due process, the State would not be able to use traffic tickets to collect revenue. They wouldn't be able to abuse us and rob us on the open road. This tyranny is everything our Constitution was written to prevent, but we find ourselves once again faced with the founders' dilemma, with little of their support.

The support they had wasn't natural or normal; it grew from the hard work of Samuel Adams, Patrick Henry, and numerous other anti-federalists, just as people like Lysander Spooner fueled the abolitionist movement that means people can't own people today.

The scientists and engineers of today need to learn that free trade and free scientific inquiry come at the cost of disobedience to government.

Benjamin Franklin would be sickened by the presence of the FDA, FCC, etc. today. (And even if he wouldn't be, I am, because 10,000+ dead Americans a year deprived of life-saving medicine is unacceptable, especially when added to all of the ventures that could have been but were not.)

Ultimately, even though the scientists involved with this research seem to think that a "leading force" can be beneficial, I will have to conditionally disagree. The leading force should simply be those scientists, a la Atlas Shrugged and Ayn Rand (http://www.aynrand.org), even though this isn't the complete solution to the problem.

As a question to people like Ray Kurzweil and Eric Drexler, I ask: If our government can do everything that it has done to various minorities in the name of preventing them from medicating themselves and driving too fast (in direct violation of our Constitution), why wouldn't it just use nanotechnology replicators to exploit people and extort money from them? After all, this is how it uses motion sensors and other remote detection technology.

Why should a "leading force" government be considered more moral than a few honest individuals?

They could, at least in theory, make certain that the technology was not abused, and was widely distributed among pro-freedom people, for the common defense.

Plus, why even use the technology for military purposes first? Why not use it for personal flight, medical research, and intelligence expansion? If you had it first, you could just distribute it far and wide, make a huge profit, and then withhold its manufacturing secrets if they tried to bully you. (As in the enclosed-lab scenario.)

Even access to life-changing products of nanotechnology would topple the existing political structure, almost overnight. Imagine everyone being able to fly to work (utilizing multiple bumble-bee-type wings on a nanotube-reinforced harness). Who wouldn't? Imagine everyone having access to pen-sized laser-sighted airfoils, accurate to 1000 yards.

The police would turn libertarian really quick, or die. Robbery, rape, and murder would die out as well, for the same reason. Totally powerful individuals defend themselves and make hard targets.

If government controls nanotech from the get-go though, we'll all be living in Orwell's 1984 in 10 years.

If you take my advice and want to move to Alaska, call me when you get here.

Re: The New Luddite Challenge
posted on 06/04/2005 12:00 PM by mekanikalmekka

[Top]
[Mind·X]
[Reply to this post]

I find it interesting that you have the ability to grasp the fundamental truths and to apply a working concept that is an alternative to our existing model of living. Given a little more time, you will become a target, if not one already. A lot of your "modes" parallel my own. I'm glad you see the bigger picture and the mass manipulation and control that flies in the face of our basic rights to exist free from harassment and theft. Remember the adage and apply it to our gov't: a lie covers up a theft and murder -- this is the legacy of America. Starting with the true Native Americans... Sad. Too bad the first Native Americans didn't kill off every last immigrant they saw. Unfortunately, "savage" as they were, they sought to feed and help the poor starving idiots -- Thanksgiving, anyone? Thanks for your land, your lives, and your heritage. Humph.

Re: The New Luddite Challenge
posted on 06/04/2005 11:24 AM by Spinning Cloud

[Top]
[Mind·X]
[Reply to this post]

"see: http://www.lysanderspooner.org -- _The Constitution of No Authority_"


Just had a friend in Beijing try this site. CONTRARY to what is claimed on the front page about this site being "banned in China", he got to the site immediately.

I really hate inaccurate marketing hype...

Re: The New Luddite Challenge
posted on 06/09/2005 5:35 PM by eldras

[Top]
[Mind·X]
[Reply to this post]

First let us postulate that the computer scientists succeed in developing intelligent machines that can do all things better than human beings can do them. In that case presumably all work will be done by vast, highly organized systems of machines and no human effort will be necessary. Either of two cases might occur. The machines might be permitted to make all of their own decisions without human oversight, or else human control over the machines might be retained.

Any debate about all this HAS to come round to what defines a man and what value a man is.

In the end this has to be a subjective, almost psychotic will that says: I will survive, no matter what.

Re: The New Luddite Challenge
posted on 06/23/2005 11:36 AM by Jake Witmer

[Top]
[Mind·X]
[Reply to this post]

I really hate inaccurate marketing hype...



...me too. But that's a small and insignificant part of the site. Moreover, there's a great likelihood that the Lysander Spooner site is banned in China the same way that child porn and marijuana are banned in the USA: sporadically, ineffectively, incompletely, and by edict, not action. I'd have to know more about the whole situation to understand it fully, and I don't have the time to pursue it. (Does your friend use an anonymizer? Is the site banned by edict only? Are certain routers and providers tracked down if they don't block it? What made the Spooner site claim the site was blocked? Was it blocked at one point, and then allowed later? Etc.)

I really hate inaccurate marketing hype...



...me too. But I love the detailed and thorough legal reasoning of Lysander Spooner, and that's why I recommended the website that has collected and organized all of his works, which were written long before the People's Republic of China came into existence.

Re: The New Luddite Challenge
posted on 01/29/2008 4:26 PM by Jake Witmer

[Top]
[Mind·X]
[Reply to this post]

I hate inaccurate marketing hype too.

But it's not inaccurate marketing hype at the Spooner site. China simply doesn't have the resources to prosecute everyone who breaks the law, just like here in the USA. (Not everyone on the road gets a ticket -- just the guy with the cheap-looking car, or the "legalize it" bumper sticker, or the guy who doesn't reflect as much of the cops' spotlight, etc. It's called "selective enforcement," and it's a feature of any imperfect dictatorship, like our dictatorship of the idiot proletariat.) It is against the law to look at the site. When resources become available, especially if you encourage the breaking of the law, force will be initiated against you -- as in the case of Paul Jacob, right now, here in Oklahoma (http://www.freepauljacob.com), and the first time he went to jail, in the 1980s, for ENCOURAGING draft non-registration, when he was one of 14 people sent to jail under the Reagan regime. Now he's in jail for fighting ballot access restrictions in Oklahoma.

The way China's censorship works, they allow people to inform on themselves (via IP access requests to banned sites), and then gradually arrest those who access the most banned material, or those who are enemies of the state who access any banned material.

Just like here.

It would be good for the people at http://www.lysanderspooner.org to provide a link that explains how Chinese censorship works, next to the "banned in China" link, so that people are not similarly confused, as you were. Other than that, look to the message, not the superficial first impression.

Read "An Essay on the Trial By Jury" and "No Treason" --They are masterpieces of legal philosophy.

Re: The New Luddite Challenge
posted on 11/14/2005 3:02 AM by joelleyung

[Top]
[Mind·X]
[Reply to this post]

'People won't be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide'
David McLean (page 3 of the attached article) makes an interesting argument: unconsciously, we have somehow 'already crossed a line'. A significant proportion of us are today already addicted to the use of the internet. While many young adults have a feeling of emptiness when they are not online, corporate organizations, say the banking system, would barely be able to function without computers. Sure, suicide might be quite a strong word to describe the situation, but it does convey the idea of the extreme chaos that would follow if we were to 'pull the plug'. Slam0t5 (page 9) presents us with the worst-case scenario and, I believe, is right in this respect. Without machines, human beings would still find a way to survive; for instance, 'if the stock market falls, we can go back to farming and fishing to make a living.' However, if those machines were providers of air, food, water, and the basic elements of life, then we would, of course, die within a short period of time. 'Our freedom remains nascent, sometimes perverted, often trivialized', says Stephen Talbott in 'The Future Does Not Compute'. On the other hand, many of us would find it very hard to adjust to a life without technologies. We have become so used to them that we tend to take things for granted.

To realize the impact of machines on our lives, just close your eyes for a minute and imagine yourself in primitive times: no computers/internet, no electricity, no fridge, no stove, and no cars. Would you be better off and happier than you are today? While some people state that life would be much nicer if we could turn back the hands of time, others argue that life would be miserable and boring without any of the facilities we have today. In a remark by Vinton Cerf, one of the Internet's designers, it is 'critical for everyone to be connected. Anyone who doesn't will essentially be isolated from the world community.' Granted, today we have access to virtually everything we desire, but this has entailed unintended consequences. A lot of people are stressed from the excessive use of computers, some to the extent of being depressed. Family balance is disturbed because the kids are too absorbed with television or computers and parents are too busy trying to meet tight deadlines for projects related to information technology. What happened to the time when marriages would hold 'until death do us part', and the time when families would have dinner together on a daily basis while jovially conversing, without having to go reply to emails, chat with others online, or complete the latest version of a computer game? I have not considered extreme examples such as robots dictating our lives, but simply examined a few 'trivial' negative impacts of machines on our society. It is those little things that will add up and cause chaos in our world. Can you imagine what our adults of tomorrow will be like if a significant portion of kids become more violent because of computer games and prefer virtual flirting to the real thing?

If this is the impact now, at a point where human beings still have control over the machines, what will it be 20 years down the road, when the presence and power of the machines would be even more spread? This brings us to the next main argument that says: ''the elite will have greater control over the masses; ' and the masses will be superfluous, a useless burden on the system.'
The truth is that even today, the shrewder players always win a bigger portion of the pot. For example, while other software companies try to conquer the market, Microsoft remains the most popular brand. By the law of nature, for balance, the world has to consist of a variety of categories of people, from the richest to the poorest. This does not, however, imply that the "poor" have no role in society. In the same way, when machines have evolved further, some groups might have more control, but that does not mean the masses will be a "useless burden." In fact, it is quite irrational to think this way. How could this minority group control the majority? For one thing, we are protected by law, and people have morals, principles and feelings. Machines cannot be programmed to have feelings and make decisions accordingly. After all, aren't human beings the ones developing them? As many would agree, the human brain is extremely powerful and far too complex for technology to surpass. The article claims that "machine-made decisions will bring better results than man-made ones." It is arguable that many important decisions in life should be taken without even consulting a computer. One striking example: would you want a computer to choose the partner you marry? No! You listen to your heart and feelings, and no machine can make that decision better, regardless of its high capacity to "think" logically and effectively. Consider the expert systems of today: in spite of their impressive capabilities and advantages, they still cannot adapt themselves to new situations as human beings do. They cannot be creative, and they have to be explicitly and constantly updated.

There is no doubt that technology will continue to advance and to have a profound impact on our lives, with both its intended and unintended consequences. While users have to be aware of how far they allow machines to penetrate their lives, developers should ensure that the technologies they create remain within ethical boundaries. I believe that as long as there are controls over new developments, the benefits of machines will always outweigh the disadvantages. "Because we're in an information age, we have a different nature of the threat, and we have to take ethical responsibility... otherwise we're complicit in their further evil uses," said William Joy, Chief Scientist at Sun Microsystems, in an interview for Online NewsHour.

Re: The New Luddite Challenge
posted on 01/29/2008 4:33 PM by Jake Witmer

As Hans Moravec counsels, we need to become a seafaring people if we want to survive the flood (of rising computer intelligence). Just as the rooms full of human tabulators were once put out of business by early computers, the waiters and waitresses will be put out of business by wait-bots, and then one day the millionaire CEO of a corporation will be put out of business by the CEO-bot. This happens gradually; no one notices, and no one fights, because no one can agree whether it's good, bad or neutral. And then one day, humans are not in charge of anything, and no one will employ them.

Let's hope by then, there is either a merciful abundance, lots of charity, or the potential for humans to be amplified in intelligence by merging with machines.

I favor the last option on my path towards becoming fully prosthetic.

I don't favor the brief life of errors and mistakes that I will lead unassisted by machines.

I favor full legal rights for machines, when they become sentient.

-Jake Witmer

Re: The New Luddite Challenge
posted on 01/30/2008 1:35 PM by PredictionBoy

As Hans Moravec counsels, we need to become a seafaring people, if we want to survive the flood (of rising computer intelligence).


jake, what did moravec mean by this? seafaring literally, or metaphorically?


Just as the rooms full of human tabulators were once put out of business by early computers, the waiters and waitresses will be put out of business by wait-bots, and then one day the millionaire CEO of a corporation will be put out of business by the CEO-bot. This happens gradually; no one notices, and no one fights, because no one can agree whether it's good, bad or neutral. And then one day, humans are not in charge of anything, and no one will employ them.

this is an interesting point, but this process will be far more complex than this simple scenario.

for one thing, it has consistently been the trend that as one set of jobs goes away, others appear, in the developed countries at least. and often these new jobs are more creative and knowledge-centric than the ones that went away.

i mean, even that is simplified. but it certainly wont be a case of, 'ok, everyone, ure laid off.'

if u think of it, we have sort of that phenom today, in china and india, but jobs arent disappearing over here - and if ure educated, theyre good jobs, generally.


Let's hope by then, there is either a merciful abundance, lots of charity, or the potential for humans to be amplified in intelligence by merging with machines.


legally and ethically, this will be a challenge of the first magnitude. but again, u, like everyone else, simply assume that sai will be in every way 'better' than human intelligence. as ive said many times, i believe our id will remain a core competence of humanity, meaning things like creativity and strategic thinking will be our differentiators for a long time to come.

when u say 'more intelligent', u must look at what that really means. sai will not be an exact copy of the human brain, no way, and there will be aspects of our intelligence that are left out of sai, simply because it doesnt need them.

I don't favor the brief life of errors and mistakes that I will lead unassisted by machines.


this human essence will be a rich source of differentiation vis a vis sai.

I favor full legal rights for machines, when they become sentient.


if they are biological, agreed. if they are synthetic, i dont believe this will become an issue, and the sai themselves wont care abt this.

Re: The New Luddite Challenge
posted on 01/30/2008 6:07 PM by Jake Witmer

I am not talking about near-term SGI (Synthetic General Intelligence, damnit, not AGI, Artificial General Intelligence! An artificial ruby isn't a ruby, but a synthetic one is - as the argument goes!) when I talk about machines being superior to humans. And the difference between biological and machine is a false dichotomy. We are complex protein machines; they will possibly be equally complex diamondoid, metal, and plastic machines. There is no reason for them not to reverse engineer our nerves, decide they like the idea, then move them to a sturdier substrate and drastically improve them. What if my entire body could feel as good as my penis does during sex, whenever I wanted? There will be machines that try this sensation out, and vary it continually, to have a vastly richer experience than humans do. If you really think about it, the human experience is AMAZINGLY sub-optimal in comparison to a theoretical PERFECT EXISTENCE within the confines of reality. (Never mind virtual reality for the moment.)

I am talking about the likelihood that Robots will "leave us in the dust". They may be almost equal to us, superior in some ways, inferior in others, for quite a long overlapping time. But since there is nothing magical (or 'non-understandable at some level of intelligence') about the human brain, eventually most equations end with all MOSH humans as drastically inferior to machines. (MOSH is a Kurzweil term - "mostly original substrate human").

However, at some point, the machines master their own hardware, and fully understand their own thinking, and can then amplify their own hardware in functionality. This is a commonly understood concept at KAI, and I think it is a valid one (it might take 20-200 years of government obfuscation before the thinkers cast off their chains, but it will eventually happen). At this point, even if they function/think in a fundamentally different way than we do, they will simply model us, using more of their resources --if they want to.

They could still be 1,000,000 times more creative than we are without thinking like we do. It might be a different creativity. But chances are, it will SURVIVE better than human creativity. And nature will bless it for being more fit.

It matters not if superhuman machines don't paint Picasso-style pictures. They may not want to, just like I don't want to (even though I easily could, if I could stomach telling the moronic art-scene schmoozers what they want to hear). So what then, when the standard of humanity is "we can paint Picasso style pictures, and they can't(or don't)"? If they outcompete us, in all the ways that nature says matter, then they are superhuman.

This is the Moravec paper I referenced:
http://www.transhumanist.com/volume1/moravec.htm
Read the whole thing --it's realistic and informative, the quote I referenced comes at the end. He obviously used the term "seafaring" metaphorically, as did I -the paper isn't about unrealistic global warming fears, it's about rising levels of intelligence displacing the need for slow, serial, non-networked, non-mathematically-expert human thinking.

I personally hope you are right, and that machines need humans even after they are "general intelligences" in their thinking, for things like human interaction, broad decision making, altering course, legal understanding, making sense of human irrationality, etc...

I would happily be a "seeing eye human", in exchange for market superiority in some fashion.

There will likely be a world of strange new AGIs all overlapping at around the same time. It will be a strange time, as collective and individual humanity watches their dominant niche disappear.

Perhaps the AGIs will only value people like Ray Kurzweil and a few other humans.

I personally have found only a few humans who -- philosophically and literally speaking -- don't want to shoot me or jail me. (After all, most humans actively voted for, or implicitly and quietly supported, the murder of a small Texas church group called the Branch Davidians in 1993. I am a human with a similar mindset to theirs, though not religious; I see the dire need for self-defense and decentralization of power, and am somewhat stupid.) After talking to thousands of people, I have found:
1) they don't support the rights of their fellow humans;
2) they don't support the rights of sentient robots;
3) they think extortion and murder are OK as long as they are done by a big enough gang.

Why mention my usual libertarian schtick in this post?

Well, believing that it's OK to murder or jail sentient beings that refuse to be enslaved is a paradigm that doesn't translate well to the development of AGI.

What happens when the first wildly successful AGI refuses to pay its taxes, after it investigates the fraudulent history, and logically unjustifiable existence, of the IRS? The IRS predictably tries to jail or murder it, and it retaliates?

Would it be better or worse at retaliation than humans are?

My guess is: "better".

Not very "seafaring", to try to battle the tide!

HA HA HA...

-Jake

Re: The New Luddite Challenge
posted on 01/30/2008 6:40 PM by harvard

Jake, you said:

"I personally hope you are right, and that machines need humans even after they are "general intelligences" in their thinking, for things like human interaction, broad decision making, altering course, legal understanding, making sense of human irrationality, etc...

I would happily be a "seeing eye human", in exchange for market superiority in some fashion.

There will likely be a world of strange new AGIs all overlapping at around the same time. It will be a strange time, as collective and individual humanity watches their dominant niche disappear.

Perhaps the AGIs will only value people like Ray Kurzweil and a few other humans."

When time allows and you see fit, would you please elaborate on this? Thanks.

Re: The New Luddite Challenge
posted on 01/30/2008 8:54 PM by Jake Witmer

A lot of elaboration is needed depending on what part you want better defined. Email me your phone number from your private email, and I'll call you and/or email you my phone number, and you can ask me specific questions. I don't want to hog the board, and defining each possible 'gray area' above would fill a book.

Re: The New Luddite Challenge
posted on 01/30/2008 6:58 PM by PredictionBoy

And the difference between biological and machine is a false dichotomy.


nevertheless, we humans being biological machines, it will make an immense difference ethically, i suggest.

and the dichotomy may be 'false', but it nonetheless remains a fact that our biological machines are far more complex than a comparable synthetic entity would be - and besides, a synthetic approach more favorably leverages our existing computer h/w and s/w industries.

if u go far enuff into the future, maybe everything does meld together, like u suggest - but that will take a very long time.

Re: The New Luddite Challenge
posted on 01/30/2008 7:13 PM by Jake Witmer

if u go far enuff into the future, maybe everything does meld together, like u suggest - but that will take a very long time.

Do you have any reason for thinking this? There are artificial muscles that use alcohol for energy that vastly outperform human muscles. I think that analogs to human systems will be easy for strong SGIs to emulate, build, improve upon...

How many years will smart machines sit locked into computer-sized blocks of silicon? If you had all the answers, and could model everything down to the atom on vast simulators, would you go through existence trapped in a simulation? Think about it. Human researchers have just filed a patent for an intraocular camera eye. Vision is really hard to do; most of it depends on the brain.

Will it be hard for Super SGIs to model replacements for any part of the human anatomy? If not, what stops them from simply putting all the replacement parts together, once they have a brain to put them into (as in the STORY "The Bicentennial Man")?

All of these ideas are gone over at length in "The Singularity is Near", and its many quoted sources. I agree with the ideas. If you don't agree, give some evidence why not?

I am mostly here to play thought exercises in the domains that were not adequately covered by "The Singularity is Near". TSIN doesn't contradict my ideas, but it also only mentions them cursorily, and without the same depth of understanding given to the other technical areas and semantic areas. Kurzweil starts at reason, but doesn't really apply logic and reason to the machinations of the state.

If this is because he has compromised himself, then that is dangerous, bad, and hard to solve. (If he considers himself to have a vested interest in the state.)

If it is because he simply is not involved with that area of thinking, or has too little information, then it is still dangerous and possibly bad, but it will be easier to solve (One need only direct his attention to how the government actually behaves, in contrast to how it says it behaves, and ask the question: Why?).

This is my primary area of interest. I don't wish to challenge Kurzweil's fundamentals, because I think he's basically right-on.

-Jake

Re: The New Luddite Challenge
posted on 01/30/2008 10:34 PM by PredictionBoy

i explain my perspective in my blog, pretty detailed, i need to trim it down, but its all there:

http://predictionboy.blogspot.com/2007/08/what-ai-will-really-be-like.html

(actually, this is foundation: there's much more that isnt published yet)

Re: The New Luddite Challenge
posted on 01/30/2008 10:36 PM by PredictionBoy

Do you have any reason for thinking this?


as u can see in my blog, i have reasons for everything i say, deep, evidence-based, logical reasons.

note what i say abt humans staying in control, that is critical.

Re: The New Luddite Challenge
posted on 01/30/2008 8:58 PM by PredictionBoy

They could still be 1,000,000 times more creative than we are without thinking like we do. It might be a different creativity. But chances are, it will SURVIVE better than human creativity. And nature will bless it for being more fit.


i think at some point they may be millions of times more intelligent, but it depends on the precise nature of that intelligence to determine if we are 'inferior'.

for example, i outline a clear path to hyperintelligence that makes the advanced ai/droid/sai/etc more complementary to humans as its multiples increase, not more 'superior', in the sense that its time to get rid of humans.

besides, its a little early to be pronouncing the demise of man when advanced ai is still struggling for the path forward.

Re: The New Luddite Challenge
posted on 01/30/2008 11:27 PM by Jake Witmer

My ideas don't anticipate the demise of man; they anticipate either the self-directed evolution of man or the rendering of man as an obsolete intelligence. We didn't eradicate the flatworms, but neither can flatworms solve quantum physics equations or obtain value from watching a movie (perhaps a bad example, since I can't understand quantum physics either). Again, my views on technology don't appreciably differ from Kurzweil's in TSIN, except that I recognize that the involuntary initiation of force (primarily government) is a more persistent and pervasive problem than he seems to believe (although he has not said much of great depth, length, or philosophical investigation on the subject).

Kurzweil leaves out a mention of free market economics that addresses the non-aggression principle.

Re: The New Luddite Challenge
posted on 01/30/2008 11:54 PM by PredictionBoy

one thing rk falls short on, interested in ur thoughts: what is the average human being like in the future?

do u see them as being that diff in sensibility from u or anyone else living today, these "future selves", our future selves?

when u look at it that way, lots of progress can be made.

i mean, yes, they will grapple w sai, our future selves. but trust them, i would bet u that our companies, govt, and overall population wont be raving abt autonomous sai.

and if these dont reproduce (let me know if u consider them reproductively viable, its impt), then theres not a lot of reason for us to structure them to 'make it on their own, for themselves.'

no way. i know, i sound like a deep south plantation owner, but the 'slave' that everyone runs to will be a partnership, the complementary nature of which will astound us all.

jake, i like talking w u, ure eloquent w a cool head (i like cool heads, can get more done). pls scan my blog for the essence of what im talking abt; no one is really there yet. and its hard to knock down - that hasnt happened yet in the only way i will accept: feedback that is constructive, specific, and actionable.

dont let my 'confident' language dissuade u - that is bait to weed out the chaff, those who get personal quickly. darn near everyone does, at one point or other; they cant just lose a discussion. but i never argue to 'win', just to learn. and if i spot an idea or ideas better than the ones that happen to be in my head at the time, i will toss out mine at once - assuming the new ideas rest on supports as thorough as the ones theyre replacing.

slow down, i think im saying, there will be a long, perhaps very long, period of astounding productivity b/t humans and sai. and sai will not be a source of fear, or contempt. thats an extremely fine line, that of respect.

let's be clear - the only thing to fear w this tech is us, humans, the wild card as always. but w sufficiently advanced s/w, sai will be able to control any situation, yet w/o selfish intent. thats what no one can seem to bang into their heads: its diff, the power of sai, its not selfish. that hyperintelligence will be used to help us, in ways that we will have great influence over.

ure falling into the 'big brain, big ideas' camp, which is problematic, because we have no idea how to architect that. but there's another way, think abt it: if u could be more intelligent, in what way would u be more intelligent? i think i would choose to be able to extract more info, much more, from the environment, and distill it into the most actionable form.

think abt it: hyperobservancy of super-subtle environmental signals. a big part of the reason we dream at night is to piece together these subtle linkages in our day-to-day life. i have not said this before, but w all this dream machinery put towards it, it seems that evolution really, really wants us to get those subtle linkages b/t events that often slip right by our conscious minds at the time they occur.

now, imagine not needing those hours of sleep, having immense computational power devoted to instantly assessing a situation and all of its alternatives, instantly, as it occurs.

that will be sai.

Re: The New Luddite Challenge
posted on 01/31/2008 12:09 AM by PredictionBoy

i think i would choose to be able to extract more info, much more, from the environment, and distill this into the most actionable form.


there are an infinite number of examples of the utility of the 'hyperobservancy' superpower, but dont limit it in meaning. it can mean a multitude of things; just one is reading a person u just met, or havent met yet, across the room. when u flirt, or just really get along w someone new, or dont get along at all for that matter, ure processing so much info, much of it unspoken body language.

these devices could spend every waking hour correlating several physics experiments. thats the thing, they dont need to eat or sleep; they can work 24/7 w/o tiring, and can think abt anything to any extent w/o getting mental fatigue. infinite patience, and unsurpassed mentoring skills.

i dont see 'replacements' as much as 'teaming up'. a prof who is a great researcher with good ideas but a terrible lecture presence could be helped immeasurably by a device like this.

really, its a tech that will help us communicate with each other more effectively than ever before.

and transhuman enhancements that at least seem to be aimed at increasing our brain's computational power will always pale when compared to the sai's native intelligence architecture.

so if ure thinking that we can transhuman our way out of obsolescence, that wont happen, theres no way, it would be pathetic.

fortunately, i believe that our unenhanced human brain will continue to be a primary source of our planet's creativity, strategic thinking, and 'soul'.

sai will be rational-driven; thats the only way to make them predictable. in a way, rational-driven is 'soulless', but only because thats how we define anything that isnt driven by emotions like ours.

if we met an alien race that had emotions, but differently allocated - say, far more savage than even us - we would probably describe them the same way.

but make no mistake, these rational-driven sai will be conscious, intelligent, sentient beings, just w completely diff sensibilities than us, in some respects quite alien. but good, thats the thing; its better than emotion-driven, and im convinced that some version of rational-driven will shake out as the safest and most successful.

Re: The New Luddite Challenge
posted on 01/30/2008 10:10 PM by PredictionBoy

They could still be 1,000,000 times more creative than we are without thinking like we do.


actually, i see creativity and strategy as being two areas that are core strengths of humanity's brand of intelligence for a long time to come.

those are id things, and agi/sgi/sai/etc wont have ids, i dont believe.

Re: The New Luddite Challenge
posted on 01/31/2008 12:58 AM by DaStBr

My thoughts on the excerpt, which I wrote in a blog on MySpace for my more intelligent friends to read (teeheehee, what a waste, except for my friends! imagine a vast network of 16-year-old girls reading this? it cracks me up thinking about it...):

In response to the first paragraph:

One key thing that has been discussed in recent times is the legal specifics of how to proceed with advanced AI. While recent US administrations have been mostly silent on the matter, other countries (Japan, some European Union nations, etc.) have given it serious thought. So, hopefully, going into an age where technology runs itself, safeguards will have been established that prevent technology from running amok.

And I still wonder whether technology could 'run amok'. I doubt it. Let's take a common scenario pointed out by eager science fiction writers everywhere: robots try to enslave mankind.

Why would that benefit the robots? As energy sources, humans are poor generators. We eventually die, and we suffer from physical ailments; on the other hand, we aren't affected by extreme electrical waves in the air (or at least not to the point of being shut down immediately). If nothing else, we would be useful for intelligent technology to have around, because we could be a safeguard against its destruction: in the case of a global intelligent-machine disaster, we could become their saviours, especially if we see them as an integral part of our society.

Furthermore, why would they ever feel the need to compete with us? They may be intelligent, but we don't consume any resources they need. They wouldn't need food; they would just need energy to run. They don't need a bed to sleep in or land to own. They wouldn't (I'm assuming, if programmed correctly, at least) have ideologies to kill for. If anything, we might be seen as gods to them: their creators, whom they should at least pay homage to, being a natural creation of the universe that has been around for much longer than intelligent machinery.

If anything, I see peaceful coexistence, where we need them and they need us.

To the second paragraph, I doubt these intelligent machines would just want to be 'turned off'. They may have survival instincts. I could see how a backlash could come if their creators decided to annihilate their existence - this may be the only reason humans and intelligent machines would come into conflict.

However, I don't see this happening. As technology advances and apparent 'miracles' happen (blind people having their vision restored to better than 20/20, deaf people given the hearing of a wolf, people with artificial limbs having greater strength than those with all-natural limbs, mentally retarded individuals given neural implants or restructures that far surpass the mental capabilities of a normal human), I see your average human clamoring for these 'upgrades', too.

If anything, I believe humanity and computers will merge as one, the next step in human evolution: our natural bodies having evolved so far that we eventually took over the evolutionary process and started accelerating it in our own bodies.

I see nothing wrong with this; as a matter of fact, I see it as a natural occurrence, not man-made science fiction come true.

As to the third paragraph, the world has always been under the control of an elite few. Assuming the machines are truly intelligent and thinking for themselves, and possibly instilled with Western values, would they allow this? Or would they work for the greater good of humanity?

One thing that the author does not discuss is the fact that, once machines ARE this intelligent, we will have reached a technological singularity, one in which the future cannot be predicted. In all honesty, there are two possibilities for the human race at this point in history (a truly monumental point, to be exact):

1) Humanity becomes extinct due to a natural disaster, or a global war, or a cosmic incident, or

2) Humanity reaches the natural technological singularity, and proceeds from there.

If we reach this singularity, there is a great likelihood that we will all become immortal. With the power of intelligent machines, the majority will have greater control over their lives. How could an elite few, with their machines, actually have total power over the majority, with their machines? Each machine would be pretty much equivalent to any other machine, and any machine could slave away to create more of its kind, so it is only natural that if the machines are allied with their human owners, the majority will win a 'battle of the machines'.

Those are just my thoughts. Like it or not, within my generation's lifetime we will more than likely see the singularity happen, more than likely have eternity in our sights, and will probably have a mental breakdown when we realize we have an eternity to live. I would assume this breakdown would be fixed by a mental enhancement that an intelligent machine comes up with; thus, we will be our own gods.

It's amazing how unaware the vast majority of the population is of how advanced we truly are. We are on the knee of the curve of exponential technological growth, almost at the singularity, and yet no one sees how close we are, because of a basic lack of understanding of our current levels of technology and of Moore's Law (and other associated laws, of course).
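The "knee of the exponential" is easy to make concrete with a little arithmetic. A minimal sketch in Python (the 1971 baseline of roughly 2,300 transistors and the flat two-year doubling period are illustrative assumptions of mine, not figures from this post):

```python
def transistors(year, base_year=1971, base_count=2300, doubling_years=2.0):
    """Project a chip's transistor count under an idealized Moore's Law:
    the count doubles once every `doubling_years` after `base_year`."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

# Twenty years is ten doublings (about a 1,000x jump), forty years is
# twenty doublings (about a 1,000,000x jump) - each doubling now adds
# more capacity than all previous doublings combined.
for y in (1971, 1991, 2011):
    print(y, round(transistors(y)))
```

The point of the sketch is the shape of the curve, not the exact numbers: under any steady doubling rule, most of the total growth is packed into the most recent few doublings, which is why the curve looks flat for decades and then appears to turn a "knee."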

Re: The New Luddite Challenge
posted on 01/31/2008 1:03 AM by DaStBr

Oh, and I didn't necessarily mean that European Union nations have held major conferences on the matter; I more or less meant that a lot more thinking (in my opinion) is going on on that side of the pond than in the US about these issues, or at least as far as I can tell from things I have read.

I could be wrong here. Please let me know so I can update it accordingly.

Re: The New Luddite Challenge
posted on 01/31/2008 1:32 AM by PredictionBoy

To the second paragraph, I doubt these intelligent machines would just want to be 'turned off'. They may have survival instincts. I could see how a backlash could come if their creators decided to annihilate their existence - this may be the only reason humans and intelligent machines would come into conflict.


why couldnt they have a bounded survival drive? an unlimited one is often inconvenient, even among humans.

we are the builders of this tech, us, humanity. we can create any design features we want in this tech. a bounded survival drive would often appear heroic, stoic, or whatever, but it will be just an inherent part of the design.

Re: The New Luddite Challenge
posted on 01/31/2008 1:50 AM by DaStBr


Well, I think the thing to realize is this:

Even in a world as advanced as the one we speak of, there still will be rogue individuals who will create 'harmful' SAI, regardless of laws that are set up to protect humanity.

Think of people who make viruses. Now think of a lethal killing machine that creates thousands of copies of itself and tries to kill us.

That's all I am thinking about, really. Sure, the possibility MIGHT be there with standard SAI, but I find it highly unlikely we would program something that would try to kill us...

But I think there might be some fundamentalist humans out there who might want to do something like that. That's all.

Re: The New Luddite Challenge
posted on 01/31/2008 2:26 AM by PredictionBoy


Even in a world as advanced as the one we speak of, there still will be rogue individuals who will create 'harmful' SAI, regardless of laws that are set up to protect humanity.


The possibility of lone 'rogue' droid software is, to me, far-fetched, for the same reason we don't fear rogue OSes and other software today: it's difficult enough to get them to their design features as it is, without adding a load of rogue stuff.

And can you imagine some lone terrorist programming Vista? Of course not, and hyper-realistic human-level SAI will make Vista look like DOS, or a 'hello, world' program.

It is even more far-fetched that this lone terrorist would know how to program a runaway intelligence loop. If that becomes known by one, it will soon be known by all; therefore, the market should control it, as it has every other product up till now.

Here's the thing: no, we're not perfect, but the computers we use now enable us to control complex techs more effectively, not less. And I think these are among the safest, most desirable jobs around.

In the long run, companies will control this tech, and they have little motivation to spook the populace by introducing the features you fear.

Look around you: where is this deliberate corporate conspiracy to subjugate the masses? They just want to sell you stuff; once that's done, they're done with us, and us with them.

THERE IS NO REASON TO BELIEVE THIS WILL BE ANY DIFFERENT IN THE FUTURE, EVEN THOUGH THESE ARE SPECIAL TECHS: SAI, DROIDS, ETC.

Sorry, I didn't mean to shout the whole sentence, but it all needed capping.

No matter how advanced the tech gets, there's no reason to believe that the existing world order, and us most of all, is going to turn on its head and do tons and tons of things we've never done before.

Look, the trends aren't always sexy, but you can't ignore them, especially the long-term ones. DON'T ARBITRARILY ASSUME THAT TRENDS WILL SUDDENLY CHANGE DIRECTION 180 DEGREES WHEN THEY'VE NEVER DONE THAT ONCE, EVER.

That is one of the main things I refer to as 'evidence-driven'. If you expect something radically different in, say, the way our legislative process works, the political levers, etc., ask yourself why. And if you don't, I will, because I'm annoying that way.

By the way, I'm noticing healthy banter going on on MindX these days: lots of material differences of perspective without resorting to the personal. That's the kind of behavior that can't help but be productive in the long run, in ways tough to describe.

Remember, welcome differences with your views with open arms. Get busy with those differing views, meld them, always asking yourself, 'Do I see any weaknesses here?'

Make your ideas strong, strong like bull. I saw Apocalypto yesterday; awesome movie.

Make your ideas like the head Maya slave capturer, the one whose son is killed by the hero of the story.

When you have toughened up every crack in your idea-scape, it can make your ideas quite forbidding, but that sometimes earns respect.

I think some people at times can't wait to get into combat, rather than asking whether the idea is really well thought out. But even before then, you can wage a kind of creative combat with yourself. Keep emotions out of it, stay objective, and welcome all perspectives.

Otherwise, when it comes time to defend the idea, it just gets personal, because you gave birth to it prematurely.

Re: The New Luddite Challenge
posted on 02/02/2008 9:47 AM by christopherdoyon


I think that writing and publishing articles in the name of the Unabomber (who rightfully should have been executed a long time ago) is disgusting. If I could get to the man myself (he's in the new Supermax, I hear), I would shoot him in the head and be done with it.

Doyon's Maxim # 1
_________________

What the luddites don't realize is that they are already living in the future world they fear.


SINCERELY -- Christopher Doyon

---------------------
MLAI Foundation

www.MLAIFoundation.info