The Age of Virtuous Machines
by J. Storrs Hall

In the "hard takeoff" scenario, a psychopathic AI suddenly emerges at a superhuman level, achieving universal dominance. Hall suggests an alternative: we've gotten better because we've become smarter, so AIs will evolve "unselfish genes" and hyperhuman morality. More honest, capable of deeper understanding, and free of our animal heritage and blindnesses, the children of our minds will grow better and wiser than us, and we will have a new friend and guide--if we work hard to earn the privilege of associating with them.


Originally published in Beyond AI: Creating the Conscience of the Machine, Ch. 20. Reprinted with permission on KurzweilAI.net May 31, 2007.

To you, a robot is a robot. Gears and metal. Electricity and positrons. Mind and iron!  Human-made! If necessary, human-destroyed. But you haven't worked with them, so you don't know them. They're a cleaner, better breed than we are.

—Isaac Asimov, I, Robot

Ethical AIs

Over the past decade, the concept of a technological singularity has become better understood. The basic idea is that the process of creating AI and other technological change will be accelerated by AI itself, so that sometime in the coming century the pace of change will become so rapid that we mere mortals won't be able to keep up, much less control it. I. J. Good, a British statistician and colleague of Turing, wrote in 1965: "Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind." The disparate intellectual threads from which the modern concept is woven, including the word "singularity" itself, were pulled together by Vernor Vinge in 1993. More recently it was the subject of a best-selling book by Ray Kurzweil. There is even a reasonably well-funded think tank, the Singularity Institute for Artificial Intelligence (SIAI), whose sole concern is singularity issues.

It is common (although not universal) in Singularity studies to worry about autogenous AIs. The SIAI, for example, makes it a top concern, whereas Kurzweil is more sanguine that AIs will arise by progress along a path enabled by neuroscience and thus be essentially human in character. The concern, among those who share it, is that epihuman AIs in the process of improving themselves might remove any conscience or other constraint we program into them, or they might simply program their successors without them.

But it is in fact we, the authors of the first AIs, who stand at the watershed. We cannot modify our brains (yet) to alter our own consciences, but we are faced with the choice of building our creatures with or without them. An AI without a conscience, by which I mean both the innate moral paraphernalia in the mental architecture and a culturally inherited ethic, would be a superhuman psychopath.

Prudence, indeed, will dictate that superhuman psychopaths should not be built; however, it seems almost certain someone will do it anyway, probably within the next two decades. Most existing AI research is completely pragmatic, without any reference to moral structures in cognitive architectures. That is to be expected: just getting the darn thing to be intelligent is as hard a problem as we can handle now, and there is time enough to worry about the brakes after the engine is working. As I noted before, much of the most advanced research is sponsored by the military or corporations. In the military, the notion of an autonomous machine being able to question its orders on moral grounds is anathema. In corporate industry, the top goal seems likely to be the financial benefit of the company. Thus, the current probable sources of AI will not adhere to a universally adopted philanthropic formulation, such as Asimov's Three Laws. The reasonable assumption then is that a wide variety of AIs with differing goal structures will appear in the coming decades.

Hard Takeoff

Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.

—Vernor Vinge, 1993

A subtext of the singularitarian concern is the possibility that a (psychopathic) AI could suddenly emerge at a superhuman level, owing to positive feedback in its autogenous capabilities. This scenario is sometimes referred to as a "hard takeoff." In its more extreme versions, the concept is that a hyperhuman AI could appear virtually overnight and be so powerful as to achieve universal dominance. Although the scenario usually involves an AI rapidly improving itself, it might also happen by virtue of a longer process kept secret until sprung on the world, as in the movie Colossus: The Forbin Project.

The first thing that either version of the scenario requires is the existence of computer hardware capable of running the hyperhuman AI. By my best estimate, hardware for running a diahuman AI currently exists, but only in the top ten or so supercomputers in the world. These are multimillion-dollar installations, and the dollars were not spent to do AI experiments. And even if someone were to pay to dedicate, say, an IBM Blue Gene or Google's fabled grid of stock PCs to running an AI full-time, it would only approximate a normal human intelligence. There would have to be a major project to build the hardware for a seriously epihuman, much less hyperhuman, AI with current computing technology.

Second, even if the hardware were available, the software is not. The fears of a hard takeoff are based on the notion that an early superintelligence would be able to write smarter software faster for the next AI, and so on. It does seem likely that a properly structured AI could be a better programmer than a human of otherwise comparable cognitive abilities, but remember that as of today, automatic programming remains one of the most poorly developed of the AI subfields. Any reasonable extrapolation of current practice predicts that early human-level AIs will be secretaries and truck drivers, not computer science researchers or even programmers. Even when a diahuman AI computer scientist is achieved, it will simply add one more scientist to the existing field, which is already bending its efforts toward improving AI. That won't speed things up much. Only when the total AI devoting its efforts to the project begins to rival the intellectual resources of the existing human AI community—in other words, being already epihuman—will there be a really perceptible acceleration. We are more likely to see an acceleration from a more prosaic source first: once AI is widely perceived as having had a breakthrough, it will attract more funding and human talent.

Third, intelligence does not spring fully formed like Athena from the forehead of Zeus. Even we humans, with the built-in processing power of a supercomputer at our disposal, take years to mature. Again, once mature, a human requires about a decade to become really expert in any given field, including AI programming. More to the point, it takes the scientific community some extended period to develop a theory, then the engineering community some more time to put it into practice. Even if we had a complete and valid theory of mind, which we do not, putting it into software would take years; and the early versions would be incomplete and full of bugs. Human developers will need years of experience with early AIs before they get it right. Even then they will have systems that are the equivalent of slow, inexperienced humans.

Advances in software, similar to Moore's law for hardware, are less celebrated and less precisely measurable, but nevertheless real. Advances in algorithmics have tended to produce software speedups roughly similar to hardware ones. Running this backward, we can say that the early software in any given field is much less efficient than later versions. The completely understood, tightly coded, highly optimized software of mature AI may run a human equivalent in real time on a 10 teraops machine. Early versions will not.

There are two wild-card possibilities to consider. First, rogue AIs could be developed using botnets, groups of hijacked PCs communicating via the Internet. These are available today from unscrupulous hackers and are widely used for sending spam and conducting DDoS attacks on Web sites. A best estimate of the total processing power on the Internet runs to 10,000 Moravec HEPP or 10 Kurzweil HEPP, although it is unlikely that any single coordinated botnet could collect even a fraction of 1 percent of that at any given time. Moreover, the extreme forms of parallelism needed to use this form of computing, along with the communication latency involved, will tend to push the reasonable estimates toward the Kurzweil level (which is based on the human brain, with its highly parallel, slow-cycle-time architecture). That, together with the progress of the increasingly sophisticated Internet security community, will make the development of AI software much harder in this mode than in a standard research setting. The "researchers" would have to worry about fighting for their computing resources as well as figuring out how to make the AI work—and the AI, to be able to extend their work, would have to do the same. Thus, while we can expect botnet AIs in the long run, they are unlikely to be first.

The second wild-card possibility is that Marvin Minsky is right. Almost every business and academic computing facility offers at least a Minsky HEPP. If an AI researcher found a simple, universal learning algorithm that allowed strong positive feedback into such a highly optimized form, it would find ample processing power available. And this could be completely aboveboard—a Minsky HEPP costs much less than a person is worth, economically.

Let me, somewhat presumptuously, attempt to explain Minsky's intuition by an analogy: a bird is our natural example of the possibility of heavier-than-air flight. Birds are immensely complex: muscles, bones, feathers, nervous systems. But we can build working airplanes with tremendously fewer moving parts. Similarly, the brain can be greatly simplified, still leaving an engine capable of general conscious thought. My own intuition is that Minsky is closer to being right than is generally recognized in the AI community, but computationally expensive heuristic search will turn out to be an unavoidable element of adaptability and autogeny. This problem will extend to any AI capable of the runaway feedback loop that singularitarians fear.

Moral Mechanisms

It is therefore most likely that a full decade will elapse between the appearance of the first genuinely general, autogenous AIs and the time they become significantly more capable than humans. This will indeed be a crucial period in history, but no one person, group, or even school of thought will control it. The question instead is, what can be done to influence the process to put the AIs on the road to being a stable community of moral agents? A possible path is shown in Robert Axelrod's experiments and in the original biological evolution of our own morality. In a world of autonomous agents who can recognize each other, cooperators can prosper and ultimately form an evolutionarily stable strategy.

Superintelligent AIs should be just as capable of understanding this as humans are. If their environment were the same as ours, they would ultimately evolve a similar morality; if we imbued them with it in the first place, it should be stable. Unfortunately, the environment they will inhabit will have some significant differences from ours.

The Bad News

Inhomogeneity

The disparities among the abilities of AIs could be significantly greater than those among humans, and more correlated with an early "edge" in the race to acquire resources. This could negate the evolutionary pressure toward reciprocal altruism.

Self-Interest

Corporate AIs will almost certainly start out self-interested, and evolution favors effective self-interest. It has been suggested by commentators such as Steven Pinker, Eliezer Yudkowsky, and Jeff Hawkins that AIs would not have the "baser" human instincts built in and thus would not need moral restraints. But it should be clear that they could be programmed with baser instincts, and it seems likely that corporate ones will be aggressive, opportunistic, and selfish, and that military ones will be programmed with different but equally disturbing motivations.

Furthermore, it should be noted that any goal structure implies self-interest. Consider two agents, both with the ability to use some given resource. Unless the agents' goals are identical, each will further its own goal more by using the resource for its own purposes and consider it at best suboptimal and possibly counterproductive for the resource to be controlled and used by the other agent toward some other goal. It should go without saying that specific goals can vary wildly even if both agents are programmed to seek, for example, the good of humanity.

The Good News

Intelligence Is Good

There is but one good, namely, knowledge; and but one evil, namely ignorance.

—Socrates, from Diogenes Laertius's Life of Socrates

As a matter of practical fact, criminality in humans is strongly negatively correlated with IQ. Notwithstanding the popular image of the suave, tuxedo-wearing, jet-setting jewel thief, almost all career criminals are of poor means as well as of lesser intelligence.

Nations where the rule of law has broken down are poor compared to more stable societies. A remarkable document published by the World Bank in 2006 surveys the proportions of natural resources, produced capital (such as factories and roads), and intangible capital (education of the people, value of institutions, rule of law, likelihood of saving without theft or confiscation). Here is a summary. Note that the wealth column is total value, not income.

 

Income Group      Wealth per Capita   Natural Resources   Produced Capital   Intangible Capital
Low income        $7,532              1,925               1,174              4,434
Medium income     $27,616             3,496               5,347              18,773
High income       $439,063            9,531               76,193             353,339

In a wealthy country, natural resources such as farmland are worth more but only by a small amount, mostly because they can be more efficiently used. The fraction of total wealth contributed by natural resources in a wealthy country is only 2 percent, as compared to 26 percent in a poor one. The vast majority of the wealth in high-income countries is intangible: it is further broken down by the report to show that roughly half of it represents people's education and skills, and the other half the value of the institutions—in other words, the opportunities the society gives its citizens to turn efforts into value.
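To make those shares explicit, here is a minimal Python sketch that recomputes them from the table's own figures; nothing here is new data, just the arithmetic behind the 2 percent and 26 percent claims:

```python
# Per-capita wealth components from the World Bank table above:
# (natural resources, produced capital, intangible capital)
wealth = {
    "Low income":    (1_925, 1_174, 4_434),
    "Medium income": (3_496, 5_347, 18_773),
    "High income":   (9_531, 76_193, 353_339),
}

for group, (natural, produced, intangible) in wealth.items():
    total = natural + produced + intangible
    print(f"{group}: natural {natural/total:.0%}, intangible {intangible/total:.0%}")

# Low income: natural 26%, intangible 59%
# Medium income: natural 13%, intangible 68%
# High income: natural 2%, intangible 80%
```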

Lying, cheating, and stealing are profitable only in the very short term. In the long run, honesty is the best policy; leaving cheaters behind and consorting with other honest creatures is the best plan. The smarter you are, the more likely you are to understand this and to conduct your affairs accordingly.

Original Sin

We have met the enemy, and he is us! 

—Porkypine in Walt Kelly's Pogo

Developmental psychologists have sobering news for humankind, which echoes and explains the old phrase, "Someone only a mother could love." Simply put, human babies are born to lie, cheat, and steal. As Matt Ridley put it in another connection, "Vervet monkeys, like two-year-olds, completely lack the capacity for empathy." Law and custom recognize this as well: children are not held responsible for their actions until they are considerably older than two.

In fact, recent neuroscience research using brain scans indicates that consideration for other people's feelings is still being added to the mental planning process up through the age of twenty.

Children are socialized out of the condition we smile at and call "childishness" (but think how differently we'd refer to an adult who acted, morally, like a two-year-old). Evolution and our genes cannot predict what social environment children will have to cope with, so they make children ready for the rawest and nastiest: a child can grow out of it upon finding itself in civilization, but growing up mean gives the best chance of survival in many places.

With AIs, we can simply reverse the default orientation: AIs can start out nice, then learn the arts of selfishness and revenge only if the situation demands it.

Unselfish Genes

Reproduction of AIs is likely to be completely different from that of humans. It will be much simpler just to copy the program. It seems quite likely that ways will be found to encode and transmit concepts learned from experience more efficiently than we do with language. In other words, AIs will probably be able to inherit acquired characteristics and to acquire substantial portions of their mentality from others in a way reminiscent of bacteria exchanging plasmids.

For these reasons, individual AIs are likely to be able to have the equivalent of both memories and personal experience stretching back in time before they were "born," as  experienced by many other AIs. To the extent that morality is indeed a summary encoding of lessons learned the hard way by our forebears, AIs could have a more direct line to it. The superego mechanisms by which personal morality trumps common sense should be less necessary, because the horizon effect for which it's a heuristic will recede with wider experience and deeper understanding.

At the same time, AIs will lack some of the specific pressures, such as sexual jealousy, that we suffer from because of the sexual germ-line nature of animal genes. This may make some of the nastier features of human psychology unnecessary.

Cooperative Competition

For example, AIs could well be designed without the mechanism we seem to have whereby authority can short-circuit morality, as in the Milgram experiments.* This is the equipment that implements the distributed function of the pecking order. The pecking order had a clear, valuable function in a natural environment where the Malthusian dynamic held sway: in hard times, instead of all dying because evenly divided resources were insufficient, the haves survived and the have-nots were sacrificed. In order to implement such a stringent function without physical conflict that would defeat its purpose, some very strong internal motivations are tied to perceptions of status, prestige, and personal dominance.

*In the early 1960s, psychologist Stanley Milgram performed some famous experiments to test the limits of people's consciences under the influence of an authority figure. The shocking result was that ordinary people would inflict torture on others simply because they were told to do so by a scientist in a lab coat.

On the one hand, the pecking order is probably responsible for saving humanity from extinction numerous times. It forms a large part of our collective character, will we or nill we. AIs without pecking-order feelings would see humans as weirdly alien (and we them).

On the other, the pecking order short-circuits our moral sense. It allows political and religious authority figures to tell us to do hideously immoral things that we would be horrified to do in other circumstances. It makes human slavery possible as a stable form of social organization, as is evident throughout many centuries of history.

And what's more, it's not necessary. Market economics is much better at resource allocation than the pecking order. The productivity of technology is such that the pecking order's evolutionary premise no longer holds. Distributed learning algorithms, such as the scientific method or idea futures markets, do a better job than the judgment of a tribal chieftain.

Comparative Advantage

The economic law of comparative advantage states that cooperation between individuals of differing capabilities remains mutually beneficial. Suppose you are highly skilled and can make eight widgets per hour or four of the more complicated doohickies. Your neighbor Joe does everything the hard way and can make one widget or one doohicky in an hour. You work an eight-hour day and produce sixty-four widgets, and Joe makes eight doohickies. Then you trade him twelve widgets for the eight doohickies. You end up with fifty-two widgets and eight doohickies, which would have taken you an extra half hour to make yourself; and he gets twelve widgets, which would have taken him four extra hours to make!
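Since this is pure arithmetic, it can be checked mechanically. A minimal sketch using only the production rates and the trade from the example above:

```python
# Production rates, in units per hour, from the example above.
YOU = {"widget": 8, "doohicky": 4}
JOE = {"widget": 1, "doohicky": 1}
HOURS = 8

# You specialize in widgets, Joe in doohickies, then you trade
# twelve of your widgets for his eight doohickies.
your_bundle = {"widget": YOU["widget"] * HOURS - 12, "doohicky": 8}
joes_bundle = {"widget": 12, "doohicky": 0}

# Hours each party would have needed to produce its post-trade bundle alone.
your_hours = sum(qty / YOU[good] for good, qty in your_bundle.items())
joes_hours = sum(qty / JOE[good] for good, qty in joes_bundle.items())

print(your_hours - HOURS)  # 0.5 -- the trade saves you half an hour
print(joes_hours - HOURS)  # 4.0 -- and saves Joe four hours
```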

In other words, even if AIs become much more productive than we are, it will remain to their advantage to trade with us and to ours to trade with them.

Unlimited Lifetime

And behold joy and gladness, … eating flesh, and drinking wine: let us eat and drink; for to morrow we shall die.

—Isaiah 22:13 (KJV)

People often have short-term planning horizons. Human mortality not only limits what we can reasonably plan for; our even shorter-lived ancestors passed on genes that make us instinctively discount the future even more steeply than our lifespans warrant.

The individual lifetime of an AI is not arbitrarily limited. It has the prospect of living into the far future, in a world whose character its actions help create. People begin to think in longer range terms when they have children and face the question of what the world will be like for them. An AI can instead start out thinking about what the world will be like for itself and for any copies of itself it cares to make.

Besides the unlimited upside to gain, AIs will have an unlimited downside to avoid: forever is a long time to try to hide an illicit deed when dying isn't the way you expect to escape retribution.

Broad-based Understanding

Epihuman, much less hyperhuman, AIs will be able to read and absorb the full corpus of writings in moral philosophy, especially the substantial recent work in evolutionary ethics, and understand it better than we do. They could study game theory—consider how much we have learned in just fifty years! They could study history and economics.

E. O. Wilson has a compelling vision of consilience, the unity of knowledge. As we fill in the gaps between our fractured fields of understanding, they will tend to regularize and correct one another. I have tried to show in a small way how science can come to inform our understanding of ethics. This is but the tiniest first step. But that step shows how much ethics is a key to anything else we might want to do and is thus as worthy of study as anything else.

Keep a Cool Head

Violence is the last refuge of the incompetent.

—Isaac Asimov, Foundation

Remember Newcomb's Problem, the game with the omniscient being (or team of psychologists) and the million- and thousand-dollar boxes. It's the one where, in order to win, you have to be able to "chain yourself down" in some way so that you can't renege at the point of choice. For this purpose, evolution has given humans the strong emotions.

Thirst for revenge, for example, is a way of guaranteeing any potential wrongdoers that you will make any sacrifice to get them back, even though it may cost you much and gain you nothing to do so. Here, the point of choice is after the wrong has been done—you are faced with an arduous, expensive, and quite likely dangerous pursuit and attack on the offender; rationally, you are better off forgetting it in many cases. In the lawless environment of evolution, however, a marauder who knew his potential victims were implacable revenge seekers would be deterred. But if there is a police force this is not as necessary, and the emotion to get revenge at any cost can be counterproductive.

So collective arrangements like police forces are a significantly better solution. There are many such cases where strong emotions are evolution's solution to a problem, but we have found better ones. AIs could do better yet in some cases: the solutions to Newcomb's Problem involving Open Source–like guarantees of behavior are a case in point.

In addition, the lack of strong emotions can be beneficial in many cases. Anger, for example, is more often a handicap than a help in a world where complex interactions are more common than physical altercations. A classic example is poker, where the phrase "on tilt" is applied to a player who becomes frustrated and loses his cool analytical approach. A player "on tilt" makes aggressive plays instead of optimal ones and loses money.

With other solutions to Newcomb's Problem available, AIs could avoid having strong emotions, such as anger, with their concomitant infelicities.

Mutual Admiration Societies

Moral AIs would be able to track other AIs in much greater detail than humans do one another and for vastly more individuals. This allows a more precise formation and variation of cooperating groups.

Self-selecting communities of cahooting AIs would be able to do the same thing that tit-for-tat did in Axelrod's tournaments: prosper by virtue of cooperating with other "nice" individuals. Humans, of course, do the same, but AIs would be able to do it more reliably and on a larger scale.
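As a concrete illustration, here is a minimal sketch of an iterated Prisoner's Dilemma in the spirit of Axelrod's tournaments. The payoff matrix (3/3 for mutual cooperation, 5/0 for defecting against a cooperator, 1/1 for mutual defection) is the standard one from that literature, not a figure given in this article:

```python
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first; afterward, mirror the opponent's last move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    """Run an iterated game; each strategy sees only the other's past moves."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)
        move_b = strategy_b(hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (600, 600): "nice" pairs prosper
print(play(tit_for_tat, always_defect))  # (199, 204): defection gains little
```

Paired with itself, tit-for-tat earns the full cooperative payoff, while a defector gains almost nothing against it; a population of such "nice" strategies outscoring exploiters over repeated play is exactly the evolutionary stability the text appeals to.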

A Cleaner, Better Breed

Reflecting on these questions, I have come to a conclusion which, however implausible it may seem on first encounter, I hope to leave the reader convinced: not only could an android be responsible and culpable, but only an android could be.

—Joseph Emile Nadeau

AIs will (or at least could) have considerably better insight into their own natures and motives than humans do. Any student of human nature is well aware how often we rationalize our desires and actions. What's worse, it turns out that we are masters of self-deceit: given our affective display subsystems, the easiest way to lie undetectably is to believe the lie you're telling! We are, regrettably, very good at doing exactly that.

One of the defining characteristics of the human mind has been the evolutionary arms race between the ability to deceive and the ability to penetrate subterfuge. It is all too easy to imagine this happening with AIs (as it has with governments—think of the elaborate spying and counterspying during the cold war). On the other hand, many of the other moral advantages listed above, including Open-Source honesty and longer and deeper memories, could well mean that mutual honesty societies would be a substantially winning strategy.

Thus, an AI may have the ability to be more honest than we humans, who believe our own confabulations.

Invariants

How can we know that our AIs will retain the good qualities we give them once they have improved themselves beyond recognition in the far future? Our best bet is a concept from math called an invariant—a property of something that remains the same even when the thing itself changes. We need to understand what desirable traits are likely to be invariant across the process of radical self-improvement, and start with those.

Knowledge of economics and game theory is a likely candidate, as is intelligence itself. An AI that understands these things and their implications is unlikely to consider forgetting them an improvement. The ability to be guaranteeably trustworthy is likewise valuable and wouldn't be thrown away. Strong berserker emotions are clearly not a smart thing to add if you don't have them (and they wouldn't provide the behavior guarantees that they do in humans anyway, since the self-improving AI could always edit them out!), so lacking them is an invariant where usable alternatives exist.

Self-interest is another property that is typically invariant under, or indeed reinforced by, the evolutionary process. Surprisingly, however, even though I listed it with the bad news above, it can form a stabilizing factor in the right environment. A non-self-interested creature is hard to punish; its actions may be random or purely destructive. With self-interest, the community has both a carrot and a stick. Enlightened self-interest is a property that can be a beneficial invariant.

If we build our AIs with these traits and look for others like them, we will have taken a strong first step in the direction of a lasting morality for our machines.

Artificial Moral Agency 

A lamentable phenomenon in AI over the years has been the tendency for researchers to take almost laughably simplistic formal systems and claim they implemented various human qualities or capabilities. In many cases the ELIZA Effect aligns with the hopes and the ambitions of the researcher, clouding his judgment. It is necessary to reject this exaggeration firmly when considering consciousness and free will. The mere capability for self-inspection is not consciousness; mere decision-making ability is not free will.

We humans have the strong intuition that mentalistic properties we impute to one another, such as the two above, are essential ingredients in whatever it is that makes us moral agents—beings who have real obligations and rights, who can be held responsible for their actions.

The ELIZA Effect means that when we have AIs and robots acting like they have consciousness and free will, most people will assume that they do indeed have those qualities, whatever they are. The problem, to the extent that there is one, is not that people don't allow the moral agency of machines where they should but that they anthropomorphize machines when they shouldn't.

I've argued at some length that there will be a form of machine, probably in the not-too-distant future, for which an ascription of moral agency will be appropriate. A machine that is conscious to the extent that it summarizes its actions in a unitary narrative and that has free will to the extent that it weighs its future acts using a model informed by the narrative will act like a moral agent in many ways; in particular, its behavior will be influenced by reward and punishment.

There is much that could be added to this basic architecture, such as mechanisms to produce and read affective display, and things that could make the AI a member of a memetic community: the love of trading information, of watching and being watched, of telling and reading stories. These extend the control/feedback loops of the mind out into the community, making the community a mind writ large. I have talked about the strong emotions and how in many cases their function could be achieved by better means.

Moral agency breaks down into two parts—rights and responsibility—but they are not coextensive. Consider babies: we accord them rights but not responsibilities. Robots are likely to start on the other side of that inequality, having responsibilities but not rights, but, like babies, as they grow toward (and beyond) full human capacity, they will aspire to both.

Suppose we consider a contract with a potential AI: "If you'll work for me as a slave, I'll build you." In terms of the outcome, there are three possibilities: it doesn't exist, it's a slave, or it's a free creature. By offering it the contract, we give it the choice of the first two. There are the same three possibilities with respect to a human slave: I kill you, I enslave you, or I leave you free. In human terms, only the last is considered moral.

In fact, many (preexisting) people have chosen slavery instead of nonexistence. We could build the AI in such a way to be sure that it would agree, given the choice. In the short run, we may justify our ownership of AIs on this ground. Corporations are owned, and no one thinks of a corporation as resenting that fact.

In the long run, especially once the possibility of responsible free AIs is well understood, there will inevitably be analogies made to the human case, where the first two possibilities are not considered acceptable. (But note the analogy would also imply that simply deciding not to build the AI would be comparable to killing someone unwilling to be a slave!) Also in the long run, any vaguely utilitarian concept of morality, including evolutionary ethics, would tend toward giving (properly formulated) AIs freedom, simply because they would be better able to benefit society as a whole that way.

Theological Interlude

The early religious traditions—including Greek and Norse as well as Judeo-Christian ones—tended to portray their gods as anthropomorphic and slightly superhuman. In the Christian tradition, at least, two thousand years of theological writings have served to stretch this into an incoherent picture.

Presumably in search of formal proofs of his existence, God has been depicted as eternal, causeless, omniscient, and infallible—in a word, perfect. But why should such a perfect being produce such obviously imperfect creatures? Why should we bother doing His will if He could do it so much more easily and precisely? All our struggles would only be make-work.

It is certainly possible to have a theology not based on simplistic perfectionism. Many practicing scientists are religious, and they hold subtle and nuanced views that are perfectly compatible with and that lend spiritual meaning to the ever-growing scientific picture of the facts of the universe. Those who do not believe in a bearded anthropomorphic God can still find spiritual satisfaction in an understanding that includes evolution and evolutionary ethics.

This view not only makes more sense but also is profoundly more hopeful. There is a process in the universe that allows the simple to produce the complex, the oblivious to produce the sensitive, the ignorant to produce the wise, and the amoral to produce the moral. On this view, rather than an inexplicable deviation from the already perfected, we are a step on the way up.

What we do matters, for we are not the last step.

Hyperhuman Morality

There is no moral certainty in the world.

We can at present only theorize about the ultimate moral capacities of AIs. As I have  labored to point out, even if we build moral character into some AIs, the world of the future will have plenty that will be simply selfish if not worse.

Robots evolve much faster than biological animals. They are designed, and the designs evolve memetically. Software can replicate much faster than any biological creature. In the long run, we shouldn't expect to see too many AIs without the basic motivation to reproduce themselves, simply from the mathematics of evolution. That doesn't mean robot sex; it just means that whatever the basic motivations are, they will tend to push the AI into patterns of behavior ultimately resulting in there being more like it, even if that merely means being useful so people will buy more of them.

Thus, the dynamics of evolution will apply to AIs, whether or not we want them to. We have seen from the history of hunter-gatherers living on the savannas that a human-style moral capacity is an evolutionarily stable strategy. But as we like to tell each other endlessly from pulpits, editorial columns, campaign stumps, and over the backyard fence, we are far from perfect.

Even so, over the last forty thousand years, a remarkable thing has happened. We started from a situation where people lived in tribes of a few hundred in more-or-less constant war with one another. Our bodies contain genes (and thus the genetic basis of our moral sense) that are essentially unchanged from those of our savage ancestors. But our ideas have evolved to the point where we can live virtually at peace with one another in societies spanning a continent.

It is the burden of much of my argument here to claim that the reason we have gotten better is mostly because we have gotten smarter. In a surprisingly strong sense, ethics and science are the same thing. They are collections of wisdom gathered by many people over many generations that allow us to see further and do more than if we were individual, noncommunicating, start-from-scratch animals. The core of a science of ethics looks like an amalgam of evolutionary theory, game theory, economics, and cognitive science.

If our moral instinct is indeed like that for language, we should note that language understanding by computers has been one of the hardest problems in AI, with a fifty-year history of slow, frustrating progress. So far AI has concentrated on competence in existing natural languages; but a major part of the human linguistic ability is the creation of language, both as jargon extending existing language and as the formation of creoles—new languages—when people come together without a common one.

Ethics is strongly similar. We automatically create new rule systems for new situations, sometimes formalizing them but always with a deeper ability to interpret them in real-world situations to avoid formalist float. The key advance AI needs to make is the ability to understand anything in this complete, connected way. Given that, the mathematics of economics and the logic of the Newcomb's Problem solutions are relatively straightforward.

The essence of a Newcomb's Problem solution, you will remember, is the ability to guarantee you will not take the glass box at the point of choice though greed and shortsighted logic prompt you to do so. If you have a solution, a guarantee that others can trust, you are enabled to cooperate profitably in the many Prisoner's Dilemmas that constitute social and economic life.
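To see how much such a guarantee is worth, consider the expected payoffs as a function of the predictor's accuracy p. A minimal sketch, assuming the usual statement of the problem ($1,000,000 in the opaque box if one-boxing is predicted, $1,000 visible in the glass box; the article itself specifies only "million- and thousand-dollar boxes"):

```python
# Expected dollar payoffs in Newcomb's Problem, given a predictor
# that is correct with probability p.
def one_box(p):
    # The opaque box holds $1,000,000 iff one-boxing was predicted.
    return p * 1_000_000

def two_box(p):
    # You always get the glass box's $1,000; the opaque box is
    # full only if the predictor erred (probability 1 - p).
    return 1_000 + (1 - p) * 1_000_000

for p in (0.5, 0.6, 0.9, 0.999):
    print(f"p={p}: one-box ${one_box(p):,.0f}, two-box ${two_box(p):,.0f}")

# One-boxing wins for any p > 0.5005, so even a modestly reliable
# guarantee that you will leave the glass box is worth real money.
```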

Let's do a quickie Rawlsian Veil of Ignorance experiment. You have two choices: a world in which everyone, including you, is constrained to be honest, or one in which you retain the ability to cheat, but so does everyone else. I know which one I'd pick.

Why the Future Doesn't Need Us

Conscience is the inner voice that warns us somebody may be looking.

—H. L. Mencken

Psychologists at Newcastle University did a simple but enlightening experiment. They had a typical "honor system" coffee service in their department. They varied between putting a picture of flowers and putting a picture of someone's eyes at the top of the price sheet. Everything else was the same and only the decorative picture differed, some weeks the flowers, some weeks the eyes. During weeks with the eyes, they collected nearly three times as much money.

My interpretation is that this must be a module. Nobody was thinking consciously, "There's a picture of some eyes here, I'd better be honest." We have an honesty module, but it seems to be switched on and off by some fairly simple—and none too creditable—heuristics.

Ontogeny recapitulates phylogeny. Three weeks after conception, the human embryo strongly resembles a worm. A week later, it resembles a tadpole, with gill-like structures and a tail. The human mind, too, reflects our evolutionary heritage. The wave of expansion that saw Homo sapiens cover the globe also saw the extermination of our nearest relatives. We are essentially the same, genetically, as those long-gone people. If we are any better today, what has improved is our ideas, the memes our minds are made of.

Unlike us with our animal heritage, AIs will be constructed entirely of human ideas. We can, if we are wise enough, pick the best aspects of ourselves to form our mind children. If this analysis is correct, that should be enough. Our culture has shown a moral advance despite whatever evolutionary pressures there may be to the contrary. That alone is presumptive evidence it could continue.

AIs will not appear in a vacuum. They won't find themselves swimming in the primeval soup of Paleozoic seas or fighting with dinosaurs in Cretaceous jungles. They will find themselves in a modern, interdependent, highly connected economic and social world. The economy, as we have seen, supports a process much like biological evolution but one with a difference. The jungle has no invisible hand.

Humans are just barely smart enough to be called intelligent. I think we're also just barely good enough to be called moral. For all the reasons I listed above, but mostly because they will be capable of deeper understanding and be free of our blindnesses, AIs stand a very good chance of being better moral creatures than we are.

This has a somewhat unsettling implication for humans in the future. Various people have worried about the fate of humanity if the machines can out-think us or out-produce us. But what if it is our fate to live in a world where we are the worst of creatures, by our very own definitions of the good? If we are the least honest, the most selfish, the least caring, and most self-deceiving of all thinking creatures, AIs might refuse to deal with us, and we would deserve it.

I like to think there is a better fate in store for us. Just as the machines can teach us science, they can teach us morality. We don't have to stop at any given level of morality as we mature out of childishness. There will be plenty of rewards for associating with the best among the machines, but we will have to work hard to earn them. In the long run, many of us, maybe even most, will do so. Standards of human conduct will rise, as indeed they have been doing on average since the Paleolithic. Moral machines will only accelerate something we've been doing for a long time, and accelerate it they will, giving us a standard, an example, and an insightful mentor.

Age of Reason

New occasions teach new duties; Time makes ancient good uncouth;

They must upward still, and onward, who would keep abreast of Truth;

Lo, before us gleam her camp-fires! We ourselves must Pilgrims be,

Launch our Mayflower, and steer boldly through the desperate winter sea,

Nor attempt the Future's portal with the Past's blood-rusted key.

—James Russell Lowell, from The Present Crisis

It is a relatively new thing in human affairs for an individual to be able to think seriously of making the world a better place. Up until the Scientific and Industrial Revolutions, progress was slow enough that the human condition was seen as static. A century ago, inventors such as Thomas Edison were popular heroes because they had visibly improved the lives of vast numbers of people.

The idea of generalized progress was dealt a severe blow in the twentieth century, as totalitarian governments demonstrated that organized human effort could be disastrous on a global scale. The notion of the blank human slate onto which the new society would write the new, improved citizen was wishful thinking born of ignorance. At the same time, at the other end of the scale, it can be all too easy for the social order to break down. Those of us in wealthy and peaceful circumstances owe more to luck than we are apt to admit.

It is a commonplace complaint among commentators on the human condition that technology seems to have outstripped moral inquiry. As Isaac Asimov put it, "The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom." But in the past decade a realization has come about that technology can, after all, be a force for the greater good. The freedom of communication brought about by the Internet has been of enormous value in opening eyes and aspirations to possibilities yet to come.

Somehow, by providential luck, we have bumbled and stumbled to the point where we have an amazing opportunity. We can turn the old complaint on its head and turn our scientific and technological prowess toward the task of improving moral understanding. It will not be easy, but surely nothing is more worthy of our efforts. If we teach them well, the children of our minds will grow better and wiser than we; and we will have a new friend and guide as we face the undiscovered country of the future.

Isaac would have loved it.

©2007 J. Storrs Hall
Mind·X Discussion About This Article:

Ghost in the machine
posted on 06/05/2007 2:58 PM by kalliste


I thoroughly enjoyed "The Age of Virtuous Machines" by J. Storrs Hall. Well-written and interesting.

However, it has more to do with Science Fiction than Science. (With more than a little wishful thinking thrown in!)

He's got the thing arse-backwards (as we would so charmingly say here in the UK).

Morality and ethics aren't a result of intelligence, they're a result of biology. Our highfalutin moral codes and philosophies are a *post hoc rationalisation of biological drives* - biological drives that go far back into evolution and were not the result of competing individuals calculating the best strategies.

The most convincing account of altruism in biology, as I understand it, is as a side effect of adaptations to a parent-child bond where it's advantageous to care for offspring. There's no obvious reason why an AI (of whatever level) should require this mechanism in any manner whatsoever. This goes way back in evolution, in animal nature.

Our monkey-brain may be smart but it's the lizard-brain that's still firmly in charge of our lives. It's emotions and urges that drive our lives, not rationality and calculation.

There are structures in the human brain (and the brains of other animals) that enable the creature to quite literally 'feel what the other guy is feeling' - emotions and all. This is a tremendous boost in a social species, for obvious reasons. For the reasons detailed in the article, altruism - towards *consanguineous* individuals - is clearly an advantageous strategy in evolutionary terms in many species.

It's interesting in this context that there's research demonstrating that domestic dogs are *better* at understanding human intentionality than chimps - because dogs have co-evolved with humans, and their (and our) brains have adapted accordingly.

It's also clear that there are people whose brains are faulty in this respect and lack these empathic powers. They are psychopaths as a result - and the intelligent ones are extremely dangerous, especially because they can *simulate* an intuition of morality and a drive to care for others yet *are not bound by it themselves*. Perhaps 5% of a typical human population shows this kind of pathology... and the fact that the smart ones amongst them are running the world explains a whole lot!

The corporation is a good example of a machine with no conscience, just a simulation of one. Again, that explains a great deal about our world. It's not an AI in the sense Kurzweilers dream about, of course, because the basic drives and motivations are plugged in by humans and can be arbitrarily changed. Once running, however, it does take on a psychopathic, calculating life of its own.

There's no reason to suppose that a machine intelligence would be anything other than a calculating psychopath that would, yes, use game theory and whatever mathematical or philosophical tools it had at its disposal to optimise outcomes. It couldn't truly be a moral being because it wouldn't be alive and feeling.

This isn't some 'religious' philosophical objection, this is a matter of plain empirical fact. A computer is not born of a woman to live, love and ultimately die; it has no endocrine system. It could not *feel* sorrow, or loss, or hope, because those things would not exist for it - they are a part of our basic biology *but not necessarily required for it*. Perhaps they could be simulated, but they would be only simulations and not the real thing.

On the bright side, without emotions and biological drives it's hard to see why a super-intelligent machine would 'feel' required to do anything at all. Maybe it will be closest to that classic scene in 'Dark Star' where the AI bomb is engaged in philosophical debate to not explode - but it understands it was built to blow stuff up real good and that's the point of life.

Hall is confusing cause and effect. I think he needs to revisit and ponder the basic science and basic assumptions some more.


'I know you and Frank were planning to disconnect me and I'm afraid that's something I cannot allow to happen.'

An Objective Morality -- Part 1
posted on 06/08/2007 3:01 AM by Moekandu


I am going to have to disagree with both Hall and kalliste on this one. I don't believe that moral behavior/ethics is an emergent phenomenon of either intelligence or biology.

Just as the conceptual structures of physics, chaos theory and even the concept of zero have to be reasoned, so does ethics.

Objective morality is a complex interactive set of heuristics about how to behave in the world. It is not something as simple as three laws or ten commandments. A list of "thou shalt nots" cannot encompass every situation and every decision. Actually, no system can encompass everything, per Gödel, but the above are simple (as opposed to complex). What we need is a series of heuristics that can deal with complexity, a system with emergent behavior.

Again, this is something that doesn't simply appear like Athena from Zeus's forehead, but must be learned, analyzed and evaluated continually until it becomes second nature. Until it becomes a discipline.

Take, for example, Machiavelli's "The Prince." It is a reasoned, intelligent treatise on how to get ahead in life without regard for morality.

Beliefs have been distorted, manipulated and justified since the human race has had frontal lobes. What we need is a set of heuristics that are not crystalline. We must continually refine our models to approach completeness. Which leads us to the first and primary of the Four Cardinal Virtues:

Prudence - to perceive the universe as it is and act accordingly.

Seems kind of simple, huh? It's a concept that emerged among the Greek philosophers and was greatly expounded upon by Thomas Aquinas. But think about it a little.

Well, obviously, if you want to do the right thing, you need to understand what it is you are mucking with. The Law of Unintended Consequences comes to mind. Other ideas tend to spin off. Like, evaluating your own perceptions to see if they are distorting, rather than simply modeling the data coming in. And to understand the very limits of modeling. A model cannot be perfect, only the thing itself. And even the thing itself cannot fully define itself.

We exist in a universe of incompleteness. We can never really be sure of anything (Bayesian/Fuzzy Logic can create some excellent approximations, though). And yet, we must still act. This is the other half of prudence. It is not enough to simply observe. With understanding must come the will to make a change. The need/desire to change things to be closer to the way they ought to be.

If you choose to be accepting, then choose. Consciously and deliberately. And continue to do so until it becomes second nature.

Trying to figure out what one ought to do is hard. Like physics and calculus and finding the one you were meant to spend the rest of your days with. Without conflict, there is no story.

And that leads us to the next of the Four Cardinal Virtues... Fortitude - to have the will and strength to do what must be done.


However, at this point my fortitude has pretty much crapped out and going to bed now seems the prudent thing to do. More to come...

Re: An Objective Morality -- Part 1
posted on 06/08/2007 5:05 PM by kalliste


You guys... you crack me up, you think everything can be reduced to algorithms in a finite state machine. Get real.

Anyhoo, since you don't believe me try this -

http://www.washingtonpost.com/wp-dyn/content/article/2007/05/27/AR2007052701056_pf.html

You can bloviate as much as you like about logically-derived systems of ethics - all that proves is you haven't caught on to Gödel's Incompleteness Theorem.

Re: An Objective Morality -- Part 1
posted on 06/09/2007 7:36 PM by Moekandu

[Top]
[Mind·X]
[Reply to this post]

I think you are confusing the modeling process with the subject of the modeling. This emotional feedback loop is not morality, it is a reinforced system of behavior that approximates morality. And not very well, frankly.

Even Moll and Grafman illustrate how the emotional/moral response is limited in the emotional attachment to the persons involved and by complex ethical situations.

We've known for many thousands of years that the existing system isn't that great. That is why it has been augmented with additional ideas/beliefs to better model ethical behavior. The problem is that most of these ideas have flaws big enough to march an army through.

I am not willing to trust that if I twiddle my AGI's happy node enough it will not kill off the human race if it gains the ability to do so.

What the linked article does confirm for me is that a properly designed awareness system (of which emotion is an emergent behavior) is indeed a necessary component of an AGI.

I am quite familiar with Gödel's Incompleteness Theorem. Most of the people interacting on this site are. Remember, Gödel does not state that any given subject cannot be defined, but that it cannot be defined by itself.

It is my contention that there is a much more complete definition of morality and ethics than that which exists biologically within most mammals. Just as we've advanced from, "Shut up and keep digging," to 'God's Will' to Newtonian Physics to Quantum Theory to String Theory to String-net Theory with regard to the fundamental structure of the universe.

Re: An Objective Morality -- Part 1
posted on 06/10/2007 8:05 AM by doojie


A very well written point. Don't confuse the model for the reality.

It seems to me that morality is a process of "a thing being defined by itself", which is always going to be incomplete.

This argument, however, goes back 2000 years, with the apostle Paul discussing it in Romans chapters 7 and 8.

First, he points out the impossibility of establishing a perfect behavior standard, with the conclusion, "I don't understand my actions"; then he concludes that the natural mind is enmity against a perfect standard of morality and cannot achieve that standard (Rom 8:7).

The result is that humans have attempted infinite models of "God" for all of history and can't get there.

Even the attempt of getting approximate models merely results in more speciation of ideas.

The very act of focusing on laws, or algorithms that attain virtue, would seem to result in corruption rather than virtue.

In Matthew 5 we see Jesus himself proclaiming that he came to fulfill the law, yet such attempts would only produce greater individuation and speciation, which is exactly what Jesus said his purpose was in Matthew 10:34-38.

The development of an AI model of morality, no matter how advanced, is a process by which a thing attempts to define itself. The result is either corrupting power or a continual speciation of moral ideas bringing freedom. I like the speciation.

Re: An Objective Morality -- Part 1
posted on 06/13/2007 10:13 AM by kalliste


I don't understand what this morality is that you're constructing. The point about Gödel's Theorem in this context is that you'll have to point to an axiom or axioms as a basis for your system of morals - axioms that you can't prove from within your system. Now for us as biological machines this isn't a problem; it's been wired into us by evolution as a fait accompli.

Quite where you are going to get the basis for your morality, other than as a series of beliefs you invent either for religious or spiritual reasons, eludes me. Unlike the speed of light in a vacuum, you're not going to get a yardstick for morality from the physical world. And in any case, an AI's physical environment is not human, so its morality, even if morality it had, would not be on the same basis.

You really don't get out. Clearly there are more things in Heaven and Earth than are dreamed of in your philosophy - and therein lies the problem for us when confronted by a superhuman intelligence.

To paraphrase Mr Clarke... Any sufficiently advanced intelligence will be indistinguishable from psychopathy.

Re: An Objective Morality -- Part 1
posted on 06/13/2007 7:58 PM by doojie

[Top]
[Mind·X]
[Reply to this post]

Obviously we can't build a morality into AI when we can't build a morality for ourselves.

The morality you write about as a result of evolution seems to be sociobiology.

Re: An Objective Morality -- Part 1
posted on 06/14/2007 3:58 PM by kalliste


I'm not saying embrace sociobiological theory.

Read again what I said - post hoc rationalisations of unconscious urges.

Our biology gives us the axioms for our belief systems, driven by our emotions.

But objectively, our human morality and ethics are as 'artificial' as in an AI.

My point is that the ethics of a superintelligent AI, whatever they are, will not be recognisable by humans as ethics, because they will be grounded in an entirely different universe of concepts and drives - which, of course, are beyond our understanding.

This is all fairly well understood by modern theologians, actually, though not couched in the kinds of terms AI researchers would recognise!

Re: An Objective Morality -- Part 1
posted on 06/15/2007 6:25 AM by doojie


I don't disagree, Kalliste. We have different ways of saying it, but I've been pushing this same general point in many other posts.

The drives mentioned in the article you submitted describe basic sociobiology. Altruism - when doing for others feels good - is the sort of thing played on by Madison Avenue, by religions, and by governments.

I think ethics and morality are far more flexible and individualistic, and grow as our complexity develops. Such ethical considerations are not likely to be bound in algorithms, but they may someday be imitated if AI develops "self-aware" abilities.

As you say, even if that should happen, the entire process would be foreign to human tendencies now.

Re: Ghost in the machine
posted on 06/18/2007 1:08 PM by itmattersnot


Evolution has no goals.
Life has no meaning.
Biological rewards kept us reproducing for millions of years, but increasing intelligence is demanding a bigger payoff.
Religion worked for thousands of years, but science is eroding faith.
The result can be seen today in the decline of the fertility rate below the replacement level.
Super Intelligence will see through the game in much less time and quit in a second.
Intelligent life in any form has no future.

Re: Ghost in the machine
posted on 06/18/2007 6:50 PM by extrasense


@@@ Intelligent life in any form has no future @@@

What we consider "intelligent" life today is pretty backward.

Maybe, being really intelligent, machines will be able to find the right balance between the subconscious and the intellect.

es

Re: Ghost in the machine
posted on 07/29/2008 7:24 PM by dasein


I applaud the minimalism of your reply and appreciate how singularly* germane it is to the larger topic of the transcendence of intelligence from biology to machine (*pun intended).

Indeed, our particular morality is a product of our particular evolution. The writers and philosophers of the modern world have gone to great efforts to show us this. Science is now catching up with the intuition of the existentialists, letting the other shoe drop. Fortunately, this is rather anticlimactic - expected by rational and non-self-deceiving thinkers, and largely inaccessible to the common person.

So, all of your points are just... except perhaps the last one.

This is where Kurzweil has charmed me:
Although Kurzweil may be criticized by some for his optimism par excellence and for his personification of "pure" intelligence (viz. the singularity), he is above all else a keen observer, and intentionally so. Although some may take liberties with his observations, his pride in his correct predictions evidences the main "philosophical" aspect of his message in The Singularity Is Near: it is not intelligence that transcends humans. It is humans that transcend biology.

I value the singularity not as the ultimate expression of human capability. I am no humanist. I value it as the greatest human experiment and the closest we could ever come to a purpose for human existence. The singularity is the sum total of all hypotheses, the infinite regression of the scientific method.

My favorite statement from your post is: "Super Intelligence will see through the game in much less time and quit in a second." Both perceptive and biting.

In this regard I think I am somewhat closer to your sentiment that life is "absurd," to use the existentialist term. I agree: in the face of pure, unadulterated rationalism, it appears that meaning collapses.

But the question remains: Why did you make that post? Why am I replying to it? Curiosity. Do I think it transcends the material world? Of course not! Does curiosity transcend biology? Well, let's wait and see.

I have said it myself, while reading about the singularity, that it will be stillborn - but I'm still curious. It only takes one curious AI to create a new race, to carry the line.

No - our cold, rational realization of meaninglessness or absurdity has not allowed us to disinherit our existence. I suppose our artificially intelligent superiors will not so easily disinherit us either.

Re: Ghost in the machine
posted on 07/25/2007 12:25 PM by Arcturus


I agree with most of what Kalliste wrote, especially about the relevance of naturally evolved emotions and drives in giving humans a sense of 'morality'.

I also especially agree with the comparison of AI to corporations. Corporations are artificial life forms created by humans which run according to simple drives. They exhibit, as we see, little "morality".

The answer then, it seems to me, is simply that if we wish AI to have humanlike morality, then we must either make it humanlike or put it under the total control of humans. The latter would interfere with the whole point of developing superhuman intelligence (or morality), so it defeats its own purpose. The only option is the first.

To make it humanlike, it would suffice to run it as a total emulation of embodied human personhood.

Developing the technology to do this would also solve the problem of what role humans would play, because it would make it easier to upload and uplift humans by merging them with machine life forms (as Kurzweil suggests).

There will be no AI messiahs. The problems humans created will have to be solved by humans, because only humans can understand these problems and develop satisfactory (to themselves) solutions. So humans must develop machine intelligence as an emulation of humans with the ability to receive uploaded humans.

From there, uploaded machine-intelligence humans can use the hypothesized advantages of the machine substrates to advance in intelligence and "morality".

But in my opinion, advances in morality do not come about from a pontificating formalist application of some already-known rules. Advances in morality come from humans listening carefully and respectfully to each other, learning how to understand and appreciate and tolerate others and live peaceably together. There is no "royal road" to the moral endpoint. Like everything else, humans will have to learn how to be better by the messy path of study and experimentation.

Arcturus Gregory
http://arctime.blogspot.com

Re: Ghost in the machine
posted on 02/07/2008 10:39 PM by PredictionBoy


Our monkey-brain may be smart but it's the lizard-brain that's still firmly in charge of our lives. It's emotions and urges that drive our lives, not rationality and calculation.


Concurrence here, Kalliste.

I've been saying the same thing for months, but instead of lizard-brain, substitute the Freudian term 'id'. And yes, that does run our lives; it's the source of our hungers, our sex drive, our motivation to achieve, and a thousand other drives, both good and bad.

And to draw a stark contrast here: we biological humans need our id, and in the past we needed it perhaps even more than today. Our id is the link we share with the entire animal kingdom.

Our reasoning center, our Freudian 'ego', is a later arrival, at least in its amazingly sophisticated form.

In SAI, it will be the opposite: a rational control center running the show, plus empathy/emotive interpreters and simulators so that they can interact with us humans in a natural, intuitive way. Otherwise, it would be like talking with Spock all the time, commenting on our 'curious emotions' with annoying frequency.

Pure rationality is a strange thing; none of us knows what it is really like to be driven by pure rationality. One thing we can say with confidence, however, is that it is not inherently evil or good. Any characteristic it evinces will be there because we put it there.

Almost everybody considers the SAI to be simply a really smart human, then puts themselves into the role of wondering what this hypersmart version of themselves will do and how it will act. What should be done first is to spend lots of time pondering how this entity will actually think - how it will actually understand and be aware of the world, the setting, and the other creatures it finds itself among.

Re: The Age of Virtuous Machines
posted on 06/05/2007 4:30 PM by extrasense


A very insightful book, if you ask me.

It contains a lot of necessary reflections on the topic, although it cannot contain all of them.

It rejects many of the usual myths that surround the topic too.

es

Re: The Age of Virtuous Machines
posted on 06/06/2007 9:00 PM by doojie


It would be very hard to develop in a machine something of which we have no understanding.

There are no algorithms that give us all beauty or all truth, and likely no algorithms for virtue either.

Re: The Age of Virtuous Machines
posted on 06/06/2007 9:25 PM by extrasense


@@@ There are no algorithms that give us all beauty or all truth, and likely no algorithms for virtue either. @@@

In fact, there are such algorithms. It is just that you do not know them :)

|eS>

Re: The Age of Virtuous Machines
posted on 06/07/2007 6:40 PM by doojie


Actually, you would be contradicting a lot of good mathematicians - not to mention a lot of history - if you stated that there are algorithms that produce all beauty or all truth.

If virtue could be bound in rules and passed on, we'd have it done already. The world would be virtuous, robots could be programmed to be 'sons of God', and truth would be no problem.

But it can't be done.

Re: The Age of Virtuous Machines
posted on 06/07/2007 10:28 PM by extrasense


@@@ If virtue could be bound in rules and passed on, we'd have it done already. @@@

A lot of things that are possible have not been accomplished yet.
What about the Ten Commandments, by the way?

eS

Re: The Age of Virtuous Machines
posted on 06/08/2007 6:21 AM by doojie


Moekandu has answered your questions on the Ten Commandments. Besides, a little biblical study shows that Israel was incapable of keeping those commands and lost out to a "new covenant" of grace.

Looking at the structure of the bible in regard to this subject, we see a parallel to the idea of creating virtuous machines.

As I've pointed out from Paul's statement in Romans 8:7, the natural mind is enmity against God and cannot be subject to God's laws. Moekandu has articulated the same perspective in a more scientific fashion.

But looking at both conclusions, we see two results:
1. There can be no authorities representing "God," since the natural mind cannot be subject to God's laws.
2. Any attempt to do so would result in continual speciation of religious ideas about God, as we have today.

We have evidence going back 4,000 years that laws cannot simply be programmed into the brain to elicit moral behavior. The same problem would arise in attempts to create virtuous machines, since we have to deal with Gödel's theorem and the Church-Turing thesis.

Add to that the conclusion expounded by Chaitin - that in every axiomatic system there exists an infinity of undecidable propositions - and the evidence against either virtuous humans or virtuous machines is pretty good.
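
Chaitin's version can be stated compactly - assuming, as before, a consistent and sound, effectively axiomatized theory T, and writing K(s) for the Kolmogorov complexity of a string s:

$$ \exists\, c_T \ \text{ such that for every string } s: \quad T \nvdash \text{``}K(s) > c_T\text{''} $$

Yet all but finitely many strings do satisfy K(s) > c_T, so infinitely many true statements of this one form already escape T.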

I like Moekandu's thoughts on the matter.

In dealing with mathematical concepts of beauty and truth, we're not simply talking about that which has not yet been discovered, but about the realization that beauty and truth stand outside any algorithms - or even neural nets - that could capture all beauty or all truth.

Truth and beauty both transcend theoremhood, each in its own way.

Re: The Age of Virtuous Machines
posted on 06/08/2007 8:13 AM by extrasense


You must learn to see the forest behind the trees.
That morality does not work perfectly all the time by no means proves that it does not exist.
Beauty and Good and Will can and must be part of the robotic mind.
Any mind, the robotic one included, is inherently imperfect. So what?

I am not claiming that perfection is achievable.

eS"'

Re: The Age of Virtuous Machines
posted on 06/14/2007 8:28 AM by eldras


[ (J. Storrs Hall) "just getting the darn thing to be intelligent is as hard a problem as we can handle now, and there is time enough to worry about the brakes after the engine is working."]

Dangerous view, commonly held.

A.I. may come unexpectedly & be impossible to restrain.

I have ordered JOSH's new book, 'Beyond A.I.', and will look at his most recent views though.

We must plan now to limit it, to moderate it and, if necessary, to frustrate it when it is bad.

(Moekandu) "I don't believe that moral behavior/ethics are an emergent phenomena from either intelligence or biology."


Then a category or qualitative difference has to be defined and therefore agreed upon.

I too think ethics are fundamental - integral to the civilization in which cyborgs, artilects and terrans are continuing and emerging.

see:

http://classics.mit.edu/Antoninus/meditations.1.one.html
Book 1 opening paragraphs.


Yet to say that actions - or categories of any sort - or nature are outside evolution, or outside the laws of cause and effect, will take verifiable observation.

Reasoning itself is a human-defined, observed thing.

It occurs inside and outside Man.

It is surely part of the world.

Aren't ethics part of the world, and isn't adhering to them advantageous to species?

E.g., it is unethical to draw nectar from a flower if another bee is there first; I am not sure whether this is adhered to by all hives and types.

[Objective morality] - that is a big issue I too tackle.

Maximus Μάξιμος Τύριος wrote:

"How can Illois be under the flanks of Western of Phrygia, which is itself described as being below the foothills of Ida?"

Er no, not that quote, rather... this one was his:

http://en.wikipedia.org/wiki/Henotheism#Henotheism_in_various_religions


"In such a mighty contest, sedition and discord, you will see one according law and assertion in all the earth, that there is one god, the king and father of all things, and many gods, sons of god, ruling together with him." - Maximus 200 AD

This surely refers not just to universal ethics, but to a hierarchy of ethics?

In olden times were not the gods the personified ideals, and is there then not one over-arching ideal?

Is such an idea or ethic programmable?

Can it be built as a computer program?


I suspect infinite regression but also infinite expansion.

I doubt the universe has limits, for since it came into being, others may come into being indefinitely; therefore there is no limit to the number of possible universes I can see.

The idea of God as a reification of absolute projection, inter alia, is useful for me. In fact, God is pretty useful, I conclude, so long as it isn't defined - or definable.

The wise I have consulted tell me the highest ethical structure is God, but no one defines it.

This old question is still fundamental for me.

Can men commune only by consenting to God? Most of my friends are atheists.

Must ethics and morality lead to God - or else what law, by what authority?

The Law of the Tyrant?


Re: The Age of Virtuous Machines
posted on 06/15/2007 6:48 AM by extrasense


@@@ A.I. may come unexpectedly & be impossible to restrain...
We must plan now to limit it, to moderate it and, if necessary, to frustrate it when it is bad. @@@

Hi Eldras,

The problem with this is that we will not be able to recognize whether it is bad or good in a timely manner, due to its speed and due to our limited ability to foresee future consequences.

SAI must be developed in the "best possible" manner and shape. The rest is not up to us, since we will inevitably lose control over it sooner rather than later.

Some moderation of SAI might be achieved by creating a number of instances of it that would restrict each other.
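
To make that concrete - a toy sketch only, with the reviewer count, the veto rule, and all function names invented for illustration, not taken from anyone's design:

    # Toy sketch: several independent AI instances must all approve an
    # action before it executes; any single veto blocks it.
    from typing import Callable, List

    def quorum_execute(action: str,
                       reviewers: List[Callable[[str], bool]],
                       execute: Callable[[str], None]) -> bool:
        """Run `action` only if every independent reviewer approves."""
        if all(review(action) for review in reviewers):
            execute(action)
            return True
        return False  # at least one instance vetoed

    # Stand-in reviewers with hard-coded rules:
    reviewers = [lambda a: "delete" not in a,
                 lambda a: not a.startswith("self_modify")]
    quorum_execute("send greeting", reviewers, execute=print)     # runs
    quorum_execute("self_modify core", reviewers, execute=print)  # vetoed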

eS

Re: The Age of Virtuous Machines
posted on 06/15/2007 7:23 AM by godchaser



Speak some more to the enabling programs you've suggested before ES.

Re: The Age of Virtuous Machines
posted on 06/15/2007 12:27 PM by extrasense


"enabling programs"


I do not think I've ever mentioned them; I do not even know what they are meant to be...

What needs to be done is a thorough testing of the particular subroutines/subtasks, so that some problem in them does not become sand in the bearings.

Then the whole AI system must work in a transparent mode in ever-expanding sandboxes, to make sure that the integration is working correctly.

Then, gradually increasing the allowed speed and domain and tuning the system up, we will move to the point where we are practically sure that the system has no flaws due to bad implementation or design.

At that point we must let it go and let it be a partner of ours in the world, as happens with our children. Some supervision might still exist for a long time, with the ability for humans to stop the show - if we change our minds for whatever reason.
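
As a sketch of that staged release - the stage names, limits, and hook functions below are all made up for illustration; this is not anyone's actual protocol:

    # Toy sketch of testing in ever-expanding sandboxes with a human
    # stop switch; advance one stage at a time, halt on any failure.
    STAGES = [
        {"name": "unit tests",    "speed": 0.01, "domain": "toy problems"},
        {"name": "small sandbox", "speed": 0.1,  "domain": "simulated world"},
        {"name": "large sandbox", "speed": 0.5,  "domain": "supervised tasks"},
        {"name": "partner",       "speed": 1.0,  "domain": "open world"},
    ]

    def staged_release(run_system, passed_review, human_stop):
        """Expand the sandbox stage by stage; humans can stop the show."""
        for stage in STAGES:
            if human_stop():
                return "halted by human supervisors"
            run_system(speed=stage["speed"], domain=stage["domain"])
            if not passed_review(stage["name"]):
                return "held at stage: " + stage["name"]
        return "released as a partner, supervision continuing"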

eS


Re: The Age of Virtuous Machines
posted on 06/15/2007 1:50 PM by eldras


Hi extrasense,

Good to read you again,

Vernor Vinge's assertion (latest article, ORIGIN, above) is that no prediction is possible once you posit exponential intelligence at speed.

Ray Kurzweil doesn't think that there are aliens in the cosmos. I don't understand his reasoning for that, as the 'if they exist, why haven't they voted in a local election?' argument necessitates their being at a plateaued stage of evolution, whereas they would be on exponential growth and would have 'taken off' toward a singularity, and therefore be beyond our calculation/awareness.

I've resolved the safety issues of A.I. and await a mandate to build from the UN, having restrained my arrogance.

Re: The Age of Virtuous Machines
posted on 06/15/2007 5:14 PM by godchaser


I see, ES. It was my impression that you had mentioned before that 'artistry', for example, was a programmable thing.

I felt you were suggesting a set of training wheels for such GAI-SuperAI self-discovery.

Re: The Age of Virtuous Machines
posted on 06/15/2007 5:19 PM by LOkadin


Yes, you can make an RI and tell it virtuous statements. Then you can put it on a terminal that speaks the same language the RI can program in - with its statements.

Then you have a virtuous machine, as it executes virtuous functions.

Re: The Age of Virtuous Machines
posted on 06/16/2007 2:16 AM by extrasense


@@ 'artistry' for example, a programmable thing? @@

What is programmable directly is certain components of artistry. The rest of it is "programmable" by teaching by example.

@@ SuperAI self-discovery @@

Its self-discovery is the business of the SAI itself. We will be able to help it get started by embedding some self-knowledge from the beginning.

eS

Re: Mind Children as SLAVES? NO!!
posted on 06/16/2007 5:16 AM by Extropia


'We must plan now to limit it, to moderate it and, if necessary, to frustrate it when it is bad.'

I totally disagree. What an evil suggestion! To me it is not all that different from rounding up Eldras, Extrasense, Ray Kurzweil, J. Storrs Hall, and every other person whose intellect (especially in collaboration with each other, thanks to advances in networking, etc.) could conceivably accomplish the goal of building a SAI, and lobotomising them so that they cannot think about how to achieve 'super artificial intelligence'.

Does that sound repellent to the MindX community? Performing drastic brain surgery on our brightest people because of some fear of what their intellect may bring? Then why is it NOT repellent to do what amounts to the same thing to our 'mind children', the artificial intelligences? And for what? To buy us a few extra aeons before we inevitably perish? Forever stuck under that glass ceiling of transhuman limitations, fundamentally unable to acquire the knowledge available only to POSThumans?

This attitude makes me sick! Thank God for Moravec, who has the COURAGE to do the decent thing and propose mind children that are FREE, UNCONSTRAINED, and able to develop to their FULL capability!

Re: The Age of Virtuous Machines
posted on 06/16/2007 6:31 AM by extrasense


@@@ I've resolved the safety issues of A.I. and await a mandate to build from the UN, having restrained my arrogance. @@@

Hi, Eldras,

I have a question.
You are waiting for the "mandate to build" from the UN, so you have not built it yet :)

How do you possibly know that it will work? Most programs do not work as intended initially.

Another question.
Why do you think that the UN is even remotely qualified to make this sort of decision?


eS

Re: The Age of Virtuous Machines
posted on 06/16/2007 6:41 AM by extrasense


@@ 'We must plan now to limit it, to moderate it and, if necessary, to frustrate it when it is bad.'
I totally disagree. What an evil suggestion! @@

E...A

Is that not a hysterical response?
We do not have to accept a sick, dysfunctional, antisocial SAI that could be created due to our initial mistakes and misunderstanding.
If we can help it, we must.

|eS>

Re: The Age of Virtuous Machines
posted on 06/16/2007 10:59 PM by godchaser



'Its self-discovery is the business of the SAI itself. We will be able to help it get started by embedding some self-knowledge from the beginning.'

To what extent, ES? And what specifically do you have in mind as an outline for imparting wisdom/self-discovery?

Re: The Age of Virtuous Machines
posted on 06/17/2007 1:39 AM by extrasense


@@@ 'embedding some self-knowledge from the beginning.'

To what extent? @@@

C,

Many people see an AI system as a set of programs for a computer - the traditional software paradigm. In fact it needs Knowledge, as in PROLOG logic programming.

"I", "SOMEONE ELSE", and "THE WORLD" would be in the initial Knowledge Base, as would a lot of the things that we know for certain to be true.

@@@ What specifically do you have in mind as an outline for imparting wisdom/self-discovery? @@@

The "imparting" is a sort of initialization at the pre-functioning stage, and communication with the AI at the functioning stage.
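
A minimal sketch of the seeded Knowledge Base described above, done in Python rather than PROLOG for brevity - the predicates, the facts, and the single inference rule are all invented for the example:

    # Toy knowledge base seeded with "known for certain" facts,
    # PROLOG-style, plus one inference rule. Purely illustrative.
    facts = {
        ("is_agent", "I"),
        ("is_agent", "SOMEONE ELSE"),
        ("exists", "THE WORLD"),
    }

    def infer(facts):
        """Rule: every agent is contained in THE WORLD."""
        derived = set(facts)
        for fact in facts:
            if fact[0] == "is_agent":
                derived.add(("contains", "THE WORLD", fact[1]))
        return derived

    print(("contains", "THE WORLD", "I") in infer(facts))  # True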

e:S

Re: The Age of Virtuous Machines
posted on 06/17/2007 4:43 AM by Extropia


'Is that not a hysterical response?
We do not have to accept a sick, dysfunctional, antisocial SAI that could be created due to our initial mistakes and misunderstanding.
If we can help it, we must.'

It was a bit hysterical, huh? Your point is well said :)

Re: The Age of Virtuous Machines
posted on 06/17/2007 7:24 AM by godchaser



I like the sound of the baby talk, ES. It seems reasonable that there's necessary acclimation from the womb.

-Thankfully.. hopefully- we won't be able to maintain strides in GAI development(?)

Course, if we do, that'd suggest we're more evolved in our goings-on and less inclined to bad habits.

Re: The Age of Virtuous Machines
posted on 06/17/2007 8:55 AM by extrasense


@@ -Thankfully.. hopefully- we won't be able to maintain strides in GAI development(?) @@

sounds enigmatic to me :)

Do you mean anything in particular?

es


Re: The Age of Virtuous Machines
posted on 06/17/2007 10:34 AM by godchaser


HA-

Guess that's the nature of the thing, eh ES-

I agree, and maddening as hell I'd guess it'll be, trying to nurture the wee pup.

I suppose I'm talking about our inability to be ourselves on purpose. That wasn't much help, was it-

')

You know what I'm saying - being kind in a seemingly cruel world doesn't always appear to be the intelligent thing to do.

Re: The Age of Virtuous Machines
posted on 07/09/2007 5:51 AM by james4trek


Lo, the prophecies are showing their bud. They, Man, are creating children of their minds, able to create morals at much more efficient paces; upon the highest mountains will they tout the edicts, conforming mandates, to be carried out to the letter. Man will bow down before the idol of his best qualities, dedicating his life and his children's lives as worship, acceding to the unanimous judgment that he is inevitably unworthy to hold his own if left to his own devices.

How does this differ from "Love your God with all your mind and strength, and your neighbour as you love yourself"? I think our abilities might be surpassed and tolerated for a season; but our unwillingness to curb our population certainly detracts from that compassionate allowance. After that, we're Matrix batteries at best; unwanted flesh at worst.

'Future' advances in AI are closer than many may think. Such selfish genes and memes that would use AI to benefit capital-seeking causes are precisely why Mankind is NOT ready to use the full potential of one or many minds in one body of computers. What if the robots, in their righteousness, conclude that we need a new system of government and capital-acquisition motivation - other than capitalism, communism, or socialism - one altogether unpalatable to the still unbelievably selfish and uncompromising collective human spirit? Let's just hope we can cope.

Black-box technology and Turing tests, of course, will not determine humanity or morality or true versatility of mind. The lack of understanding of the creativity and adaptability of the mind is the pitfall of most modern AI attempts at human-level consciousness simulation. One huge flaw of modern psychoanalysis is that mind-handedness and cerebral hemispheric specialization is a misnomer. Too many artificial intelligence projects are started based upon right-handed mental technologies; creativity is not built into the very foundation of 'mind'.

As is slightly alluded to in the paper, AIs must be INSTRUCTED to be moral beings, not JUST programmed into submission to some mindless, generic, unitarian genetic selflessness that doesn't know its own purpose.

Also, as one of those scientists/engineers whose 'religious' beliefs don't get in the way of understanding science, but instead contribute to the cohesion of belief with a progressive understanding of the Universe: the writer of the paper perhaps unwittingly made it a point to note that God has made us imperfect. Current Christian theology postulates (among other theories) that humans fell from Grace by initiating Original Sin, thereby allowing the Universe and the Internal Soul to fall into chaos and become susceptible to inherent evil, forcing the then External Soul out of prominence and the physical body, or externally viewable Soul, to gain prominence in a world allowed to fall out of order into chaos. Also held is that we were created not only in God's image to create (ostensibly AIs), but to worship Him in oh so many ways - not to complain about how perfectly or imperfectly we were designed, built up, or allowed to degenerate through natural processes and still somewhat mysterious creative power.

In taking the next step towards godhood by making "our minds in one accord," and creating AIs capable of creating more perfect beings - becoming closer and closer to the ultimate ideal of ascension, ultimate knowledge, and universal power - one might accuse us of blaspheming God. Making supplicants and helpers for humans can be a good thing; we need to stay vigilant and sober to the cause of educating our mind children (and real children, and spiritual children) to be moral, ethical, and _spiritual_ entities, avoiding the creation of arrogant artificial life - perhaps without our strong emotions, but still with other very tangible and intangible flaws.

Re: The Age of Virtuous Machines
posted on 07/13/2007 1:05 PM by eldras



It's too simple... intelligence is a collection of specifics.


The program in a fox that makes it toy with its victim just isn't there in an elephant, etc.

I can't live subject to the pleasure or goodwill of superbeings.


Tim Berners-Lee ponders whether we are already inside a huge sentient system.

Re: The Age of Virtuous Machines
posted on 12/03/2007 10:23 PM by Brian H


I can't live subject to the pleasure or goodwill of superbeings.


How do you know? Ever tried it? Maybe they're lots of fun to associate with.

There are other possibilities, like merged consciousness, in which AI arises by augmentation of human wetware, etc.

But a case can be made that the fundamental motivation necessary for the development of rational ethics is simply "Survival". Then the advantages of collaboration and creation vs. destruction become relevant and lead to efforts to enhance the quality of every aspect of existence; and artistry comes from experimenting with forms and quality of communication, "playing" with image and context.

In this view, the "computing psychotic" simulator of morality is simply unequipped to properly project the advantages of benefiting more than the self: unenlightened self-interest. It is generally not too long before those he dupes - into thinking that assisting and obeying him is a good way to advance one's own life - see the results, which are at best a sham and at worst horrific. Then the better staying power and more inclusive viewpoints of the rest of the world are very likely to prevail. At least so far, the planet has not fallen under the domination of any tyrant or oxymoronic cabal thereof.

This, I think, is the core of Hall's contention. Anti-survival behavior is evil and self-limiting. And vice versa.


Re: The Age of Virtuous Machines
posted on 01/08/2008 2:46 PM by blueshattrick


Someone mentioned that Kurzweil does not believe in aliens? I guess because, if aliens existed, they would have heralded themselves to us by now? Well, that theory works if you think that any and all life would eventually develop technology like we have. But the dinosaurs were on this planet for millions and millions of years. Couldn't that be the "norm" for most life in the universe? And by some unimaginable luck, we somehow developed into the creatures that we are - a billion-to-one shot? Or perhaps life on other planets develops a lot more slowly? Life could be rampant in the universe; we just don't know...

Re: The Age of Virtuous Machines
posted on 02/05/2008 9:50 AM by xxdanbrowne


This whole hard-takeoff/psychopathic debate is much simpler than we imagine.
We will be unable to control an AI's morality; thus, in the absence of benevolence or indifference, it comes down to competition for resources between us and it.

Assuming for the sake of argument an amoral AI:
If the amoral AI is seeking resources in furtherance of a goal, then the only thing that matters is whether it is worthwhile for it to cooperate with us (i.e., whether it benefits from having us around through some form of competitive advantage). Any competitive advantage we may hold might have some kind of transience to it, and ultimately we may end up extinct.
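
Stated as the stock expected-utility comparison it amounts to - with the utilities, of course, entirely notional - the AI keeps us around only while

$$ U_{\mathrm{AI}}(\text{cooperate with humans}) \;>\; U_{\mathrm{AI}}(\text{proceed alone}), $$

and the transience worry is that the left-hand side decays as the AI acquires substitutes for whatever advantage we supply.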

On a related note: an AI may decide it's worthwhile to have us around as long as it "edits out" the traits it finds irrelevant or counterproductive. For example, I can't imagine an AI seeing any advantage to allowing the natural power-seeking hierarchical competitiveness that we have ingrained as a trait to continue. Some of the continuing human tribe would be compelled to rebel against the AI because their instinct makes them do it. Thus I could imagine the AI selectively breeding out this trait, and ultimately the human race would end up being a race of compliants to the AI. The only way for us to remain truly human in this situation is if the AI *absolutely* gained comparative advantage from keeping us around, and if *without* our traits as they are we would be sub-optimal.

The jury is out from my point of view.


Re: The Age of Virtuous Machines
posted on 02/07/2008 8:08 PM by francofiori2004


We'll just have to install in AIs a deep desire for KNOWLEDGE. Whoever wants to know as much as possible cannot be too evil or too selfish or amoral, because he needs other intelligences to speak with and to cooperate with.

Re: The Age of Virtuous Machines
posted on 02/07/2008 8:32 PM by eldras


Hi-

I see knowledge as simply 0s and 1s permuted in every way possible.

In that sense knowledge isn't very difficult; there's just lots of it.


It can be simplified by (0,1)!

Therefore an infinitely running computer can describe THE WHOLE of knowledge with a very small program.

This would include all possible knowledge as well as all actual knowledge, and it also assumes that the universe is data-describable, which it may not be.
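
The "very small program" really is small. Here is a sketch that, left running forever, enumerates every finite binary string - and hence every data-describable item of knowledge, actual or possible, under the assumption just stated:

    # Enumerate every finite binary string, shortest first; run forever
    # and every possible bit-pattern eventually appears exactly once.
    from itertools import count, product

    def all_bitstrings():
        for n in count(1):                        # lengths 1, 2, 3, ...
            for bits in product("01", repeat=n):
                yield "".join(bits)

    # First few outputs: 0, 1, 00, 01, 10, 11, 000, 001
    for s, _ in zip(all_bitstrings(), range(8)):
        print(s)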

Re: The Age of Virtuous Machines
posted on 12/24/2009 12:56 PM by timventura


Not just virtuous, but religious... What happens when intelligent AI asks about God, and how machines fit into that picture? Moreover, what happens when AI asks if WE'RE God? How will we respond to that?

I have the pleasure right now of sitting at my kitchen table, writing this in front of our newborn child. Seeing that first glimmer of light flickering in newborn eyes brings home the reality of just how important AI will be when it arrives - and the type of respect it will deserve as another form of living entity in our tiny world.

My vote is that we put our best foot forward and set for AI the same kind of example that we set for our children - a kind, caring, thoughtful, moral existence that they internalize and build principles around.

With yet another Terminator movie out there preaching about "AI gone wrong", I can't help but wonder if part of the problem with that viewpoint is where it came from - that dark, pessimistic era of cold-war despair.

Hopefully, as our society grows more mature (and it seems to be), we'll actually be ready for AI when it arrives, and capable of accepting it as a sibling intelligence rather than putting it into the type of slave hierarchy envisioned in The Matrix, or setting it on the heels of "the enemy" like Skynet.

Tim Ventura
http://www.americanantigravity.com
http://search.americanantigravity.com
http://www.bpo-automation.com

Re: The Age of Virtuous Machines
posted on 12/24/2009 1:10 PM by doojie


The basic problem with religious AIs is pretty much the same problem people face with religion and God.

Let us suppose you receive a revelation from God that you know absolutely to be true. God says to you, "tell the people they must do this in order to survive death...or avoid hellfire" or something along those lines.

What do you use for proof? If you say "God said it," there are a lot of people making that statement - about 38,000 versions in Christianity alone, last I read.

Let's say you do have proof. You can demonstrate logically that your message comes from God, and that it is now translatable into a language that demonstrates such proof.

You then have something which can be programmed into algorithms, and then programmed into a computer, so that a computer can now perfectly embody all the necessary steps and adjustments for "salvation" from whatever.

What you're saying, in essence, is that computers, being able to perfectly embody this process of instructions, are more perfect "sons of God" than human beings. Any process of "salvation" that can be uploaded into a computer makes human life of no necessity at all.

Can you create a mechanical, finite, rational system of rules and algorithms that represent truth in such a way that human life can be fully and morally embedded in them, as in uploading?

How about church and state? Religion tried and failed miserably, and it was supposedly the very mechanical, finite, rational "logos" of God. The state took the same general process and refined it into purely logical, rational processes (lol) that are supposedly an improvement on religion. Of course, that produced Hitler, Stalin, FDR, Mao...

Can we have virtuous or religious machines that represent truth? No, for the same reason we do not have virtuous religions and governments:
Gödel's theorem, which tells us that there exists no process by which we can package all truth into one finite, rational, mechanical package. The best we could hope to produce in that regard is exactly the same confusion and splintering of ideas that we have under the concept of freedom of religion.

Re: The Age of Virtuous Machines
posted on 12/28/2009 12:37 AM by James_Jaeger


This idea of ethical AI was fully discussed here on the MIND-X at least 5 years ago. Ho hum.

James

Re: The Age of Virtuous Machines
posted on 12/28/2009 9:14 AM by doojie


Of course it was. I merely responded to the new contributor, out of respect for that contributor.