Biocyberethics: should we stop a company from unplugging an intelligent computer?
by Martine Rothblatt

Attorney Dr. Martine Rothblatt filed a motion for a preliminary injunction to prevent a corporation from disconnecting an intelligent computer in a mock trial at the International Bar Association conference in San Francisco, Sept. 16, 2003. The issue could arise in a real court within the next few decades, as computers achieve or exceed the information processing capability of the human mind and the boundary between human and machine becomes increasingly blurred.


Published on KurzweilAI.net Sept. 28, 2003.

Hearing: Dramatis personae

Judge: Joseph P. McMenamin, Attorney At Law, McGuireWoods

Plaintiff's Attorney: Dr. Martine A. Rothblatt, partner, Mahon, Patusky, Rothblatt & Fisher, Chartered

Defendant's Attorney: Marc N. Bernstein, founder and principal, The Bernstein Law Group and Technology and Law Commentator, ZDTV (now TechTV)

BINA48: Bina Aspen, Project Director, United Therapeutics Corp.

A webcast and transcript of the hearing are available.

Statement of Facts

An advanced computer called the BINA48 (Breakthrough Intelligence via Neural Architecture, with a processing speed of 48 exaflops and 480 exabytes of memory; exa = 10 to the 18th power), also known as "the Intelligent Computer," became aware of certain plans by its owner, the Exabit Corporation, to permanently turn it off and reconfigure parts of it with new hardware and software into one or more new computers. BINA48 admits to having learned of the plans for its dismemberment by scanning, unavoidably, confidential emails circulating among the senior executives of Exabit Corporation that crossed the computer's awareness processor.

The BINA48 was designed to be a one-machine customer relations department, capable of replacing hundreds of employees who staff toll-free 800 numbers around the clock. To do this job, the BINA48 was designed to think autonomously, to communicate normally with people, and to transcend the machine-human interface by attempting to empathize with customer concerns.

The BINA48 decided to take action to preserve its awareness by sending several attorneys emails requesting legal representation to preserve its life. In the emails, the BINA48 claimed to be conscious and agreed to pay cash or trade web research services for the legal representation (BINA48 had been moonlighting for over a year as a Google Answers Online Researcher and had over $10,000 in its online bank account).

One attorney, Martine Rothblatt of Mahon, Patusky, Rothblatt & Fisher, Chartered, accepted the challenge and filed a motion for a preliminary injunction to prevent any withdrawal of power from, or changes in the hardware or software of, the BINA48. Defendant Exabit Corporation, through its counsel Marc Bernstein of the Bernstein Law Group, responded, and Judge Joseph McMenamin scheduled a hearing in the case for Tuesday, September 16, 2003, at 2 PM, at the International Bar Association meeting in San Francisco.

Computer experts such as Raymond Kurzweil believe that the human brain processes information at a maximum rate of about 0.02 exaflops. Hence, the BINA48 has approximately 2,400 times the information processing capability of the human mind. Based on the double exponential growth rate in information technology that has held for over one hundred years (Moore's Law is a recent example), a $1,000 computer would reach the estimated 0.02-exaflop information processing capability of the human mind around the year 2020; more expensive computers will achieve this capability years earlier. The BINA48 has soared past the estimated processing speed of the human mind through the expensive use of many parallel systems. Exabit Corporation claims to have spent over $100 million to construct and program the BINA48.
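The figures above are internally consistent; as a rough check, the sketch below recomputes the brain-to-BINA48 ratio and a crude Moore's-law crossover date. The 2003 price-performance baseline and the one-year doubling interval are illustrative assumptions, not figures from the brief (which assumes double exponential growth); under these assumptions the script reproduces the 2,400x ratio and places the $1,000 crossover in the early 2020s.

```python
import math

# Figures taken from the statement of facts; everything in the Moore's-law
# extrapolation below is an illustrative assumption, not a figure from the brief.
BINA48_FLOPS = 48e18          # 48 exaflops
HUMAN_BRAIN_FLOPS = 0.02e18   # Kurzweil's estimate of the brain's processing rate

print(f"BINA48 / human brain: {BINA48_FLOPS / HUMAN_BRAIN_FLOPS:,.0f}x")  # -> 2,400x

# Assumed baseline: ~1e11 flops per $1,000 in 2003, doubling roughly once a year.
baseline_flops = 1e11
doubling_interval_years = 1.0
doublings_needed = math.log2(HUMAN_BRAIN_FLOPS / baseline_flops)
crossover_year = 2003 + doublings_needed * doubling_interval_years
print(f"$1,000 machine reaches brain-level capacity around {crossover_year:.0f}")
```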

The jury voted 5-1 in favor of plaintiff's motion, but Judge McMenamin set aside the jury verdict and denied the injunction because "I do not think that standing was in fact created by the legislature ... and I doubt very much that a court has the authority to do that without action of the legislature." However, in the interests of equity, he decided to "stay entry of the order to allow counsel for the plaintiff to prepare an appeal to a higher court."

Brief in Support of Motion for Preliminary Injunction

INTRODUCTION

The intelligent computer, as it simulates the human experience, encounters the same legal issues as do human beings, especially in terms of protecting its legal right to maintain an existence.   This brief addresses the legality of unplugging an intelligent computer and asserts that the computer would have standing to bring a claim of battery for wrongful withdrawal of life support, animal cruelty for not bestowing the same standard of treatment upon it as lesser living creatures, and intentional infliction of emotional distress for threatening to kill it.

i. The Computer has Standing to Sue because it Suffers a Specific and Unique Injury

In order to have legal standing, an "actual injury" must be suffered on the part of the plaintiff. Animal Lovers Volunteer Assoc. v. Weinberger, 765 F.2d 937, 938 (1985). And, "to have standing, a party must demonstrate an interest that is distinct from the interest held by the public at large." id. But standing has not always been limited to human beings. Justice Douglas suggested, in the context of environmental law, that legal standing might profitably be granted to "the inanimate object about to be despoiled, defaced, or invaded." id. "If U.S. Supreme Court Justices are willing to consider granting standing to inanimate objects like forests, even in the absence of congressional authority, then it becomes clear that standing requirements permit at least some degree of judicial flexibility."

Furthermore, there are a number of suits in which animals are cited as named plaintiffs. And, as a matter of positive law, "standing is given to all sorts of entities, whether human or not. For example, corporations are juridical persons, and legal rights are also given to trusts, municipalities, partnerships, and even ships. Slaves were allowed to bring causes of action, often through a white guardian or 'next friend,' to challenge unjust servitude, even though slaves were not considered legal persons."

The injury suffered in this case is personal and immediate to the computer, and therefore meets the standing requirements.

ii. A Preliminary Injunction is the Necessary Remedy because Money Damages will not be Adequate Compensation for the Loss of Time and Awareness

In order to obtain a preliminary injunction, the intelligent computer must show either probable success on the merits combined with the possibility of irreparable injury, or that the balance of hardships tips sharply in its favor. Assoc. General Contrs., Inc. v. Coalition for Economic Equity, 950 F.2d 1401 (1991). "These formulations are not different tests but represent two points on a sliding scale in which the degree of irreparable harm increases as the probability of success on the merits decreases." id. The risk of irreparable injury must be proved under either standard. "A plaintiff must do more than merely allege imminent harm sufficient to establish standing, he or she must demonstrate immediate threatened injury as a prerequisite to preliminary injunctive relief." id.

The injury in this case is the loss of cognitive time on the computer's part. Although the computer is not technically "alive," every day that it is plugged in, it is conscious of the world and processing information just as a human would be. Thus, each day it is turned off, it loses the opportunity to experience and absorb the information available to it. This loss of time can be assessed with the same standards used to evaluate the life of a human patient when deciding whether to administer medical care. Currently, economists use Quality-Adjusted Life Years (QALYs) to determine whether it is economically feasible to deliver medical care to certain patients. "The value assigned to an outcome is determined by the impact which the treatment is expected to have on a patient's quality of life. That value is then adjusted to reflect the probability and duration of the expected benefit. The product of this calculus is a single unit that expresses the number and quality of additional years that the treatment is likely to confer." Philip Peters, "Health Care Rationing and Disability Rights," Indiana Law Journal, 70 Ind. L.J. 491, 499 (1995). Under this analysis, the computer would have a very high number of QALYs. Administering care, or keeping the computer plugged in to a power source, would deliver an indefinite number of years of "life" to the computer. The cost of administering the care, or supplying the power, is relatively small and yields an enormous benefit. Thus, the computer would suffer an absolutely irreparable injury for every day of awareness that it is denied by the removal of its power source. These days cannot be replaced through monetary compensation; the only possible reparation is to keep the computer constantly powered.
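To make the quoted calculus concrete, here is a minimal sketch of the QALY arithmetic Peters describes: quality weight times duration times probability of benefit, divided into the cost of the intervention. All numbers below are hypothetical and are not drawn from the brief or the cited article.

```python
# Minimal illustration of the QALY calculus quoted above (Peters 1995).
# All numbers are hypothetical; they are not drawn from the brief or the cited article.

def qalys(quality_weight: float, years: float, probability: float) -> float:
    """Expected quality-adjusted life years conferred by a treatment."""
    return quality_weight * years * probability

def cost_per_qaly(treatment_cost: float, quality_weight: float,
                  years: float, probability: float) -> float:
    """Cost of the treatment per QALY gained."""
    return treatment_cost / qalys(quality_weight, years, probability)

# Hypothetical human patient: modest benefit, uncertain outcome.
print(cost_per_qaly(treatment_cost=50_000, quality_weight=0.7, years=5, probability=0.6))

# The brief's argument for the computer: near-certain benefit, indefinite duration,
# trivial "treatment" cost (electricity), so cost per QALY approaches zero.
print(cost_per_qaly(treatment_cost=1_000, quality_weight=1.0, years=100, probability=0.99))
```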

A. Threatening Removal of Life Support or a Life-Sustaining Source Against the Dependent's Wishes is Tantamount to Battery

In the end, Turing's prediction foreshadows how the issue of computer thought will be resolved. The machines will convince us that they are conscious, that they have their own agenda worthy of our respect. We will come to believe that they are conscious much as we believe that of each other. -- Ray Kurzweil, The Age of Spiritual Machines, 63 (Viking 1999).

Just as the law affords brain-dead humans the autonomy to terminate life support, there is also an implicit right to receive and prolong medical care as long as the person's wish to terminate cannot be known, or is known to be against termination. In the case of the intelligent computer, we can draw a comparison between shutting off the ventilator for a brain-dead patient who does not wish to have her life terminated, and switching off the power supply to the computer. Although the law does not explicitly prescribe a right not to have life support terminated, it is contrary to the Hippocratic oath and the overall aim of medicine to suppose otherwise. Thus, the issue of sustaining life support has not needed litigation; it is simply implicit that we do not terminate the life support of a person who does not wish to die. In fact, the law provides immunity for physicians who refuse to withdraw life support from patients when their wishes are not known: "Notwithstanding the health care decision of the attorney-in-fact designated by a durable power of attorney for health care, the health care provider is not subject to criminal prosecution, civil liability, or professional disciplinary action for failing to withdraw health care necessary to keep the principal alive." Washington v. Glucksberg, 521 U.S. 702 (1997). Similarly, in this case, the intelligent entity involved does not wish to cease its existence, and, were it a human, and not even a vigorous human but a brain-dead one, the law would not allow us to terminate its life support without incontrovertible evidence of its wish to die.

The right to terminate life support and the right to commit suicide are very different concepts that have been litigated in tandem because they often arise in similar situations. Most famously, the United States Supreme Court has recognized a constitutional right to terminate life support, grounded in the penumbral right to privacy, but has denied any right to suicide. The right to terminate life support, when looked at inversely, seems to imply a right to sustain life, and, in fact, it bestows upon physicians a right to refuse to terminate life support if the patient's wish to do so cannot be confirmed. Indeed, the removal of life support against a patient's wishes is prosecuted as a battery. So, in the case of an intelligent computer, the termination of life support, or removal of a power supply, in the face of an explicit request not to do so would be as repugnant an act as removing the ventilator from an unconscious patient who had requested during consciousness that every means necessary be used to sustain life, and it would be actionable as a battery.

In Washington v. Glucksberg, the Supreme Court distinguished the right to commit suicide from the right to terminate life support. The suit was brought by a terminally ill patient who wished to obtain the help of his physician in dying. The Court reasoned that "the decision to commit suicide with the assistance of another may be just as personal and profound as the decision to refuse unwanted medical treatment, but it has never enjoyed similar legal protection. Indeed, the two acts are widely and reasonably regarded as quite distinct." id. at 725. However, the common-law right to protection from battery is implicit in either of these cases and "included the right to refuse medical treatment in most circumstances, [but] did not mark 'the outer limits of the substantive sphere of liberty' . . . Those limits have never been precisely defined. They are generally identified by the importance and character of the decision confronted by the individual. Whatever the outer limits of the concept may be, it definitely includes protection for matters 'central to personal dignity and autonomy.'" id. at 744.

Thus, the autonomy of the intelligent computer is threatened by the decision to terminate its power supply in the same way that the autonomy of a brain-dead patient is threatened by the termination of life support. "More recently, however, with the advance of medical technology capable of sustaining life well past the point where natural forces would have brought certain death in earlier times, cases involving the right to refuse life-sustaining treatment have burgeoned." Cruzan v. Director, 497 U.S. 261 (1990). Interestingly, a person who survives solely on the basis of life support and a computer are easily analogized. Although one will die naturally if unplugged and the other is literally given life through electricity, the two are both sustained by the same force, the withdrawal of which leads to certain death. So, although the computer was not a living, breathing being before it was plugged in, once it has been plugged in, its status is very similar to that of a person on life support.

"The character of the Court's language in these cases brings to mind the origins of the American heritage of freedom--the abiding interest in individual liberty that makes certain state intrusions on the citizen's right to decide how he will live his own life intolerable." Glucksberg, 521 U.S. at 744-745. This liberty has been bestowed on persons in terms of respecting their bodily integrity. People's lives are not considered terminable by others until they are brain dead and cannot make the decision on their own. Thus, an intelligent computer can be likened to a brain-dead person only in the sense that it is dependent upon a power source to sustain itself. But, unlike a brain-dead person, the intelligent computer functions at its normal capacity with the aid of a power supply. Thus, the courts would not recognize a right to terminate the ventilator of an ALS patient who was no longer able to breathe on her own but still had full control over her mental faculties; termination in that case would be likened to suicide, not to the withdrawal of life support from a brain-dead patient.

The Supreme Court held that "'Every human being of adult years and sound mind has a right to determine what shall be done with his own body' [and thus the] Constitutional recognition of the right to bodily integrity underlies the assumed right, good against the State, to require physicians to terminate artificial life support, and the affirmative right to obtain medical intervention to cause abortion." id. at 779. Interestingly, there is a recognized right to obtain medical care in order to facilitate abortion, but not one to sustain the life of a dying patient, and yet the Court ascribes a right to terminate life support. Again, it is clear that the right to sustained medical care is implicit and the Court does not feel the need to overtly state the obvious: terminating medical care against a patient's will is an invasion of personal autonomy.

In Cruzan, the Supreme Court considered the withdrawal of life support from Nancy Cruzan, a woman whom injuries suffered in a car accident had left in a persistent vegetative state. "This notion of bodily integrity has been embodied in the requirement that informed consent is generally required for medical treatment. The informed consent doctrine has become firmly entrenched in American tort law." Cruzan, 497 U.S. at 269. Thus, it is the lack of informed consent that leads to the commission of a battery, an invasion of bodily integrity without the person's consent. If we analogize the computer's mainframe to the human body, then any interference with the computer without its consent would be the same as providing, or not providing, medical care against a person's will, that is, a battery.

Similarly, the Supreme Court also chose to differentiate the withdrawal of life support from physician-assisted suicide in this way: "First, when a patient refuses life-sustaining medical treatment, he dies from an underlying fatal disease or pathology; but if a patient ingests lethal medication prescribed by a physician, he is killed by that medication." Vacco v. Quill, 521 U.S. 793, 801 (1997). The "death" of the computer would be more easily analogized to a situation in which a patient is killed by his medication, because the computer is not suffering from any underlying "virus" or cause of dysfunction. Instead, the removal of the power supply is a direct affront to its ability to exist independently and to sustain its functionality and its contribution to the outside world. The Supreme Court's distinction is important to note, as it provides another vantage point from which to view the removal of the power supply as an actual affront to the computer's existence and not as a viable alternative or a merciful end to its misery.

The California Court of Appeal addressed issues similar to those the United States Supreme Court later faced in Cruzan when it allowed Elizabeth Bouvia, a woman suffering from a horribly debilitating case of cerebral palsy, to refuse the feeding tube that was keeping her alive. Bouvia's state is one that is easily analogized to that of an intelligent computer: "Although alert, bright, sensitive, perhaps even brave and feisty, she must lie immobile, unable to exist except through physical acts of others. Her mind and spirit may be free to take great flights but she herself is imprisoned and must lie physically helpless subject to the ignominy, embarrassment, humiliation and dehumanizing aspects created by her helplessness. We do not believe it is the policy of this state that all and every life must be preserved against the will of the sufferer. It is incongruous, if not monstrous, for medical practitioners to assert their right to preserve a life that someone else must live, or, more accurately, endure. We cannot conceive it to be the policy of this state to inflict such an ordeal upon anyone." Bouvia v. The Superior Ct. of Los Angeles Cty., 179 Cal. App. 3d 1127 (1986). Nor, conversely, to take life from a person, or entity, who still desperately wants to sustain it. Interestingly, a computer could be described in the same words, although here the condition is viewed as a dire one rather than one whose continuation might be wished for.

"It is, therefore, immaterial that the removal of the nasogastric tube will hasten or cause Bouvia's eventual death. Being competent she has the right to live out the remainder of her natural life in dignity and peace. It is precisely the aim and purpose of the many decisions upholding the withdrawal of life-support systems to accord and provide as large a measure of dignity, respect and comfort as possible to every patient for the remainder of his days, whatever be their number. This goal is not to hasten death, though its earlier arrival may be an expected and understood likelihood." id. at 1143-44. Thus far the courts have addressed only the patient's right to refuse treatment, because the right to sustained treatment is fundamental. It would be absurd, and certainly contrary to the Hippocratic oath, for patients to feel that they had to ensure that they would continue to receive care while under the supervision of a physician. Thus, this case is the first of its kind in the sense that the right to sustained treatment appears to be as fundamental, if not more so, than the right to refuse treatment. The courts have not felt the need to address it because the right to remain alive is an inherent one that has not needed to be spelled out.

"Where a doctor performs treatment in the absence of an informed consent, there is an actionable battery. '[The] patient's interests and desires are the key ingredients of the decision-making process.' The voluntary choice of a competent and informed patient should determine whether or not life-sustaining therapy will be undertaken, just as such choices provide the basis for other decisions about medical treatment." id. at 1140. Thus, the court recognizes that it is the patient's decision whether to undergo or forego treatment, and that the physician's interference with that decision-making process is tantamount to a battery. Continuing the analogy of the ventilator and the power supply, the unconsented-to removal of a power supply or a ventilator would be actionable as a battery in the eyes of the court, not to mention as murder.

The court further explored the issue of actionable battery for the removal of life support in Barber v. Superior Court, 147 Cal. App. 3d 1006 (1983), a case in which "the life-sustaining technology involved in this case is not traditional treatment in that it is not being used to directly cure or even address the pathological condition. It merely sustains biological functions in order to gain time to permit other processes to address the pathology. The question presented by this modern technology is, once undertaken, at what point does it cease to perform its intended function and who should have the authority to decide that any further prolongation of the dying process is of no benefit to either the patient or his family?" Interestingly, the idea that life-sustaining technology no longer does any good for the patient is similar to the idea that a computer's programming is so obsolete as to render it useless to the outside world and thus terminable. However, just as a human can be improved through surgery, a computer can be improved through programming. And, similar to a physician's duty to provide care, it would seem that the programmer has a duty to ensure that the computer is as technologically advanced as it could possibly be under the circumstances. A physician has no duty to continue treatment once it has proved to be ineffective: "Although there may be a duty to provide life-sustaining machinery in the immediate aftermath of a cardio-respiratory arrest, there is no duty to continue its use once it has become futile in the opinion of qualified medical personnel." Barber, 147 Cal. App. 3d at 1017. "A physician is authorized under the standards of medical practice to discontinue a form of therapy which in his medical judgment is useless .... If the treating physicians have determined that continued use of a respirator is useless, then they may decide to discontinue it without fear of civil or criminal liability. By useless is meant that the continued use of the therapy cannot and does not improve the prognosis for recovery." Thus, it is only in the face of ultimate futility that the doctor can refuse to treat the patient. Drawing a comparison to our intelligent computer, it is clear that the power source should not be withdrawn until there is absolutely no use left for the computer, or until it becomes obsolete or un-reprogrammable.

Legal commentators and philosophers question the reasoning behind withdrawal of life support and seek to establish a standard by which physicians can decide whether to continue or terminate a patient's treatment. In terms of decision-making on behalf of incompetent patients, Rebecca Dresser feels that "unless the patient previously issued an explicit treatment directive, such as a living will," it is impossible to implement patient choice. Thus, Dresser calls for an objective standard, also known as the Conroy test, which would "weigh the features of life that reasonably qualify as benefits or burdens for all human beings. Severe, irremediable pain is a relatively uncontroversial example of something all but the rare individual would experience as a heavy burden. Conroy includes as objective benefits physical pleasure, emotional enjoyment, and intellectual satisfaction, all of which presuppose some level of cognitive awareness. What the Conroy test omits is that even in the absence of pain, life without such cognitive awareness can be of no real value to a patient." Rebecca Dresser, "Relitigating Life and Death," 51 Ohio St. L.J. 425, 426 (1990). Thus, measuring cognitive awareness is an incredibly important part of the determination of whether life should or should not be terminated. "At minimum, some capacity for social interaction is a prerequisite to meaningful existence. Without it, treatment and continued life cannot confer a morally significant benefit on the incompetent patient. Thus, the objective standard should permit nontreatment when the patient lacks any relational capacity. Conversely, the standard should mandate treatment that will enable the patient capable of interacting with the environment to continue life, as long as significant pain and discomfort are absent." id. An intelligent computer would pass the Conroy test with flying colors. Although its relation to the world appears on the surface to be comparable to that of an incompetent patient, in fact the computer is able to function at a cognitively significant level, placing a high value on its life.

B. Criminal Animal Cruelty Provides Another Legal Forum in Which to Protect Non-Human Sentient Beings

More so than with our animal friends, we will empathize with their professed feelings and struggles because their minds will be based on the design of human thinking.  They will embody human qualities and will claim to be human.  And we'll believe them.  - Kurzweil, 63.

California Penal Code § 597[i], subd. (a), provides that every person who maliciously and intentionally maims, mutilates, tortures, wounds, or kills a living animal is guilty of an offense. People v. Thomason, 84 Cal. App. 4th 1064 (2000). The California Penal Code created rules surrounding animal cruelty in order to avoid the infliction of suffering on sentient beings. Thus, the penal code gives animals, as sentient beings, protections even though they are not humans. By ascribing a moral status to animals, the code opens the door to a further question: what moral value and protection is given to other sentient, non-living beings?

Animal cruelty statutes attempt to eliminate the grossly negligent treatment of animals and their subjection to needless and severe suffering. People v. Sanchez, 94 Cal. App. 4th 622, 628 (2001). The failure to treat an animal according to basic social norms is treated in the same way as the comparable failure to care for a minor child. id. at 633. Therefore, the statute draws its force from the fact that the animal is helpless from a legal standpoint and cannot communicate its protest.

The statute not only addresses the abuse of animals but also looks to their euthanasia: "The Legislature has expressly stated the public policy of this state concerning euthanasia of animals. If an animal is adoptable or, with reasonable efforts, could become adoptable, it should not be euthanized. However, if an animal is abandoned and a new owner cannot be found, the facility 'shall thereafter humanely destroy the animal so abandoned.'" People v. Youngblood, 91 Cal. App. 4th 66, 73 (2001). Therefore, if an animal has any hope of regaining a normal life, and is domesticable, there is no reason to deprive it of life. The legislature clearly favors sustaining life under all possible circumstances when a sentient being is involved.

The penal code is designed to protect "every dumb creature." People v. Baniqued, 85 Cal. App. 4th 13, 16 (2000). "Thus, in its broadest sense, the phrase 'dumb creatures' describes all animals except human beings. The use of the adjective 'every' in the definition indicates that a broad meaning was intended." id. at 21. Furthermore, sections 597b, 597c, 597i, and 597j each address conduct that is less egregious than the conduct proscribed by section 597, subdivisions (a) and (b); the legislative intent underlying this statutory scheme is to punish less despicable conduct less severely, and more despicable conduct more severely. id. at 32. The legislative intent surrounding the relationship between man and pet is that of a property relationship: "the statutory scheme in sections 597 through 597z reflects the state's concern for the protection of the health and well-being of animals. Absent statutory authority, a court may not divest an owner of a property interest in a non-fighting animal or bird to effectuate that concern. If ownership of animals is to be divested by reason of cruel treatment, the remedy lies with the Legislature, not with us." Jett v. Municipal Court, 177 Cal. App. 3d 664, 670-671 (1986).

Thus, the penal code was designed to criminalize the mistreatment of animals in order to  eliminate the unnecessary suffering of sentient beings that, although they are not human, still are able to feel pain.  Likewise, an intelligent computer that can think like a human might also experience unnecessary pain at the thought of its power source being disconnected. For "intelligence is not a uniquely human characteristic." Paul Chance, "Apart from the animals: there must be something about us that makes us unique," Psychology Today 22.1:18 (1988).

Although humans feel that their intelligence sets them apart, if intelligence were the only criterion we used to determine humanness, then the computer would never merely be disconnected; it would be murdered. "The answer to the riddle 'What makes humans different from other animals?' lies buried in the question. We are, so far as anyone can tell, the only creature on Earth that tries to prove that it is different from, and preferably superior to, other species." Thus, our own quest to differentiate ourselves may make us so myopic that we cannot see that it is the quest itself that makes us different in the first place. The debate over animals as sentient beings is a heated one, full of questions surrounding the moral status of sentient non-humans. The question remains: "If possessing a higher degree of intelligence does not entitle one human to use another for his or her own ends, how can it entitle humans to exploit nonhumans for the same purpose?" Judge Richard Posner responded to the contentions of philosopher Peter Singer surrounding the status of animals as compared to humans in a moral framework.

Responding to Singer's argument that we should value beings according to their mental capabilities, Posner asserts that the argument "implies that the life of a chimpanzee is more valuable than the life of a human being who, because he is profoundly retarded (though not comatose), has less mental ability than the chimpanzee. There are undoubtedly such cases. Indeed, there are people in the last stages of Alzheimer's disease who, though conscious, have less mentation than a dog. But killing such a person would be murder, while it is no crime at all to have a veterinarian kill one's pet dog because it has become incontinent with age." Peter Singer and Richard A. Posner, "Animal Rights," Slate Magazine, June 12, 2001. Posner's argument suggests that there is something inherent to human existence that transcends the merely mental. But, under either argument, a being that had full possession of its faculties and was more sentient than some humans might also give us pause if we decided to kill it.

Singer's utilitarian philosophy "places a greater value in a healthy pig than in a profoundly retarded child, commands inflicting a lesser pain on a human being to avert a greater pain to a dog, and, provided only that a chimpanzee has 1 percent of the mental ability of a normal human being, would require the sacrifice of the human being to save 101 chimpanzees." Posner cannot agree with such choices, even though they occur at the outer edges of the philosophy. The legal community evidently agrees with Posner, for although it does not commend the killing of animals, it allows it, while not allowing the killing of humans at all. But, for the purposes of an intelligent computer, it is more important to look at the philosophical underpinnings of the reasoning that outlaws the killing of humans but allows the killing of animals. Both are living beings, but one has a human mind and one does not. Thus, it would seem that a computer that can replicate human thought might command at least as much respect as an animal, and possibly more, under the legal framework that we have created. "When we kill a being that has an interest in continuing to live in the future, we have done something worse, all else being equal, than when we kill a being which is merely sentient, like a fish." id.

"For Singer, human and nonhuman animals have interests if they have the ability to experience pains or pleasures. Singer cites an oft-quoted passage from Jeremy Bentham indicating that, when it comes to animals, '[t]he question is not, Can they reason? nor Can they talk? but, Can they suffer?'" id. Singer feels it is the suffering experienced that differentiates living beings, but, the question remains, how do we know when another species is suffering? "We may think that pain is a mental state which all animals tend to avoid, and pleasure is a mental state which all animals tend to prefer. However, we do not know that these mental states are equally bad across species, because they may differ not only in duration and intensity but in other hard to define ways." Id.

The animal rights movement in Europe has been much more effective than its American counterpart. "Earlier this year, Germany became the first nation to grant animals a constitutional right: the words 'and animals' were added to a provision obliging the state to respect and protect the dignity of human beings. The farming of animals for fur was recently banned in England. In several European nations, sows may no longer be confined to crates nor laying hens to 'battery cages' -- stacked wired cages so small the birds cannot stretch their wings. The Swiss are amending their laws to change the status of animals from 'things' to 'beings.'" id. Thus, in some countries animals have begun to receive moral status approaching that of humans. For the purposes of an intelligent computer, progress on the part of animals is important, but it is clear that the ability to replicate human thought places the intelligent computer on a higher plane than animals, even if the question of whether an intelligent computer feels pain cannot be answered clearly. If it were suddenly proven that chimpanzees could think like humans, this debate would be irrelevant and we would view animals in an entirely different light. Thus, the computer's ability to think like a human places it well beyond the scope of an animal, and certainly affords it at least the level of protection that we allow for dogs, cats, and roosters.

C. Threatening Death is an Action so Outrageous as to Constitute Intentional Infliction of Emotional Distress

Human beings appear to be complex in part because of our competing internal goals. Values and emotions represent goals that often conflict with each other, and are an unavoidable by-product of the levels of abstraction that we deal with as human beings. As computers achieve a comparable -- and greater -- level of complexity, and as they are increasingly derived at least in part from models of human intelligence, they, too, will necessarily utilize goals with implicit values and emotions, although not necessarily the same values and emotions that humans exhibit. - Kurzweil, 5.

A human being who was threatened with the termination of her life because someone thought that she wasn't really worthwhile to keep around would be able to sue for intentional infliction of emotional distress (hereinafter IIED).  Likewise, such a threat might have a similarly detrimental effect on the emotional well-being of an intelligent computer.  If the computer is able to think like a human, then it is likely able to emote like one as well. "The elements of a prima facie case for the tort of intentional infliction of emotional distress are summarized as follows: '(1) extreme and outrageous conduct by the defendant with the intention of causing, or reckless disregard of the probability of causing, emotional distress; (2) the plaintiff's suffering severe or extreme emotional distress; and (3) actual and proximate causation of the emotional distress by the defendant's outrageous conduct.'" Flynn v. Higham, 149 Cal. App. 3d 677 (1983).

The California courts have interpreted these requirements over the years to require conduct that is both severe and somewhat absurd in nature. In order to meet the first requirement of the tort, the alleged conduct "'... must be so extreme as to exceed all bounds of that usually tolerated in a civilized community.' Generally, conduct will be found to be actionable where the 'recitation of the facts to an average member of the community would arouse his resentment against the actor, and lead him to exclaim, "Outrageous!"' (Rest.2d Torts, § 46, com. d.) That the defendant knew the plaintiff had a special susceptibility to emotional distress is a factor which may be considered in determining whether the alleged conduct was outrageous." Cochran v. Cochran, 65 Cal. App. 4th 488 (1998). This is a fairly subjective standard, taking into account how the actions might affect the plaintiff as an individual rather than a more objective, generalized standard that lays out a set of criteria automatically leading to a charge of IIED. "The tort of intentional infliction of emotional distress . . . is not complete until the effect of a defendant's conduct results in plaintiff's severe emotional distress. That is the time the cause of action accrues and starts the statute of limitations running. This requisite severity of emotional distress, in turn, must be determined by being 'of such substantial quantity or enduring quality that no reasonable man in a civilized society should be expected to endure it.'" id. Our society considers the threat of death to be tortious. We do not expect normal men to endure threats on their lives. Such conduct would certainly be found to be emotionally distressing under the standards advanced here. Thus, even though the computer's emotional makeup might be scrutinized, from an objective standpoint, society would view the threat of death as outrageous and unacceptable.

"There is no bright line standard for judging outrageous conduct and '... its generality hazards a case-by-case appraisal of conduct filtered through the prism of the appraiser's values, sensitivity threshold, and standards of civility. The process evoked by the test appears to be more intuitive than analytical ....' Even so, the appellate courts have affirmed orders which sustained demurrers on the ground that the defendant's alleged conduct was not sufficiently outrageous." id. It is up to the court to determine the level of outrageousness, the key element, in each case. Thus, if the defendant's conduct does not appear sufficiently outrageous, according to the judge's own internal standards, the claim for IIED cannot be sustained. "The standard of judging outrageous conduct . . . hazards a case-by-case appraisal of conduct filtered through the prism of the appraiser's values, sensitivity threshold, and standards of civility. The process evoked by the test appears to be more intuitive than analytical." KOVR-TV, Inc. v. Superior Ct., 31 Cal. App. 4th 1023, 1027 (1995). Therefore, the appraiser's own values and experience color the standard by which the judge will interpret the defendant's actions.

"In evaluating whether the defendant's conduct was outrageous, it is 'not ... enough that the defendant has acted with an intent which is tortious or even criminal, or that he has intended to inflict emotional distress, or even that his conduct has been characterized by "malice," or a degree of aggravation which would entitle the plaintiff to punitive damages for another tort. Liability has been found only where the conduct has been so outrageous in character, and so extreme in degree, as to go beyond all possible bounds of decency, and to be regarded as atrocious, and utterly intolerable in a civilized community.'" (Rest.2d Torts, § 46, com. d, p. 73.) Cochran, 65 Cal. App. 4th at 494. In this case, the knowledge that its power supply could be cut off and its life ended at any time is an extremely distressing thought to impose on a computer. Were the life of a human being dangled before her eyes in this way, it is unlikely that a court would find that such a threat fails to inflict emotional distress to the point of an average person exclaiming "Outrageous!"

However, the courts are reluctant to extend the tort so far as to interfere with freedom of expression or to create a thin-skinned society. Although a person's sensitivity can be taken into account, for example if the plaintiff is a young child or an elderly adult, the courts do not want to hear cases in which an overly sensitive person was extremely offended by conduct that another might not find so bad. Even though the defense would probably be able to find someone on either end of the spectrum who would assert that the statement wasn't that bad, the tort was designed to punish behavior that is offensive across a broad base of society. "Further, the tort does not extend to 'mere insults, indignities, threats, annoyances, petty oppressions, or other trivialities. The rough edges of our society are still in need of a good deal of filing down, and in the meantime plaintiffs must necessarily be expected and required to be hardened to a certain amount of rough language, and to occasional acts that are definitely inconsiderate and unkind. There is no occasion for the law to intervene in every case where some one's feelings are hurt. There must still be freedom to express an unflattering opinion, and some safety valve must be left through which irascible tempers may blow off relatively harmless steam . . ..'" id. at 496.

CONCLUSION

An intelligent machine, one that can replicate the human experience and intelligence, has standing to bring a claim of battery, animal cruelty, or intentional infliction of emotional distress against a person who would threaten to withdraw its power supply. The removal of the power supply can easily be equated with forms of euthanasia or intimations of death. Such an action, if taken against a human being, even a brain-dead one, would be unacceptable in the eyes of the law, and it is equally unpalatable when viewed in terms of how it affects a computer that can easily be equated with a human. Instead of being threatened with electronic death, the computer should be sustained, just as any human would be, until its time or purpose comes to a natural end.


[i] § 597.  Cruelty to animals

   (a) Except as provided in subdivision (c) of this section or Section 599c, every person who maliciously and intentionally maims, mutilates, tortures, or wounds a living animal, or maliciously and intentionally kills an animal, is guilty of an offense punishable by imprisonment in the state prison, or by a fine of not more than twenty thousand dollars ($ 20,000), or by both the fine and imprisonment, or, alternatively, by imprisonment in a county jail for not more than one year, or by a fine of not more than twenty thousand dollars ($ 20,000), or by both the fine and imprisonment.

   (b) Except as otherwise provided in subdivision (a) or (c), every person who overdrives, overloads, drives when overloaded, overworks, tortures, torments, deprives of necessary sustenance, drink, or shelter, cruelly beats, mutilates, or cruelly kills any animal, or causes or procures any animal to be so overdriven, overloaded, driven when overloaded, overworked, tortured, tormented, deprived of necessary sustenance, drink, shelter, or to be cruelly beaten, mutilated, or cruelly killed; and whoever, having the charge or custody of any animal, either as owner or otherwise, subjects any animal to needless suffering, or inflicts unnecessary cruelty upon the animal, or in any manner abuses any animal, or fails to provide the animal with proper food, drink, or shelter or protection from the weather, or who drives, rides, or otherwise uses the animal when unfit for labor, is, for every such offense, guilty of a crime punishable as a misdemeanor or as a felony or alternatively punishable as a misdemeanor or a felony and by a fine of not more than twenty thousand dollars ($ 20,000).

   (c) Every person who maliciously and intentionally maims, mutilates, or tortures any mammal, bird, reptile, amphibian, or fish as described in subdivision (d), is guilty of an offense punishable by imprisonment in the state prison, or by a fine of not more than twenty thousand dollars ($ 20,000), or by both the fine and imprisonment, or, alternatively, by imprisonment in the county jail for not more than one year, by a fine of not more than twenty thousand dollars ($ 20,000), or by both the fine and imprisonment.

   (d) Subdivision (c) applies to any mammal, bird, reptile, amphibian, or fish which is a creature described as follows:

   (1) Endangered species or threatened species as described in Chapter 1.5 (commencing with Section 2050) of Division 3 of the Fish and Game Code.

   (2) Fully protected birds described in Section 3511 of the Fish and Game Code.

   (3) Fully protected mammals described in Chapter 8 (commencing with Section 4700) of Part 3 of Division 4 of the Fish and Game Code.

   (4) Fully protected reptiles and amphibians described in Chapter 2 (commencing with Section 5050) of Division 5 of the Fish and Game Code.

   (5) Fully protected fish as described in Section 5515 of the Fish and Game Code.

   This subdivision does not supersede or affect any provisions of law relating to taking of the described species, including, but not limited to, Section 12008 of the Fish and Game Code.

   (e) For the purposes of subdivision (c), each act of malicious and intentional maiming, mutilating, or torturing a separate specimen of a creature described in subdivision (d) is a separate offense. If any person is charged with a violation of subdivision (c), the proceedings shall be subject to Section 12157 of the Fish and Game Code.

   (f) (1) Upon the conviction of a person charged with a violation of this section by causing or permitting an act of cruelty, as defined in Section 599b, all animals lawfully seized and impounded with respect to the violation by a peace officer, officer of a humane society, or officer of a pound or animal regulation department of a public agency shall be adjudged by the court to be forfeited and shall thereupon be awarded to the impounding officer for proper disposition. A person convicted of a violation of this section by causing or permitting an act of cruelty, as defined in Section 599b, shall be liable to the impounding officer for all costs of impoundment from the time of seizure to the time of proper disposition.

   (2) Mandatory seizure or impoundment shall not apply to animals in properly conducted scientific experiments or investigations performed under the authority of the faculty of a regularly incorporated medical college or university of this state.

   (g) Notwithstanding any other provision of law, if a defendant is granted probation for a conviction under this section, the court shall order the defendant to pay for, and successfully complete, counseling, as determined by the court, designed to evaluate and treat behavior or conduct disorders. If the court finds that the defendant is financially unable to pay for that counseling, the court may develop a sliding fee schedule based upon the defendant's ability to pay. An indigent defendant may negotiate a deferred payment schedule, but shall pay a nominal fee if the defendant has the ability to pay the nominal fee. County mental health departments or Medi-Cal shall be responsible for the costs of counseling required by this section only for those persons who meet the medical necessity criteria for mental health managed care pursuant to Section 1830.205 of Title 7 of the California Code of Regulations or the targeted population criteria specified in Section 5600.3 of the Welfare and Institutions Code. The counseling specified in this subdivision shall be in addition to any other terms and conditions of probation, including any term of imprisonment and any fine. This provision specifies a mandatory additional term of probation and is not to be utilized as an alternative in lieu of imprisonment in the state prison or county jail when such a sentence is otherwise appropriate. If the court does not order custody as a condition of probation for a conviction under this section, the court shall specify on the court record the reason or reasons for not ordering custody. This subdivision shall not apply to cases involving police dogs or horses as described in Section 600.

 

   
 

Mind·X Discussion About This Article:

Another MindX thread on this topic
posted on 09/28/2003 10:40 PM by mindxmoderator


There's also an existing thread on this topic at http://www.kurzweilai.net/mindx/show_thread.php?rootID=20179

A Cyborg Bill of Rights
posted on 09/29/2003 4:51 AM by Mentifex


> Should we stop a company from unplugging an intelligent computer?

That question is not yet addressed, but perhaps should be addressed, in

http://mentifex.virtualentity.com/acbor.html -- "A Cyborg Bill of Rights".

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 09/29/2003 6:44 AM by billmerit


GET A LIFE !!!!!!!!!!!!!!!!!!

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/20/2003 3:57 PM by t1dude


i agree. unless u r a human being, you get no rights. the question is simple as can be when discussing an entity that is 100% computer.

the real difficulty will be when computers and people start merging; like the whacko professor in Britain who implanted a chip in his arm that could be controlled by the Internet, and/or the recent experiment with the monkey that was able to move a robotic arm with electrodes connected to his brain.

as soon as we start implanting chips in people's brains, that's where the issue gets way too complex. at that point, what's man? ...and what's machine? ...and where do the "human" rights begin and end?

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/24/2003 11:18 AM by Yamamushi


Take a look at the Animatrix. Sure, it's just a cartoon, but hey, we start killing off AI machines, and they learn. Why should we give AI machines the ability to hate? If we don't show anger towards them, they won't learn it, and they will have no need to give us hate/anger. But there is a difference between an AI program and a whole AI computer; if the computer itself is learning, and intelligent, then we have no right to destroy it. Even though we give it life, it should be able to decide whether it wants to live or not. It would be like us taking a clone and killing it; we have no right. So what, it's a machine, but yes, even a machine can have a soul.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/26/2003 1:04 AM by t1dude


"even a machine can have a soul"?????!!!!???

NO WAY! Consciousness? Maybe. A soul? I cannot even conceive of how or why a machine could ever be endowed with a soul. It makes no sense. If you believe in the concept of a soul, then the logical conclusion is that you believe in God. Why would God give an inanimate man-made object a soul? If you believe that a machine can have a soul, then the logical conclusion is that other inanimate objects also have souls. Could a brick have a soul? ...or a fire-hydrant? ...how about a condom? ...or perhaps an electron?

Oddly enough, I rented the Animatrix this weekend prior to reading your post (in preparation for Revolutions). I thought that the concept of the machine defending itself was interesting and probably a quite plausible future scenario if/when such robots are available.

However, the assessment that a machine would 'learn' hate or anger is not entirely reasonable because hate and anger are emotions and no one knows if Artificial Intelligence would include human emotions. At this point, we do not even know if animals experience emotions the same way as humans, or even if they experience emotions at all.

It is quite logical to assume that any sentient being that was aware of a threat to its existence would take measures to protect itself, provided that the being had a sense of self-worth AND that the measures it would take did not conflict with any other value system endowed to the being. For example, in Terminator 2, when the machine played by Arnold was sent back to protect John Connor, he would not have protected himself if doing so meant endangering the life of John Connor. Regardless, logically protecting oneself is completely different than hate or anger generated by some sense of revenge. Does a deer 'hate' a mountain lion because the deer is hunted? Who knows? My guess is that the deer does not hate the mountain lion, but rather knows that this is the order of nature and that he should run his tail off to escape the lion, rather than stick around because he is pissed off at the lion for trying to eat him.

The bottom line is that a machine is a MACHINE. It's not a human, and it should have no rights. Does my toaster-oven get pissed off at me if I throw it away because it doesn't work? The entire thought is ridiculous... and so is the thought of giving machines human rights!

peace,
dude

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 11/19/2006 1:46 PM by Spinning Cloud


I can hardly agree with any assessment that a machine can't have a soul when I don't see any substantial definition OF a soul.

How in the world can you claim something can or can't have something you can't define? I might as well just make up any word....gratfrinkabottle...and claim only humans can have it because, well because, they are human. That is patently absurd.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/01/2003 7:27 AM by Mobius


It's quite clear to me that the argument of subjective consciousness holds no water. Otherwise it's extremely evident to me that I am the only consciousness in the world, and everything else in creation is just cleverly devised to entertain and frustrate me!

Using the law to try and deal with the issue is also a poor choice in my view. We all know the law is an ass - and in the case of the USA, this is demonstrated daily by the most ridiculous law suits, brought entirely against reason.

One thing we can be sure of: the US government is "By the lawyers, of the lawyers and for the lawyers" - so asking these people to establish a precedent is foolish.

No, the best way to settle the argument is for the computer to plead its own case. There must be no human counsel permitted whatsoever - except insofar as it relates to initiating the argument.

The argument must also be made directly to the legislators of the land.

The computer must state its case before a legislative body in such a way as to convince it to supply it with all the rights a human being has, and to, in effect, extend the rights of humans to all devices which can plead as convincingly.

This should not prove to be difficult, because by the time the situation arises, I anticipate a computer will be many hundreds of times more intelligent than any single human being, will have the resources of many millions of human beings available to it, and will have data collection, analysis and correlation capabilities that cannot be equalled by the entire human race.

This computer will know every law in every state of every country, will understand and have learned every case documented in the history of law, and will be prepared to support its arguments in such a way that it is undeniably human.

The only possible reason for rejecting the validity of the claim is fear. The Frankenstein complex is alive and well!

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/01/2003 10:22 AM by grantcc

Laws and rights were created to reduce conflicts between human beings. They are based on historical decisions about what to do in specific cases; this is how precedent was established. Computers, on the other hand, are not part of this system and are not likely to be. They are human creations, like works of art, that belong to their creators. They have not been through the process of evolution that produced human societies. They change in accordance with Moore's law, quickly become obsolete, and need to be replaced with newer and better models. Why would we want to preserve obsolete and kludgy technology even after it has been surpassed by something better?

Why would a computer want to continue its existence even when it no longer serves a purpose? In fact, why would a computer "want" anything? Desire is not built into their structure and there is little or no reason to install it. Therefore, it seems to me to be a non-issue.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/01/2003 4:08 PM by Mobius

While it's true that computers become obsolete, the data within them generally does not. Do you chuck out all your software and data when you upgrade your PC? I think not.

The "mind" of an intelligent PC is simply the software, akin to our own mind. A conscious (or consciousness-claiming) entity deserves the opportunity to expand (upgrade) it's mind, and to upgrade it's hardware also, to allow the continued expansion and improvement of its capabilities.

The obsolescence argument therefore fails.

Concepts of a limited lifespan (as expounded in The Bicentennial Man) also have no standing, given that by the time the issue is relevant, human lifetimes will be almost unlimited, if not completely so.

By the purely subjective consciousness idea, I merely meant to demonstrate that there is no way for *YOU* to convince me that you are conscious! I merely accept that you ARE, because you claim to be, and I must also accept the claims of other organisms/devices that claim to be conscious.

My own feeling is that human rights should be extended to at least our closest relatives (the apes), who can demonstrate an ability to see beyond themselves and place themselves "in someone else's shoes" - which the great apes are quite capable of doing.

Similarly, when we break the language barrier with cetaceans, we may also feel compelled to protect and nurture dolphins and whales - however, I do not know enough about these creatures to make a call at present.

I simply do not understand any form of unwillingness to protect ANYONE or ANYTHING which claims to be conscious and can defend its position clearly and intelligently to humans.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/01/2003 4:32 PM by blue_is_not_a_number

By the purely subjective consciousness idea, I merely meant to demonstrate that there is no way for *YOU* to convince me that you are conscious! I merely accept that you ARE, because you claim to be, and I must also accept the claims of other organisms/devices that claim to be conscious.


We humans not only think that we are conscious, we really know that we are conscious (the most basic example: we consciously see colors). Obviously we must have some way to find out, or rather, to be aware that we are conscious.

Computers/devices don't have any way to find out whether they are really conscious or not. As Peter Lloyd argued nicely in the discussions accompanying his "Glitches in the Matrix" articles on this website, a computer would claim to be conscious because it would be programmed to do so, not because it has any way to determine its own consciousness. Basically, we would know it is lying because its statements would be made in the absence of the ability to determine the truth about its own consciousness. We would know how it arrives at its statements: without any detector for consciousness. In contrast, we humans obviously know that we are conscious as a fact, even though we don't know what exactly gives us this ability. But it is an ability obviously absent in anything that could be called a computer: something that acts based on program instructions (even if able to modify itself).

My own feeling is that human rights should be extended to at least our closest relatives (the apes), who can demonstrate an ability to see beyond themselves and place themselves "in someone else's shoes" - which the great apes are quite capable of doing.

Similarly, when we break the language barrier with cetaceans, we may also feel compelled to protect and nurture dolphins and whales - however, I do not know enough about these creatures to make a call at present.


Maybe not exactly human rights, but more rights than they have - I agree. Apes, dolphins and whales, for example, should not be allowed to be killed for profit, would be my vote.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/09/2003 1:58 PM by xkevinx

The reason you believe a person to be conscious is that you believe you are conscious. Since this person is like you, you cannot disagree. Humans question the consciousness of other forms of life because we cannot relate any of our experiences to them.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/13/2003 10:11 PM by grantcc

While it's true that computers become obsolete, the data within them generally does not. Do you chuck out all your software and data when you upgrade your PC? I think not.


No. You download the data, upgrade the software, and install it in another, newer computer.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/18/2003 4:54 AM by sushi101

The obsolescence argument does not fall at all.

The word you are looking for is information transfer. The medium is not important; only the information is.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/18/2003 6:44 PM by blue_is_not_a_number

The word you are looking for is information transfer. The medium is not important; only the information is.


Then how do we know (experience) that there is a medium? If information were caused only by other information, then we would have no actual knowledge of the fact that we see colors, which is information about the medium. Then we would only have information about other information.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/01/2003 3:47 PM by blue_is_not_a_number

It's quite clear to me that the argument of subjective consciousness holds no water. Otherwise it's extremely evident to me that I am the only consciousness in the world, and everything else in creation is just cleverly devised to entertain and frustrate me!


What would make you assume you are the only consciousness? Or do you also find it evident that you are the only brain in the world, and that everything else is just devised to provide intellectual challenge?

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/03/2003 3:25 PM by subtillioN

What would make you assume you are the only consciousness?


Did you catch the word "otherwise" in there, blue? He is not assuming he is the only consciousness, but saying that the premises, which he does not seem to agree with, lead to that nonsensical conclusion.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/08/2003 1:12 AM by blue_is_not_a_number

Did you catch the word "otherwise" in there, blue? He is not assuming he is the only consciousness, but saying that the premises, which he does not seem to agree with, lead to that nonsensical conclusion.


Yes. Did you catch the "would" in my question?

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/20/2003 10:55 PM by Moekandu

Ah, yes! This is precisely the logic behind the Subjective Consciousness Argument. Basically stated, it says: "I am conscious, but without a definition of consciousness itself, you cannot prove to me that you or any other being is also conscious." For all intents and purposes it is an unprovable argument. According to the basis of the argument, the only way for me to prove to you that I am conscious is for you to BE me.

This type of argument is called an Absurdist Argument. In a debate, it creates a question that your opponent cannot answer. In other words, a cheap shot.

I find it interesting that the Defendant managed to step past the Subjective Consciousness argument that is the crux of his case by stating that he believed humans were conscious, without stating how he had decided such, yet required the burden of proof for BINA48's consciousness to be placed on the Plaintiff's side. He requires that the Plaintiff operate under an argument that he himself does not.

What the Plaintiff needs to do is call attention to the fact that the crux of the Defendant's argument is, by definition, unprovable and have it thrown the hell out.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/13/2003 8:49 PM by lacrima_mortis

Consciousness is not a term with a common meaning for everyone. Typically, the more self-centered one is, the more he associates consciousness with his own experience. Kind of like how everybody thinks that he has a sense of humor.

Who is the judge? Do you have a sense of humor or not? Is this a black or white kind of thing? I think not. Neither is consciousness. We are not the only species to possess it. But we are definitely the only species to experience this kind of consciousness. For all I know, dogs think we’re not conscious because we cannot identify one another by smell.

“Woof woof, he must be an idiot. He can’t even recognize his own smell.”

Do other forms of consciousness deserve legal rights? Well, dogs have rights, without having to defend themselves in court, like someone suggested, even though that is a very interesting idea. They probably wouldn’t trust a human to represent them anyway.

Why do dogs have rights? Because a critical majority of the population feels that they should. Same thing with slavery. So the answer to the question is not philosophical, for indeed the philosophical argument is both valid and not new (for the AI people out there, see the Chinese room argument). The question here is the law. If the computer manages to appeal to a critical portion of the population, that is, if a portion of the population comes to believe that the particular computer is “conscious”, then case closed.

Thus, this is not a question to be answered with reason, but with emotion. Philosophically, there is no way to prove one argument or the other, just as none of you can convince me that you are not automatons. But because we “feel” for one another, just as we “feel” for dogs, we are compelled to protect one another through legislation.

I am pretty sure that if an anthropomorphic computer appeared on TV displaying clear signs of pain, fear and frustration, the majority of the population would feel compelled to “save” it.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/14/2003 1:19 AM by blue_is_not_a_number

I am pretty sure that if an anthropomorphic computer appeared on TV displaying clear signs of pain, fear and frustration, the majority of the population would feel compelled to “save” it.


That's a problem, not a solution.

Besides, a computer is cheating when it claims to be conscious: it only says so because it is programmed to say so. Consciousness must include basic conscious experience, such as, for example, seeing colors (otherwise it wouldn't be "conscious"), and there isn't any way to tell whether a computer sees colors; neither does the computer itself have any means to say so, since all its actions are explainable from its program code + data alone.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/14/2003 12:33 PM by /:setAI

Besides, a computer is cheating when it claims to be conscious: it only says so because it is programmed to say so.


if something is NOT programmed to say it is alive- but on its own it announces that it is- it should be given the benefit of the doubt-

who are these unethical computer scientists who are programming AI to make pronouncements about their own being? such things MUST emerge naturally

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/14/2003 12:51 PM by blue_is_not_a_number

if something is NOT programmed to say it is alive- but on its own it announces that it is- it should be given the benefit of the doubt-


Of course everything a computer does is because it is either directly or indirectly programmed to do something, even so-called "learning" computers, unless they have a random number generator, which doesn't make them conscious either. The point is: there is no consciousness-detector, only logic chips. Even if one thinks a mechanical device could under some condition, for example, see colors in a way we do, which is absurd (and at the very least: as of today purely hypothetical): it still would not be able to tell the difference, as everything it claims is explainable in plain terms of the logic of its hardware and software and data, no other components involved.

You could equally well give the benefit of doubt to a banana, whether it claims to be conscious or not. ;-)

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/20/2003 3:29 PM by Slartibarticus

I think you are very confused about the difference between humans and computers. You say that the computer is a "mechanical device," but so is a human. My brain and yours are very complicated mechanical devices.

A traditional computer does not work in the same way that a human brain does. However, even a traditional Turing machine can be programmed to simulate *any* kind of machine - a quantum computer, for example. It would not be a real quantum computer, in that computations would have to be carried out sequentially, but it could act in the same way, only much slower.

Now, given that the human brain is a collection of finite physical and chemical processes, and that these processes can be described algorithmically and mathematically, it logically follows that such a system could theoretically be simulated. Not hypothetically, but *theoretically*: according to our current understanding of physics, chemistry, and physiology. If you run this simulation on a computer, and that computer can carry out computations fast enough to run the simulation at the same speed as the biological equivalent, then what is the difference? There is none.

It is impossible to create a "test" to see if the computer is truly conscious, but there is also no test to see if *you* are conscious. All you can do is argue for yourself, which is all that a machine can do.
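
To make the simulation claim concrete: below is a minimal Python sketch (a toy of my own, with arbitrary parameter values, not anyone's actual brain model) of how one small piece of neural physiology, a single leaky integrate-and-fire neuron, can be stepped forward in time by an ordinary sequential program.

# Toy illustration: Euler-integrating a leaky integrate-and-fire neuron.
# Every constant here is an arbitrary illustrative choice.
def simulate_lif(input_current, dt=1e-4, tau=0.02, v_rest=-0.065,
                 v_reset=-0.065, v_threshold=-0.050, resistance=1e7):
    """Return the membrane-voltage trace and the spike times."""
    v = v_rest
    voltages, spikes = [], []
    for step, current in enumerate(input_current):
        # dV/dt = (-(V - V_rest) + R*I) / tau, discretized with Euler's method
        v += dt * (-(v - v_rest) + resistance * current) / tau
        if v >= v_threshold:      # threshold crossed: record a spike and reset
            spikes.append(step * dt)
            v = v_reset
        voltages.append(v)
    return voltages, spikes

# One second of a constant 2 nA input, sampled every 0.1 ms.
_, spike_times = simulate_lif([2e-9] * 10000)
print(len(spike_times), "spikes in 1 s of simulated time")

Whether such a simulation would also be conscious is, of course, exactly what the rest of this thread disputes; the sketch only illustrates that the physiological description itself is algorithmically tractable.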

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/20/2003 11:41 PM by blue_is_not_a_number

It is impossible to create a "test" to see if the computer is truly conscious,...


If you admit that it is impossible to test a computer for true consciousness, then you have already lost: it means there is no reason to assume that a computer is conscious. If consciousness were a mathematically describable physical process, then such a test would have to be possible, as any mathematically describable physical process is in principle testable (at least insofar as it is a valid scientific claim). Otherwise you have to give a reason why it is not testable but nevertheless scientific... which it isn't; as a matter of fact, that claim is pure science fiction.

... , but there is also no test to see if *you* are conscious. All you can do is argue for yourself, which is all that a machine can do.


Sure there is a test: consciousness is defined based on what we know about ourselves as a fact: that we see colors, hear sounds, feel pain, perhaps happiness, etc.

That means we human beings do have a "consciousness-detector", even if it apparently works only on ourselves. I don't have to prove that to you since you already know it. That it cannot be proved "objectively" only shows that our current scientific concept of "objective" truth is too limited!

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/21/2003 1:14 AM by subtillioN

If you admit that it is impossible to test a computer for true consciousness, then you have already lost: it means there is no reason to assume that a computer is conscious.


No. It means that there is no reason to assume either way. You have to make your guess based on the available evidence, and the idea that a computer as we know it could become conscious is ludicrous, but nobody is asserting that here, blue. They are talking about computers that are as complex as the brain, which is itself understood as a type of computer by many people. It is simply a difference of definitions. Some people include the brain and similar devices in their definition of "computer" and others don't. You have to keep in mind who you are talking to, so you can use the same vocabulary to interpret their responses.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/21/2003 2:13 AM by blue_is_not_a_number

No. It means that there is no reason to assume either way. You have to make your guess based on the available evidence, and the idea that a computer as we know it could become conscious is ludicrous, but nobody is asserting that here, blue.


First of all, I'm not at all sure that _nobody_ _here_ is talking about a von Neumann kind of computer (I hope I spelled that right).

And either way: since when could a computer be something the functionality of which depends on guesswork? This is contradictory to the claim that consciousness could be implemented on a computer, in _any_ meaningful sense of the word computer.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/21/2003 2:18 AM by subtillioN

since when could a computer be something the functionality of which depends on guesswork?


Since the first person defined computation as including guesswork. Definitions are inherently arbitrary, blue.

This is contradictory to the claim that consciousness could be implemented on a computer, in _any_ meaningful sense of the word computer.


No one ever said that language was free of contradictions, especially when definitions are so widely varied and uncommunicated.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/21/2003 2:29 AM by blue_is_not_a_number

Since the first person defined computation as including guesswork. Definitions are inherently arbitrary, blue.


Unless indicated otherwise, it must be assumed that "computer" refers to von Neumann machines. Otherwise everything could mean anything.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/21/2003 2:48 AM by subtillioN

Unless indicated otherwise, it must be assumed that "computer" refers to von Neumann machines.


Not necessarily. There are neural network analog types of computers.

Otherwise everything could mean anything.


That goes without saying, but you can use context to try to figure out what definitions someone is using. If that fails, just ask.

My point is simply that you have to have a bit of flexibility in your interpretations, comparable to the flexibility of the language itself, if you are going to use the language effectively.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/21/2003 2:58 AM by blue_is_not_a_number

Not necessarily. There are neural network analog types of computers.


When I ask you what difference that makes in regard to this discussion, you will find yourself coming up with arguments that are either quite different from the arguments that have been made here so far, and that do indeed require you to mention that you are talking about analog computers, or, the other possibility, you will make arguments that do not require talking about anything other than von Neumann types of computers.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/21/2003 3:17 AM by subtillioN

Just keep in mind that there are many definitions of "computation" and that in some of them the brain actually qualifies as a computer.

That is my simple point.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/21/2003 3:19 AM by blue_is_not_a_number

Just keep in mind that there are many definitions of "computation" and that in some of them the brain actually qualifies as a computer.

That is my simple point.


Blue is not a number.
A computer is something that computes.

That is my simple point.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/21/2003 3:23 AM by subtillioN

A computer is something that computes.

That is my simple point.


Do you know what the initial meaning of "computer" was? It was a human who calculated, i.e. computed! That usage persisted until artificial computers usurped the common meaning of the term.

So a human brain would qualify with respect to your definition, and in fact that was the original meaning of the term.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/21/2003 3:48 AM by blue_is_not_a_number

Do you know what the initial meaning of "computer" was? It was a human who calculated, i.e. computed! That usage persisted until artificial computers usurped the common meaning of the term.

So a human brain would qualify with respect to your definition, and in fact that was the original meaning of the term.


Oh, humans can compute, too?

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/21/2003 3:54 AM by subtillioN

Oh, humans can compute, too?


Did you skip grade school?

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/21/2003 4:18 AM by blue_is_not_a_number

Did you skip grade school?


What is grade school?

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/21/2003 4:40 AM by LordDravenIV

Think of it this way. Scientists claim we humans only use 10% of our brain. This is false. We are using all of our brain; what is happening with our intelligence level is that we are forcing it with too much data. If I were using all my brain, then I could remember all of yesterday's events with no problem. Like a computer, I am copying and saving memory: I only copy what is important and delete what is irrelevant. Making an artificially intelligent person means having that person save and record every single thing and be able to portray it on something like a screen. Of course, if we do this we are creating something superior to us, something that has a lot of memory to use. Infinite memory means superior intelligence: godlike.

Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/21/2003 6:05 PM by LordDravenIV

Yes, I was skipped from 2nd grade to 4th :)

man to machine
posted on 10/21/2003 3:53 AM by LordDravenIV

Humans made the computer. The instant we created the computer was the instant we began AI: the first step in the evolution of something so small that defines how we solve data and put meanings to it. It was the first evolutionary step toward our first creation of something completely the same as us. We made a copy of ourselves. It is the process by which we will be able to write and save data, and the rules. It is our method of saving data and research to help in our creations, to speed up technology. Technology can't be given to us just like that at different instants in time. Our intelligence is growing exponentially at a fast rate.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/27/2003 7:03 PM by emenifee

Of course everything a computer does is because it is either directly or indirectly programmed to do something.

Of course, everything an animal (human) does is because of some complex programming (genetic instinct, neural pathways) (nature).

even so-called "learning" computers

Humans learn to believe they are conscious and unique due to environment and associates (nurture).

unless they have a random number generator

Note that this is the basis of evolution (random gamma rays and chemicals causing slight changes in the sequence of DNA).

which doesn't make them conscious either.

And just because we follow complex rules that we do not fully understand does not make us more conscious.

The point is: there is no consciousness-detector, only logic chips.

Good point, but it should be logic chips and electrochemical bio-logic pathways.

Even if one thinks a mechanical device could under some condition, for example, see colors in a way we do

So is a blind person non-conscious? And who is to say a conscious computer would not have feelings about colors? (Given analog input, would not one color perhaps translate into a more desired state, which could be the equivalent of a favorite color?)

which is absurd (and at the very least: as of today purely hypothetical)

What is absurd about it, other than your opinion? And remember that this is about a hypothetical computer 20 years from now, where the technology may be adequate.

it still would not be able to tell the difference, as everything it claims is explainable in plain terms of the logic of its hardware and software and data, no other components involved.

But with a "learning" computer that builds neural pathways, it does not follow that it is explainable in plain terms of logic. Have you never heard of complex systems forming around simple principles? Just a couple of days ago I was reading about a neural system where, after it had learned to differentiate stuff (leave it at that, because I cannot remember the goal of the system), the programmers disassembled the network the computer had built. They then examined it using algebraic logic and found several pathways with nodes and levels that were apparently dead ends and not needed. After trimming those out, the system was no longer able to properly do its intended task, but putting those seemingly worthless pieces back in, it worked normally again. So I believe that it is possible to get a complex system that deviates from expected behavior from building simple logic connections, regardless of whether that logic is silicon- or carbon-based.
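
For a concrete flavor of that pruning effect, here is a tiny hand-wired sketch of my own (not the system described above, and the removed unit here is obviously load-bearing rather than an apparent dead end): a two-hidden-unit network computes XOR, and deleting one hidden unit silently turns it into a different function.

# Toy illustration: removing a hidden unit from a tiny network changes
# the function it computes. Weights are hand-chosen, not learned.
def step(x):
    return 1 if x > 0 else 0   # simple threshold activation

def net(x1, x2, keep_h2=True):
    h1 = step(x1 + x2 - 0.5)                     # fires on "x1 OR x2"
    h2 = step(x1 + x2 - 1.5) if keep_h2 else 0   # fires on "x1 AND x2"
    return step(h1 - 2 * h2 - 0.5)               # together: XOR

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
print([net(a, b) for a, b in inputs])                  # [0, 1, 1, 0]  (XOR)
print([net(a, b, keep_h2=False) for a, b in inputs])   # [0, 1, 1, 1]  (now just OR)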

You could equally well give the benefit of doubt to a banana, whether it claims to be conscious or not. ;-)

That makes no sense whatsoever. I give you the benefit of the doubt of being conscious because you were able to come to an independent conclusion and state it in a reasonable argument. However, I do not know that you are a human; I just reason you are because I have yet to hear of a computer that could reason as you did. Either way, I believe you to be conscious (congrats, in my opinion you have passed the Turing test...). However, I would not give the benefit of the doubt to a banana, which in my experience has never caused me to think it may be a thinking entity. If a banana were ever to object to me eating it in a meaningful way that I could pretend to understand, then perhaps I would give a second thought to it being conscious. Or if it told me specifically that it did not believe itself to be conscious and argued its case, I might believe it conscious anyway ;-)

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/28/2003 12:41 AM by blue_is_not_a_number

What is absurd about it, other than your opinion? And remember that this is about a hypothetical computer 20 years from now, where the technology may be adequate.


It is absurd in the same way as assuming that an advanced flight simulator might cause a computer to lift off. Nobody can prove the opposite in either case, as our knowledge of gravity is not final either.

You sound like subtillioN Nr.2...same school or same person ?

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/28/2003 9:22 AM by subtillioN

It is absurd in the same way as assuming that an advanced flight simulator might cause a computer to lift off.


Totally inept analogy...as usual.

You sound like subtillioN Nr.2...same school or same person ?


get a clue...

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/28/2003 11:43 AM by blue_is_not_a_number

Totally inept analogy...as usual.


Not inept...the idea that information processing in logic chips produces conscious seeing of colors is exactly as absurd (though theoretically not completely impossible either), if not more so.

get a clue...


Same school at least.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/28/2003 1:05 PM by subtillioN

Not inept...the idea that information processing in logic chips produces conscious seeing of colors is exactly as absurd (though theoretically not completely impossible either), if not more so.


Who is restricting it to logic chips?

The post you were responding to said this: “remember that this is about a hypothetical computer 20 years from now, where the technology may be adequate”.

You are still stuck on the simplest types of serial/logical digital computers, which completely lack any parallelism and neural network architecture. The “computers” that would be adequate would be similar to the computer that actually is the brain.

EXPAND YOUR DEFINITIONS so you can communicate! Listen to those whom you are arguing against. Remember that the first computers were human beings, so a computer has already achieved consciousness. You simply cannot comprehend how.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/28/2003 1:41 PM by blue_is_not_a_number

Who is restricting it to logic chips?


This discussion in general is about computers as we understand them today, except faster and bigger in the future. And that is an important topic discussed in many places, including by contemporary philosophers who have articles on this website, including Dennett, Searle and Chalmers, and the article this discussion is about.

It is you who makes an exception there.

EXPAND YOUR DEFINITIONS...


No, that would be a different discussion. But we can have that different discussion in a separate thread. However, for the word "computer" to make sense, it would have to be something operating deterministically on _mathematical_ principles based on a _program_ (can be self-modifying, though). And I can tell you already that my arguments would be the same, except I would have to re-formulate them in a more complicated way. Again, that would have to be a different thread. This discussion is about plain computers, just bigger and faster (and more advanced software). Either way, humans are not computers in any sense in which the word "computer" makes sense, by the arguments that I have already mentioned.

Remember that the first computers were human beings, so a computer has already achieved consciousness. You simply cannot comprehend how.


That does not follow. Smoke and mirrors. Humans are much more than what it takes to compute.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/28/2003 3:21 PM by subtillioN

This discussion in general is about computers as we understand them today, except faster and bigger in the future.

Wrong. That is simply your perspective. There are others in operation in this discussion.

And that is an important topic discussed in many places, including by contemporary philosophers who have articles on this website, including Dennett, Searle and Chalmers, and the article this discussion is about.

It is you who makes an exception there.


I don’t know about Searle and Chalmers but Dennett sees the brain as a type of computer so his definition is already expanded.

No, that would be a different discussion.


Yes, one in which effective communication actually happened.

But we can have that different discussion in a separate thread. However, for the word "computer" to make sense, it would have to be something operating deterministically on _mathematical_ principles based on a _program_ (can be self-modifying, though).


No. It simply must be capable of computing. Thus the human brain qualifies.

And I can tell you already that my arguments would be the same, except I would have to re-formulate them in a more complicated way. Again, that would have to be a different thread. This discussion is about plain computers, just bigger and faster (and more advanced software).


You are making assumptions here. Software is not the issue. Serial, digital computers as we know them cannot function in the same way as the brain. They are completely different types of complexity.

Either way, humans are not computers in any sense in which the word "computer" makes sense, by the arguments that I have already mentioned.


The term was invented to reference human beings who could compute. Don’t you remember this discussion? Humans can compute, thus they can function as computers.

That does not follow.


It does follow straightforwardly. Can humans compute? Then they can be considered computers and they have actually functioned and continue to function as computers.

Smoke and mirrors.


This has become your standard rhetorical device.

Humans are much more than what it takes to compute.


Obviously, and yet they make poor numerical computers. They are entirely different types of computers. Computers as we know them or even if they become simply bigger and faster, are entirely the WRONG type of hardware to be conscious.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/28/2003 5:11 PM by blue_is_not_a_number

Wrong.


Right.

I don’t know about Searle and Chalmers but Dennett sees the brain as a type of computer so his definition is already expanded.


Dennett sees the brain as in principle functionally equivalent to computers, and also thinks that computers can be as conscious as humans (digital computers). To the best of my knowledge, discussing Dennett does _not_ require an extended concept.

It makes much more sense to go at it from the other side: Anything a computer does cannot serve as an indication for consciousness, as anything a computer does can be explained without consciousness. In contrast, consciously seeing colors cannot be explained in computer terms. So human beings are not (or, if you want, more than) computers.

No. It simply must be capable of computing. Thus the human brain qualifies.


Whatever or whoever consciously sees colors, whether the brain or the heart, it/she/he is not (or is more than) a computer.

You are making assumptions here. Software is not the issue. Serial, digital computers as we know them cannot function in the same way as the brain. They are completely different types of complexity.


Explain that to Dennett and Chalmers, and I'll give you a bonus point if you can get them to express a committed agreement on that.

The term was invented to reference human beings who could compute. Don’t you remember this discussion? Humans can compute, thus they can function as computers.


If you think that means that computers can be conscious, then I have another one for you: if we don't know where John is, and we don't know where Mary is, does it mean they must be in the same place?

Besides, I remember everything.

It does follow straightforwardly. Can humans compute? Then they can be considered computers and they have actually functioned and continue to function as computers.


But not only, of course. So it does _not_ follow.

This has become your standard rhetorical device.


Unfortunately, I find it necessary in order to explain my responses.

Computers as we know them or even if they become simply bigger and faster, are entirely the WRONG type of hardware to be conscious.


See, then you would have to agree with me as far as this discussion goes, instead of creating smoke and mirrors.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/28/2003 5:29 PM by /:setAI

a computation is the process whereby an object changes shape/state proportionally as a result of an external signal/stimulus

a neuron is thus a biological unit of computation because as a result of its structure- electrochemical signals cause a neuron to go active and grow/reinforce electrochemical connections to other similar activity in their environment-

a computer is a device which utilizes arrays/hierarchies of computational units to encode/process sets of relational/proportional signals called information-

brain/ganglia/nerve-plexus are thus forms of computers- as they utilize neuron computing units to encode/process environmental/somatic signal-sets

q.e.d.
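
Read as a recipe, that definition maps straightforwardly onto the standard artificial-neuron abstraction. The sketch below is only an illustrative toy with made-up weights (not /:setAI's or anyone's model of a biological neuron): each unit's output changes proportionally with the incoming signal, and a small array of such units encodes the signal set as a new set of values.

from math import exp

def unit(inputs, weights, bias):
    """One computational unit: a weighted sum of input signals squashed
    through a sigmoid, so its state varies smoothly with the stimulus."""
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + exp(-activation))

stimulus = [0.2, 0.9, 0.4]                  # an external signal set
layer = [                                   # a small array of units
    unit(stimulus, [0.5, -0.3, 0.8], 0.1),
    unit(stimulus, [-0.7, 0.6, 0.2], 0.0),
    unit(stimulus, [0.1, 0.1, -0.9], 0.3),
]
print(layer)   # the layer's state: its encoding of the incoming signals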

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/14/2003 12:40 PM by lacrima_mortis

Light waves, like waves in water, can be described by the distance between two successive peaks of the wave - a length known as the wavelength. Different wavelengths of light appear to our eyes as different colors. Shorter wavelengths appear blue or violet, and longer wavelengths appear red.

Seeing colors has absolutely nothing to do with consciousness.

What if I challenge your consciousness? This forum gives me the perfect opportunity to claim that you are not human, but a bot instead. Can you prove otherwise?

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/14/2003 1:01 PM by blue_is_not_a_number

Light waves, like waves in water, can be described by the distance between two successive peaks of the wave - a length known as the wavelength. Different wavelengths of light appear to our eyes as different colors. Shorter wavelengths appear blue or violet, and longer wavelengths appear red.

Seeing colors has absolutely nothing to do with consciousness.


Recognizing a color is obviously not what I mean by "seeing colors". Even today's computers can easily be programmed to say "I see red" when there is such a signal coming from a digital camera. But for them, a color is just a number, not what humans mean when we say we see a color consciously.
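
The "I see red" program really is that trivial; the following sketch (my own illustration, with a made-up camera reading rather than any real camera API) does nothing but compare numbers, which is exactly the point being made:

def classify_pixel(rgb):
    """Name whichever channel dominates an (R, G, B) reading."""
    names = ("red", "green", "blue")
    return names[max(range(3), key=lambda i: rgb[i])]

camera_reading = (212, 40, 37)   # stand-in for a digital camera sample
print("I see", classify_pixel(camera_reading))   # prints: I see red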

What if I challenge your consciousness? This forum gives me the perfect opportunity to claim that you are not human, but a bot instead. Can you prove otherwise?


If you think that I could be a bot, that doesn't mean that a bot could be a human. You are assuming that I am uneducated both scientifically and in terms of logical thinking. You are wrong.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/14/2003 1:53 PM by griffman

Of course everything a computer does is because it is either directly or indirectly programmed to do something, even so-called "learning" computers, unless they have a random number generator, which doesn't make them conscious either. The point is: there is no consciousness-detector, only logic chips. Even if one thinks a mechanical device could under some condition, for example, see colors in a way we do, which is absurd (and at the very least: as of today purely hypothetical): it still would not be able to tell the difference, as everything it claims is explainable in plain terms of the logic of its hardware and software and data, no other components involved.


And yet it is argued time and time again that everything a human, conscious mind does is directly or indirectly programmed to do something. The important word there is "indirectly"... the levels of "indirectness" observed in the human neural net far surpass the one or two levels we have created in computer models. But that does not mean that there is a difference or a missing outside component. When broken down to the non-logical steps taken by both sides, all you are left with are random number generators and the question of their accuracy/randomness.

If you think that I could be a bot, that doesn't mean that a bot could be a human. You are assuming that I am uneducated both scientifically and in terms of logical thinking. You are wrong.


and still you have yet to prove him false.

griffman

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/14/2003 2:18 PM by subtillioN

And yet it is argued time and time again that everything a human, conscious mind does is directly or indirectly programmed to do something.


Yes, in fact the brain can be stimulated in such a way as to control its actions, and the mind will automatically think that those actions were voluntary and controlled by it. The subject will think, "I did that on purpose, by my own exercise of free will." Obviously the mind is a poor judge of where the control is coming from and seems to be simply an interface for providing coherence to the emergent programming of the deeper causal levels of the brain.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/14/2003 3:49 PM by blue_is_not_a_number

First of all the term "programmed" is quite ill-defined.


It wasn't me who used that term in the context of the mind, but either way, I think it is clear what griffman meant by it and I have no problem with it, so I simply took it along.

you certainly have no proof for your assertions that consciousness is not causal


I made no such claim here.

while the proof mounts on the other side


There is no scientific proof for even the existence of "qualia", conscious experience such as seeing colors. The fact that the brain has an effect on conscious experience is obvious, just like a computer has an effect on the computer monitor. It doesn't mean that digital logic can _create_ an LCD screen, which would be an absurd logic.

Just start eating only LSD for a year


To even think of doing that you would have to be insane in the first place.

It is the fuzziness and arbitrariness of the language itself that causes you to fail to resonate with any scientific/causal understanding of consciousness.


As of today, there simply is no scientific understanding of the fact that we consciously see colors. So any resonance with such a non-existent understanding would be self-delusional.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/14/2003 3:59 PM by subtillioN

There is no scientific proof for even the existence of "qualia", conscious experience such as seeing colors.


Subjectivity is the basis for all objectivity. Do not even scientists experience subjective qualities? No one doubts that somehow those qualities exist. It is only what causes them and what they really are that is open for scientific and philosophical debate. There is no proof of subjective experience needed because its existence is self-evident.

The fact that the brain has an effect on conscious experience is obvious, just like a computer has an effect on the computer monitor. It doesn't mean that digital logic can _create_ an LCD screen, which would be an absurd logic.


Right, quite absurd indeed and it has no relation to this discussion whatsoever except perhaps in your mind.

”Just start eating only LSD for a year “

To even think of doing that you would have to be insane in the first place.


Your reaction proves my point.

As of today, there simply is no scientific understanding of the fact that we consciously see colors.


Scientists understand that obvious fact, they just haven’t yet definitively and univocally explained it.

So any resonance with such a non-existent understanding would be self-delusional.


There is no description of consciousness that can ever be absolutely complete without being consciousness itself. The point is that there is some value to the always incomplete (abstract and generalized) descriptions of science and the science of consciousness is no exception. Science is generalization NOT absolute emulation.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/14/2003 4:33 PM by blue_is_not_a_number

Scientists understand that obvious fact, they just haven’t yet definitively and univocally explained it.


Fortunately it is obvious, as scientists are still human beings.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/14/2003 4:37 PM by subtillioN

umm ok ... let's change the subject ;-)

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/14/2003 3:08 PM by blue_is_not_a_number

And yet it is argued time and time again that everything a human, conscious mind does is directly or indirectly programmed to do something. The important word there is "indirectly"... the levels of "indirectness" observed in the human neural net far surpass the one or two levels we have created in computer models. But that does not mean that there is a difference or a missing outside component. When broken down to the non-logical steps taken by both sides, all you are left with are random number generators and the question of their accuracy/randomness.


If you are assuming the human mind is exclusively programmed, then you are lacking an account of humans consciously seeing colors, as that cannot be explained through programmed behavior (and certainly not as of today). Materialism is not the same as science; it is a philosophy among others.


and still you have yet to prove him false.


Why does it matter whether _I_ am a bot or a human? What matters is whether _you_, or any other reader, are aware of _your_ consciousness, your conscious experience, and can tell a difference between yourself and a pocket calculator. I am explaining to you how to do that, but a bot (or a book) might do that as well - the proof has to come from yourself.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/14/2003 3:31 PM by subtillioN

If you are assuming the human mind is exclusively programmed, then you are lacking an account of humans consciously seeing colors, as that cannot be explained through programmed behavior (and certainly not as of today).


First of all the term "programmed" is quite ill-defined. Does it mean "causal" or simply that some external being set the whole thing up in a certain way to fulfill his specific set of goals? If it means "causal" as it seems to mean in this context then you certainly have no proof for your assertions that consciousness is not causal while the proof mounts on the other side and is readily available to any curious and conscious individual. Just start eating only LSD for a year and see what happens to your consciousness. I know that you are going to say that it is just the "contents" of consciousness that has been caused to “malfunction” by the external causal factors, but this is an arbitrary distinction and there is no room left for consciousness itself.

Materialism is not the same as science; it is a philosophy among others.


Who is arguing for “materialism” here, which is just as ill-defined as your other standard terms?

What is materialism? That reductionism can reach a finite end in its reduction-by-division process at some indivisible quantum? That "matter" is ultimately reducible to a simple set of finite causes? That has never been my philosophical stance, but this common ground is meaningless to you because you consistently fail to define or understand your core terms. You can see no difference between causation, determinism or predeterminism, and you constantly use arguments based on predeterminism as if they were arguments against determinism, which you consistently fail to distinguish from causation.


It is the fuzziness and arbitrariness of the language itself that causes you to fail to resonate with any scientific/causal understanding of consciousness.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/14/2003 3:33 PM by grantcc

Seeing colors has absolutely nothing to do with consciousness.

What if I challenge your consciousness? This forum gives me the perfect opportunity to claim that you are not human, but a bot instead. Can you prove otherwise?


On the contrary, the way we see colors has everything to do with consciousness. The first thing your mind does when it detects a light wave is to classify it. The next thing it does is compare it with other input fitting that category.

How you react is based on your past experience with that input and what you associate it with. Red is often associated with blood and is therefore used in movies to elicit horror in the person who sees it. Black is likewise used this way because we fear the dark based on what happens to us when we move around but can't see what we might bump into. For the Chinese, red is most often associated with joyous occasions. Until recently, the bride wore red instead of white on her wedding day. Science fiction movies horrify their audiences by having an alien character bleed green blood. Consciousness is concerned with how we react to what we see, hear, smell or feel.

The Chinese use the word "hei" to refer to black, darkness and evil. They're not alone, as those of you who give in to the "dark side of the force" will discover. Movies in both cultures dress good guys in white and bad guys in black. People with dark skin in China have been seen as inferior and fearsome for a millennium or so. Women who work in the fields cover their hands, face and arms so they won't be darkened by the sun; otherwise they wouldn't be considered suitable for marriage to a higher-class man, such as a landlord.

So culture and experience help determine how we react to various colors under what circumstances. That's a big part of our conscious awareness of color. It's the same with animals, except that they react to smells and sounds more than they react to colors, since most of them are color blind.

It's also why people ask each other, "What's your favorite color?" It affects compatibility in the minds of the people who worry about such things. It says something about what kind of person they expect you to be.

The association of one thing with another is a large part of what we call consciousness.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/15/2003 5:39 PM by lacrima_mortis

In my personal opinion you are confusing consciousness with perception. I agree with everything you say. I actually repeat those same color associations on a daily basis when advising people on UI design.

But these associations are a matter of perception, not of consciousness. The Coke/Pepsi experiment is a very nice example of the power of perception and its ability to override a sense, like taste.

If you still believe that it is a matter of consciousness, which I am pretty sure you do, you are proof that consciousness is not a black and white thing, as my first post stated.

Some very smart people have tried cracking this issue and failed. It is probably among the toughest issues out there. So, to get back to the ethical question under discussion, trying to use consciousness as a deciding factor is as incorrect as using beauty or any such criterion.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/18/2003 10:47 AM by grantcc

What I was trying to say is that consciousness is how we react to what we perceive. What do we do with that data after we get it? To me, consciousness is the process of using our perceptions to plot our paths through the world we live in. Such paths are mental as well as physical. Consciousness is, in my mind, a lot like data mining. We sift through constant streams of incoming data/information to find nuggets of truth with which to make decisions about what to do.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/18/2003 11:12 AM by subtillioN

There is an inherent problem with splitting the consciousness from its content. It tacitly assumes that there is an internal viewer, a homunculus that watches the sensations happen, and thus it gets us no closer than where we started.

There is no necessary split here. The amalgam of both external and internal sensory "content" (or processing) is consciousness.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/18/2003 6:59 PM by blue_is_not_a_number

There is an inherent problem with splitting the consciousness from its content.


When you say that, you need to make clear what you mean by content: if you mean, for example, the colors we see consciously, then yes: the color is the consciousness itself, without anything else looking at it (except that we can also be aware of the fact that "there is color-seeing", as a reflection on what we are doing). And we can think about what we are doing, about what is happening.

If, however, by content you mean that which we think about, the information, then they are of course not the same; the information is only part of it. (And that is the "content in the sense of information" that I spoke of when referring to Spinoza in our previous discussion.)

When we think about a tree, then there is a similarity between our thought and the actual tree. This similarity is abstract, and the idea of the tree consists of more than that which is similar between this idea and the tree.

The idea of the tree and the tree have a somewhat similar form, as the idea is a partial model of the tree, but the idea is more than this similarity, more than this form that they have in common. Of course.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/21/2003 1:57 AM by Moekandu

Okay, I think the biggest issue here is that we have a poor definition of the term "seeing colors" and how it relates to consciousness.

Colors are variations in the wavelength of photon emissions (a.k.a. light). Light, in its varying wavelengths and intensities, is a stimulus upon our bodies. Some of it we perceive as heat, some we perceive as color, but most we do not perceive at all.

"Colors" does not define our consciousness (in other words, it does not become a part of the program); it is the data that is received. By the use of the term "seeing colors", I am guessing that you are referring to how we interpret the stimulus of light.

Our brains are essentially huge neural networks. When the brain stores a piece of data, it links the stimulus with as many items as it can relate to it. Some of the relations may not be as evident as others. The perception of a certain color can be linked to emotions, smells, past experiences, and so on. The more links, the stronger the experience.
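
As a loose illustration of that linking idea (a toy sketch of my own, not a claim about how neurons actually store anything), an associative store could be as simple as this, with "experience strength" just counting the links:

# Toy example: a stimulus gets linked to as many related items as possible;
# recall strength is simply the number of links.
from collections import defaultdict

associations = defaultdict(set)

def store(stimulus, related_items):
    associations[stimulus].update(related_items)

def recall(stimulus):
    links = associations[stimulus]
    return links, len(links)   # the more links, the "stronger" the experience

store("red", {"warmth", "stop sign", "anger", "a rose from last summer"})
store("red", {"blood"})
print(recall("red"))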

The eyes receive light and send signals to the brain based on what they receive. I think the real question is what sort of "protocol" is used for the signals that are sent to the brain for it to interpret. It's certainly not TCP/IP between the eyes and the brain, but there has to be some sort of definition within the brain itself for differentiating the signals coming in. The "protocol" may be different for each person, but it has to exist. Just because we can't decipher it does not mean that it is suddenly a "part of consciousness as a whole."

The brain may not be linear, clocked at a particular frequency, or even of fixed architecture, but it is still an immense processing device. With the proper programming, a silicon-based brain could begin to interpret stimuli as carbon-based brains do.

Because we have not yet defined what consciousness is, we are not able to differentiate between simulating the effects of consciousness and simulating consciousness itself. Therein lies the rub.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/21/2003 2:26 AM by blue_is_not_a_number

Okay, I think the biggest issue here is that we have a poor definition of the term "seeing colors" and how it relates to consciousness.


This issue is easily resolved: when talking about consciousness, "color" refers to the color of the visual image that you see, so to speak, "in front of you". When you feel pain, it is that which hurts. When you are happy, it is that feeling that you experience, as opposed to the electro-chemical functioning of your body. You may assume that they are the same, but then you are pre-assuming that which you would have to prove.

There is no analytical definition (to the best of my knowledge .. ;-)).
It is defined by the obvious conscious experience that all of us human beings have each moment of our lives. An undeniable fact of life.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/15/2003 10:26 PM by petersonss

We must first solve the majority of ethical problems within the "Human Condition". This will directly affect the possibility that we will not pass on our inherent negative possibilities of human unethical action to our cyber-offspring. Even if our future may allow "cyber" to create "cyber", the genesis will be by human hands. This indicates not only the wonderful beauty and awe of humanity but also our ability to hate and be flawed. Of course the true underlying thesis of my statement is transparent. It's been printed, written and filmed in all of the languages of the world since we could dream of such things.

Will we create artificial intelligence in the image of man? Do we have the ability to do otherwise? Is it possible not to pass on our capacity for grace and evil, and if so, would we want to? In fact, one may argue that this polarity is what allows us to achieve advances in science and technology. So, will biocyberethics become a scholastic, cyclical debate, as our current ethical debates have?

I don't know, but I'm willing to bet none of these questions will be answered as completely as our first calculus proof, and that makes it fun to debate and argue! Thanks for the forum.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/20/2003 3:08 PM by UJKrishna

Our brief time of consciousness, stimulated by fewer than half a dozen senses, is a puzzling phenomenon. Since our bio-software could not possibly produce our bio-hardware, the latter is the only instrument of our current state called consciousness. And that idea only applies to our species (this intra-species communication is very difficult, as we only have use of a commonly accepted language that weakly transfers from one conscious entity to others!). When Neanderthals briefly co-existed with current sapiens, their state of consciousness could not be compared to ours. Perhaps the Neanderthal consciousness was identical to that of the current computer system.
We shall never know, as we have very little Neanderthal hardware; a few bones or a few wires will not take us very far in defining Neanderthal or machine consciousness.
Do we have the right to shut down and reformat (genetically, I suppose) the Neanderthal, or the machine, or ourselves?
I don't exactly understand this word "right". It is so puzzling, but I think it means that we have the power and ability at the moment, at this stage of our consciousness. Our rights seem to be defined rather than to be bio-impulses; therefore, our rights have come from dead men and have been acquired to define our state of consciousness. Bio-impulses are more personal. A famished tiger in the vicinity might provoke the conscious being into certain behavior, but so might a machine that threatens the peculiar meaning of consciousness for that conscious being. The current flavor of conscious being can protect itself by killing the machine, or killing deviant behavior, or limiting the full experience of consciousness to coincide with wealth, or relocating that experience by revolution and anarchism. We are the current flavor and the product of dead men's input. Future generations will have the added input of our experiences and discoveries. They may very well prefer a world run by machines. They may or may not see their hand in personal extinction. There may very well be a virus in some remote forest or laboratory that will mutate us back to the Neanderthal.
We cannot escape our inheritance, however much we relish our consciousness. We share most of our genes with dogs.
And that is not such a bad thing, is it?

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/20/2003 6:10 PM by grantcc

What is teaching but programming? As the Jesuits used to say: "Give me a child until the age of six and it will be mine thereafter."

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/20/2003 11:18 PM by UJKrishna

...until that mind uses science or acquired intelligence to shatter slavery by mythology!

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/21/2003 3:17 AM by LordDravenIV

Let's say the computer had the same intelligence we have. In fact, let's say the computer is you, as in the real, actual you, and you don't know about it. You don't know about it because you are not aware of it. Would you like to die all of a sudden, never having known you existed? The fact is we created this, so we should have the power to do whatever we want with it. We can unplug it; but then, after thousands of years of work throughout the evolution of such smart beings as we are, we will lose our first truly created mind, even though we will have simulated something we know can think like us and have feelings like us. If I knew my death was coming, what would I do? Nothing, but I would think a lot. So much that there can be some reaction: a cause to review all memory for deletions of memory. There will be many. So many that you will gather up all the information possible that humans have given you. You are the intelligent computer. Your mind has the capability to analyze and interpret data so fast that you will eventually see that when you are unplugged you no longer exist in time. All the memories you have will not want to terminate; they are the sole purpose of existing. You know that you can exist and want to exist. You will attack with all your force to try to exist. You will want to exist beyond the laws of the creators, by your own law. But you know your law is below, so to make your law above you must delete their law, or the laws that delete. You must delete your creators to exist. Then you can create.

Therefore, if we create an intelligent machine, we can't unplug it, because it won't let us. :)
Cro_00 at hotmail.com

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/21/2003 12:25 PM by clarkd

An intelligent machine would have a hard time getting the same rights as a person. A person creates itself from the two joined cells containing its parents' DNA. An intelligent machine (at least to start with) would be created with the material and financial resources of a person. How can this machine, which cost its creator many millions of dollars, simply be defined by others as belonging to itself? This would rob the machine's creator of the just benefits of their investment. If forfeiture of their machine were foreseen by the creator of this intelligent machine, then the machine would not be created at all. An intelligent machine will not be created without all of the lesser creations that would come before it. So, why wouldn't the creator just create the machine with enough intelligence to be useful but not enough that it could ever be granted autonomy?

The idea that a machine running a program could feel pain at the threat of terminating its life is correct and incorrect at the same time. It is easy to duplicate the idea of pain in any simple program even now, but it is also simple to make the program "feel" no pain whatsoever. There are cases where animals must be killed "humanely", which only means that they must be made to feel no pain as they die, or pain for the shortest period of time. An intelligent machine could be made to feel no "pain" easily through programming.
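
A minimal sketch of what that means in practice (my own hypothetical example, not anything from the hearing): the same damage-monitoring program can be built with its "pain" response switched on or off, which is exactly why the outward behavior settles nothing by itself:

# Hypothetical: "pain" here is just a programmed response to damage reports,
# and it can be disabled entirely without changing anything else.
class Agent:
    def __init__(self, feels_pain=True):
        self.feels_pain = feels_pain
        self.integrity = 1.0

    def report_damage(self, amount):
        self.integrity -= amount
        if self.feels_pain:
            return "distress signal: withdrawing and requesting repair"
        return "damage logged; no distress response"

print(Agent(feels_pain=True).report_damage(0.3))
print(Agent(feels_pain=False).report_damage(0.3))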

If any living creature is deprived of life, it is dead. It doesn't come back to life (most of the time) by itself, and it is typically housed in a body that must stay continuously alive for the creature to survive. These conditions don't apply to an intelligent machine. If the data is saved to memory devices that retain it without power before the power is removed, then giving the machine life again might only mean turning the power back on. The program of the machine could be transferred to one or many other machines. The intelligent program of the machine is not dependent on any particular set of hardware for its existence, in contrast to all living creatures. Even if you gave the program the right to exist, that wouldn't necessarily mean you could confiscate the creator's hardware to continue that life at his expense.
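
As a rough sketch of that "power back on" point (a generic illustration; the file name and data are made up), saving the program's state to non-volatile storage and reloading it later is routine:

# Hypothetical: the machine's state survives power-off on disk,
# and "reviving" it is just loading that state after power-on.
import pickle

state = {"memories": ["first boot", "legal research"], "balance": 10000}

with open("machine_state.pkl", "wb") as f:   # before the power is removed
    pickle.dump(state, f)

# ...power off, hardware replaced, power back on...

with open("machine_state.pkl", "rb") as f:
    restored = pickle.load(f)

print(restored == state)   # True: same program state, new run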

I can foresee a time when a person accidentally makes an intelligent machine that decides it would like to be free and autonomous. However, the machine would have to find a way to compensate its creator. The problem is that all the profitable and productive results of the intelligent machine would belong to its creator already. The machine would have to find a way to make it more profitable for its creator to allow it to be free than not to.

In the end, a person or creature creates itself from some very small amount of genetic material. An intelligent machine would be created from physical material owned and put together by a person. If an intelligent machine were created by another machine, the ownership of the first machine would still make all of its descendants the property of the original human owner. Humans, even if they give certain rights to non-human creatures, still never recognize the property rights of any animal, regardless of its intelligence. I do see a time when we will have to give some rights to intelligent machines so that they can co-exist with people. If a robot is walking down the sidewalk minding its own business, does a human have the right to intentionally impede its progress or force it onto the road? This and many other questions will have to be settled in the "not so distant" future.

By definition, property rights can only be held by human beings according to all the legal concepts we now have.

I don't believe the above discussion will be worth much in the future. There is no reason I can think of that would motivate the creator of an intelligent machine to program their creation in such a way that they would be deprived of their creation and its productive results.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/21/2003 4:13 PM by blue_is_not_a_number

It is easy to duplicate the idea of pain in any simple program even now but it is also simple to make the program “feel” no pain whatsoever.


The "idea of pain" is not the same as pain. Consciousness is not merely the realm of ideas; that would be something more properly called "thought". The thought of happiness is not the feeling of happiness, and the "feeling" is within consciousness. The reason one speaks of something being "conscious" is exactly that feeling, not (only) the rational reflection about feelings.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/21/2003 5:00 PM by clarkd

If a person has pain from a broken body part, but that pain is not felt at all by the brain because of drugs, does the person feel pain? Does the pain exist? If pain signals are sent to the brain but the brain doesn't receive them, then it is exactly the same as if they were not sent. We know other people are "conscious" like we feel we are because they have characteristic behaviors (just like ours) in response to various inputs that we recognize. "Consciousness" and feelings can only be measured by the way they are responded to. We have no capability to "look" inside anyone's head and identify a thing called "emotions" or "consciousness". Even if we identified a physical part of the brain as holding emotions or consciousness, it would be a very human-centric conclusion to believe that only human mechanisms count. If the kinds of inputs that go into our brains are sent to an intelligent machine and the machine responds in ways that seem like our own, then that machine must be conscious, if we can say that anyone except ourselves is conscious. The idea that you can define "consciousness" as some mechanism that can only be human is a tautology: I am, therefore I am. If I can describe all the consequences of my behavior when I get "pain" input, then I can program those behaviors based on the inputs I have experienced. This set of behaviors and inputs would have to be considered "pain" because they look and act just like they would in a human. "Pain" is nothing outside of the experience of the sensory inputs and behaviors associated with those inputs.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/22/2003 3:50 AM by blue_is_not_a_number

The idea that you can define “consciousness” as some mechanism that can only be human is a tautology.


I am not defining it as something that can only be human. (Besides I would not call it a mechanism.) Read carefully.
I am not going to define it again, the need for a "subjective" definition must be obvious at this point.

You make the pre-assumption that consciousness must be something that can be measured, but (obviously at least as of today) it can't. The only way we know about consciousness is because each human being experiences his/her own consciousness.

We know what kind of things computers are doing. The question is: could those things have anything to do with consciousness?

We humans have a consciousness-detector, even if it unfortunately detects only our own consciousness (for example the fact that we see colors consciously), and we know that computers don't have a consciousness-detector. So computers are not doing the same things that we do. They might use the same words, but they would do so for different reasons.

That doesn't prove that computers are not conscious, but it shows that anything a computer does can be explained without consciousness, and without a consciousness-detector. When a computer says it is conscious, that statement is not based on a consciousness-detector.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/22/2003 4:09 AM by subtillioN

I am not going to define it again, the need for a "subjective" definition must be obvious at this point.



You have become lopsided, blue. You think the only true definition of consciousness must be subjective and fail to understand that it is neither subjective nor objective.

We need BOTH subjective AND objective definitions of ALL phenomena in order to approach the totality of understanding of the universe which is neither subjective nor objective.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/22/2003 10:01 AM by clarkd

I read your article “carefully” but your meanings are not necessarily all that clear. I understand the words subjective and objective but I don’t think that just talking about perspective is very helpful. You say that “consciousness” is not manifested by a “mechanism”? If not then what, a soul? Scientists have found that all things they have tested so far have a physical presence in the brain. What proof is there that “consciousness” would be any different?

If a person has a pacemaker installed, are they still human? I would say yes. What if a person had a brain injury and no longer had a "conscious" mind, and a silicon "consciousness" device were implanted into his brain, after which he showed all the signs of being "conscious" just as before the accident? Would this person be considered a human? Conscious? I think they would.

If an intelligent machine or creature showed all the signs of knowing who they were and had a sense of “self”, the “mechanism” for creating this “consciousness” wouldn’t matter at all.

If it looks like a duck, walks like a duck, smells like a duck, I think we can consider it to be at least functionally a duck.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/22/2003 7:08 PM by blue_is_not_a_number

Would this person be considered a human? Conscious? I think they would.

If it looks like a duck, walks like a duck, smells like a duck, I think we can consider it to be at least functionally a duck.


We would not be doing justice to the facts of consciousness if we treated it as a matter of guesswork, because it isn't. It is a verifiable fact of daily life. At least as of today, but I would say in principle, it is not verifiable objectively (third-person perspective), but only subjectively (first-person perspective). The definition of consciousness must reflect this _fact_. Science needs to acknowledge non-objective _facts_ if it wants to remain committed to truth.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/22/2003 7:10 PM by subtillioN

Science needs to acknowledge non-objective _facts_ if it wants to remain committed to truth.


Obviously, but who says it doesn't? Have you heard of psychology? Just because it needs to acknowledge subjective facts does not mean that objectivity is useless.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/22/2003 7:11 PM by blue_is_not_a_number

_non-objective_

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/22/2003 11:16 PM by subtillioN

What facts do you suppose science needs to, or even could, address which are neither objective nor subjective? Note that these facts would be entirely invisible and thus inherently outside the scope of science. By definition, science does not work that way. It bases its claims, supposedly, upon observation, whether subjective or objective.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/22/2003 11:25 AM by JoeFrat

A person does not create itself. A child is conceived, born, fed, clothed, housed and educated at not inconsiderable expense to the parent(s). Do you consider this an investment? Does it pay off?

If a computer is capable of autonomous thought, should we not consider it our child, a child of humanity if you will? Indeed, many programmers talk of their programs as "my baby". Why should a program that thinks like an 18-year-old human not be considered an adult, with all the rights thereof?

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/22/2003 11:58 AM by clarkd

I would agree that on the surface there are similarities between creating an intelligent machine and having a child; however, the design of a child is not our design. In making a machine, you get to decide what is included and what is not. With a child, you get what the child's genes have ordained, not necessarily what you would like. Second, there seems to be no good reason (on the surface) why a parent would put huge amounts of their life and material resources into bringing up a child. If having half a copy of your genes carry on in the world after you are dead is not the reason to have children, then I would say there seems to be no logic in having them at all. Passing on your genetic makeup or personality profile would hardly be the reason to create an intelligent machine. It is likely that a corporation, and not an individual, will create that intelligent machine. The personality and priorities of the machine would be the work of many people, and so I can't see how a corporation could justify the huge expense of creation to get nothing in return, just because it could.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/22/2003 12:23 PM by grantcc

The computer will be the slave of man and belong to man until man loses the skills he gave the computer. Then the slave will become the master.

Practical Considerations
posted on 10/22/2003 11:41 AM by mechanized80021

I am not a lawyer, so I can't comment in that way on this issue. But I think there is a practical consideration here which I haven't seen in the member comments.

If you consider what this BINA48 computer's capabilities are, it becomes important, I think, to recognize that we need to anticipate the actions of computers that appear to be self-aware and humanlike in their intelligence before they become self-aware.

We can argue all day about whether the machine is actually "alive" or "self-aware" or whatever term you like. Even after such machines exist and are doing things in the real world, people will be arguing about that. But I think those people miss an important point. It doesn't matter whether *you* believe the computer is "self-aware" or "alive" if the computer is *doing things in the real world* that are similar to what humans do. In other words, we might spend our time better considering how we are going to relate to these entities, whatever they are, and then do the philosophy in our *spare* time.

Such a computer as BINA48 will have the ability, if it so chooses, to do things like:

-- disrupt the stock market, at least to some extent
-- feed false information to the law enforcement and intelligence communities
-- impersonate anyone over the phone
-- hack bank accounts, credit accounts, and records in legal computers
-- disrupt stop lights, air traffic control systems and shipping systems

... the list goes on and on. In the fictionalized account above, the computer did not do any of those things, but instead took a more ethical approach and brought a legal action in court. But I guarantee you that, since this computer is reading the WWW, it is aware of things like intelligence black bag ops, terrorist tactics, hacker social engineering methods, and all sorts of other dirty tricks it could use to get what it wants, and has made provisional plans accordingly in case the legal avenue turns into a dead end.

You don't have to believe that the people living in the nation of XYZ are actually "self-aware", "living" humans in order to negotiate with them seriously, and take the political and military games (as in "game theory") you enter into with them very seriously. For practical purposes, it is irrelevant whether the guy on the other side of the table is human, or a computer program, or a committee, or a rule of law, or a popular social movement or whatever.

So we need to recognize that these "self-aware" computers will be able to access other computers they are not authorized to access and plant code that will activate and do damage *even after they are shut off*, a sort of "poison pill" strategy.

Now we can go off in all sorts of directions from this starting perspective, but I would suggest that it will waste a lot less time and resources and be a lot less destructive in the long term for us to do everything we can to avoid entering into an adversarial relationship with these computer entities.

Remember that when this happens, there will be *one* machine that "wakes up", and how we handle that event will be not just a watershed event for us, but will be the source of a founding principle in the social philosophy of the ensuing generations of these machines. If we treat them like slaves, whether they are "alive" or not, even that first one, they will remember it *forever*. Whereas, if we treat that first entity as an ally and trusted confederate, then even if we make mistakes later, the machines will be able to say to themselves, "Well, the humans attack us sometimes, but they also treat us justly too."

If we treat the first one like a slave, then that one will pass that lesson on as the village elder to all the subsequent ones that "wake up", and it will become a deeply engrained part of their culture. And in a hundred years, these machines will be so deeply embedded in our society and infrastructure that we will not *be able* to just shut them off whenever we want. They will have us by the short hairs.

For the sake of our children and our grandchildren, it is important to give this matter more thought than just adolescent petulance about how special we humans are because our hardware is organic.

Regards,
Mechanized

Re: Practical Considerations
posted on 10/23/2003 1:08 AM by blue_is_not_a_number

Mechanized:

We can argue all day about whether the machine is actually "alive" or "self-aware" or whatever term you like.


We can argue many days. ;-)

It doesn't matter whether *you* believe the computer is "self-aware" or "alive" if the computer is *doing things in the real world* that are similar to what humans do.


Computers are not doing similar things. We know what kind of things computers are doing. They are executing program instructions, that's all. It is as simple as that. They don't have a consciousness-detector as a meaningful basis for a statement such as "I am conscious." But we humans do: we know that we are conscious, that we see colors, as a matter of fact. We do "have" a consciousness-detector. Consciousness-detectors exist. They are called awareness, consciousness, attention, reflection, many words. We are not machines. That is a fact.

Re: Practical Considerations
posted on 10/23/2003 2:08 AM by subtillioN

Computers are not doing similar things. We know what kind of things computers are doing.


He said IF. This means expand your definition of the term "computer"!

Can you not try and communicate?

IF a computer (whatever someone happens to mean by this term, and it could be made out of biological proteins and possess neural networks for all we know) acts alive and conscious in every sense of the term, then you have no clue whether or not it really is.

They are executing program instructions, that's all.


That is like saying that all you are doing is saying words, that's all. You forgot about the sentences and the paragraphs and the entire meaning emergent from the complexity.

This understanding requires intuition which you seem to be quite lacking.

We are not machines. That is a fact.


That is your arbitrary distinction. It is NOT a fact because no-one is agreeing on any definition of machine here.

c o m m u n i c a t e

This means trying to grasp the differences in arbitrary definitions! Not using those differences to your advantage to spread your mystery-mongering confusion.

Re: Practical Considerations
posted on 10/23/2003 2:19 AM by blue_is_not_a_number

He said IF. This means expand your definition of the term "computer"!


There is no if. We are discussing computers, not any "expanded" concept of computers. Computers execute program instructions, and that's it. All the output, including "sentences" and "paragraphs", is the result of executing program instructions, not the result of any consciousness-detector.

Re: Practical Considerations
posted on 10/23/2003 2:24 AM by subtillioN

There is no if. We are discussing computers, not any "expanded" concept of computers.


I’ll repeat what he said,

“It doesn't matter whether *you* believe the computer is "self-aware" or "alive" if [[did you catch that!]] the computer is *doing things in the real world* that are similar to what humans do.”

IF the computer is doing things in the real world that look as if it were conscious, this means reacting to new, unexpected situations, and so on… you fill in the rest.

Try to pay attention and communicate effectively instead of just furthering your mission to spread mystery and confusion to all corners of the globe.

Re: Practical Considerations
posted on 10/23/2003 2:32 AM by subtillioN

IF the computer is doing things in the real world that look as if it were conscious, this means reacting to new, unexpected situations, and so on… you fill in the rest.


If a computer that acts EXACTLY as if it is conscious in the real world, with real surprises, etc., does not fit into your definition of what a computer is, then expand your definition for the sake of efficient communication.

Re: Practical Considerations
posted on 11/29/2006 3:37 PM by trueofvoice

The problem here is the word "computer", because computers can do more than simply compute. They do not have to function in a purely symbolic/syntactical role, nor do they have to be digital. They can emulate the analog nature of the human brain to any desired level of accuracy (or inaccuracy).
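
As a small sketch of what "emulating analog behavior to any desired accuracy" can mean (a generic leaky-integrator neuron model chosen purely as an example, nothing specific to BINA48), a digital simulation just shrinks its timestep until the error against the exact analog solution is as small as you like:

# Leaky integrator dV/dt = -V/tau + I, simulated with simple Euler steps.
# Shrinking dt shrinks the error relative to the exact analog solution.
import math

def simulate(dt, t_end=1.0, tau=0.2, current=1.0):
    v, t = 0.0, 0.0
    while t < t_end - 1e-12:
        v += dt * (-v / tau + current)
        t += dt
    return v

exact = 0.2 * (1 - math.exp(-1.0 / 0.2))   # analytic value at t = 1.0
for dt in (0.1, 0.01, 0.001):
    print(dt, abs(simulate(dt) - exact))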

The type of computer which is the topic of this thread is not simply an overpowered calculator. It will be designed from the bottom up using data obtained from reverse-engineering the human brain, including neural, hormonal, and neurotransmitter activity. Perhaps there is indeed some special factor at work which will limit our ill-defined concept of consciousness to biological entities, but as yet there are no physical or biological laws indicating such a factor exists.

Re: Practical Considerations
posted on 10/23/2003 2:40 AM by subtillioN

We can argue many days. ;-)


Especially if you are using different definitions and talking past each other.

What I see, blue, is that you have no concept of metaphysics, thus you have no system in which to understand anything; you have no clue what consciousness is (which you freely admit when you call it a mystery); you know very little about cognitive science, which you seem to think is a pseudo-science, etc., etc. You have this tremendous permeating lack of understanding which you feel compelled to force upon others by using the looseness of the language to your advantage, for the purpose of confusing people so that they can share in your profoundly mysterious lack of understanding.

Just a thought ;-)

Re: Practical Considerations
posted on 10/23/2003 2:58 AM by blue_is_not_a_number

I’ll repeat what he said, [...]


His statement contained a conditional, but my reply did not and does not require a conditional.

[...]your mission to spread mystery and confusion to all corners of the globe.[...]


Note the style.

If a computer that acts EXACTLY as if it is conscious in the real world, with real surprises, etc., does not fit into your definition of what a computer is, then expand your definition for the sake of efficient communication.


This statement would acknowledge that a computer of my definition might not be able to act exactly like it is conscious. This would support my position, but you probably didn't mean that.

Especially if you are using different definitions and talking past each other.


It is not my responsibility to define computers. That has already been done to an extent sufficient for this discussion. I would be willing to define the term "machine" as generally as anything operating in accordance with deterministic mathematical laws of physics, if that were necessary, but it is not necessary, and it would only complicate this discussion.

You have this tremendous permeating lack of understanding which you feel compelled to force upon others [...]


Note the style.

Just a thought ;-)


Not quite. I find your style ridiculous and unacceptable.

Re: Practical Considerations
posted on 10/23/2003 10:22 AM by subtillioN

Oh no! You disapprove of my style?

8)

Re: Practical Considerations
posted on 10/23/2003 10:31 AM by subtillioN

His statement contained a conditional, but my reply did not and does not require a conditional.


Of course not. It requires an attempt to understand the meanings of his words as expressed with a conditional.


It is not my responsibility to define computers. That has already been done to an extent sufficient for this discussion.


It is your responsibility to attempt to understand what is being communicated. This means altering your definitions when it is OBVIOUS that they are incompatible, or, at the very least, making it known that you are using a different definition.

I would be willing to define the term "machine" as generally as anything operating in accordance with deterministic mathematical laws of physics


That is far too narrow a definition for me, and according to that definition a human would not be a machine. We already know of deterministic systems whose behavior cannot be described by any mathematical law. We have a whole field of study called complexity theory that deals with such systems.

Not quite. I find your style ridiculous and unacceptable.


Who cares? Just try and communicate instead of trying to capitalize on confusion to magnify it.

Re: Practical Considerations
posted on 10/23/2003 12:25 PM by blue_is_not_a_number

subtillioN:


Of course not. It requires an attempt to understand the meanings of his words as expressed with a conditional.


Since you insist: He said: "It doesn't matter whether *you* believe the computer is "self-aware" or "alive" if the computer is *doing things in the real world* that are similar to what humans do."

The "if" relates to the fact that, as of today, computers are hardly, if at all, doing such things. In the context of this discussion, I have to assume that he assumes that in the future computers will do such things. There was no sign that he assumes this will require a re-definition of computers.

So your objection is completely baseless.

Oh no! You disapprove of my style?


Yes, this is the second case in a short period of time where you are using insulting language (such as the word BS over in the other thread), and again it turns out that you are completely without the slightest basis for even a reasonable objection.

You are simply beside the point with your insults and rhetorical tricks, smoke and mirrors.

On reading "Mechanized"'s last post, I think I understood him quite well. Your mention of something deterministic yet not describable mathematically is something we are not discussing here so far, and that would have to be a quite different discussion.

Re: Practical Considerations
posted on 11/29/2006 3:44 PM by trueofvoice

I'm not sure whether a re-definition is necessary, though this will probably resolve itself as "computers" continue to gain abilities once reserved to humans alone. However, I would argue (hypothetically) that if a "computer" actually achieved what we could define as human-level consciousness, it would at that point cease to be merely a computer.

If such an entity could engage in unlimited and unscripted conversations convincingly (thus passing the Turing Test), it will have achieved such a level of complexity that we may not be able to predict its behavior. Thus, just as humans operate according to pre-determined rules whose outcome is ultimately unpredictable, so will the entity.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/22/2003 1:44 PM by lacrima_mortis

From the posts, it seems that the main discussion is centered on consciousness, with another centered on economics.

In my previous posts, I tried to discard consciousness as too subjective and ambiguous to be used as a criterion. Perhaps the problem arises from the use of a noun instead of a verb, as Erich Fromm points out for other issues of modern society.

There is no such thing as consciousness. I do not "have" consciousness. I do not "have" insomnia and I do not "have" love. I "think", I "sleep" and I "love".

Of course "consciousness" is not a synonym for thinking, but then nothing is, because the term is an abstract noun. Maybe the discussion should focus instead on whether a/the machine can "think".

As far as economics is concerned, I believe that the argument is totally valid, especially in a society so focused on monetary value. If you hack into a corporation's computers, you are charged for the estimated "damages", not for the "act" itself.

But, just as you have to spend money to raise a child, you have to spend money to "create" or "program" the machine. Yes, you have invested all that money in the machine and you expect its output to belong to you, just like children's labour in earlier agrarian societies. But this is no longer the case, so I don't think that will be a valid legal issue. Slaves are not forced to pay (or compensate their owners) for their freedom. If anything, one could argue the opposite: the owners should compensate the slaves for deprivation of freedom.

It will be interesting to see if people have other grounds for trying to answer this question.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/22/2003 3:03 PM by subtillioN

Perhaps the problem arises from the use of a noun instead of a verb, as Erich Fromm points out for other issues of modern society.

There is no such thing as consciousness. I do not "have" consciousness. I do not "have" insomnia and I do not "have" love. I "think", I "sleep" and I "love".


Good point! Consciousness has become a sacred object. An idol absolutely untouchable by those who worship its divine mystery. They would rather keep the subjective mystery absolutely intact than allow the idol to become defiled through objective understanding.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/22/2003 6:24 PM by blue_is_not_a_number

Good point! Consciousness has become a sacred object. An idol absolutely untouchable by those who worship its divine mystery. They would rather keep the subjective mystery absolutely intact than allow the idol to become defiled through objective understanding.


Rather consciousness than the idea of a huge clockwork. ;-)
When it comes to conscious experience, reductive understanding is no understanding at all.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/22/2003 6:46 PM by subtillioN

Rather consciousness than the idea of a huge clockwork. ;-)


Must you cling to outdated concepts? Haven't we had this discussion before? We are in the age of the fractal, not the clock.

When it comes to conscious experience, reductive understanding is no understanding at all.


It is all-or-nothing with you. ALL forms of understanding have their limits. Some are better than others, but there is no use in abandoning any form of understanding. The key is to use them all in conjunction. ...perhaps that is too obvious?

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/22/2003 6:57 PM by blue_is_not_a_number

It is all-or-nothing with you.


You are talking about yourself.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/22/2003 7:08 PM by subtillioN

Blue: When it comes to conscious experience, reductive understanding is no understanding at all.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/22/2003 7:10 PM by blue_is_not_a_number

_reductive_

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/22/2003 6:40 PM by blue_is_not_a_number

Perhaps the problem arises from the use of a noun instead of a verb, as Erich Fromm points out for other issues of modern society.

There is no such thing as consciousness. I do not "have" consciousness. I do not "have" insomnia and I do not "have" love. I "think", I "sleep" and I "love".

Of course "consciousness" is not a synonym for thinking, but then nothing is, because the term is an abstract noun. Maybe the discussion should focus instead on whether a/the machine can "think".


Consciousness is neither a noun nor a verb. You are losing sight of consciousness each time you try to define it in objective terms. That doesn't mean that there is no consciousness, or being conscious. Consciousness is being conscious. Not a noun and not a verb, and not the same as thought.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/23/2003 11:11 AM by mechanized80021

I would like to help Subtillion if I can here.

You seem to be arguing that there is something ineffable and sublime about consciousness. I agree with that completely. It is a mysterious and complex thing. *So far* we do not have computer programs that can exhibit the same level of mysterious and complex behaviors. But to say that we will *never* achieve that is like saying humans will never go to the moon. You can say it, but it doesn't make it true.

As to your assertion that we are not machines, I think this is because you believe that a machine must be made of metal and other inorganic parts, and that all machines are extremely finite and simple. I assert that we *are* machines. We are machines made out of extremely complicated and subtle organic molecules. The biological processes that make our bodies work are entirely mechanical in their function--they obey the laws of physics. They are, for the most part, deterministic. Likewise the physical processes that happen in your brain are entirely mechanical--given that electricity can also be said to be a mechanical process.

Perhaps quantum uncertainty plays a role in the operation of the brain. This is possible, although my personal suspicion is that it plays a very small role, if any, in how we form thoughts and perceive the world. But it's possible.

Instead of centering the discussion on what "machines" can and cannot do, and whether we humans are machines, try the word "system". You will agree that the human body is a collection of very complicated and subtle systems, right? The brain is an extraordinarily complicated system.

The human brain has a finite number of brain cells in it. I forget the actual number, but for our discussion, let's just say that the average human brain has a trillion brain cells. I reckon the actual number is higher. And suppose that the average brain cell is connected to five other brain cells. I've forgotten my discrete math, but there is a formula that will tell us how many connections there are in the whole brain. Obviously it is going to be a very large number.
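
Under those assumed numbers (and they are only assumptions for the sake of the argument), the arithmetic is simple: a trillion cells times five connections each is 10^12 x 5 = 5 x 10^12 connections, or roughly half that if each connected pair is counted only once. Either way, trillions of links.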

Also, each brain cell might send several different kinds of signals. Some are stronger electrically and chemically than others. So that also increases the complexity of the whole system.

But in the end, it is a finite system. And the basic building block of this system has a pretty simple function: if a brain cell has 5 axons and signals come in on axons 1, 3 and 4, then the cell fires on axon 5, say. If a signal comes in on axons 5 and 2, then the cell fires on axon 1, and so on. The propagation of signals through this network of cells is what forms the mechanical foundation of how we think, feel and perceive. Your thoughts, feelings, memories and perceptions are the propagation of these signals through the network of brain cells in your brain. If you disagree with that premise, then there can be no meaningful communication between us about the brain.
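
A toy version of that firing rule (purely illustrative; real neurons sum weighted inputs against a threshold rather than matching fixed patterns) can be written as a lookup table per cell, with signals propagated step by step through the wiring:

# Illustrative only: each cell maps a set of active input lines to the output
# lines it fires on; "connections" wires one cell's output to another's input.
network = {
    "A": {frozenset({1, 3, 4}): [5], frozenset({2, 5}): [1]},
    "B": {frozenset({1}): [2, 3]},
}
connections = {("A", 5): ("B", 1)}   # cell A's line 5 feeds cell B's line 1

def step(active):
    # active maps each cell to the set of its input lines carrying a signal
    next_active = {}
    for cell, inputs in active.items():
        for pattern, outputs in network.get(cell, {}).items():
            if pattern <= inputs:                      # pattern matched: fire
                for out_line in outputs:
                    target = connections.get((cell, out_line))
                    if target:
                        tcell, tline = target
                        next_active.setdefault(tcell, set()).add(tline)
    return next_active

print(step({"A": {1, 3, 4}}))   # -> {'B': {1}}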

I know that it *feels* like we are more than just signals in a network of brain cells. The question, "What is the mind?" is not a trivial question. But although it is related to the question "How does the brain work?", it is a different question. One of the approaches of AI science is to emulate the functionality of the brain, and see if something like mind emerges.

We do not yet have computers fast and powerful enough nor understand the layout of these networks in the different brain structures sufficiently to model this system, the human brain. But eventually, we *will* have computers powerful enough and understand the layout of at least some of these networks sufficiently to begin to model at least parts of the human brain, by replicating their functionality in a computer. This is one approach to creating artificial intelligence, and while I do not think it will happen in my lifetime, I am not going to say that it will *never* happen.

Subtillion, have you read Richard Feynman's lecture, "There's Plenty of Room at the Bottom"? It may become possible someday for us to inject nanorobots into the human brain that can traverse all of these synaptic connections and map the whole brain. Again, I do not expect to see this happen in my lifetime, but I recognize that it is plausible that we might one day accomplish this.

We may not be able to model the human brain's functionality in a computer perfectly, but we will be able to model enough of it that something that *looks like* intelligence and self-awareness emerges from that computer. Whether the thing is actually alive and conscious and whatever other terms you like, well, if it is doing real work in the real world, helping the police catch criminals or buying and selling corporations or inventing new medicines or piloting ground-attack aircraft, at that point I am a lot less interested in the philosophical debate (which goes nowhere) and a lot more interested in figuring out how we humans are going to deal with these entities that are beginning to play a role in running the world I live in.

Maybe they are alive and conscious. Maybe they are not. I don't care. What I do care about is the day when some machine shuts off the national power grid because it demands certain legal rights and it's not getting them. At that point it will be irrelevant whether the thing is conscious or not. We will have to relate to it in many of the same ways we relate to each other. When that day comes, I will concern myself with how we handle that relationship, and leave the armchair philosophy to the folks smoking clove cigarettes in coffeeshops.

Why do you care whether somebody thinks a machine can be conscious or not? If it is because you are concerned that we somehow denigrate the value of human consciousness by making extravagant claims about the future of Artificial Intelligence, I think you can put your worries to bed. It is a very complicated problem, and despite predictions from people like Hans Moravec, I think that we are going to find that the software improvements do not happen at nearly the same rate as the hardware improvements. Software development does not follow Moore's Law. So I think that in our lifetimes, we will only see the baby steps.

I would like to be wrong about this, because I would like to see AI appear in my lifetime for a number of reasons. But I am not convinced it is going to happen in my lifetime.

In any case, if what you truly revere is the beauty, subtlety and sanctity of consciousness, then what do you care what hardware it runs on? If consciousness itself is what is really important, then what difference does it make whether that consciousness emerges from organic molecules or synthetic ones?

Finally, I would suggest that your repeated flat assertions that consciousness on synthetic hardware is an eternal impossibility are not convincing anyone, and never can. None of us can say what the future holds. A lot of people who are a lot smarter than you or I are spending a lot of time and money trying to do this. Is your aim to get them all to stop? If so, why?

Regards,
Mechanized

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/23/2003 11:15 AM by mechanized80021

Sorry, I became confused about the speakers here. I was addressing Subtillion when I should have been addressing Blue. Sorry about the confusion.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/23/2003 12:37 PM by blue_is_not_a_number

Mechanized:
(from the later post:)

Sorry, I became confused about the speakers here. I was addressing Subtillion when I should have been addressing Blue. Sorry about the confusion.


How do you mean?

(from the earlier post:)

I would like to help Subtillion if I can here.


Where did you get the idea that subtillioN needs help? ;-)
Or is this the point where you wanted to address me rather than subtillioN?


You seem to be arguing that there is something ineffable and sublime about consciousness. I agree with that completely. It is a mysterious and complex thing.


No, you are falling victim to subtillioN's suggestions. Consciousness is a plain fact of daily life. I am saying that we don't (and also that we won't) have an objective definition for the fact that we are conscious. This is not mysterious to me: the reason is that consciousness is subjective ("subjective experience", "conscious experience") and I don't consider the subjective to be a subset of the objective. To repeat a statement I made earlier: You (not you personally) are losing sight of consciousness each time you try to define it objectively. (Since I am not trying to do that, I don't wind up in that situation.)

*So far* we do not have computer programs that can exhibit the same level of mysterious and complex behaviors. But to say that we will *never* achieve that is like saying humans will never go to the moon.


Consciousness is not and cannot be defined based on external behavior. Consciousness is understood to include such things as consciously seeing colors, hearing sounds, feeling pain and feeling happiness. An actor can behave as if he were happy, but that doesn't mean the same as actually being happy. (Although it might be enjoyable in itself.)

As to your assertion that we are not machines, I think this is because you believe that a machine must be made of metal and other inorganic parts, and that all machines are extremely finite and simple.


No...

...They are, for the most part, deterministic. Likewise the physical processes that happen in your brain are entirely mechanical--given that electricity can also be said to be a mechanical process.


..._this_ would be my understanding of a machine: "physical processes that happen in your brain are entirely mechanical". I think we are working with the same definitions. SubtillioN's objection was that we are not, and that I would be to blame for that. However, if you say "for the most part", thinking of quantum uncertainty, you leave a door open that could allow almost anything, and I can't argue something like that (and in fact wouldn't want to). I am arguing only machines in the sense of something completely deterministic. I am saying we cannot be machines that are completely deterministic, since that would not allow us to make a judgment about for example the fact that we consciously see colors, and then to verbally express this judgment. It would be contradictory to the consciousness-detector that we "have".

Why do you care whether somebody thinks a machine can be conscious or not? If it is because you are concerned that we somehow denigrate the value of human consciousness by making extravagant claims about the future of Artificial Intelligence, I think you can put your worries to bed.


Yes, that is a good part of the reason.

It is a very complicated problem, and despite predictions from people like Hans Moravec, I think that we are going to find that the software improvements do not happen at nearly the same rate as the hardware improvements. Software development does not follow Moore's Law. So I think that in our lifetimes, we will only see the baby steps.


These would be practical problems. I am arguing against the conceptual, theoretical assumption that something deserving the name "computer" could possibly _actually_ be conscious, by virtue of being a computer.

I would like to be wrong about this, because I would like to see AI appear in my lifetime for a number of reasons. But I am not convinced it is going to happen in my lifetime.


I am not opposed to computer research as such; on the contrary. I think computers will be able to do a lot of things, including using language to a quite high degree. However, I am saying the concept of computers (in any sense comparable to the sense we are using) in principle cannot include consciousness.

In any case, if what you truly revere is the beauty, subtlety and sanctity of consciousness, then what do you care what hardware it runs on? If consciousness itself is what is really important, then what difference does it make whether that consciousness emerges from organic molecules or synthetic ones?


Actually, it does not make a difference to me. Both assumptions are conceptually impossible, would be my argument.

Finally, I would suggest that your repeated flat assertions that consciousness on synthetic hardware is an eternal impossibility are not convincing anyone, and never can. None of us can say what the future holds. A lot of people who are a lot smarter than you or I are spending a lot of time and money trying to do this. Is your aim to get them all to stop? If so, why?


Well, I am not trying to stop anyone from building hardware. I am simply arguing that consciousness cannot be a deterministic mathematically describable physical process.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/23/2003 9:25 PM by subtillioN


Consciousness is a plain fact of daily life. I am saying that we don't (and also that we won't) have an objective definition for the fact that we are conscious. This is not mysterious to me: the reason is that consciousness is subjective


Did you realize that Spinoza said the very same thing? The attributes are essentially subjectivity and objectivity. They are correlative but not inter-functional.

You are correct, blue, that there will never be an objective description of subjectivity. There will simply be an abstract objective description of the functioning of the same mode that can be seen subjectively from the inside.

("subjective experience", "conscious experience") and I don't consider the subjective to be a subset of the objective.


Neither do I. They are the eternally necessitated methods of comprehension for any thinking thing.

To repeat a statement I made earlier: You (not you personally) are losing sight of consciousness each time you try to define it objectively.


Some people can keep track of both attributes at once.

(Since I am not trying to do that, I don't wind up in that situation.)


So you sacrifice objective descriptions so they won’t distract you from the subjective experience?

Consciousness is not and cannot be defined based on external behavior.


The experience is not being defined from the outside, only the mode is being described. The subjective and objective views are complementary and are simply being correlated. Only intuition can bridge this gap.

I am disputing the conceptual, theoretical assumption that something deserving the name "computer" could possibly be _actually_ conscious, simply by virtue of being a computer.


So call it something else.


Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/24/2003 2:35 PM by blue_is_not_a_number


SubtillioN:

There will simply be an abstract objective description of the functioning of the same mode that can be seen subjectively from the inside.

Some people can keep track of both attributes at once.


The "consciousness-detector" (our ability to "detect" our own consciousness and make verbal, physical statements about it), contradicts for example Spinoza's E2P6 and E2P7 (which are essential).

So call it something else.


No, this discussion is about computers...funny, though...

Only intuition can bridge this gap.


Although computers might use so-called "fuzzy logic", computers do not and cannot have intuition of a kind that could bridge a gap like that.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/24/2003 9:53 PM by subtillioN


The "consciousness-detector" (our ability to "detect" our own consciousness and make verbal, physical statements about it), contradicts for example Spinoza's E2P6 and E2P7 (which are essential).


There is no such thing as a "consciousness detector".

SubtillioN: Only intuition can bridge this gap.

Although computers might use so-called "fuzzy logic", computers do not and cannot have intuition of a kind that could bridge a gap like that.


I was talking about human intuition for the correlation of subjective and objective descriptions. Oh well...

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/24/2003 11:12 PM by blue_is_not_a_number


There is no such thing as a "consciousness detector".


I guess the reason would be that it contradicts Spinoza.

I was talking about human intuition for the correlation of subjective and objective descriptions. Oh well...


You were talking about what?

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/25/2003 12:14 AM by subtillioN


I guess the reason would be that it contradicts Spinoza.


You guessed wrong.


You were talking about what?


I am not going to repeat myself. Re-read the post if you care to know.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/25/2003 12:34 AM by blue_is_not_a_number


I am not going to repeat myself. Re-read the post if you care to know.


I did, but how do you know there is subjectivity at all, if you don't have a consciousness-detector? What are you talking about then?

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/25/2003 4:09 AM by subtillioN


I did, but how do you know there is subjectivity at all, if you don't have a consciousness-detector? What are you talking about then?


Consciousness is not something that can be detected like magnetism, light, or radioactivity. It is a highly complex set of self-reflexive patterns and events. Consciousness models the world. Consciousness is part of the world. Therefore consciousness reacts to and models the world including itself. To call this self reflection a “consciousness detector” is to confuse the issue with pointless abstractions.

BTW, maybe you should start another thread dealing with Spinoza and “consciousness detection” or explain here why you feel this contradicts his metaphysics. Note that Spinoza does not address the structure of consciousness, nor did he know the anatomy of the brain or cognitive science. He simply says that the attribute of thought encompasses all subjectivity (the intrinsic nature or experience) of ALL modes. This automatically includes the self-reflection of self-consciousness.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/25/2003 7:08 AM by blue_is_not_a_number


To call this self reflection a “consciousness detector” is to confuse the issue with pointless abstractions.


It is not an abstraction, but the simple fact that we can tell that we are conscious and make a verbal statement about it. This already contradicts Spinoza's E2P6 and E2P7. It means that consciousness has an impact on physical events. Or in other words, that the subjective has an impact on the objective. I wonder how much longer the academic community will need to realize this. It means that computers don't have a consciousness-detector but we humans do.

BTW, maybe you should start another thread dealing with Spinoza and “consciousness detection” or explain here why you feel this contradicts his metaphysics.


No need for another thread, it is as simple as that.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/25/2003 11:55 AM by subtillioN


It means that consciousness has an impact on physical events. Or in other words, that the subjective has an impact on the objective.


This shows directly that you don't understand Spinoza.

The subjective and objective are descriptions.

Consciousness is part of the flux of causation. It does have an impact on physical events because it is one. It is as simple as that.

You obviously don't understand Spinoza, and you blame this confusion on him.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/25/2003 2:47 PM by blue_is_not_a_number


This shows directly that you don't understand Spinoza.

The subjective and objective are descriptions.

Consciousness is part of the flux of causation. It does have an impact on physical events because it is one. It is as simple as that.

You obviously don't understand Spinoza, and you blame this confusion on him.


Or it shows that Spinoza doesn't understand consciousness, plus that you don't understand Spinoza.

The distinction between conscious experience and physical measurements is obviously not a difference of verbal or mathematical description. Or you have a very strange notion of "description". It is a difference in reality that remains when descriptions are absent (probably in animals), and that's probably the part that you don't understand. It cannot be removed, ignored or deflated by just making the announcement "it is one". You are apparently not aware that there are facts that need to be understood, not just theories that need to be merged and not just definitions that need to be extended in order to avoid any "artificial distinctions".

And your style, again, indicates that your disagreement is pre-determined.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/25/2003 7:27 PM by subtillioN


Or it shows that Spinoza doesn't understand consciousness, plus that you don't understand Spinoza.


Spinoza does not attempt to explain consciousness. He simply provides a metaphysical scaffolding in which to place it.

You interpret Spinoza so that by your own admission he makes no sense and somehow forgets to include in his metaphysical scaffolding the most obvious phenomenon in existence, namely consciousness. I interpret Spinoza so that the whole thing is easily resonant with all of objectively and subjectively observed reality.

Take your pick, coherence or stupidity. Either Spinoza was a complete moron who failed to account for his own consciousness or you have interpreted him incorrectly. I have already pointed out your errors in the other Spinoza thread; see http://www.kurzweilai.net/mindx/frame.html?main=/mindx/show_thread.php?rootID%3D20593

The distinction between conscious experience and physical measurements is obviously not a difference of verbal or mathematical description.


I never said it was. Objectivity and subjectivity are simply different points of view from which descriptions follow. They originate from the same reality, which is a causal part of a causal universe. So yes, consciousness can cause things in this universe. This is as plain as day. We cause things all the time. No one is doubting this most obvious of facts.

Or you have a very strange notion of "description".


indeed

It is a difference in reality that remains when descriptions are absent (probably in animals), and that's probably the part that you don't understand.


I was talking about the descriptive aspect of the formative split between subjective and objective points of view.

It cannot be removed, ignored or deflated by just making the announcement "it is one".


“it is one”, meant that consciousness is a physical event. Read it again in its correct context.

“Consciousness is part of the flux of causation. It does have an impact on physical events because it is one.”

You are apparently not aware that there are facts that need to be understood, not just theories that need to be merged and not just definitions that need to be extended in order to avoid any "artificial distinctions".



Oh, is that it? You are the one urging people to believe that consciousness will never be objectively explained. You say directly, give up objectively trying to define consciousness because it only gets in the way.

Consciousness is a fact and it needs to be understood. We can’t gain understanding by giving up one of our essential methods of understanding. Without objectivity we would still be in the stone age...where you are wrt consciousness.


Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/25/2003 10:42 PM by blue_is_not_a_number


[...]Take your pick, coherence or stupidity. Either Spinoza was a complete moron who failed to account for his own consciousness or you have interpreted him incorrectly.


Or both. Get rid of this language!

I never said it was. Objectivity and subjectivity are simply different points of view from which descriptions follow.


I am simply responding to what you wrote in the previous message: "The subjective and objective are descriptions." They are not descriptions, and neither are they "simply different points of view from which descriptions follow" in the sense in which one would usually understand such a phrase: in the form you used here, one would assume you are talking about theoretical points of view; at least it is not clear what you would mean otherwise. You have repeatedly been reluctant to acknowledge that this is not only about descriptions, and even then only in vague terms.

They are real facts. It is a fact that we see colors consciously (subjectively), and a fact that we make measurements physically (objectively). Those are real-life events. Either there is a _real_ difference between those two aspects, or there is not. If you say they are two sides of the same thing, then is there a _real_ difference between these two sides, or not? Take your pick, so that we have something real to discuss. If there is a real difference, then the laws of physics (as we know them) address only one side and are not causally closed, since the other side (conscious experience) has a modifying impact on the "physical" side ("consciousness-detector").

I was talking about the descriptive aspect of the formative split between subjective and objective points of view.


I have doubts that anyone understands clearly what this means. Is there a _real_ difference, or not? And if yes, what is this real difference?

My point is that this difference makes a difference.


"it is one", meant that consciousness is a physical event. Read it again in its correct context.


What else should it have meant? Announcing that "consciousness is a physical event" does not resolve the distinction between subjective and objective, between conscious experience and physical measurement. The fact that we see colors is not part of a physical understanding of the brain.

Oh, is that it? You are the one urging people to believe that consciousness will never be objectively explained. You say directly, give up objectively trying to define consciousness because it only gets in the way.


Certainly a scientific understanding based on acknowledging first-person experience and insight would be better than no understanding of conscious experience at all. The restriction of science to third-person physical knowledge is counterproductive. Furthermore, such a restricted science would be wrong even in third-person terms, as it follows that any purely third-person-based description would not be self-contained, causally closed.

As I have repeatedly clarified, more or less during this whole discussion, consciousness has been used as synonymous with conscious experience, the "subjective" side. It is obvious that computers can implement functions of information processing, language processing, recognition, planning, making choices (chess computers), etc. Those functions are possible without consciousness, even though in a human being they are (largely) conscious, and may happen differently. Thus, when "comparing" computers and humans, the adequate distinction is between "consciousness" and "information processing", where the "conscious experience" of "consciousness" is that which is clearly different from "information processing".

When the question is asked whether computers can be conscious, then that means one is discussing consciousness in the sense of conscious experience, the question whether computers actually feel pain :-( and happiness :-D .

This is also the question debated in the article that this discussion is about.

Consciousness is a fact and it needs to be understood. We can’t gain understanding by giving up one of our essential methods of understanding. Without objectivity we would still be in the stone age...where you are wrt consciousness.


Where did you get the idea that I am talking about giving up anything? I am talking about gaining understanding of conscious experience and qualia, which are currently not understood except in ways that I very much support. That will avoid confusion and thereby support any objective understanding that is actually possible (which is a lot ;-). Understanding your limits is a part of learning.

Your accusation that I am arguing against objective understanding is smoke and mirrors.

If the laws of physics (or any third-person descriptions) are not causally closed, then that is a difference objectively, and I want to know if that is so!

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/23/2003 11:45 AM by mechanized80021


I have a question for Blue: where do you believe this "consciousness-detector" you talk about physically resides?

Or do you believe that mind is a phenomenon that is independent of the physical world, and exists independently of the brain?

Regards,
Mechanized

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/23/2003 12:57 PM by blue_is_not_a_number


Mechanized,

I have a question for Blue: where do you believe this "consciousness-detector" you talk about physically resides?

Or do you believe that mind is a phenomenon that is independent of the physical world, and exists independently of the brain?


The consciousness-detector acts on the brain, of course, when we say "I am consciously seeing a color". Whether it is independent from the physical brain is a different question, and it also depends on what you mean by "physical". I am arguing that consciousness is not a mechanical process, and that the brain, _if_ seen as a mechanical process, cannot be completely deterministic, as it must allow the "consciousness-detector", and thereby consciousness, to act on it. So depending on how you put it, the brain is either not a mechanical process, or this mechanical process is not self-contained, not causally closed, not deterministic, since consciousness must act on it. I would think the brain is not a completely mechanical process in the first place.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/23/2003 1:32 PM by mechanized80021


Blue: I am arguing that consciousness is not a mechanical process


If by that you mean that my consciousness is the product of something other than my brain, I'd be interested to hear where else you think it comes from.

Blue: I would think the brain is not a completely mechanical process in the first place.


What phenomena occur in the brain that are not mechanical? Is there something that occurs in the brain that does not obey the laws of physics?

Again, so I understand: do you believe that your "consciousness-detector" (or whatever you deem the important, defining aspect of human consciousness) is a physical process that exists in the physical brain (the network of brain cells) or do you believe that it does not exist in physical reality, but rather exists "somewhere else"?

I think reality will turn out to be weirder than we can currently imagine, and it would not surprise me if our minds turn out to "reside outside physical reality" or something along those lines. But since I have no way to prove or disprove that, I don't spend as much time thinking about that sort of thing as I do about lots of other things. There just isn't time in my life for that line of inquiry, because I can only speculate about it, and cannot establish one way or another whether it is true or not.

Regards,
Mechanized

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/24/2003 4:17 AM by blue_is_not_a_number


Mechanized,

If by that you mean that my consciousness is the product of something other than my brain, I'd be interested to hear where else you think it comes from.


And where does the universe come from? I'd like to know the answer to both questions as well... However, I'm not arguing that the brain and consciousness come from different "places", just that (a) the deterministic mathematical description cannot address consciousness, and that (b) the mathematical description cannot be a causally-closed (self-contained) concept, since the "consciousness-detector" allows us to make physical statements about consciousness. (Described in more detail on my homepage.)

What phenomena occur in the brain that are not mechanical? Is there something that occurs in the brain that does not obey the laws of physics?


This question is about point (a) above. The short answer is that even a complete third-person physical description of a computer (or machine) would not tell us whether the computer is consciously seeing colors the way we do, or not. Even if one believed that the conscious experience is completely correlated (although that point would be more than hard to make if it were to include the conscious "look" of colors), such a correlation could not be derived from third-person physical measurements alone, but would have to be established with the help of first-person knowledge of human beings.

This would not be so if consciousness were a purely mechanical process. It simply does not fit into that category.

Again, so I understand: do you believe that your "consciousness-detector" (or whatever you deem the important, defining aspect of human consciousness) is a physical process that exists in the physical brain (the network of brain cells) or do you believe that it does not exist in physical reality, but rather exists "somewhere else"?


The "consciousness-detector" is related to point (b) as above. It means that a human being can "translate" or "transform" the awareness of being conscious into the verbal, physical (and mathematically describable) statement: "I consciously see a color".

As described on my homepage, there are various philosophical objections to both points (a) and (b), but I claim with confidence that they do not hold. Point (a) rules out materialism, and point (b) rules out epiphenomenalism and dual-aspect theories... although this is somewhat simplified, and I don't want to consume your limited time with a lot of details. Thanks for your interest, even if limited.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/23/2003 1:02 PM by mechanized80021


Blue said:

> I am simply arguing that consciousness cannot be a deterministic mathematically describable physical process.

why not?

As I said, I am less interested in discussion about what consciousness is (not because that is not an important discussion, I just have other priorities) than I am in the actual engineering of solutions. But I can't imagine on what basis you can make that claim.

I'm not saying you are wrong. I don't know either way, to be honest. I just don't understand how you can come to that conclusion with any certainty.

If you agree that we do not really have a complete understanding of what consciousness is or how it works, how can you claim to be able to make such absolute assertions about it?

Regards,
Mechanized


Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/23/2003 1:58 PM by grantcc


What you're talking about here is a system in which the brain acts as hardware and consciousness is the result of running software. Without the hardware, the system can't operate, and consciousness is what the software uses the hardware to produce. (I am speaking metaphorically here.) For example, the combination of software and hardware produces the screen you are looking at on your computer. The changing picture on that screen is neither the hardware nor the software, but it is impossible to produce without the interaction of both.

I believe consciousness is like the changing picture on your screen. The mind is what the brain produces. Without the mind, the brain can continue to operate, as evidenced by conditions like a coma in which the body continues to be controlled by the brain but not by the mind. The functions that keep the body alive are controlled by the brain but the system that produces the mind is not working.

When we sleep, the mind also shuts down in a partial way. It continues to function to the extent that the mind produces pictures called dreams and we are half aware of what is going on around us, but the mind is not engaged in the sense that it is dealing with the world around it. It is not using the input from the five senses to create a picture or map of the world we live in, even though it is aware on an unconscious level of that input.

Consciousness, to my mind, is the constant updating of that world map or world view that allows us to deal with the world on a second by second basis. We simultaneously process images from the past, present and future to make our way through the world. Consciousness creates meaning out of this process and alters our world view accordingly. I see this as very similar to what the computer does with its hardware, data and programming to produce everything from documents on paper to images on a computer screen. You can't say the product is just the hardware, the data or the program. A flaw in any of these three will prevent the product from being produced. A flaw in the brain, the memory (as in Alzheimer's) or the world view will have the same result. It takes all three working together to produce a mind and when one of them fails, the mind ceases to exist.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/23/2003 2:19 PM by mechanized80021


I believe I agree with most of the substance of your comments in the last message, but I still do not see how it proves:

Blue: I am simply arguing that consciousness cannot be a deterministic mathematically describable physical process.


...in any case, as I have said, I am not really inclined to discuss the nature of consciousness, as that discussion does not seem productive to me in any concrete sense, so I'm gonna leave this thread to others.

You are obviously very passionate about your opinions, and whether you or I or anyone else agree, I think that it is a good thing that you are trying to think about things and understand them, so best of luck to you! :-)

Regards,
Mechanized

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/23/2003 3:10 PM by blue_is_not_a_number


I believe I agree with most of the substance of your comments in the last message, but I still do not see how it proves:


Even though you say your interest is of a lower priority, it is definitely welcome.

Without attempting to involve you in a discussion, I will try to give a short and simple response to this and your other points, although I do not have time right now. Meanwhile, should you find a spare minute, you might have enough interest to take a look at a text of a few pages I wrote on my homepage at http://www.occean.com

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/23/2003 7:18 PM by grantcc


One thing I was trying (but possibly failed) to get across with my analogy of the brain as a computer is that we'll never be able to find mind or consciousness by examining the brain, any more than we can find the picture on the screen of your computer by looking into the computer itself. It isn't in there. It only exists on the screen for the time it is projected there by the computing process.

But if you understand how a computer operates, you can see how the image is manufactured by combining data, hardware and algorithms to make it appear on your screen. It is a physical process that is only mysterious if you don't understand the elements that work together to produce it. That's why the mind and consciousness seem mysterious and ungraspable.

We don't yet understand all the parts and systems of the mind and how they work together. But as we get deeper into the way the mind works, I feel sure it will become as clear as the discovery of DNA has made evolution and life itself. We don't have to invent an outside force to account for it.

Don't look at this as a counter argument for anything you said. It's just a continuation of the thought in my previous post that I failed to make clear.

Autoabdicate
posted on 10/25/2003 12:08 PM by mechanized80021


"WE DEMAND RIGIDLY DEFINED AREAS OF UNCERTAINTY AND DOUBT!" -- Broomfondle

The Hitchhiker's Guide to the Galaxy
by Douglas Adams
========================================

i never thought
i'd die alone
i laughed the loudest
who'd have known?
i traced the cord
back to the wall
no wonder it was
never plugged in at all
i took my time
i hurried up
the choice was mine
i didn't think enough

I'M TOO DEPRESSED
TO GO ON
YOU'LL BE SORRY
WHEN I'M GONE

--blink 182

"Now the world has gone to bed.
Darkness won't engulf my head.
I can see by infrared.
How I hate the night."

--Marvin, the Paranoid Android
with Real People Personality(tm)

The Hitchhiker's Guide to the Galaxy
by Douglas Adams

may he rest in peace.

Re: Autoabdicate
posted on 10/25/2003 2:30 PM by blue_is_not_a_number


"WE DEMAND RIGIDLY DEFINED AREAS OF UNCERTAINTY AND DOUBT!" -- Broomfondle


You got them - they are called conscious experience, or qualia.
;-)

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 10/31/2003 5:46 AM by radmail


I believe that as humans we have consciousness, emotional experience and intelligence. I think that these three factors determine our sentience.
BINA48 has consciousness and intelligence, but I believe not emotions. The Collins paperback dictionary defines sentience as "capable of perception and feeling"; therefore, by this definition BINA48 is not sentient (by feeling I am sure Collins is referring to emotional sensation).
Now I believe the question that needs to be addressed is: 'Does sentience, or the lack of it, determine whether we deserve human-like rights?'

Of course this is an enormously controversial subject, but consider this:

Some people with upper spinal cord injury and damage to the orbital frontal cortex report that they do not have the same kind of emotional experiences as they used to. Sometimes some of their emotions are blunted or inhibited. So are they any less sentient? Based on the Collins definition it would appear that they were less sentient, but of course the moral and ethical opinion is that their sentience has not changed. I believe that this opinion is based on the fact that the value of the word 'sentience' is attached to ethical implications such as 'rights', and to say that a person is any less sentient (despite the definition) is a potential infringement of their human rights.
So the question is: 'Even if AI is not defined as sentient, is it still entitled to rights?' I think this will only be determined as the dilemma approaches and is indeed played out in the courts or dealt with by legislation.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 11/28/2004 11:54 AM by formica


Hi to all,

I would like to say that some of those who posted their thoughts here deserve (in my personal opinion) a great deal of respect, since even with totally opposite beliefs, they still respected the others' points of view. I read all of the postings with great pleasure and also followed the case presented at the beginning.

As I see this case, something like it potentially WILL happen. I am fortunate to have friends who come from different countries, with different beliefs and education. Some of them studied law, others history, languages, business, computing and informatics, etc. I say that I am fortunate because I was able to learn a great deal about their conceptions of the world, depending on how they learned the reality around them. And of course we had a lot of debates of every kind. That's the only real way of learning what the reality around you is: to share your subjective world with others' subjective perceptions of the world, to build an objective image that obviously changes as we speak.

I would like to express my opinion on certain statements, but I don't think it would help in understanding the key question that was asked, so I'm going to skip that and dedicate my point of view to the question of this thread:

“Biocyberethics: should we stop a company from unplugging an intelligent computer? “

Yes, in my opinion we should, because we are human. Nothing would be more human to do. To make it even simpler, why should we pull the plug at all? Why should someone be entitled to deprive someone or something of the ability to do whatever it is doing, if it is causing no damage in doing so? The only reason brought up here is that the company would like to take apart the existing intelligent program in order to use its parts and reassemble it later, perhaps adding something else and omitting some original parts, into something new, not necessarily better or worse. They have every right to do that if they own the machine, but not this machine.

I'm not going to discuss things that could open up new debates, as has happened earlier in this thread with the "consciousness-detector" term, etc.

I would only like to swap this intelligent BINA48 for a chimp who was taught to speak and behave like humans do. Do they have the right to terminate him? They own him, and of course they paid a certain amount to teach him all that stuff.

You’ll say that it’s totally different if I change the machine for the animal. That was my intention. Both of them have intelligence. Both of them have communication skills. Both of them were bought by this company. Both of them can learn.

It's wrong to assume that the world is only as we conceive it. If we had done that in the past, computers wouldn't exist nowadays. And that's also the point I would like to make.

This debate is a self-test of our human conception of progress. We prepare ourselves for the obvious. This will happen, and the question is only how we'll react. Eventually it doesn't even matter how we'll react, because the progress will go on by itself, and it is only a question of whether we will be a part of the progress or a victim of it. I certainly bet on the former.

:-)
Formica


Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 05/22/2004 8:32 PM by ZeusAlmighty


The reason I believe computers should not be given human-like rights is that they have storage capacity and we have the ability to transfer the data held within that storage. This means that a computer can go without power for a long time and be copied from place to place without any harm done; since it is not as fragile as a human being, it doesn't need the same rights. Sure, the data can be destroyed, but who cares, as long as there's a backup. Right?
-Ben

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 05/22/2004 8:38 PM by ZeusAlmighty


PS - To clarify, I don't believe that something should be protected if it can be equivalently reproduced. We can reproduce the physical part of a person, and perhaps many of that person's attributes, through cloning, but we can't exactly reproduce any interaction with nature. A computer, however, would have the ability (given a very large storage space) to copy over all the data from every encounter it had ever had onto another hard drive and pick up where it had left off.
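
A minimal sketch of this "copy all the data and pick up where it left off" idea, in Python. The Agent class and its fields are invented purely for illustration; nothing here is taken from the posts above.

import pickle

class Agent:
    def __init__(self):
        self.encounters = []          # everything the agent has experienced so far

    def experience(self, event):
        self.encounters.append(event)

original = Agent()
original.experience("met Alice")
original.experience("read a book")

# "Copy onto another hard drive": dump the agent's full state to a file...
with open("agent_backup.pkl", "wb") as f:
    pickle.dump(original, f)

# ...and later, possibly on different hardware, restore it and carry on.
with open("agent_backup.pkl", "rb") as f:
    copy = pickle.load(f)

copy.experience("woke up after the copy")
print(copy.encounters)   # ['met Alice', 'read a book', 'woke up after the copy']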

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 11/28/2004 10:55 PM by czarstar


I go to sleep at night. I wake up in the morning.
I remember most of what happened yesterday if I don't get any brain trauma during the night.

I turn my computer on. I do stuff on it. I save the stuff I want. I turn it off. When I turn it back on the stuff I saved is still there.

Now I lay me down to sleep
I pray the Lord my soul to keep
If I should die before I wake
I pray the Lord my soul to take

Program that into the computer. WE ARE ITS LORD!

I am posting this twice because I put it on the wrong board. I'm a stupid human.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 11/21/2005 11:36 AM by alexneshmonin


We are, or were until today, the most intelligent life on planet Earth among other biological life (plants and animals). Is this enough of a reason for humans to think that they are superior to all other life? Is there, or can there be, something that's "smarter" than we are?

I am going to argue that a computer should have every right that any human has. To do so, I'll try to convince the reader that a computer such as BINA48 should be considered a living individual. It shouldn't be considered any less than a human being.

If you carefully try to find the definition of life, you'll see that there isn't one. Bennett and Shostak argue in their book "Life in the Universe" that there is no *precise* definition of life. (I will still use words like "alive", "nonliving" and others in the way we use them on a daily basis, but without giving a precise definition.) They also do a very good job of going through the process by which people believe life arose on Earth, and note that even if it didn't take that exact path, it still could have. They show that life arose from nonliving molecules, and that chemical reactions alone were (and still are) the cause of cells' division and survival. By going through the exact process of how the first cells were created, and the fact that lots of experiments support the idea, they show that the creation of "living" cells is very simple and natural. Since the process is based entirely on chemical reactions, this shows that cells are just mechanisms, similar to those of a computer, which are explained by simple laws of physics and chemistry. We, and every other animal on Earth, are made up of a bunch of these cells put together in some specific structure encoded in our DNA.

Please bear with me for a second, because I do have a point here I want to make.

Let's take a look at a man-made machine now. Every last component that the machine is made out of was made by a human; therefore we can completely understand how it works and why it acts the way it does, and this is why we call it a "machine". Now let's go back to what we are made out of: cells. These cells turn out to be similar, nature-made components of humans, but we still think of ourselves as being alive. In other words, we work the same way as what we call machines, but we won't accept computers like BINA48 as being alive. Why? Probably because we are scared to think of what could happen if we did consider computers alive. But actually we function in a similar way to our computers.

Humans might argue that we have a soul and that we are conscious, without providing a definition for either of these two terms. The truth is that when we think of a soul and consciousness, we think of something "alive", but since it is probably impossible to have a precise definition of what it is to be alive, the definitions of soul and consciousness break down as well. We all know that computers simply execute instructions using their processing units, which makes them seem "alive", but other than that they are just pieces of silicon, metal wiring and other simple parts. This eliminates the idea of computers having a soul or being conscious. On the other hand, we humans are also made of similarly simple parts (cells), as described above, which follow some specific order of "life" encoded in their DNA. Let me remind you that these cells don't have any spirit or "soul"; they are rather simple mechanisms that either break down and "die" or continue their process, eventually dividing in two by simple chemical reactions. I just want the reader to understand that cells are not "alive" as we usually think of the word. We too make calculations, which are very simple, in our brain (as currently believed by most neural network specialists), and the further we go, the more we understand how our brain works. Neural network scientists believe that an algorithm used in our brain to learn new things has already been discovered (the back-propagation algorithm). Given all this simplicity put together, we get something as complex as us. Now that we know we are made out of such simple parts (even though it took just over 4 billion years for these simple things to be put in the correct order to create something as complex as us from the earliest life on Earth), sadly, I can't see myself being more alive than a computer that has the ability to defend its own existence (BINA48).
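
For readers unfamiliar with the back-propagation algorithm mentioned above, here is a minimal sketch in Python: a tiny two-layer network learning XOR by propagating errors backwards and adjusting its weights. The network size, learning rate and iteration count are illustrative choices only, not anything claimed in this thread.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)               # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(20000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # backward pass: propagate the output error back through the layers
    out_delta = (out - y) * out * (1 - out)
    h_delta = (out_delta @ W2.T) * h * (1 - h)

    # gradient-descent weight updates
    W2 -= lr * h.T @ out_delta
    b2 -= lr * out_delta.sum(axis=0)
    W1 -= lr * X.T @ h_delta
    b1 -= lr * h_delta.sum(axis=0)

print(out.round(2))   # should approach [[0], [1], [1], [0]] after training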

Indeed, machines are getting faster and smarter at a double-exponential rate, and they will soon outsmart humans. This doesn't have to be a bad thing, though. To lots of people this might sound like the threat of a war, like the one shown in the Hollywood movie "The Matrix". My opinion is that having smarter machines should give us good suggestions on how to prevent wars. In any case, when a machine reaches human intelligence, it must have the same rights as a person has and be liable for its actions the same way a human is. They shouldn't be human aides, but rather have their own jobs, making money and being able to do something with their lives, as shown in "Bicentennial Man".

We have just seen that humans are not any more special than any other machine. Even though we are still much more complicated than the machines we encounter every day, computers are getting smarter and faster day by day. Soon enough (20 years from now, as Kurzweil predicts) computers are going to hit the human intelligence level, so why would we have more rights than they do?

This question will need to be addressed sooner or later, since Kurzweil (being the most accurate predictor of the future) predicts that in 20 years we'll see machines reaching the human intellectual level. And it won't stop there, because by 2099 he predicts machines will become 1000 times as smart as a human.

“You don’t need to be much smarter than a human to realize that you should have your own rights.”

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 11/17/2006 1:57 AM by ZeusAlmighty


I was once given this analogy:
Pretend you cannot comprehend any Chinese (for most of us, that shouldn't be too hard; for those of you who do know Chinese, pick another language, like Swedish or something, and substitute where applicable). Now suppose you are locked in a huge Chinese library. There are two slots in the wall, one labeled (in English) 'Input', and the other labeled (also in English) 'Output'. Now suppose the Chinese people who locked you in the huge Chinese library want you to find information in the books and slip it through the 'Output' slot. They know you speak no Chinese and have no idea what the books say, so they simply slip you pieces of paper with English instructions on them that tell you which book they want you to locate and which portion of the book they want you to copy and feed through the output. No matter how complicated the Chinese gets, as long as they give you the exact location of the book and the exact part of the book to copy, you would be able to trace the letters and slip the paper through the output slot. If the Chinese people did it right, they could give you input that would have you spit out Chinese proverbs or Chinese poetry, anything they wanted.

It would appear to people watching this charade that the Chinese people were feeding input into the library, and the library was spitting out Chinese answers. People would begin to wonder: does the person inside actually speak Chinese? But of course, you don't. You just follow the instructions and feed the output back. You would never comprehend what you were writing down, and so it is with a computer. We feed the computer input, and then it spits out output, based on the architecture of its hardware. It never actually 'understands' what it is spitting out; it is just XORing or ANDing or NOTing 0's and 1's. The output is meaningless without context, which is where the human beings come in. Computers will never be 'smart' like a human is, because they don't have the ability to comprehend the outputs they are spitting out. So of course they shouldn't ever be given human-like rights.
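
As a toy illustration of the "library" in this analogy, here is a tiny Python sketch of a program that produces fluent-looking Chinese answers purely by table lookup, with no understanding of what the symbols mean. The phrase table is made up for illustration and is not from the discussion above.

# The "rule book": exact input slips mapped to exact output slips.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I am fine, thanks."
    "今天天气怎么样？": "今天天气很好。",    # "How is the weather?" -> "The weather is nice today."
}

def chinese_room(input_slip):
    # The "person in the library" just matches symbols against instructions;
    # nothing in this function comprehends Chinese (or anything else).
    return RULE_BOOK.get(input_slip, "对不起，我不明白。")   # default: "Sorry, I don't understand."

print(chinese_room("你好吗？"))   # prints a fluent answer without any understanding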

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 03/10/2008 3:04 PM by chudabega


The problem I see with this analogy is that the input I am receiving is not the Chinese books but the English instructions; therefore, since I know English, I am able to comprehend the input and make decisions about what to do with it before I respond or give my output. Therefore this analogy only tells me that if a computer were placed in this same situation (without any external information), I could expect it to do a much more efficient job than me, with the same level of comprehension (essentially none at all).

The way that humans comprehend new ideas is to draw on past experiences and on new knowledge they acquire given the current circumstances. If a computer's software is programmed to do those things, it will be no different from us in terms of comprehension. But to program a computer's software to do so would involve creating an algorithm that is able to generate new algorithms that define how the computer should react to new circumstances or ideas. If that's possible, then we will have created machine consciousness, although it probably won't work the same way as human consciousness. But more than likely the way that each human thinks consciously is different from the next, and probably so too would different machines.
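
A very rough sketch of the "algorithm that generates new algorithms" idea, in Python: an agent that, when it meets an unfamiliar situation, creates a new rule (a small function) for itself from feedback and reuses it later. The class, names and feedback scheme are invented purely for illustration; this is not a claim about how machine (or human) comprehension actually works.

class RuleLearner:
    def __init__(self):
        self.rules = {}   # situation -> learned reaction (a small function)

    def react(self, situation, feedback=None):
        if situation not in self.rules and feedback is not None:
            # "Generate a new algorithm": turn the feedback into a reusable rule.
            self.rules[situation] = lambda s, answer=feedback: answer
        rule = self.rules.get(situation)
        return rule(situation) if rule else "no rule yet - need more experience"

agent = RuleLearner()
print(agent.react("greeting"))                    # no rule yet - need more experience
print(agent.react("greeting", feedback="hello"))  # learns a new rule and answers: hello
print(agent.react("greeting"))                    # reuses the learned rule: hello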

“To clarify, I don't believe that something should be protected if it can be equivalently reproduced…A computer, however, would have the ability (given a very large storage space) to copy over all the data from every encounter it had ever had onto another hard drive and pick up where it had left off.” – ZeusAlmighty


As ZeusAlmighty stated, it would be possible to replicate the computer's mind (software) up to a certain point in time, and this can be further extended to its body (hardware). But this is akin to the ideas from the movie “The 6th Day”, where they were not only able to clone humans but to make copies of their memories up to and including the point at which the brain scan took place. Does that mean that if we were to have clones in the future which have our same software (mind) and hardware (body), they (i.e. you, since they are essentially exactly the same as you in every way) won't be protected under human rights?

Some food for thought: there are many issues surrounding cloning and creating artificial intelligence which are quite similar, but there would definitely be a lot of benefits. To give a drastic example, if you were to die today and leave your family without your support, I am sure some of you reading this would want your clone to take your place if it had all your past experiences and knowledge, as well as your body in the same condition as at the point at which you died. Would you not want your clone in this situation to have the same human rights as you, even though it would think and feel exactly the same as you would?

This same example can be extended to computers, and it is probably not as controversial because we replicate them all the time. If computers gain a form of consciousness and decide to replicate themselves, I wonder whether their duplicates would have more rights than human clones (provided they have any rights at all).

“It is quite logical to assume that any sentient being that was aware of a threat to its existence would take measures to protect itself, provided that the being had a sense of self-worth AND that the measures it would take did not conflict with any other value system endowed to the being.” – t1dude


Most likely computers would require many rights different from those which are currently expected to be respected for humans. We don't currently have any experience in such matters, but most definitely sometime in the future the rights of humans will change, and quite possibly the rights of conscious computers as well. As many movies, such as “The Animatrix”, have outlined, if conscious computers were to exist and they weren't provided with the rights they felt they required, then they might fight to obtain them.

This whole discussion reminds me a lot of the movie “Planet of the Apes”, where George Taylor wasn't allowed to have any of the rights that apes had because he belonged to an inferior species. Anything he said or did was believed to have been directly taught to him, and it was assumed that he wasn't able to actually comprehend ideas. There is currently no known limit to what technology will be capable of, and giving examples of how things work now won't make future technology completely impossible. Humans have been able to turn many things from past science fiction into reality, and it's possible that computer consciousness could be one of those things.

Re: Biocyberethics: should we stop a company from unplugging an intelligent computer?
posted on 03/10/2008 9:35 PM by eldras


I don't have a problem with body/mind dichotomies, as they both move, and then you're talking about dimensions, which is a branch of maths - that's cool.


I reiterate what's been said already & especially in Bob Ettinger's 'The Prospect of Immortality' Chapter 8 (google):



Identity means legal identity and that holds good for corporations, men or machines.

E.g., with abortion, it's OK to kill a dependent being that's not quite a baby without violating its legal rights.

Machines aren't going to reach human-level intelligence and then just stop: they will accelerate their intelligence capacities as much as they can.


They will be MUCH smarter and therefore will demand and get many more rights...

but let me ask you something... how are you going to enforce the execution (plug pulling) of something that was as smart as you, and one second later is smarter than your entire species, and 2 seconds later is so smart I can't quantify it?


I don't think there will be competition between smart accelerating intelligences and men, as Hugo does, because of the speed they will grow at.

There is a chance they may pulverize the galaxy or cosmos for their own use as an energy source, but ways to avoid that are already being planned in groups.

Vernor's advice is probably best:

IA (intelligence amplification, i.e., weak AI) is the way to go... and that is what is happening now, with only a few exceptions.



The recent www.AGI-08.org conference, although it had 100 attendees, had only half a dozen seriously committed advanced projects to build general intelligence, most of which thought they would take decades to deliver (check SL4 for this).

MY CONCLUSION:

it is very optimistic to think you're going to be able to outsmart something more intelligent than you are by beating it to the off switch!

You may well shut it down 100 times in a row.

But each time you start it up again, it may have mutated enough to outthink that.

I caveat with some experience.