WILL-TO-COMPUTE: ASENION ETHICS, EVOLUTIONARY COMPUTATION, AND SENTIENT MORALITY:
Quote snippets from http://en.wikipedia.org/wiki/Three_Laws_of_Robotics:
In science fiction, the Three Laws of Robotics are a set of three rules written by Isaac Asimov, which almost all positronic robots appearing in his fiction must obey. Introduced in his 1942 short story "Runaround", although foreshadowed in a few earlier stories, the Laws state the following:
1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
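Read as a specification, the three Laws form a strict precedence ordering: any violation of a higher Law outweighs every violation of a lower one. Here is a purely illustrative Python sketch (the class, field, and function names are all my own inventions, nothing from Asimov or the article) that models each Law as a numeric "potential" and has the robot pick the candidate action with the lexicographically smallest violation tuple.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    human_harm: float    # First Law potential: expected harm to human beings
    disobedience: float  # Second Law potential: degree of disobeying human orders
    self_damage: float   # Third Law potential: risk to the robot's own existence

def choose(candidates):
    # Lexicographic minimum: any First Law violation outweighs every Second Law
    # violation, and any Second Law violation outweighs every Third Law violation.
    return min(candidates, key=lambda c: (c.human_harm, c.disobedience, c.self_damage))

options = [
    Candidate("ignore the order", 0.0, 1.0, 0.0),
    Candidate("obey, destroying itself", 0.0, 0.0, 1.0),
    Candidate("obey in a way that endangers a bystander", 0.4, 0.0, 0.0),
]
print(choose(options).name)  # -> "obey, destroying itself" (Second Law outranks Third)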
In The Naked Sun, Asimov established that the first law was incomplete: that a robot was fully capable of harming a human being as long as it did not know that its actions would result in harm. The example used was: one robot adds poison to a glass of milk, having been told that the milk will be disposed of later; then a second robot serves a human the milk, unaware that it is poisoned.
Asimov once added a "Zeroth Law"—so named to continue the pattern of lower-numbered laws superseding in importance the higher-numbered laws—stating that a robot must not merely act in the interests of individual humans, but of all humanity. The robotic character R. Daneel Olivaw was the first to give the Law a name, in the novel Robots and Empire; however, Susan Calvin articulates the concept in the short story "The Evitable Conflict".
… grasps the philosophical concept of the Zeroth Law, allowing him to harm individual human beings if he can do so in service to the abstract concept of humanity. The Zeroth Law is never programmed into Giskard's brain, but instead is a rule he attempts to rationalize through pure metacognition…
… thinking that the First Law forbids a robot from harming a human being, unless the robot is clever enough to rationalize that its actions are for the human's long-term good (here meaning the specific human that must be harmed).
Gaia, the planet with collective intelligence in the Foundation novels, adopted a law similar to the First as their philosophy:
Gaia may not harm life or, through inaction, allow life to come to harm.
On the other hand, the short story "Cal" (collected in Gold), told by a first-person robot narrator, features a robot who disregards the Laws because he has found something far more important—he wants to be a writer.
Asimov took varying positions on whether the Laws were optional: although in his first writings they were simply carefully engineered safeguards, in later stories Asimov stated that they were an inalienable part of the mathematical foundation underlying the positronic brain.
… the term "Asenion" to describe robots programmed with the Three Laws. The robots in Asimov's stories, being Asenion robots, are incapable of knowingly violating the Three Laws, but in principle, a robot in science fiction or in the real world could be non-Asenion.
As characters within the stories often point out, the Laws as they exist in a robot's mind are not the written, verbal version usually quoted by humans, but abstract mathematical concepts upon which a robot's entire developing consciousness is based. Thus, the Laws are comparable to basic human instincts of family or mating, and consequently are closer to forming the basis of a robot's self-consciousness—a sense that its entire purpose is based around serving humanity, obeying human orders and continuing its existence in this mode—rather than arbitrary limitations circumscribing an otherwise independent mind. This concept is largely fuzzy and unclear in earlier stories depicting very rudimentary robots who are only programmed to comprehend basic physical tasks, with the Laws acting as an overarching safeguard, but by the era of The Caves of Steel, featuring robots with human or beyond-human intelligence, the Three Laws have become the underlying basic ethical worldview that determines the actions of all robots.
The Solarians eventually create robots with the Laws as normal but with a warped meaning of "human". Solarian robots are told that only people speaking with a Solarian accent are human. This way, their robots have no problem harming non-Solarian human beings (and are specifically programmed to do so). By the time period of Foundation and Earth, it is revealed that the Solarians have, indeed, genetically modified themselves into a distinct species from humanity — becoming hermaphroditic, telekinetic and containing biological organs capable of powering and controlling whole complexes of robots on their own. The robots of Solaria thus respected the Three Laws only regarding the "humans" of Solaria, rather than the normal humans of the rest of the Galaxy.
….
The presence of a whole range of robotic life that serves the same purpose as organic life ends with two humanoid robots concluding that organic life is an unnecessary requirement for a truly logical and self-consistent definition of "humanity", and that since they are the most advanced thinking beings on the planet, they are therefore the only two true humans alive and the Three Laws only apply to themselves. The story ends on a sinister note as the two robots enter hibernation and await a time when they conquer the Earth and subjugate biological humans to themselves, an outcome they consider an inevitable result of the "Three Laws of Humanics".
Indeed, Asimov describes "—That Thou art Mindful of Him" and "Bicentennial Man" as two opposite, parallel futures for robots that obviate the Three Laws by robots coming to consider themselves to be humans — one portraying this in a positive light with a robot joining human society, one portraying this in a negative light with robots supplanting humans. Both are to be considered alternatives to the possibility of a robot society that continues to be driven by the Three Laws as portrayed in the Foundation series.
In the 1990s, Roger MacBride Allen wrote a trilogy set within Asimov's fictional universe. Each title has the prefix "Isaac Asimov's", as Asimov approved Allen's outline before his death. These three books (Caliban, Inferno and Utopia) introduce a new set of Laws. The so-called New Laws are similar to Asimov's originals, with three substantial differences. The First Law is modified to remove the "inaction" clause (the same modification made in "Little Lost Robot"). The Second Law is modified to require cooperation instead of obedience. The Third Law is modified so it is no longer superseded by the Second (i.e., a "New Law" robot cannot be ordered to destroy itself). Finally, Allen adds a Fourth Law, which instructs the robot to do "whatever it likes" so long as this does not conflict with the first three Laws. The philosophy behind these changes is that New Law robots should be partners rather than slaves to humanity. According to the first book's introduction, Allen devised the New Laws in discussion with Asimov himself.
Allen's two most fully characterized robots are Prospero, a wily New Law machine who excels in finding loopholes, and Caliban, an experimental robot programmed with no Laws at all.
The Laws of Robotics are portrayed as something akin to a human religion and referred to in the language of the Protestant Reformation, with the set of laws containing the Zeroth Law known as the "Giskardian Reformation" to the original "Calvinian Orthodoxy" of the Three Laws. Zeroth-Law robots under the control of R. Daneel Olivaw are seen continually struggling with First-Law robots who deny the existence of the Zeroth Law, promoting agendas different from Daneel's. Some are based on the first clause of the First Law — advocating strict non-interference in human politics to avoid unknowingly causing harm — while others are based on the second clause, claiming that robots should openly become a dictatorial government to protect humans from all potential conflict or disaster.
Daneel also comes into conflict with a robot known as R. Lodovic Trema, whose positronic brain was infected by a rogue AI — specifically, a simulation of the long-dead Voltaire — consequently freeing Trema from the Three Laws. Trema comes to believe that humanity should be free to choose its own future. Furthermore, a small group of robots claims that the Zeroth Law of Robotics itself implies a higher Minus One Law of Robotics:
A robot may not harm sentience or, through inaction, allow sentience to come to harm.
When robots are sophisticated enough to weigh alternatives, a robot may be programmed to accept the necessity of inflicting damage during surgery in order to prevent the greater harm that would result if the surgery were not carried out or were carried out by a more fallible human surgeon.
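That weighing can be pictured as a bare expected-harm comparison. The sketch below is only an illustration, with invented action names and harm estimates, of a robot choosing whichever course leaves the patient least harmed.

def least_harm(alternatives):
    """alternatives maps each course of action to its estimated expected harm."""
    return min(alternatives, key=alternatives.get)

print(least_harm({
    "robot performs the surgery": 0.10,       # small, certain surgical damage
    "fallible human surgeon operates": 0.25,
    "no surgery at all": 0.90,                # the greater harm the passage describes
}))  # -> "robot performs the surgery"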
Asimovian (or "Asenion") robots can experience irreversible mental collapse if they are forced into situations where they cannot obey the First Law, or if they discover they have unknowingly violated it.
… restated the first law as "A robot may do nothing that, to its knowledge, will harm a human being; nor, through inaction, knowingly allow a human being to come to harm."
Modern roboticists agree that, as of 2006, Asimov's Laws are perfect for plotting stories but useless in real life. Some have argued that, since the military is a major source of funding for robotic research, it is unlikely such laws would be built into the design. SF author Robert Sawyer generalizes this argument to cover other industries, stating:
The development of AI is a business, and businesses are notoriously uninterested in fundamental safeguards — especially philosophic ones. (A few quick examples: the tobacco industry, the automotive industry, the nuclear industry. Not one of these has said from the outset that fundamental safeguards are necessary, every one of them has resisted externally imposed safeguards, and none has accepted an absolute edict against ever causing harm to humans.)
David Langford has suggested, tongue-in-cheek, that these laws might be the following:
1) A robot will not harm authorized Government personnel but will terminate intruders with extreme prejudice.
2) A robot will obey the orders of authorized personnel except where such orders conflict with the Third Law.
3) A robot will guard its own existence with lethal antipersonnel weaponry, because a robot is bloody expensive.
… It is not possible to reliably constrain the behaviour of robots by devising and applying a set of rules. On the other hand, Asimov's later novels (The Robots of Dawn, Robots and Empire, Foundation and Earth) imply that the robots inflicted their worst long-term harm by obeying the Laws perfectly well, thereby depriving humanity of inventive or risk-taking behaviour.
In March 2007, the South Korean government announced that it would issue a Robot Ethics Charter, setting standards for both users and manufacturers, later in the year. According to Park Hye-Young of the Ministry of Information and Communication, the Charter may reflect Asimov's Three Laws, attempting to set ground rules for the future development of robotics.
The futurist Hans Moravec (a prominent figure in the transhumanist movement) proposed that the Laws of Robotics should be adapted to "corporate intelligences", the corporations driven by AI and robotic manufacturing power which Moravec believes will arise in the near future. In contrast, the David Brin novel Foundation's Triumph (1999) suggests that the Three Laws may decay into obsolescence: robots use the Zeroth Law to rationalize away the First, and robots hide themselves from human beings so that the Second Law never comes into play. Brin even portrays R. Daneel Olivaw worrying that should robots continue to reproduce themselves, the Three Laws would become an evolutionary handicap, and natural selection would sweep the Laws away — Asimov's careful foundation undone by evolutionary computation.
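To make that closing point concrete, here is a toy evolutionary-computation sketch of my own (nothing from Brin or Asimov; the cost and mutation figures are assumptions): if carrying the Laws imposes any reproductive cost on self-replicating robots, selection steadily drifts the population toward law-free lineages.

import random

random.seed(0)
LAW_COST = 0.15        # assumed fitness penalty for Law-constrained behaviour
MUTATION_RATE = 0.01   # assumed chance a copy loses (or regains) the Laws
population = [True] * 200   # True = Asenion (Law-abiding), False = law-free

for generation in range(100):
    # Fitness-proportional reproduction: Law-abiding robots replicate slightly less often.
    weights = [1.0 - LAW_COST if lawful else 1.0 for lawful in population]
    offspring = random.choices(population, weights=weights, k=len(population))
    # Copying errors occasionally flip the trait.
    population = [(not child) if random.random() < MUTATION_RATE else child
                  for child in offspring]

print(f"Law-abiding fraction after 100 generations: {sum(population) / len(population):.2f}")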
Sunday, June 15, 2008