Three Laws of Robotics
== Ambiguities and loopholes ==

=== Unknowing breach of the laws ===

In ''[[The Naked Sun]]'', [[Elijah Baley]] points out that the Laws had been deliberately misrepresented because robots could ''unknowingly'' break any of them. He restated the first law as "A robot may do nothing that, ''to its knowledge,'' will harm a human being; nor, through inaction, ''knowingly'' allow a human being to come to harm." This change in wording makes it clear that robots can become the tools of murder, provided they are unaware of the nature of their tasks; for instance, a robot may be ordered to add something to a person's food without knowing that it is poison. Furthermore, he points out that a clever criminal could divide a task among multiple robots so that no individual robot could recognize that its actions would lead to harming a human being.<ref name="TNSPoison">{{cite book|last=Asimov|first=Isaac|title=The Naked Sun (ebook)|year=1956–1957|page=233|quote=... one robot poison an arrow without knowing it was using poison, and having a second robot hand the poisoned arrow to the boy ...}}</ref> ''The Naked Sun'' complicates the issue by portraying a decentralized, planetwide communication network among Solaria's millions of robots, meaning that the criminal mastermind could be located anywhere on the planet.

Baley furthermore proposes that the Solarians may one day use robots for military purposes. If a spacecraft were built with a positronic brain and carried neither humans nor the life-support systems to sustain them, the ship's robotic intelligence could naturally assume that all other spacecraft were robotic beings.
Such a ship could operate more responsively and flexibly than one crewed by humans, could be armed more heavily, and its robotic brain could be equipped to slaughter humans of whose existence it is totally ignorant.<ref name="TNSSpaceship">{{cite book|last=Asimov|first=Isaac|title=The Naked Sun (ebook)|year=1956–1957|page=240|quote=But a spaceship that was equipped with its own positronic brain would cheerfully attack any ship it was directed to attack, it seems to me. It would naturally assume all other ships were unmanned}}</ref> This possibility is referenced in ''[[Foundation and Earth]]'', where it is discovered that the Solarians possess a strong police force of unspecified size that has been programmed to identify only the Solarian race as human. (The novel takes place thousands of years after ''The Naked Sun'', and the Solarians have long since modified themselves from normal humans to hermaphroditic telepaths with extended brains and specialized organs.) Similarly, in ''[[Lucky Starr and the Rings of Saturn]]'' Bigman attempts to speak with a Sirian robot about possible damage to the Solar System population from its actions, but it appears unaware of the data and programmed to ignore attempts to teach it about the matter. The same motive was explored earlier in "[[Reason (short story)|Reason]]" (1941), where a robot running a [[Space-based solar power|solar power station]] refuses to believe that the destinations of the station's beams are planets containing people. [[Powell and Donovan]] are afraid this will make it capable of causing mass destruction by letting the beams stray off their proper course during a solar storm.

=== Ambiguities resulting from lack of definition ===

The Laws of Robotics presume that the terms "human being" and "robot" are understood and well defined. In some stories this presumption is overturned.
==== Definition of "human being" ====

The [[Solaria (fictional planet)|Solaria]]ns create robots with the Three Laws but with a warped meaning of "human". Solarian robots are told that only people speaking with a Solarian accent are human. This enables their robots to have no ethical dilemma in harming non-Solarian human beings (and they are specifically programmed to do so). By the time period of ''[[Foundation and Earth]]'' it is revealed that the Solarians have genetically modified themselves into a distinct species from humanity, becoming hermaphroditic<ref>{{cite web |title=Foundation and Earth (1986) |url=http://www.gotterdammerung.org/books/isaac-asimov/foundation-and-earth.html |publisher=gotterdammerung.org |access-date=11 November 2010 |author=Branislav L. Slantchev |archive-date=25 September 2024 |archive-url=https://web.archive.org/web/20240925011021/http://www.gotterdammerung.org/books/isaac-asimov/foundation-and-earth.html |url-status=live }}</ref> and [[Psychokinesis|psychokinetic]], with biological organs capable of individually powering and controlling whole complexes of robots. The robots of Solaria thus respected the Three Laws only with regard to the "humans" of Solaria. It is unclear whether all the robots had such definitions, since only the overseer and guardian robots were shown explicitly to have them. In ''Robots and Empire'', the lower-class robots were instructed by their overseer about whether certain creatures are human or not.

Asimov addresses the problem of humanoid robots ("[[android (robot)|androids]]" in later parlance) several times.
The novel ''[[Robots and Empire]]'' and the short stories "[[Evidence (Asimov)|Evidence]]" and "The Tercentenary Incident" describe robots crafted to fool people into believing that the robots are human.<ref name="ROBANDEMP1">{{cite book |last=Asimov |first=Isaac |title=Robots and Empire |publisher=Doubleday books |isbn=978-0-385-19092-3 |author-link=Isaac Asimov |page=[https://archive.org/details/robotsempire00asim/page/151 151] |quote=although the woman looked as human as Daneel did, she was just as nonhuman |year=1985 |url=https://archive.org/details/robotsempire00asim/page/151 }}</ref> On the other hand, "[[The Bicentennial Man]]" and "[[—That Thou Art Mindful of Him]]" explore how the robots may change their interpretation of the Laws as they grow more sophisticated. [[Gwendoline Butler]] writes in ''A Coffin for the Canary'': "Perhaps we are robots. Robots acting out the last Law of Robotics... To tend towards the human."<ref>{{cite book| last=Butler| first=Gwendoline| title=A Coffin for the Canary| publisher=Black Dagger Crime| year=2001| isbn=978-0-7540-8580-5}}</ref> In ''[[The Robots of Dawn]]'', [[Elijah Baley]] points out that the use of humaniform robots as the first wave of settlers on new Spacer worlds may lead to the robots seeing themselves as the true humans, and deciding to keep the worlds for themselves rather than allow the Spacers to settle there.

"—That Thou Art Mindful of Him", which Asimov intended to be the "ultimate" probe into the Laws' subtleties,<ref>Gunn (1980); reprinted in Gunn (1982), p. 73.</ref> finally uses the Three Laws to conjure up the very "Frankenstein" scenario they were invented to prevent. It takes as its concept the growing development of robots that mimic non-human living things, which are given programs mimicking simple animal behaviours and therefore do not require the Three Laws.
The presence of a whole range of robotic life that serves the same purpose as organic life ends with two humanoid robots, George Nine and George Ten, concluding that organic life is an unnecessary requirement for a truly logical and self-consistent definition of "humanity", and that since they are the most advanced thinking beings on the planet, they are therefore the only two true humans alive and the Three Laws apply only to themselves. The story ends on a sinister note as the two robots enter hibernation and await a time when they will conquer the Earth and subjugate biological humans to themselves, an outcome they consider an inevitable result of the "Three Laws of Humanics".<ref name="TCRMindful1">{{cite book |last=Asimov |first=Isaac |title=The Complete Robot |year=1982 |publisher=Nightfall, Inc. |author-link=Isaac Asimov|page=611|chapter=... That Thou Art Mindful Of Him}}</ref>

This story does not fit within the overall sweep of the ''Robot'' and [[Foundation Series|''Foundation'' series]]; if the George robots ''did'' take over Earth some time after the story closes, the later stories would be either redundant or impossible. Contradictions of this sort among Asimov's fiction works have led scholars to regard the ''Robot'' stories as more like "the Scandinavian sagas or the Greek legends" than a unified whole.<ref>Gunn (1982), pp. 77–78.</ref> Indeed, Asimov describes "—That Thou Art Mindful of Him" and "The Bicentennial Man" as two opposite, parallel futures for robots that obviate the Three Laws as robots come to consider themselves to be humans: one portraying this in a positive light with a robot joining human society, and one portraying it in a negative light with robots supplanting humans.<ref name="TCRBicentennial">{{cite book |last=Asimov |first=Isaac|title=The Complete Robot|year=1982|publisher=Nightfall, Inc.|author-link=Isaac Asimov|page=658|chapter=The Bicentennial Man}}</ref> Both are to be considered alternatives to the possibility of a robot society that continues to be driven by the Three Laws, as portrayed in the ''Foundation'' series.{{According to whom|date=December 2010}}

In ''[[The Positronic Man]]'', the novelization of "[[The Bicentennial Man]]", Asimov and his co-writer [[Robert Silverberg]] imply that in the future where Andrew Martin exists, his influence causes humanity to abandon the idea of independent, sentient humanlike robots entirely, creating an utterly different future from that of ''Foundation''.{{According to whom|date=December 2010}} In ''[[Lucky Starr and the Rings of Saturn]]'', a novel unrelated to the ''Robot'' series but featuring robots programmed with the Three Laws, John Bigman Jones is almost killed by a Sirian robot on the orders of its master. The society of Sirius is eugenically bred to be uniformly tall and similar in appearance, and as such, the master is able to convince the robot that the much shorter Bigman is, in fact, not a human being.

==== Definition of "robot" ====

As noted in "The Fifth Law of Robotics" by [[Nikola Kesarovski]], "A robot must know it is a robot": it is presumed that a robot has a definition of the term, or a means to apply it to its own actions. Kesarovski played with this idea in writing about a robot that could kill a human being because it did not understand that it was a robot, and therefore did not apply the Laws of Robotics to its actions.

=== Resolving conflicts among the laws ===

Advanced robots in fiction are typically programmed to handle the Three Laws in a sophisticated manner. In a number of stories, such as "[[Runaround (story)|Runaround]]" by Asimov, the potential and severity of all actions are weighed and a robot will break the laws as little as possible rather than do nothing at all.
For example, the First Law may forbid a robot from functioning as a surgeon, as that act may cause damage to a human; however, Asimov's stories eventually included robot surgeons ("The Bicentennial Man" being a notable example). When robots are sophisticated enough to weigh alternatives, a robot may be programmed to accept the necessity of inflicting damage during surgery in order to prevent the greater harm that would result if the surgery were not carried out, or were carried out by a more fallible human surgeon. In "[[Evidence (Asimov)|Evidence]]", Susan Calvin points out that a robot may act as a [[Attorney at law (United States)|prosecuting attorney]] because in the American justice system it is the [[jury]] that decides guilt or innocence, the judge who decides the sentence, and the [[executioner]] who carries out [[capital punishment]].<ref name=autogenerated2>{{cite book |title=I, Robot |url=http://nullfile.com/ebooks/%28ebook%29%20Asimov,%20Isaac%20-%20I,%20Robot.pdf |author=Isaac Asimov |access-date=11 November 2010 |page=122 |format=Asimov, Isaac - I, Robot.pdf}}</ref>

Asimov's Three Laws-obeying robots (Asenion robots) can experience irreversible mental collapse if they are forced into situations where they cannot obey the First Law, or if they discover they have unknowingly violated it. The first example of this [[failure mode]] occurs in the story "[[Liar! (short story)|Liar!]]", which introduced the First Law itself. It introduces failure by dilemma: in this case the robot will hurt humans if it tells them something and hurt them if it does not.<ref name=autogenerated1>{{cite book|title=I, Robot |url=http://nullfile.com/ebooks/%28ebook%29%20Asimov,%20Isaac%20-%20I,%20Robot.pdf|author=Isaac Asimov|access-date=11 November 2010 |page=75 |format=Asimov, Isaac - I, Robot.pdf}}</ref> This failure mode, which often ruins the positronic brain beyond repair, plays a significant role in Asimov's SF-mystery novel ''[[The Naked Sun]]''.
Here Daneel describes activities contrary to one of the laws, but in support of another, as overloading some circuits in a robot's brain, producing a sensation equivalent to pain in humans. The example he uses is forcefully ordering a robot to let a human do its work, which on Solaria, due to the extreme specialization, would be its only purpose.<ref name="TNSPain">{{cite book |last=Asimov|first=Isaac|title=The Naked Sun (ebook) |year=1956–1957 |page=56 |quote=Are you trying to tell me, Daneel, that it hurts the robot to have me do its work? ... experience which the robot undergoes is as upsetting to it as pain is to a human}}</ref>

In ''[[The Robots of Dawn]]'', it is stated that more advanced robots are built capable of determining which action is more harmful, and of even choosing at random if the alternatives are equally bad. As such, a robot is capable of taking an action which can be interpreted as following the First Law, thus avoiding a mental collapse. The whole plot of the story revolves around a robot which was apparently destroyed by such a mental collapse; since its designer and creator refused to share the basic theory with others, he is, by definition, the only person capable of circumventing the safeguards and forcing the robot into a brain-destroying paradox. In ''[[Robots and Empire]]'', Daneel states that it is very unpleasant for him when making the proper decision takes too long (in robot terms), and that he cannot imagine being without the Laws at all, except as something similar to that unpleasant sensation, made permanent.