===Resolving conflicts among the laws===
Advanced robots in fiction are typically programmed to handle the Three Laws in a sophisticated manner. In a number of stories, such as "[[Runaround (story)|Runaround]]" by Asimov, the potential and severity of all actions are weighed and a robot will break the laws as little as possible rather than do nothing at all. For example, the First Law may forbid a robot from functioning as a surgeon, as that act may cause harm to a human; however, Asimov's stories eventually included robot surgeons ("The Bicentennial Man" being a notable example). When robots are sophisticated enough to weigh alternatives, a robot may be programmed to accept the necessity of inflicting damage during surgery in order to prevent the greater harm that would result if the surgery were not carried out, or were carried out by a more fallible human surgeon. In "[[Evidence (Asimov)|Evidence]]", Susan Calvin points out that a robot may even act as a [[Attorney at law (United States)|prosecuting attorney]], because in the American justice system it is the [[jury]] which decides guilt or innocence, the judge who decides the sentence, and the [[executioner]] who carries out [[capital punishment]].<ref name=autogenerated2>{{cite book |title=I, Robot |url=http://nullfile.com/ebooks/%28ebook%29%20Asimov,%20Isaac%20-%20I,%20Robot.pdf |author=Isaac Asimov |access-date=11 November 2010 |page=122 |format=Asimov, Isaac - I, Robot.pdf}}</ref>

Asimov's Three Laws-obeying robots (Asenion robots) can experience irreversible mental collapse if they are forced into situations where they cannot obey the First Law, or if they discover they have unknowingly violated it. The first example of this [[failure mode]] occurs in the story "[[Liar! (short story)|Liar!]]", which introduced the First Law itself and presents failure by dilemma—in this case the robot will hurt humans if it tells them something and hurt them if it does not.<ref name=autogenerated1>{{cite book|title=I, Robot |url=http://nullfile.com/ebooks/%28ebook%29%20Asimov,%20Isaac%20-%20I,%20Robot.pdf|author=Isaac Asimov|access-date=11 November 2010 |page=75 |format=Asimov, Isaac - I, Robot.pdf}}</ref> This failure mode, which often ruins the positronic brain beyond repair, plays a significant role in Asimov's SF-mystery novel ''[[The Naked Sun]]''. Here Daneel describes activities contrary to one of the laws, but in support of another, as overloading some circuits in a robot's brain—the sensation equivalent to pain in humans. The example he uses is forcefully ordering a robot to let a human do a task which, on Solaria with its extreme specialization, is the robot's only purpose.<ref name="TNSPain">{{cite book |last=Asimov|first=Isaac|title=The Naked Sun (ebook) |year=1956–1957 |page=56 |quote=Are you trying to tell me, Daneel, that it hurts the robot to have me do its work? ... experience which the robot undergoes is as upsetting to it as pain is to a human}}</ref>

In ''[[The Robots of Dawn]]'', it is stated that more advanced robots are built capable of determining which action is more harmful, and even of choosing at random if the alternatives are equally bad. Such a robot can therefore take an action that can be interpreted as following the First Law, thus avoiding a mental collapse. The whole plot of the novel revolves around a robot which was apparently destroyed by such a mental collapse; since its designer and creator refused to share the basic theory with others, he is, by definition, the only person capable of circumventing the safeguards and forcing the robot into a brain-destroying paradox. In ''[[Robots and Empire]]'', Daneel states that it is very unpleasant for him when making the proper decision takes too long (in robot terms), and that he cannot imagine being without the Laws at all, except by likening it to that unpleasant sensation made permanent.