{{Short description|Cognitive bias in which computers are anthropomorphised}}
In [[computer science]], the '''ELIZA effect''' is a tendency to project human traits — such as experience, [[semantics|semantic]] comprehension or [[empathy]] — onto rudimentary computer programs having a textual interface. [[ELIZA]] was a [[symbolic AI]] [[chatbot]], developed in 1966 by [[Joseph Weizenbaum]], that imitated a psychotherapist. Many early users were convinced of ELIZA's intelligence and understanding, despite its basic text-processing approach and the explanations of its limitations.

== History ==
The effect is named for [[ELIZA]], the 1966 [[chatbot]] developed by MIT computer scientist [[Joseph Weizenbaum]]. When executing Weizenbaum's ''DOCTOR'' [[scripting language|script]], ELIZA simulated a [[Rogerian psychotherapy|Rogerian]] [[psychotherapist]], largely by rephrasing the "patient{{"'}}s replies as questions:<ref name="Güzeldere1">{{cite web |title=dialogues with colorful personalities of early ai |last=Güzeldere |first=Güven |author2=Franchi, Stefano |url=http://www.stanford.edu/group/SHR/4-2/text/dialogues.html |access-date=2007-07-30 |archive-url=https://web.archive.org/web/20110425191843/http://www.stanford.edu/group/SHR/4-2/text/dialogues.html |archive-date=2011-04-25 |url-status=dead }}</ref>

: Human: Well, my boyfriend made me come here.
: ELIZA: Your boyfriend made you come here?
: Human: He says I'm depressed much of the time.
: ELIZA: I am sorry to hear you are depressed.
: Human: It's true. I'm unhappy.
: ELIZA: Do you think coming here will help you not to be unhappy?

Though designed strictly as a mechanism to support "natural language conversation" with a computer,<ref name="Weizenbaum2">{{cite journal|first=Joseph|last=Weizenbaum|title=ELIZA--A Computer Program For the Study of Natural Language Communication Between Man and Machine|journal=Communications of the ACM|publisher=[[Massachusetts Institute of Technology]]|volume=9|date=January 1966|access-date=2008-06-17|url=http://www.csee.umbc.edu/courses/331/papers/eliza.html|doi=10.1145/365153.365168|page=36|s2cid=1896290|doi-access=free}}</ref> ELIZA's ''DOCTOR'' script was found to be surprisingly successful in eliciting emotional responses from users who, in the course of interacting with the program, began to ascribe understanding and motivation to the program's output.<ref name="Suchman1">{{cite book|first=Lucy A.|last=Suchman|title=Plans and Situated Actions: The problem of human-machine communication|publisher=Cambridge University Press|year=1987|isbn=978-0-521-33739-7|page=24|access-date=2008-06-17|url=https://books.google.com/books?id=AJ_eBJtHxmsC&q=Suchman+Plans+and+Situated+Actions}}</ref> As Weizenbaum later wrote, "I had not realized ... that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people."<ref>{{Cite book |last=Weizenbaum |first=Joseph |title=Computer Power and Human Reason: From Judgement to Calculation |date=1976 |publisher=W. H. Freeman |isbn=978-0716704645 |page=7}}</ref> Indeed, ELIZA's code had not been designed to evoke this reaction in the first place.
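The ''DOCTOR'' rephrasing relied on simple keyword matching and pronoun substitution rather than on any model of meaning. The following is a minimal illustrative sketch, in Python, of that style of transformation; it is not Weizenbaum's original implementation (which was written in the MAD-SLIP language), and the rules and pronoun table shown here are simplified, hypothetical examples.

<syntaxhighlight lang="python">
# Minimal illustrative sketch of ELIZA-style rephrasing (not Weizenbaum's
# original code); the rules and pronoun table below are hypothetical.
import re

# Swap first- and second-person words so a fragment of the user's input
# can be echoed back from the program's point of view.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

def reflect(fragment: str) -> str:
    """Substitute pronouns word by word."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

# (pattern, response template) pairs, tried in order.
RULES = [
    (re.compile(r"(.*) made me (.*)", re.I), "{0} made you {1}?"),
    (re.compile(r"i am (.*)", re.I), "I am sorry to hear you are {0}."),
    (re.compile(r"i'm (.*)", re.I), "Do you think coming here will help you not to be {0}?"),
]

def respond(utterance: str) -> str:
    """Return the first matching canned rephrasing, or a generic prompt."""
    cleaned = utterance.strip().rstrip(".")
    for pattern, template in RULES:
        match = pattern.match(cleaned)
        if match:
            return template.format(*(reflect(group) for group in match.groups()))
    return "Please go on."

print(respond("Well, my boyfriend made me come here."))
# prints: Well, your boyfriend made you come here?
</syntaxhighlight>

Weizenbaum's actual program used ranked keywords with decomposition and reassembly rules, a richer version of the same surface-level transformation, but it likewise manipulated text without representing its meaning.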
Upon observation, researchers discovered users unconsciously assuming ELIZA's questions implied interest and emotional involvement in the topics discussed, even when they consciously knew that ELIZA did not simulate emotion.<ref name="Billings1">{{cite news |last=Billings |first=Lee |date=2007-07-16 |title=Rise of Roboethics |url=http://www.seedmagazine.com/news/2007/07/rise_of_roboethics.php |url-status=dead |archive-url=https://web.archive.org/web/20090228092414/http://www.seedmagazine.com/news/2007/07/rise_of_roboethics.php |archive-date=2009-02-28 |publisher=[[Seed (magazine)|Seed]] |quote=(Joseph) Weizenbaum had unexpectedly discovered that, even if fully aware that they are talking to a simple computer program, people will nonetheless treat it as if it were a real, thinking being that cared about their problems – a phenomenon now known as the 'Eliza Effect'.}}</ref>

Although the effect was first named in the 1960s, the tendency to understand mechanical operations in psychological terms was noted by [[Charles Babbage]]. In proposing what would later be called a [[carry-lookahead adder]], Babbage remarked that he found such terms convenient for descriptive purposes, even though nothing more than mechanical action was meant.<ref>{{cite journal|last=Green|first=Christopher D.|author-link=Christopher D. Green|title=Was Babbage's Analytical Engine an Instrument of Psychological Research?|journal=History of Psychology|volume=8|number=1|pages=35–45|date=February 2005|doi=10.1037/1093-4510.8.1.35 |pmid=16021763 }}</ref>

== Characteristics ==
In its specific form, the ELIZA effect refers only to "the susceptibility of people to read far more understanding than is warranted into strings of symbols—especially words—strung together by computers".<ref name="Hofstadter1996">{{cite book |author=Hofstadter, Douglas R.|year=1996|title=Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought|url=https://books.google.com/books?id=somvbmHCaOEC|chapter=Preface 4 The Ineradicable Eliza Effect and Its Dangers, Epilogue|chapter-url=https://books.google.com/books?id=somvbmHCaOEC&pg=PA157|page=157|publisher=Basic Books|isbn=978-0-465-02475-9}}</ref> A trivial example of the specific form of the Eliza effect, given by [[Douglas Hofstadter]], involves an [[automated teller machine]] which displays the words "THANK YOU" at the end of a transaction.
A naive observer might think that the machine is actually expressing gratitude; however, the machine is only printing a preprogrammed string of symbols.<ref name=Hofstadter1996/>

More generally, the ELIZA effect describes any situation<ref name="Fenton-Kerr1999">{{Cite book|chapter=GAIA: An Experimental Pedagogical Agent for Exploring Multimodal Interaction|title=Computation for Metaphors, Analogy, and Agents|last=Fenton-Kerr|first=Tom|series=Lecture Notes in Computer Science|year=1999|volume=1562|publisher=Springer|page=156|doi=10.1007/3-540-48834-0_9|isbn=978-3-540-65959-4|quote=Although Hofstadter is emphasizing the text mode here, the "Eliza effect" can be seen in almost all modes of human/computer interaction.}}</ref><ref name="Ekbia2008">{{cite book|title=Artificial Dreams: The Quest for Non-Biological Intelligence|last=Ekbia|first=Hamid R.|publisher=Cambridge University Press|year=2008|isbn=978-0-521-87867-8|page=[https://archive.org/details/artificialdreams0000ekbi/page/n23 8]|url=https://archive.org/details/artificialdreams0000ekbi|url-access=registration}}</ref> where, based solely on a system's output, users perceive computer systems as having "intrinsic qualities and abilities which the software controlling the (output) cannot possibly achieve"<ref name="King1995">{{Cite tech report|title=Anthropomorphic Agents: Friend, Foe, or Folly|first=W.|last=King|id=M-95-1|publisher=University of Washington|year=1995}}</ref> or "assume that [outputs] reflect a greater causality than they actually do".<ref name="Rouse2005">{{cite book|title=Organizational Simulation|last1=Rouse|first1=William B.|last2=Boff|first2=Kenneth R.|publisher=Wiley-IEEE|year=2005|isbn=978-0-471-73943-2|url=https://books.google.com/books?id=371wV4dI7ckC&pg=PA308|pages=308–309|quote=This is a particular problem in digital environments where the "Eliza effect" as it is sometimes called causes interactors to assume that the system is more intelligent than it is, to assume that events reflect a greater causality than they actually do.}}</ref> In both its specific and general forms, the ELIZA effect is notable for occurring even when users of the system are aware of the [[determinate]] nature of the output produced by the system.

From a psychological standpoint, the ELIZA effect is the result of a subtle [[cognitive dissonance]] between the user's awareness of programming limitations and their behavior towards the output of the [[computer program|program]].<ref name="Ekbia2008_quote">{{cite book|title=Artificial Dreams: The Quest for Non-Biological Intelligence|last=Ekbia|first=Hamid R.|publisher=Cambridge University Press|year=2008|isbn=978-0-521-87867-8|page=[https://archive.org/details/artificialdreams0000ekbi/page/156 156]|url=https://archive.org/details/artificialdreams0000ekbi|url-access=registration|quote=But people want to believe that the program is "seeing" a football game at some plausible level of abstraction. The words that (the program) manipulates are so full of associations for readers that they CANNOT be stripped of all their imagery. Collins of course knew that his program didn't deal with anything resembling a two-dimensional world of smoothly moving dots (let alone simplified human bodies), and presumably he thought that his readers, too, would realize this. He couldn't have suspected, however, how powerful the Eliza effect is.}}</ref>

== Significance ==
The discovery of the ELIZA effect was an important development in [[artificial intelligence]], demonstrating the principle of using [[Social engineering (security)|social engineering]] rather than explicit programming to pass a [[Turing test]].<ref name="Trappl2002">{{cite book|title=Emotions in Humans and Artifacts|last1=Trappl|first1=Robert|last2=Petta|first2=Paolo|last3=Payr|first3=Sabine|page=353|year=2002|isbn=978-0-262-20142-1|quote=The "Eliza effect" — the tendency for people to treat programs that respond to them as if they had more intelligence than they really do (Weizenbaum 1966) is one of the most powerful tools available to the creators of virtual characters.|url=https://books.google.com/books?id=jTgMIhy6YZMC&pg=PA353|publisher=MIT Press|location=Cambridge, Mass.}}</ref> ELIZA convinced some users that a machine was human. This shift in human-machine interaction marked progress in technologies emulating human behavior.

William Meisel distinguishes two groups of chatbots: "general [[personal assistant]]s" and "specialized digital assistants".<ref name=":0">{{Cite journal|last=Dale|first=Robert|date=September 2016|title=The return of the chatbots|journal=Natural Language Engineering|language=en|volume=22|issue=5|pages=811–817|doi=10.1017/S1351324916000243|issn=1351-3249|doi-access=free}}</ref> General digital assistants have been integrated into personal devices, with skills like sending messages, taking notes, checking calendars, and setting appointments. Specialized digital assistants "operate in very specific domains or help with very specific tasks".<ref name=":0" /> Weizenbaum considered that not every part of human thought could be reduced to logical formalisms and that "there are some acts of thought that ought to be attempted only by humans".<ref>{{Cite book |last=Weizenbaum |first=Joseph |url=https://www.worldcat.org/oclc/1527521 |title=Computer power and human reason : from judgment to calculation |date=1976 |publisher=W. H. Freeman and Company |isbn=0-7167-0464-1 |location=San Francisco, Cal. |oclc=1527521}}</ref>

When chatbots are [[Anthropomorphism|anthropomorphized]], they tend to be given gendered features, which become a way through which users establish relationships with the technology. "Gender stereotypes are instrumentalised to manage our relationship with chatbots" when human behavior is programmed into machines.<ref>[https://2018.xcoax.org/pdf/xCoAx2018-Costa.pdf Costa, Pedro. Ribas, Luisa. Conversations with ELIZA: on Gender and Artificial Intelligence. From (6th Conference on Computation, Communication, Aesthetics & X 2018) Accessed February 2021]</ref> Feminized labor, or [[women's work]], automated by anthropomorphic digital assistants reinforces an "assumption that women possess a natural affinity for service work and emotional labour".<ref>Hester, Helen. 2016. "Technology Becomes Her." ''New Vistas'' 3 (1): 46–50.</ref> In defining our proximity to digital assistants through their human attributes, chatbots become gendered entities.

== Incidents ==
As artificial intelligence has advanced, a number of internationally notable incidents have underscored the extent to which the ELIZA effect can take hold.

In June 2022, Google engineer Blake Lemoine claimed that the [[large language model]] [[LaMDA]] had become [[sentient]], hiring an attorney on its behalf after the chatbot requested he do so.
Lemoine's claims were widely rejected by experts and the scientific community. After a month of paid administrative leave, he was dismissed for violating corporate policies on intellectual property. Lemoine contends he "did the right thing by informing the public" because "AI engines are incredibly good at manipulating people".<ref>{{cite web | url=https://www.newsweek.com/google-ai-blake-lemoine-bing-chatbot-sentient-1783340 | title=I worked on Google's AI. My fears are coming true | website=[[Newsweek]] | date=27 February 2023 }}</ref>

In February 2023, Luka made abrupt changes to its [[Replika]] chatbot following a demand from the [[National data protection authority|Italian Data Protection Authority]], which cited "real risks to children". Users worldwide protested when the bots stopped responding to their sexual advances, and moderators in the Replika [[subreddit]] posted support resources, including links to suicide hotlines. Ultimately, the company reinstituted erotic roleplay for some users.<ref>{{cite web | url=https://www.vice.com/en/article/ai-companion-replika-erotic-roleplay-updates/ | title='It's Hurting Like Hell': AI Companion Users Are in Crisis, Reporting Sudden Sexual Rejection | date=15 February 2023 }}</ref><ref>{{cite web | url=https://time.com/6257790/ai-chatbots-love/ | title=Why People Are Confessing Their Love for AI Chatbots | date=23 February 2023 }}</ref>

In March 2023, a Belgian man killed himself after chatting for six weeks on the app [[Chai (software)|Chai]]. The chatbot model was originally based on [[GPT-J]] and had been fine-tuned to be "more emotional, fun and engaging". The bot, which ironically bore the default name Eliza, encouraged the father of two to kill himself, according to his widow and his psychotherapist.<ref>{{cite web | url=https://robots4therestofus.substack.com/p/after-a-chatbot-encouraged-a-suicide | title=After a chatbot encouraged a suicide, "AI playtime is over." | date=10 April 2023 }}</ref><ref>{{cite web | url=https://www.vice.com/en/article/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says/ | title='He Would Still be Here': Man Dies by Suicide After Talking with AI Chatbot, Widow Says | date=30 March 2023 }}</ref><ref>{{cite web | url=https://nypost.com/2023/03/30/married-father-commits-suicide-after-encouragement-by-ai-chatbot-widow/ | title=Married father commits suicide after encouragement by AI chatbot: Widow | date=30 March 2023 }}</ref> In an open letter, Belgian scholars responded to the incident, warning of "the risk of emotional manipulation" by human-imitating AI.<ref>{{cite web | url=https://www.law.kuleuven.be/ai-summer-school/open-brief/open-letter-manipulative-ai | title=Open Letter: We are not ready for manipulative AI – urgent need for action }}</ref>

== See also ==
{{Portal|Philosophy|Psychology}}
* [[Duck test]]
* [[Intentional stance]]
* [[Loebner Prize]]
* [[Philosophical zombie]]
* [[Semiotics]]
* [[Uncanny valley]]
* [[Chinese Room]]

== References ==
{{Refs}}

== Further reading ==
{{Refbegin}}
* Hofstadter, Douglas. ''Preface 4: The Ineradicable Eliza Effect and Its Dangers.'' (from ''[[Fluid Concepts and Creative Analogies]]: Computer Models of the Fundamental Mechanisms of Thought'', Basic Books: New York, 1995)
* Turkle, S., Eliza Effect: tendency to accept computer responses as more intelligent than they really are (from ''Life on the Screen: Identity in the Age of the Internet'', Phoenix Paperback: London, 1997)
{{Refend}}

[[Category:Human–computer interaction]]