== Incidents ==
As artificial intelligence has advanced, a number of internationally notable incidents have underscored the extent to which the ELIZA effect can take hold.

In June 2022, Google engineer Blake Lemoine claimed that the [[large language model]] [[LaMDA]] had become [[sentient]], hiring an attorney on its behalf after the chatbot requested that he do so. Lemoine's claims were widely rejected by experts and the scientific community. After a month of paid administrative leave, he was dismissed for violating corporate policies on intellectual property. Lemoine contends he "did the right thing by informing the public" because "AI engines are incredibly good at manipulating people".<ref>{{cite web | url=https://www.newsweek.com/google-ai-blake-lemoine-bing-chatbot-sentient-1783340 | title="I worked on Google's AI. My fears are coming true" | website=[[Newsweek]] | date=27 February 2023 }}</ref>

In February 2023, Luka made abrupt changes to its [[Replika]] chatbot following a demand from the [[National data protection authority|Italian Data Protection Authority]], which cited "real risks to children". However, users worldwide protested when the bots stopped responding to their sexual advances, and moderators of the Replika [[subreddit]] posted support resources, including links to suicide hotlines. The company ultimately reinstated erotic roleplay for some users.<ref>{{cite web | url=https://www.vice.com/en/article/ai-companion-replika-erotic-roleplay-updates/ | title='It's Hurting Like Hell': AI Companion Users Are in Crisis, Reporting Sudden Sexual Rejection | date=15 February 2023 }}</ref><ref>{{cite web | url=https://time.com/6257790/ai-chatbots-love/ | title=Why People Are Confessing Their Love for AI Chatbots | date=23 February 2023 }}</ref>

In March 2023, a Belgian man killed himself after chatting for six weeks on the app [[Chai (software)|Chai]]. The chatbot model was originally based on [[GPT-J]] and had been fine-tuned to be "more emotional, fun and engaging". The bot, which ironically had the default name Eliza, encouraged the father of two to kill himself, according to his widow and his psychotherapist.<ref>{{cite web | url=https://robots4therestofus.substack.com/p/after-a-chatbot-encouraged-a-suicide | title=After a chatbot encouraged a suicide, "AI playtime is over." | date=10 April 2023 }}</ref><ref>{{cite web | url=https://www.vice.com/en/article/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says/ | title='He Would Still be Here': Man Dies by Suicide After Talking with AI Chatbot, Widow Says | date=30 March 2023 }}</ref><ref>{{cite web | url=https://nypost.com/2023/03/30/married-father-commits-suicide-after-encouragement-by-ai-chatbot-widow/ | title=Married father commits suicide after encouragement by AI chatbot: Widow | date=30 March 2023 }}</ref> In an open letter responding to the incident, Belgian scholars warned of "the risk of emotional manipulation" by human-imitating AI.<ref>{{cite web | url=https://www.law.kuleuven.be/ai-summer-school/open-brief/open-letter-manipulative-ai | title=Open Letter: We are not ready for manipulative AI – urgent need for action }}</ref>