==== Misinformation ====
{{See also|YouTube#Moderation and offensive content}}
[[YouTube]], [[Facebook]] and others use [[recommender system]]s to guide users to more content. These AI programs were given the goal of [[mathematical optimization|maximizing]] user engagement (that is, the only goal was to keep people watching). The AI learned that users tended to choose [[misinformation]], [[conspiracy theories]], and extreme [[partisan (politics)|partisan]] content, and, to keep them watching, the AI recommended more of it. Users also tended to watch more content on the same subject, so the AI led people into [[filter bubbles]] where they received multiple versions of the same misinformation.{{Sfnp|Nicas|2018}} This convinced many users that the misinformation was true, and ultimately undermined trust in institutions, the media and the government.<ref>{{Cite web |last1=Rainie |first1=Lee |last2=Keeter |first2=Scott |last3=Perrin |first3=Andrew |date=July 22, 2019 |title=Trust and Distrust in America |url=https://www.pewresearch.org/politics/2019/07/22/trust-and-distrust-in-america |url-status=live |archive-url=https://web.archive.org/web/20240222000601/https://www.pewresearch.org/politics/2019/07/22/trust-and-distrust-in-america |archive-date=Feb 22, 2024 |website=Pew Research Center}}</ref> The AI program had correctly learned to maximize its goal, but the result was harmful to society. After the U.S. election in 2016, major technology companies took some steps to mitigate the problem.<ref>{{Cite magazine |last=Kosoff |first=Maya |date=2018-02-08 |title=YouTube Struggles to Contain Its Conspiracy Problem |url=https://www.vanityfair.com/news/2018/02/youtube-conspiracy-problem |access-date=2025-04-10 |magazine=Vanity Fair |language=en-US}}</ref>

In 2022, [[generative AI]] began to create images, audio, video and text that are indistinguishable from real photographs, recordings, films, or human writing.
It is possible for bad actors to use this technology to create massive amounts of misinformation or propaganda.{{Sfnp|Williams|2023}} One such potential malicious use is deepfakes for [[computational propaganda]].<ref>{{Cite journal |last=Olanipekun |first=Samson Olufemi |date=2025 |title=Computational propaganda and misinformation: AI technologies as tools of media manipulation |url=https://journalwjarr.com/node/366 |journal=World Journal of Advanced Research and Reviews |language=en |volume=25 |issue=1 |pages=911–923 |doi=10.30574/wjarr.2025.25.1.0131 |issn=2581-9615}}</ref> AI pioneer [[Geoffrey Hinton]] expressed concern about AI enabling "authoritarian leaders to manipulate their electorates" on a large scale, among other risks.{{Sfnp|Taylor|Hern|2023}} AI researchers at [[Microsoft]], [[OpenAI]], universities and other organisations have suggested using "[[Proof of personhood#Approaches|personhood credentials]]" as a way to overcome online deception enabled by AI models.<ref>{{Cite news |title=To fight AI, we need 'personhood credentials,' say AI firms |url=https://www.theregister.com/2024/09/03/ai_personhood_credentials/ |archive-url=http://web.archive.org/web/20250424232537/https://www.theregister.com/2024/09/03/ai_personhood_credentials/ |archive-date=2025-04-24 |access-date=2025-05-09 |language=en}}</ref>
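The engagement-maximization feedback loop described above can be illustrated with a minimal simulation. This is an illustrative sketch, not the actual algorithm used by any platform: the content categories, click rates, and epsilon-greedy strategy are all assumptions chosen to show how a recommender optimizing only for clicks drifts toward whatever content is clicked most, without any notion of accuracy.

```python
import random

# Hypothetical per-category click probabilities (assumption for illustration):
# more extreme content is clicked more often.
CLICK_RATE = {"balanced": 0.05, "partisan": 0.15, "conspiracy": 0.30}

def simulate(steps=10_000, epsilon=0.1, seed=0):
    """Epsilon-greedy recommender that maximizes observed click rate."""
    rng = random.Random(seed)
    clicks = {k: 0 for k in CLICK_RATE}
    shows = {k: 0 for k in CLICK_RATE}
    for _ in range(steps):
        if rng.random() < epsilon:
            # Occasionally explore a random category.
            item = rng.choice(list(CLICK_RATE))
        else:
            # Otherwise exploit the category with the best observed click rate
            # (unseen categories get an optimistic estimate of 1.0).
            item = max(CLICK_RATE,
                       key=lambda k: clicks[k] / shows[k] if shows[k] else 1.0)
        shows[item] += 1
        if rng.random() < CLICK_RATE[item]:
            clicks[item] += 1
    return shows

counts = simulate()
```

Because the recommender's only objective is clicks, the highest-click-rate category ends up shown far more than the others: the "filter bubble" emerges from the optimization objective itself, not from any intent to promote misinformation.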