==== Ongoing research in fact-checking and detecting fake news ====
{{See also|Misinformation#Countermeasures|Argument technology}}
Since the [[2016 United States presidential election]], fake news has been a frequent topic of discussion for President [[Donald Trump|Trump]] and news outlets. As fake news became omnipresent, considerable research went into understanding, identifying, and combating it. A number of researchers have also examined the use of fake news to influence the 2016 presidential campaign. One study found evidence that pro-Trump fake news was selectively targeted at conservatives and pro-Trump supporters in 2016.<ref>{{cite news|url=https://www.dartmouth.edu/~nyhan/fake-news-2016.pdf|title=Selective Exposure to Misinformation: Evidence from the consumption of fake news during the 2016 U.S. presidential campaign|last=Guess|first=Andrew|date=9 January 2018|newspaper=Dartmouth|access-date=5 March 2019|archive-url=https://web.archive.org/web/20190223155230/https://www.dartmouth.edu/~nyhan/fake-news-2016.pdf|archive-date=23 February 2019|url-status=live}}</ref> The researchers found social media sites, Facebook in particular, to be powerful platforms for spreading fake news to targeted groups and appealing to their sentiments during the 2016 presidential race.
Additionally, researchers from [[Stanford University|Stanford]], [[New York University|NYU]], and the [[National Bureau of Economic Research|NBER]] found that engagement with fake news on Facebook and Twitter remained high throughout 2016.<ref>{{Cite web|url=https://web.stanford.edu/~gentzkow/research/fake-news-trends.pdf|title=Trends in the Diffusion of Misinformation on Social Media|last=Allcott|first=Hunt|date=October 2018|publisher=Stanford|access-date=5 March 2019|archive-url=https://web.archive.org/web/20190728160530/https://web.stanford.edu/~gentzkow/research/fake-news-trends.pdf|archive-date=28 July 2019|url-status=live}}</ref> More recently, much work has gone into detecting and identifying fake news through [[machine learning]] and artificial intelligence.<ref name="onlenv">{{cite web |title=The online information environment |url=https://royalsociety.org/-/media/policy/projects/online-information-environment/the-online-information-environment.pdf |access-date=21 February 2022}}</ref><ref>{{cite journal |last1=Islam |first1=Md Rafiqul |last2=Liu |first2=Shaowu |last3=Wang |first3=Xianzhi |last4=Xu |first4=Guandong |title=Deep learning for misinformation detection on online social networks: a survey and new perspectives |journal=Social Network Analysis and Mining |date=29 September 2020 |volume=10 |issue=1 |page=82 |doi=10.1007/s13278-020-00696-x |pmid=33014173 |pmc=7524036 |language=en |issn=1869-5469}}</ref><ref>{{Cite arXiv|last1=Mohseni |first1=Sina |last2=Ragan |first2=Eric |title=Combating Fake News with Interpretable News Feed Algorithms |date=4 December 2018|class=cs.SI |eprint=1811.12349 }}</ref> In 2018, researchers at [[MIT Computer Science and Artificial Intelligence Laboratory|MIT's CSAIL]] created and tested a machine learning algorithm to identify false information by looking for common patterns, words, and symbols that typically appear in fake news.<ref>{{Cite web|url=https://www.technologyreview.com/s/612236/even-the-best-ai-for-spotting-fake-news-is-still-terrible/|title=AI is still terrible at spotting fake news|last=Hao|first=Karen|website=MIT Technology Review|language=en|access-date=6 March 2019}}</ref> They also released an open-source data set cataloging historical news sources with their veracity scores, to encourage other researchers to explore and develop new methods and technologies for detecting fake news.{{citation needed|date=March 2020}} In 2022, researchers demonstrated the feasibility of falsity scores for popular and official figures, developing such scores, along with associated exposure scores, for over 800 contemporary [[elite]]s on [[Twitter]].<ref>{{cite news |title=New MIT Sloan research measures exposure to misinformation from political elites on Twitter |url=https://apnews.com/press-release/pr-newswire/misinformation-701fb46656eb2197a845f789857d83b2 |access-date=18 December 2022 |work=AP News |date=29 November 2022 |language=en}}</ref><ref>{{cite journal |last1=Mosleh |first1=Mohsen |last2=Rand |first2=David G. |title=Measuring exposure to misinformation from political elites on Twitter |journal=Nature Communications |date=21 November 2022 |volume=13 |issue=1 |page=7144 |doi=10.1038/s41467-022-34769-6 |pmid=36414634 |pmc=9681735 |bibcode=2022NatCo..13.7144M |language=en |issn=2041-1723|doi-access=free}}</ref> There have also been demonstrations of platform-built-in (by-design) as well as [[Web browser|browser]]-integrated (currently in the form of [[browser addon|addons]]) [[misinformation#Countermeasures|misinformation mitigation]].<ref name="platforms">{{cite news |last1=Zewe |first1=Adam |title=Empowering social media users to assess content helps fight misinformation |url=https://techxplore.com/news/2022-11-empowering-social-media-users-content.html |access-date=18 December 2022 |work=Massachusetts Institute of Technology via techxplore.com |language=en}}</ref><ref>{{cite journal |last1=Jahanbakhsh |first1=Farnaz |last2=Zhang |first2=Amy X. |last3=Karger |first3=David R. |title=Leveraging Structured Trusted-Peer Assessments to Combat Misinformation |journal=Proceedings of the ACM on Human-Computer Interaction |date=11 November 2022 |volume=6 |issue=CSCW2 |pages=524:1–524:40 |doi=10.1145/3555637|doi-access=free|hdl=1721.1/147638 |hdl-access=free }}</ref><ref>{{cite web |last1=Elliott |first1=Matt |title=Fake news spotter: How to enable Microsoft Edge's NewsGuard |url=https://www.cnet.com/tech/mobile/fake-news-spotter-how-to-enable-microsoft-edges-newsguard/ |website=CNET |access-date=9 January 2023 |language=en}}</ref><ref>{{cite web |title=12 Browser Extensions to Help You Detect and Avoid Fake News |url=https://thetrustedweb.org/browser-extensions-to-detect-and-avoid-fake-news/ |website=The Trusted Web |access-date=9 January 2023 |date=18 March 2021}}</ref> Efforts such as providing and viewing structured accuracy assessments on posts "are not currently supported by the platforms".<ref name="platforms"/> Such approaches face two main problems: trust in the default or, in decentralized designs, user-selected providers of assessments<ref name="platforms"/> (and their reliability), and the sheer quantity of posts and articles to assess. Moreover, they cannot mitigate misinformation in chats, print media, and [[TV]].
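The word-pattern approach mentioned above can be illustrated with a toy sketch. The following minimal [[naive Bayes classifier|naive Bayes]] text classifier is not the CSAIL system or any published model; the class name, training snippets, and labels are invented for illustration, and real systems use far richer features and much larger corpora:

```python
import math
import re
from collections import Counter


def tokenize(text):
    """Lowercase a string and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())


class NaiveBayesFakeNewsClassifier:
    """Toy naive Bayes classifier scoring texts by word frequency.

    Illustrative only: it captures the idea of flagging texts by the
    words that typically appear in each class, nothing more.
    """

    def __init__(self):
        self.word_counts = {"fake": Counter(), "real": Counter()}
        self.doc_counts = {"fake": 0, "real": 0}

    def train(self, text, label):
        """Record word and document counts for one labeled example."""
        self.doc_counts[label] += 1
        self.word_counts[label].update(tokenize(text))

    def predict(self, text):
        """Return the label with the higher posterior log-probability."""
        vocab = len(set(self.word_counts["fake"]) | set(self.word_counts["real"]))
        scores = {}
        for label in ("fake", "real"):
            total = sum(self.word_counts[label].values())
            # Log prior from document frequencies.
            score = math.log(self.doc_counts[label] / sum(self.doc_counts.values()))
            for word in tokenize(text):
                # Laplace-smoothed log likelihood of each word.
                score += math.log((self.word_counts[label][word] + 1) / (total + vocab))
            scores[label] = score
        return max(scores, key=scores.get)
```

After training on a handful of labeled headlines, `predict` classifies new text purely from which class's vocabulary it resembles, which is why such models generalize poorly to novel phrasings of false claims.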