== Further reading ==
{{Refbegin|indent=yes|30em}}
* [[David Autor|Autor, David H.]], "Why Are There Still So Many Jobs? The History and Future of Workplace Automation" (2015) 29(3) ''Journal of Economic Perspectives'' 3.
* {{Cite book |last=Berlinski |first=David |author-link=David Berlinski |url=https://archive.org/details/adventofalgorith0000berl |title=The Advent of the Algorithm |publisher=Harcourt Books |date=2000 |isbn=978-0-1560-1391-8 |oclc=46890682 |access-date=22 August 2020 |archive-url=https://web.archive.org/web/20200726215744/https://archive.org/details/adventofalgorith0000berl |archive-date=26 July 2020 |url-status=live }}
* Boyle, James, [https://direct.mit.edu/books/book/5859/The-LineAI-and-the-Future-of-Personhood The Line: AI and the Future of Personhood], [[MIT Press]], 2024.
* [[Kenneth Cukier|Cukier, Kenneth]], "Ready for Robots? How to Think about the Future of AI", ''[[Foreign Affairs]]'', vol. 98, no. 4 (July/August 2019), pp. 192–198. [[George Dyson (science historian)|George Dyson]], historian of computing, writes (in what might be called "Dyson's Law") that "Any system simple enough to be understandable will not be complicated enough to behave intelligently, while any system complicated enough to behave intelligently will be too complicated to understand." (p. 197.) Computer scientist [[Alex Pentland]] writes: "Current [[machine learning|AI machine-learning]] [[algorithm]]s are, at their core, dead simple stupid. They work, but they work by brute force." (p. 198.)
* {{Cite journal |last=Evans |first=Woody |author-link=Woody Evans |date=2015 |title=Posthuman Rights: Dimensions of Transhuman Worlds |journal=Teknokultura |volume=12 |issue=2 |doi=10.5209/rev_TK.2015.v12.n2.49072 |doi-access=free |s2cid=147612763 }}
* {{Cite web |last=Frank |first=Michael |date=September 22, 2023 |title=US Leadership in Artificial Intelligence Can Shape the 21st Century Global Order |url=https://thediplomat.com/2023/09/us-leadership-in-artificial-intelligence-can-shape-the-21st-century-global-order |access-date=2023-12-08 |website=[[The Diplomat (magazine)|The Diplomat]] |quote=Instead, the United States has developed a new area of dominance that the rest of the world views with a mixture of awe, envy, and resentment: artificial intelligence... From AI models and research to cloud computing and venture capital, U.S. companies, universities, and research labs – and their affiliates in allied countries – appear to have an enormous lead in both developing cutting-edge AI and commercializing it. The value of U.S. venture capital investments in AI start-ups exceeds that of the rest of the world combined. |archive-date=16 September 2024 |archive-url=https://web.archive.org/web/20240916014433/https://thediplomat.com/2023/09/us-leadership-in-artificial-intelligence-can-shape-the-21st-century-global-order/ |url-status=live }}
* Gertner, Jon. (2023) "Wikipedia's Moment of Truth: Can the online encyclopedia help teach A.I. chatbots to get their facts right – without destroying itself in the process?" ''New York Times Magazine'' (July 18, 2023) [https://www.nytimes.com/2023/07/18/magazine/wikipedia-ai-chatgpt.html online] {{Webarchive|url=https://web.archive.org/web/20230720125400/https://www.nytimes.com/2023/07/18/magazine/wikipedia-ai-chatgpt.html |date=20 July 2023 }}
* [[Gleick, James]], "The Fate of Free Will" (review of Kevin J. Mitchell, ''Free Agents: How Evolution Gave Us Free Will'', Princeton University Press, 2023, 333 pp.), ''[[The New York Review of Books]]'', vol. LXXI, no. 1 (18 January 2024), pp. 27–28, 30. "[[Agency (philosophy)|Agency]] is what distinguishes us from machines. For biological creatures, [[reason]] and [[motivation|purpose]] come from acting in the world and experiencing the consequences. Artificial intelligences – disembodied, strangers to blood, sweat, and tears – have no occasion for that." (p. 30.)
* Halpern, Sue, "The Coming Tech Autocracy" (review of [[Verity Harding]], ''AI Needs You: How We Can Change AI's Future and Save Our Own'', Princeton University Press, 274 pp.; [[Gary Marcus]], ''Taming Silicon Valley: How We Can Ensure That AI Works for Us'', MIT Press, 235 pp.; [[Daniela Rus]] and [[Gregory Mone]], ''The Mind's Mirror: Risk and Reward in the Age of AI'', Norton, 280 pp.; [[Madhumita Murgia]], ''Code Dependent: Living in the Shadow of AI'', Henry Holt, 311 pp.), ''[[The New York Review of Books]]'', vol. LXXI, no. 17 (7 November 2024), pp. 44–46. "'We can't realistically expect that those who hope to get rich from AI are going to have the interests of the rest of us close at heart,' ... writes [Gary Marcus]. 'We can't count on [[government]]s driven by [[campaign finance]] contributions [from tech companies] to push back.'... Marcus details the demands that citizens should make of their governments and the [[tech company|tech companies]]. They include [[Transparency (behavior)|transparency]] on how AI systems work; compensation for individuals if their data [are] used to train LLMs ([[large language model]]s) and the right to consent to this use; and the ability to hold tech companies liable for the harms they cause by eliminating [[Section 230]], imposing cash penalties, and passing stricter [[product liability]] laws... Marcus also suggests... that a new, AI-specific federal agency, akin to the [[FDA]], the [[FCC]], or the [[Federal Trade Commission|FTC]], might provide the most robust oversight.... [T]he [[Fordham University|Fordham]] law professor [[Chinmayi Sharma]]... suggests... establish[ing] a professional licensing regime for engineers that would function in a similar way to [[medical license]]s, [[malpractice]] suits, and the [[Hippocratic oath]] in medicine. 'What if, like doctors,' she asks..., 'AI engineers also vowed to [[Primum non nocere|do no harm]]?'" (p. 46.)
* {{Cite news |last=Henderson |first=Mark |date=24 April 2007 |title=Human rights for robots? We're getting carried away |url=https://www.thetimes.com/uk/science/article/human-rights-for-robots-were-getting-carried-away-xfbdkpgwn0v |url-status=live |archive-url=https://web.archive.org/web/20140531104850/http://www.thetimes.co.uk/tto/technology/article1966391.ece |archive-date=31 May 2014 |access-date=31 May 2014 |work=The Times Online |location=London }}
* Hughes-Castleberry, Kenna, "A Murder Mystery Puzzle: The literary puzzle ''[[Cain's Jawbone]]'', which has stumped humans for decades, reveals the limitations of natural-language-processing algorithms", ''[[Scientific American]]'', vol. 329, no. 4 (November 2023), pp. 81–82. "This murder mystery competition has revealed that although NLP ([[natural-language processing]]) models are capable of incredible feats, their abilities are very much limited by the amount of [[context (linguistics)|context]] they receive. This [...] could cause [difficulties] for researchers who hope to use them to do things such as analyze [[ancient language]]s. In some cases, there are few historical records on long-gone [[civilization]]s to serve as [[training data]] for such a purpose." (p. 82.)
* [[Immerwahr, Daniel]], "Your Lying Eyes: People now use A.I. to generate fake videos indistinguishable from real ones. How much does it matter?", ''[[The New Yorker]]'', 20 November 2023, pp. 54–59. "If by '[[deepfakes]]' we mean realistic videos produced using artificial intelligence that actually deceive people, then they barely exist. The fakes aren't deep, and the deeps aren't fake. [...] A.I.-generated videos are not, in general, operating in our media as counterfeited evidence. Their role better resembles that of [[cartoon]]s, especially smutty ones." (p. 59.)
* Johnston, John (2008) ''The Allure of Machinic Life: Cybernetics, Artificial Life, and the New AI'', MIT Press.
* {{Cite journal |last1=Jumper |first1=John |last2=Evans |first2=Richard |last3=Pritzel |first3=Alexander |last4=Green |first4=Tim |last5=Figurnov |first5=Michael |last6=Ronneberger |first6=Olaf |last7=Tunyasuvunakool |first7=Kathryn |last8=Bates |first8=Russ |last9=Žídek |first9=Augustin |last10=Potapenko |first10=Anna |last11=Bridgland |first11=Alex |last12=Meyer |first12=Clemens |last13=Kohl |first13=Simon A. A. |last14=Ballard |first14=Andrew J. |last15=Cowie |first15=Andrew |last16=Romera-Paredes |first16=Bernardino |last17=Nikolov |first17=Stanislav |last18=Jain |first18=Rishub |last19=Adler |first19=Jonas |last20=Back |first20=Trevor |last21=Petersen |first21=Stig |last22=Reiman |first22=David |last23=Clancy |first23=Ellen |last24=Zielinski |first24=Michal |last25=Steinegger |first25=Martin |last26=Pacholska |first26=Michalina |last27=Berghammer |first27=Tamas |last28=Bodenstein |first28=Sebastian |last29=Silver |first29=David |last30=Vinyals |first30=Oriol |last31=Senior |first31=Andrew W. |last32=Kavukcuoglu |first32=Koray |last33=Kohli |first33=Pushmeet |last34=Hassabis |first34=Demis |display-authors=3 |date=26 August 2021 |title=Highly accurate protein structure prediction with AlphaFold |journal=Nature |volume=596 |issue=7873 |pages=583–589 |bibcode=2021Natur.596..583J |doi=10.1038/s41586-021-03819-2 |pmc=8371605 |pmid=34265844 |s2cid=235959867}}
* {{Cite journal |last1=LeCun |first1=Yann |last2=Bengio |first2=Yoshua |last3=Hinton |first3=Geoffrey |date=28 May 2015 |title=Deep learning |url=https://www.nature.com/articles/nature14539 |url-status=live |journal=Nature |volume=521 |issue=7553 |pages=436–444 |bibcode=2015Natur.521..436L |doi=10.1038/nature14539 |pmid=26017442 |s2cid=3074096 |archive-url=https://web.archive.org/web/20230605235832/https://www.nature.com/articles/nature14539 |archive-date=5 June 2023 |access-date=19 June 2023 }}
* Leffer, Lauren, "The Risks of Trusting AI: We must avoid humanizing machine-learning models used in scientific research", ''[[Scientific American]]'', vol. 330, no. 6 (June 2024), pp. 80–81.
* [[Jill Lepore|Lepore, Jill]], "The Chit-Chatbot: Is talking with a machine a conversation?", ''[[The New Yorker]]'', 7 October 2024, pp. 12–16.
* {{Cite web |last=Maschafilm |date=2010 |title=Content: Plug & Pray Film – Artificial Intelligence – Robots |url=http://www.plugandpray-film.de/en/content.html |url-status=live |archive-url=https://web.archive.org/web/20160212040134/http://www.plugandpray-film.de/en/content.html |archive-date=12 February 2016 |website=plugandpray-film.de }}
* [[Marcus, Gary]], "Artificial Confidence: Even the newest, buzziest systems of artificial general intelligence are stymied by the same old problems", ''[[Scientific American]]'', vol. 327, no. 4 (October 2022), pp. 42–45.
* {{Cite book |last=Mitchell |first=Melanie |title=Artificial intelligence: a guide for thinking humans |date=2019 |publisher=Farrar, Straus and Giroux |isbn=978-0-3742-5783-5 |location=New York}}
* {{Cite journal |last1=Mnih |first1=Volodymyr |last2=Kavukcuoglu |first2=Koray |last3=Silver |first3=David |last4=Rusu |first4=Andrei A. |last5=Veness |first5=Joel |last6=Bellemare |first6=Marc G. |last7=Graves |first7=Alex |last8=Riedmiller |first8=Martin |last9=Fidjeland |first9=Andreas K. |last10=Ostrovski |first10=Georg |last11=Petersen |first11=Stig |last12=Beattie |first12=Charles |last13=Sadik |first13=Amir |last14=Antonoglou |first14=Ioannis |last15=King |first15=Helen |last16=Kumaran |first16=Dharshan |last17=Wierstra |first17=Daan |last18=Legg |first18=Shane |last19=Hassabis |first19=Demis |display-authors=3 |date=26 February 2015 |title=Human-level control through deep reinforcement learning |url=https://www.nature.com/articles/nature14236 |url-status=live |journal=Nature |volume=518 |issue=7540 |pages=529–533 |bibcode=2015Natur.518..529M |doi=10.1038/nature14236 |pmid=25719670 |s2cid=205242740 |archive-url=https://web.archive.org/web/20230619055525/https://www.nature.com/articles/nature14236 |archive-date=19 June 2023 |access-date=19 June 2023 }} Introduced [[Deep Q-learning|DQN]], which produced human-level performance on some Atari games.
* [[Eyal Press|Press, Eyal]], "In Front of Their Faces: Does facial-recognition technology lead police to ignore contradictory evidence?", ''[[The New Yorker]]'', 20 November 2023, pp. 20–26.
* {{Cite news |date=21 December 2006 |title=Robots could demand legal rights |url=http://news.bbc.co.uk/2/hi/technology/6200005.stm |url-status=live |archive-url=https://web.archive.org/web/20191015042628/http://news.bbc.co.uk/2/hi/technology/6200005.stm |archive-date=15 October 2019 |access-date=3 February 2011 |work=BBC News }}
* Roivainen, Eka, "AI's IQ: [[ChatGPT]] aced a [standard intelligence] test but showed that [[intelligence]] cannot be measured by [[IQ]] alone", ''[[Scientific American]]'', vol. 329, no. 1 (July/August 2023), p. 7. "Despite its high IQ, [[ChatGPT]] fails at tasks that require real humanlike reasoning or an understanding of the physical and social world.... ChatGPT seemed unable to reason logically and tried to rely on its vast database of... facts derived from online texts."
* Scharre, Paul, "Killer Apps: The Real Dangers of an AI Arms Race", ''[[Foreign Affairs]]'', vol. 98, no. 3 (May/June 2019), pp. 135–144. "Today's AI technologies are powerful but unreliable. Rules-based systems cannot deal with circumstances their programmers did not anticipate. Learning systems are limited by the data on which they were trained. AI failures have already led to tragedy. Advanced autopilot features in cars, although they perform well in some circumstances, have driven cars without warning into trucks, concrete barriers, and parked cars. In the wrong situation, AI systems go from supersmart to superdumb in an instant. When an enemy is trying to manipulate and hack an AI system, the risks are even greater." (p. 140.)
* {{Cite journal |last1=Schulz |first1=Hannes |last2=Behnke |first2=Sven |date=1 November 2012 |title=Deep Learning |url=https://www.researchgate.net/publication/230690795 |journal=KI – Künstliche Intelligenz |volume=26 |issue=4 |pages=357–363 |doi=10.1007/s13218-012-0198-z |issn=1610-1987 |s2cid=220523562 }}
* {{Cite journal |last1=Serenko |first1=Alexander |last2=Michael Dohan |date=2011 |title=Comparing the expert survey and citation impact journal ranking methods: Example from the field of Artificial Intelligence |url=http://www.aserenko.com/papers/JOI_AI_Journal_Ranking_Serenko.pdf |url-status=live |journal=Journal of Informetrics |volume=5 |issue=4 |pages=629–649 |doi=10.1016/j.joi.2011.06.002 |archive-url=https://web.archive.org/web/20131004212839/http://www.aserenko.com/papers/JOI_AI_Journal_Ranking_Serenko.pdf |archive-date=4 October 2013 |access-date=12 September 2013 }}
* {{Cite journal |last1=Silver |first1=David |last2=Huang |first2=Aja |last3=Maddison |first3=Chris J. |last4=Guez |first4=Arthur |last5=Sifre |first5=Laurent |last6=van den Driessche |first6=George |last7=Schrittwieser |first7=Julian |last8=Antonoglou |first8=Ioannis |last9=Panneershelvam |first9=Veda |last10=Lanctot |first10=Marc |last11=Dieleman |first11=Sander |last12=Grewe |first12=Dominik |last13=Nham |first13=John |last14=Kalchbrenner |first14=Nal |last15=Sutskever |first15=Ilya |last16=Lillicrap |first16=Timothy |last17=Leach |first17=Madeleine |last18=Kavukcuoglu |first18=Koray |last19=Graepel |first19=Thore |last20=Hassabis |first20=Demis |display-authors=3 |date=28 January 2016 |title=Mastering the game of Go with deep neural networks and tree search |url=https://www.nature.com/articles/nature16961 |url-status=live |journal=Nature |volume=529 |issue=7587 |pages=484–489 |bibcode=2016Natur.529..484S |doi=10.1038/nature16961 |pmid=26819042 |s2cid=515925 |archive-url=https://web.archive.org/web/20230618213059/https://www.nature.com/articles/nature16961 |archive-date=18 June 2023 |access-date=19 June 2023 }}
* [[Ben Tarnoff|Tarnoff, Ben]], "The Labor Theory of AI" (review of [[Matteo Pasquinelli]], ''The Eye of the Master: A Social History of Artificial Intelligence'', Verso, 2024, 264 pp.), ''[[The New York Review of Books]]'', vol. LXXII, no. 5 (27 March 2025), pp. 30–32. The reviewer, Ben Tarnoff, writes: "The strangeness at the heart of the [[generative AI]] boom is that nobody really knows how the technology works. We know how the [[large language model]]s within [[ChatGPT]] and its counterparts are trained, even if we don't always know which [[data]] they're being trained on: they are asked to predict the next string of characters in a sequence. But exactly how they arrive at any given [[prediction]] is a mystery. The [[computation]]s that occur inside the model are simply too intricate for any human to comprehend." (p. 32.)
* [[Ashish Vaswani|Vaswani, Ashish]], Noam Shazeer, Niki Parmar et al. "[[Attention is all you need]]." Advances in neural information processing systems 30 (2017). Seminal paper on [[transformer (machine learning model)|transformer]]s.
* Vincent, James, "Horny Robot Baby Voice: James Vincent on AI chatbots", ''[[London Review of Books]]'', vol. 46, no. 19 (10 October 2024), pp. 29–32. "[AI chatbot] programs are made possible by new technologies but rely on the timeless human tendency to [[anthropomorphise]]." (p. 29.)
* {{Cite book |url=https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf |title=White Paper: On Artificial Intelligence – A European approach to excellence and trust |publisher=European Commission |date=2020 |location=Brussels |ref={{Harvid|European Commission|2020}} |access-date=20 February 2020 |archive-url=https://web.archive.org/web/20200220173419/https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf |archive-date=20 February 2020 |url-status=live }}
{{Refend}}