==Further reading==
{{Refbegin|indent=yes|30em}}
* {{Citation |last=Aleksander |first=Igor |title=Impossible Minds |date=1996 |url=https://archive.org/details/impossiblemindsm0000alek |publisher=World Scientific Publishing Company |isbn=978-1-8609-4036-1 |author-link=Igor Aleksander |url-access=registration}}
* {{Citation |title=Equal numbers of neuronal and nonneuronal cells make the human brain an isometrically scaled-up primate brain |vauthors=Azevedo FA, Carvalho LR, Grinberg LT, Farfel J |date=April 2009 |journal=The Journal of Comparative Neurology |volume=513 |issue=5 |pages=532–541 |url=https://www.researchgate.net/publication/24024444 |access-date=4 September 2013 |archive-url=https://web.archive.org/web/20210218035513/https://www.researchgate.net/publication/24024444_Equal_Numbers_of_Neuronal_and_Nonneuronal_Cells_Make_the_Human_Brain_an_Isometrically_Scaled-Up_Primate_Brain |archive-date=18 February 2021 |url-status=live |doi=10.1002/cne.21974 |pmid=19226510 |s2cid=5200449 |display-authors=etal |via=ResearchGate |s2cid-access=free}}
* {{Citation |last=Berglas |first=Anthony |title=Artificial Intelligence Will Kill Our Grandchildren (Singularity) |date=January 2012 |orig-date=2008 |url=http://berglas.org/Articles/AIKillGrandchildren/AIKillGrandchildren.html |access-date=31 August 2012 |archive-url=https://web.archive.org/web/20140723053223/http://berglas.org/Articles/AIKillGrandchildren/AIKillGrandchildren.html |archive-date=23 July 2014 |url-status=live}}
* [[Kenneth Cukier|Cukier, Kenneth]], "Ready for Robots? How to Think about the Future of AI", ''[[Foreign Affairs]]'', vol. 98, no. 4 (July/August 2019), pp. 192–98. [[George Dyson (science historian)|George Dyson]], historian of computing, writes (in what might be called "Dyson's Law") that "Any system simple enough to be understandable will not be complicated enough to behave intelligently, while any system complicated enough to behave intelligently will be too complicated to understand." (p. 197.) Computer scientist [[Alex Pentland]] writes: "Current [[machine learning|AI machine-learning]] [[algorithm]]s are, at their core, dead simple stupid. They work, but they work by brute force." (p. 198.)
* {{Citation |last=Gelernter |first=David |title=Dream-logic, the Internet and Artificial Thought |url=http://www.edge.org/3rd_culture/gelernter10.1/gelernter10.1_index.html |access-date=25 July 2010 |archive-url=https://web.archive.org/web/20100726055120/http://www.edge.org/3rd_culture/gelernter10.1/gelernter10.1_index.html |archive-date=26 July 2010 |url-status=dead |publisher=Edge}}
* [[James Gleick|Gleick, James]], "The Fate of Free Will" (review of [[Kevin J. Mitchell]], ''Free Agents: How Evolution Gave Us Free Will'', Princeton University Press, 2023, 333 pp.), ''[[The New York Review of Books]]'', vol. LXXI, no. 1 (18 January 2024), pp. 27–28, 30. "[[Agency (philosophy)|Agency]] is what distinguishes us from machines. For biological creatures, [[reason]] and [[motivation|purpose]] come from acting in the world and experiencing the consequences. [[Artificial intelligence]]s – disembodied, strangers to blood, sweat, and tears – have no occasion for that." (p. 30.)
* {{Cite web |last=Halal |first=William E. |title=TechCast Article Series: The Automation of Thought |url=http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf |archive-url=https://web.archive.org/web/20130606101835/http://www.techcast.org/Upload/PDFs/633615794236495345_TCTheAutomationofThought.pdf |archive-date=6 June 2013}}
* Halpern, Sue, "The Coming Tech Autocracy" (review of [[Verity Harding]], ''AI Needs You: How We Can Change AI's Future and Save Our Own'', Princeton University Press, 274 pp.; [[Gary Marcus]], ''Taming Silicon Valley: How We Can Ensure That AI Works for Us'', MIT Press, 235 pp.; [[Daniela Rus]] and [[Gregory Mone]], ''The Mind's Mirror: Risk and Reward in the Age of AI'', Norton, 280 pp.; [[Madhumita Murgia]], ''Code Dependent: Living in the Shadow of AI'', Henry Holt, 311 pp.), ''[[The New York Review of Books]]'', vol. LXXI, no. 17 (7 November 2024), pp. 44–46. "'We can't realistically expect that those who hope to get rich from AI are going to have the interests of the rest of us close at heart,' ... writes [Gary Marcus]. 'We can't count on [[government]]s driven by [[campaign finance]] contributions [from tech companies] to push back.'... Marcus details the demands that citizens should make of their governments and the [[tech company|tech companies]]. They include [[Transparency (behavior)|transparency]] on how AI systems work; compensation for individuals if their data [are] used to train LLMs ([[large language model]]s) and the right to consent to this use; and the ability to hold tech companies liable for the harms they cause by eliminating [[Section 230]], imposing cash penalties, and passing stricter [[product liability]] laws... Marcus also suggests... that a new, AI-specific federal agency, akin to the [[FDA]], the [[FCC]], or the [[Federal Trade Commission|FTC]], might provide the most robust oversight.... [T]he [[Fordham University|Fordham]] law professor [[Chinmayi Sharma]]... suggests... establish[ing] a professional licensing regime for engineers that would function in a similar way to [[medical license]]s, [[malpractice]] suits, and the [[Hippocratic oath]] in medicine. 'What if, like doctors,' she asks..., 'AI engineers also vowed to [[Primum non nocere|do no harm]]?'" (p. 46.)
* {{Citation |last1=Holte |first1=R. C. |title=Abstraction and reformulation in artificial intelligence |work=[[Philosophical Transactions of the Royal Society B]] |volume=358 |issue=1435 |pages=1197–1204 |date=2003 |doi=10.1098/rstb.2003.1317 |pmc=1693218 |pmid=12903653 |last2=Choueiry |first2=B. Y.}}
* [[Kenna Hughes-Castleberry|Hughes-Castleberry, Kenna]], "A Murder Mystery Puzzle: The literary puzzle ''[[Cain's Jawbone]]'', which has stumped humans for decades, reveals the limitations of natural-language-processing algorithms", ''[[Scientific American]]'', vol. 329, no. 4 (November 2023), pp. 81–82. "This murder mystery competition has revealed that although NLP ([[natural-language processing]]) models are capable of incredible feats, their abilities are very much limited by the amount of [[context (linguistics)|context]] they receive. This [...] could cause [difficulties] for researchers who hope to use them to do things such as analyze [[ancient language]]s. In some cases, there are few historical records on long-gone [[civilization]]s to serve as [[training data]] for such a purpose." (p. 82.)
* [[Daniel Immerwahr|Immerwahr, Daniel]], "Your Lying Eyes: People now use A.I. to generate fake videos indistinguishable from real ones. How much does it matter?", ''[[The New Yorker]]'', 20 November 2023, pp. 54–59. "If by '[[deepfakes]]' we mean realistic videos produced using [[artificial intelligence]] that actually deceive people, then they barely exist. The fakes aren't deep, and the deeps aren't fake. [...] A.I.-generated videos are not, in general, operating in our media as counterfeited evidence. Their role better resembles that of [[cartoon]]s, especially smutty ones." (p. 59.)
* Leffer, Lauren, "The Risks of Trusting AI: We must avoid humanizing machine-learning models used in scientific research", ''[[Scientific American]]'', vol. 330, no. 6 (June 2024), pp. 80–81.
* [[Jill Lepore|Lepore, Jill]], "The Chit-Chatbot: Is talking with a machine a conversation?", ''[[The New Yorker]]'', 7 October 2024, pp. 12–16.
* [[Gary Marcus|Marcus, Gary]], "Artificial Confidence: Even the newest, buzziest systems of artificial general intelligence are stymied by the same old problems", ''[[Scientific American]]'', vol. 327, no. 4 (October 2022), pp. 42–45.
* {{Citation |last=McCarthy |first=John |title=From here to human-level AI |date=Oct 2007 |journal=Artificial Intelligence |volume=171 |issue=18 |pages=1174–1182 |doi=10.1016/j.artint.2007.10.009 |author-link=John McCarthy (computer scientist) |doi-access=free}}
* {{McCorduck 2004|ref=none}}
* {{Citation |last=Moravec |first=Hans |title=The Role of Raw Power in Intelligence |date=1976 |url=http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html |access-date=29 September 2007 |archive-url=https://web.archive.org/web/20160303232511/http://www.frc.ri.cmu.edu/users/hpm/project.archive/general.articles/1975/Raw.Power.html |archive-date=3 March 2016 |url-status=dead |author-link=Hans Moravec}}
* {{Citation |last1=Newell |first1=Allen |title=Computers and Thought |date=1963 |editor-last=Feigenbaum |editor-first=E. A. |editor-last2=Feldman |editor-first2=J. |chapter=GPS: A Program that Simulates Human Thought |place=New York |publisher=McGraw-Hill |last2=Simon |first2=H. A. |author-link=Allen Newell |author-link2=Herbert A. Simon}}
* {{Citation |last=Omohundro |first=Steve |title=The Nature of Self-Improving Artificial Intelligence |date=2008 |publisher=presented and distributed at the 2007 Singularity Summit, San Francisco, California |author-link=Steve Omohundro}}
* [[Eyal Press|Press, Eyal]], "In Front of Their Faces: Does facial-recognition technology lead police to ignore contradictory evidence?", ''[[The New Yorker]]'', 20 November 2023, pp. 20–26.
* [[Eka Roivainen|Roivainen, Eka]], "AI's IQ: [[ChatGPT]] aced a [standard intelligence] test but showed that [[intelligence]] cannot be measured by [[IQ]] alone", ''[[Scientific American]]'', vol. 329, no. 1 (July/August 2023), p. 7. "Despite its high IQ, [[ChatGPT]] fails at tasks that require real humanlike reasoning or an understanding of the physical and social world.... ChatGPT seemed unable to reason logically and tried to rely on its vast database of... facts derived from online texts."
* Scharre, Paul, "Killer Apps: The Real Dangers of an AI Arms Race", ''[[Foreign Affairs]]'', vol. 98, no. 3 (May/June 2019), pp. 135–44. "Today's AI technologies are powerful but unreliable. Rules-based systems cannot deal with circumstances their programmers did not anticipate. Learning systems are limited by the data on which they were trained. AI failures have already led to tragedy. Advanced autopilot features in cars, although they perform well in some circumstances, have driven cars without warning into trucks, concrete barriers, and parked cars. In the wrong situation, AI systems go from supersmart to superdumb in an instant. When an enemy is trying to manipulate and hack an AI system, the risks are even greater." (p. 140.)
* {{Citation |last=Sutherland |first=J. G. |title=Holographic Model of Memory, Learning, and Expression |work=International Journal of Neural Systems |volume=1–3 |pages=256–267 |date=1990}}
* Vincent, James, "Horny Robot Baby Voice: James Vincent on AI chatbots", ''[[London Review of Books]]'', vol. 46, no. 19 (10 October 2024), pp. 29–32. "[AI chatbot] programs are made possible by new technologies but rely on the timeless human tendency to [[anthropomorphise]]." (p. 29.)
* {{Citation |last1=Williams |first1=R. W. |title=The control of neuron number |journal=Annual Review of Neuroscience |volume=11 |pages=423–453 |date=1988 |doi=10.1146/annurev.ne.11.030188.002231 |pmid=3284447 |last2=Herrup |first2=K.}}<!--| access-date = 20 June 2009-->
* {{Citation |last=Yudkowsky |first=Eliezer |title=Artificial General Intelligence |journal=Annual Review of Psychology |volume=49 |pages=585–612 |date=2006 |url=http://www.singinst.org/upload/LOGI//LOGI.pdf |archive-url=https://web.archive.org/web/20090411050423/http://www.singinst.org/upload/LOGI/LOGI.pdf |archive-date=11 April 2009 |url-status=dead |publisher=Springer |doi=10.1146/annurev.psych.49.1.585 |isbn=978-3-5402-3733-4 |pmid=9496632 |author-link=Eliezer Yudkowsky}}
* {{Citation |last=Yudkowsky |first=Eliezer |title=Artificial Intelligence as a Positive and Negative Factor in Global Risk |work=Global Catastrophic Risks |date=2008 |bibcode=2008gcr..book..303Y |doi=10.1093/oso/9780198570509.003.0021 |isbn=978-0-1985-7050-9 |author-link=Eliezer Yudkowsky}}
* {{Citation |last=Zucker |first=Jean-Daniel |title=A grounded theory of abstraction in artificial intelligence |date=July 2003 |work=[[Philosophical Transactions of the Royal Society B]] |volume=358 |issue=1435 |pages=1293–1309 |doi=10.1098/rstb.2003.1308 |pmc=1693211 |pmid=12903672}}
{{Refend}}