=== Feasibility ===
[[File:When-do-experts-expect-Artificial-General-Intelligence.png|thumb|Surveys about when experts expect artificial general intelligence<ref name=":22" />]]
As of 2023, the development and potential achievement of AGI remains a subject of intense debate within the AI community. While traditional consensus held that AGI was a distant goal, recent advancements have led some researchers and industry figures to claim that early forms of AGI may already exist.<ref name=":17">{{Cite web |date=23 March 2023 |title=Microsoft Researchers Claim GPT-4 Is Showing "Sparks" of AGI |url=https://futurism.com/gpt-4-sparks-of-agi |access-date=2023-12-13 |website=Futurism}}</ref>

AI pioneer [[Herbert A. Simon]] speculated in 1965 that "machines will be capable, within twenty years, of doing any work a man can do". This prediction failed to come true. Microsoft co-founder [[Paul Allen]] believed that such intelligence is unlikely in the 21st century because it would require "unforeseeable and fundamentally unpredictable breakthroughs" and a "scientifically deep understanding of cognition".<ref>{{Cite news |last1=Allen |first1=Paul |last2=Greaves |first2=Mark |date=October 12, 2011 |title=The Singularity Isn't Near |url=http://www.technologyreview.com/view/425733/paul-allen-the-singularity-isnt-near/ |access-date=17 September 2014 |work=[[MIT Technology Review]]}}</ref> Writing in ''[[The Guardian]]'', roboticist [[Alan Winfield]] claimed the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical faster-than-light spaceflight.<ref>{{Cite news |last=Winfield |first=Alan |title=Artificial intelligence will not turn into a Frankenstein's monster |url=https://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield |url-status=live |archive-url=https://web.archive.org/web/20140917135230/http://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield |archive-date=17 September 2014 |access-date=17 September 2014 |work=[[The Guardian]]}}</ref>

A further challenge is the lack of clarity in defining what [[intelligence]] entails. Does it require consciousness? Must it display the ability to set goals as well as pursue them? Is it purely a matter of scale, such that if model sizes increase sufficiently, intelligence will emerge? Are facilities such as planning, reasoning, and causal understanding required? Does intelligence require explicitly replicating the brain and its specific faculties?
Does it require emotions?<ref>{{Cite journal |last=Deane |first=George |date=2022 |title=Machines That Feel and Think: The Role of Affective Feelings and Mental Action in (Artificial) General Intelligence |url=http://dx.doi.org/10.1162/artl_a_00368 |journal=Artificial Life |volume=28 |issue=3 |pages=289–309 |doi=10.1162/artl_a_00368 |issn=1064-5462 |pmid=35881678 |s2cid=251069071}}</ref>

Most AI researchers believe strong AI can be achieved in the future, but some thinkers, like [[Hubert Dreyfus]] and [[Roger Penrose]], deny the possibility of achieving strong AI.{{Sfn|Clocksin|2003}}<ref name=":0">{{Cite journal |last=Fjelland |first=Ragnar |date=2020-06-17 |title=Why general artificial intelligence will not be realized |journal=Humanities and Social Sciences Communications |language=en |volume=7 |issue=1 |pages=1–9 |doi=10.1057/s41599-020-0494-4 |issn=2662-9992 |s2cid=219710554 |doi-access=free |hdl-access=free |hdl=11250/2726984}}</ref> <!-- "One problem is that while humans are complex, we are not general intelligences." [If this is relevant, it is so in some other section, and deserves a citation and more discussion of the implications.] --> [[John McCarthy (computer scientist)|John McCarthy]] is among those who believe human-level AI will be accomplished, but that the present level of progress is such that a date cannot accurately be predicted.{{Sfn|McCarthy|2007b}}

AI experts' views on the feasibility of AGI wax and wane. Four polls conducted in 2012 and 2013 suggested that the median estimate among experts for when they would be 50% confident AGI would arrive was 2040 to 2050, depending on the poll, with the mean being 2081. Of the experts, 16.5% answered "never" when asked the same question but with 90% confidence instead.<ref name="new yorker doomsday">{{Cite news |last=Khatchadourian |first=Raffi |date=23 November 2015 |title=The Doomsday Invention: Will artificial intelligence bring us utopia or destruction? |url=http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom |url-status=live |archive-url=https://web.archive.org/web/20160128105955/http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom |archive-date=28 January 2016 |access-date=7 February 2016 |work=[[The New Yorker (magazine)|The New Yorker]]}}</ref><ref>Müller, V. C., & Bostrom, N. (2016). Future progress in artificial intelligence: A survey of expert opinion. In Fundamental issues of artificial intelligence (pp. 555–572). Springer, Cham.</ref> Further considerations regarding current AGI progress can be found above at [[#Tests for confirming human-level AGI|''Tests for confirming human-level AGI'']]. A report by Stuart Armstrong and Kaj Sotala of the [[Machine Intelligence Research Institute]] found that "over [a] 60-year time frame there is a strong bias towards predicting the arrival of human-level AI as between 15 and 25 years from the time the prediction was made". They analyzed 95 predictions made between 1950 and 2012 on when human-level AI will come about.<!-- "There was no difference between predictions made by experts and non-experts." see: https://aiimpacts.org/error-in-armstrong-and-sotala-2012/--><ref>Armstrong, Stuart, and Kaj Sotala. 2012. “How We’re Predicting AI—or Failing To.” In ''Beyond AI: Artificial Dreams'', edited by Jan Romportl, Pavel Ircing, Eva Žáčková, Michal Polák and Radek Schuster, 52–75.
Plzeň: University of West Bohemia.</ref>

In 2023, [[Microsoft]] researchers published a detailed evaluation of [[GPT-4]]. They concluded: "Given the breadth and depth of GPT-4’s capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system."<ref>{{Cite web |date=24 March 2023 |title=Microsoft Now Claims GPT-4 Shows 'Sparks' of General Intelligence |url=https://www.vice.com/en/article/microsoft-now-claims-gpt-4-shows-sparks-of-general-intelligence/}}</ref> Another study in 2023 reported that GPT-4 outperforms 99% of humans on the [[Torrance Tests of Creative Thinking|Torrance tests of creative thinking]].<ref>{{Cite web |last=Shimek |first=Cary |date=2023-07-06 |title=AI Outperforms Humans in Creativity Test |url=https://neurosciencenews.com/ai-creativity-23585/ |access-date=2023-10-20 |website=Neuroscience News}}</ref><ref>{{Cite journal |last1=Guzik |first1=Erik E. |last2=Byrge |first2=Christian |last3=Gilde |first3=Christian |date=2023-12-01 |title=The originality of machines: AI takes the Torrance Test |journal=Journal of Creativity |volume=33 |issue=3 |pages=100065 |doi=10.1016/j.yjoc.2023.100065 |issn=2713-3745 |s2cid=261087185 |doi-access=free}}</ref>

[[Blaise Agüera y Arcas]] and [[Peter Norvig]] wrote in 2023 that a significant level of general intelligence has already been achieved with [[frontier model]]s. They wrote that reluctance to accept this view stems from four main reasons: a "healthy skepticism about metrics for AGI", an "ideological commitment to alternative AI theories or techniques", a "devotion to human (or biological) exceptionalism", or a "concern about the economic implications of AGI".<ref name=":3">{{Cite journal |last=Arcas |first=Blaise Agüera y |date=2023-10-10 |title=Artificial General Intelligence Is Already Here |url=https://www.noemamag.com/artificial-general-intelligence-is-already-here/ |journal=Noema |language=en-US}}</ref>

2023 also marked the emergence of large multimodal models (large language models capable of processing or generating multiple [[Modality (human–computer interaction)|modalities]] such as text, audio, and images).<ref>{{Cite web |last=Zia |first=Tehseen |date=January 8, 2024 |title=Unveiling of Large Multimodal Models: Shaping the Landscape of Language Models in 2024 |url=https://www.unite.ai/unveiling-of-large-multimodal-models-shaping-the-landscape-of-language-models-in-2024/ |access-date=2024-05-26 |website=Unite.ai}}</ref>

In 2024, OpenAI released [[OpenAI o1|o1-preview]], the first of a series of models that "spend more time thinking before they respond". According to [[Mira Murati]], this ability to think before responding represents a new, additional paradigm.
It improves model outputs by spending more computing power when generating the answer, whereas the model-scaling paradigm improves outputs by increasing model size, training data, and training compute.<ref>{{Cite web |date=September 12, 2024 |title=Introducing OpenAI o1-preview |url=https://openai.com/index/introducing-openai-o1-preview/ |website=OpenAI}}</ref><ref>{{Cite magazine |last=Knight |first=Will |title=OpenAI Announces a New AI Model, Code-Named Strawberry, That Solves Difficult Problems Step by Step |url=https://www.wired.com/story/openai-o1-strawberry-problem-reasoning/ |access-date=2024-09-17 |magazine=Wired |language=en-US |issn=1059-1028}}</ref>

An [[OpenAI]] employee, Vahid Kazemi, claimed in 2024 that the company had achieved AGI, stating, "In my opinion, we have already achieved AGI and it's even more clear with [[OpenAI o1|O1]]." Kazemi clarified that while the AI is not yet "better than any human at any task", it is "better than most humans at most tasks". He also addressed criticisms that large language models (LLMs) merely follow predefined patterns, comparing their learning process to the scientific method of observing, hypothesizing, and verifying. These statements have sparked debate, as they rely on a broad and unconventional definition of AGI—traditionally understood as AI that matches human intelligence across all domains. Critics argue that, while OpenAI's models demonstrate remarkable versatility, they may not fully meet this standard. Notably, Kazemi's comments came shortly after OpenAI removed "AGI" from the terms of its partnership with [[Microsoft]], prompting speculation about the company's strategic intentions.<ref>{{Cite web |date=13 December 2024 |title=OpenAI Employee Claims AGI Has Been Achieved |url=https://orbitaltoday.com/2024/12/13/openai-employee-claims-agi-has-been-achieved/?utm_source=chatgpt.com |access-date=2024-12-27 |website=Orbital Today}}</ref>