==History==

===Classical AI===
{{Main|History of artificial intelligence|Symbolic artificial intelligence}}

Modern AI research began in the mid-1950s.<ref>{{Harvnb|Crevier|1993|pp=48–50}}</ref> The first generation of AI researchers were convinced that artificial general intelligence was possible and that it would exist in just a few decades.<ref>{{Cite web |last=Kaplan |first=Andreas |date=2022 |title=Artificial Intelligence, Business and Civilization – Our Fate Made in Machines |url=https://www.routledge.com/Artificial-Intelligence-Business-and-Civilization-Our-Fate-Made-in-Machines/Kaplan/p/book/9781032155319 |url-status=live |archive-url=https://web.archive.org/web/20220506103920/https://www.routledge.com/Artificial-Intelligence-Business-and-Civilization-Our-Fate-Made-in-Machines/Kaplan/p/book/9781032155319 |archive-date=6 May 2022 |access-date=12 March 2022}}</ref> AI pioneer [[Herbert A. Simon]] wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do."<ref>{{Harvnb|Simon|1965|p=96}} quoted in {{Harvnb|Crevier|1993|p=109}}</ref> Their predictions were the inspiration for [[Stanley Kubrick]] and [[Arthur C. Clarke]]'s character [[HAL 9000]], who embodied what AI researchers believed they could create by the year 2001. AI pioneer [[Marvin Minsky]] was a consultant<ref>{{Cite web |title=Scientist on the Set: An Interview with Marvin Minsky |url=http://mitpress.mit.edu/e-books/Hal/chap2/two1.html |url-status=live |archive-url=https://web.archive.org/web/20120716182537/http://mitpress.mit.edu/e-books/Hal/chap2/two1.html |archive-date=16 July 2012 |access-date=5 April 2008}}</ref> on the project of making HAL 9000 as realistic as possible according to the consensus predictions of the time. He said in 1967, "Within a generation... the problem of creating 'artificial intelligence' will substantially be solved".<ref>Marvin Minsky to {{Harvtxt|Darrach|1970}}, quoted in {{Harvtxt|Crevier|1993|p=109}}.</ref> Several [[Symbolic AI|classical AI projects]], such as [[Doug Lenat]]'s [[Cyc]] project (which began in 1984) and [[Allen Newell]]'s [[Soar (cognitive architecture)|Soar]] project, were directed at AGI. However, in the early 1970s, it became obvious that researchers had grossly underestimated the difficulty of the project.
Funding agencies became skeptical of AGI and put researchers under increasing pressure to produce useful "applied AI".{{Efn|The [[Lighthill report]] specifically criticized AI's "grandiose objectives" and led to the dismantling of AI research in England.<ref>{{Harvnb|Lighthill|1973}}; {{Harvnb|Howe|1994}}</ref> In the U.S., [[DARPA]] became determined to fund only "mission-oriented direct research, rather than basic undirected research".{{Sfn|NRC|1999|loc="Shift to Applied Research Increases Investment"}}<ref>{{Harvnb|Crevier|1993|pp=115–117}}; {{Harvnb|Russell|Norvig|2003|pp=21–22}}.</ref>}}

In the early 1980s, Japan's [[Fifth Generation Computer]] Project revived interest in AGI, setting out a ten-year timeline that included AGI goals like "carry on a casual conversation".<ref>{{Harvnb|Crevier|1993|p=211}}, {{Harvnb|Russell|Norvig|2003|p=24}} and see also {{Harvnb|Feigenbaum|McCorduck|1983}}</ref> In response to this and the success of [[expert systems]], both industry and government pumped money into the field.{{Sfn|NRC|1999|loc="Shift to Applied Research Increases Investment"}}<ref>{{Harvnb|Crevier|1993|pp=161–162,197–203,240}}; {{Harvnb|Russell|Norvig|2003|p=25}}.</ref> However, confidence in AI spectacularly collapsed in the late 1980s, and the goals of the Fifth Generation Computer Project were never fulfilled.<ref>{{Harvnb|Crevier|1993|pp=209–212}}</ref> For the second time in 20 years, AI researchers who predicted the imminent achievement of AGI had been mistaken.

By the 1990s, AI researchers had a reputation for making vain promises. They became reluctant to make predictions at all{{Efn|As AI founder [[John McCarthy (computer scientist)|John McCarthy]] writes "it would be a great relief to the rest of the workers in AI if the inventors of new general formalisms would express their hopes in a more guarded form than has sometimes been the case."<ref>{{Cite web |last=McCarthy |first=John |author-link=John McCarthy (computer scientist) |date=2000 |title=Reply to Lighthill |url=http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html |url-status=live |archive-url=https://web.archive.org/web/20080930164952/http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthill.html |archive-date=30 September 2008 |access-date=29 September 2007 |publisher=Stanford University}}</ref>}} and avoided mention of "human level" artificial intelligence for fear of being labeled "wild-eyed dreamer[s]".<ref>{{Cite news |last=Markoff |first=John |date=14 October 2005 |title=Behind Artificial Intelligence, a Squadron of Bright Real People |url=https://www.nytimes.com/2005/10/14/technology/14artificial.html?ei=5070&en=11ab55edb7cead5e&ex=1185940800&adxnnl=1&adxnnlx=1185805173-o7WsfW7qaP0x5%2FNUs1cQCQ |url-status=live |archive-url=https://web.archive.org/web/20230202181023/https://www.nytimes.com/2005/10/14/technology/behind-artificial-intelligence-a-squadron-of-bright-real-people.html |archive-date=2 February 2023 |access-date=18 February 2017 |work=The New York Times |quote=At its low point, some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers.}}</ref>

===Narrow AI research===
{{Main|Artificial intelligence}}

In the 1990s and early 21st century, mainstream AI achieved commercial success and academic respectability by focusing on specific sub-problems where AI could produce verifiable results and commercial applications, such as [[speech recognition]] and [[recommendation algorithm]]s.<ref>{{Harvnb|Russell|Norvig|2003|pp=25–26}}</ref>
These "applied AI" systems are now used extensively throughout the technology industry, and research in this vein is heavily funded in both academia and industry. {{As of|2018}}, development in this field was considered an emerging trend, with a mature stage expected to be reached in more than 10 years.<ref>{{Cite web |title=Trends in the Emerging Tech Hype Cycle |url=https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |url-status=live |archive-url=https://web.archive.org/web/20190522024829/https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |archive-date=22 May 2019 |access-date=7 May 2019 |publisher=Gartner Reports}}</ref>

At the turn of the century, many mainstream AI researchers<ref name=":4"/> hoped that strong AI could be developed by combining programs that solve various sub-problems. [[Hans Moravec]] wrote in 1988:

<blockquote>I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real-world competence and the [[commonsense knowledge (artificial intelligence)|commonsense knowledge]] that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical [[golden spike]] is driven uniting the two efforts.<ref name=":4">{{Harvnb|Moravec|1988|p=20}}</ref></blockquote>

However, even at the time, this was disputed. For example, Stevan Harnad of Princeton University concluded his 1990 paper on the [[Symbol grounding problem|symbol grounding hypothesis]] by stating:

<blockquote>The expectation has often been voiced that "top-down" (symbolic) approaches to modeling cognition will somehow meet "bottom-up" (sensory) approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up. A free-floating symbolic level like the software level of a computer will never be reached by this route (or vice versa) – nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings (thereby merely reducing ourselves to the functional equivalent of a programmable computer).<ref>{{Cite journal |last=Harnad |first=S. |date=1990 |title=The Symbol Grounding Problem |journal=Physica D |volume=42 |issue=1–3 |pages=335–346 |arxiv=cs/9906002 |bibcode=1990PhyD...42..335H |doi=10.1016/0167-2789(90)90087-6 |s2cid=3204300}}</ref></blockquote>

===Modern artificial general intelligence research===
The term "artificial general intelligence" was used as early as 1997 by Mark Gubrud<ref>{{Harvnb|Gubrud|1997}}</ref> in a discussion of the implications of fully automated military production and operations. A mathematical formalism of AGI was proposed by [[Marcus Hutter]] in 2000. Named [[AIXI]], the proposed AGI agent maximises "the ability to satisfy goals in a wide range of environments".<ref name=":14">{{Cite book |last=Hutter |first=Marcus |url=https://link.springer.com/book/10.1007/b138233 |title=Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability |date=2005 |publisher=Springer |isbn=978-3-5402-6877-2 |series=Texts in Theoretical Computer Science. An EATCS Series |doi=10.1007/b138233 |access-date=19 July 2022 |archive-url=https://web.archive.org/web/20220719052038/https://link.springer.com/book/10.1007/b138233 |archive-date=19 July 2022 |url-status=live |s2cid=33352850}}</ref>
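Legg and Hutter's associated ''universal intelligence'' measure makes this idea precise: an agent, modelled as a policy <math>\pi</math>, is scored by the reward it can expect to earn across every computable environment, with simpler environments weighted more heavily. In sketch form, where <math>E</math> is the set of computable environments, <math>K(\mu)</math> is the [[Kolmogorov complexity]] of environment <math>\mu</math>, and <math>V^\pi_\mu</math> is the expected total reward of <math>\pi</math> in <math>\mu</math>:

<math display="block">\Upsilon(\pi) := \sum_{\mu \in E} 2^{-K(\mu)} \, V^\pi_\mu</math>

Because the weights <math>2^{-K(\mu)}</math> favour simple environments while the sum ranges over all of them, a high score requires competence across a genuinely wide range of environments rather than on any single task.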
This type of AGI, characterized by the ability to maximise a mathematical definition of intelligence rather than exhibit human-like behaviour,<ref>{{Cite thesis |last=Legg |first=Shane |title=Machine Super Intelligence |date=2008 |access-date=19 July 2022 |publisher=University of Lugano |url=http://www.vetta.org/documents/Machine_Super_Intelligence.pdf |archive-url=https://web.archive.org/web/20220615160113/https://www.vetta.org/documents/Machine_Super_Intelligence.pdf |archive-date=15 June 2022 |url-status=live}}</ref> was also called universal artificial intelligence.<ref>{{Cite book |last=Goertzel |first=Ben |url=https://www.researchgate.net/publication/271390398 |title=Artificial General Intelligence |date=2014 |publisher=Journal of Artificial General Intelligence |isbn=978-3-3190-9273-7 |series=Lecture Notes in Computer Science |volume=8598 |doi=10.1007/978-3-319-09274-4 |s2cid=8387410}}</ref>

The term AGI was re-introduced and popularized by [[Shane Legg]] and [[Ben Goertzel]] around 2002.<ref>{{Cite web |title=Who coined the term "AGI"? |url=http://goertzel.org/who-coined-the-term-agi/ |url-status=live |archive-url=https://web.archive.org/web/20181228083048/http://goertzel.org/who-coined-the-term-agi/ |archive-date=28 December 2018 |access-date=28 December 2018 |website=goertzel.org |language=en-US}}, via [[Life 3.0]]: 'The term "AGI" was popularized by... Shane Legg, Mark Gubrud and Ben Goertzel'</ref> AGI research activity in 2006 was described by Pei Wang and Ben Goertzel<ref>{{Harvnb|Wang|Goertzel|2007}}</ref> as "producing publications and preliminary results". The first summer school in AGI was organized in Xiamen, China in 2009<ref>{{Cite web |title=First International Summer School in Artificial General Intelligence, Main summer school: June 22 – July 3, 2009, OpenCog Lab: July 6-9, 2009 |url=https://goertzel.org/AGI_Summer_School_2009.htm |url-status=live |archive-url=https://web.archive.org/web/20200928173146/https://www.goertzel.org/AGI_Summer_School_2009.htm |archive-date=28 September 2020 |access-date=11 May 2020}}</ref> by Xiamen University's Artificial Brain Laboratory and OpenCog.
The first university course was given in 2010<ref>{{Cite web |title=Избираеми дисциплини 2009/2010 – пролетен триместър |trans-title=Elective courses 2009/2010 – spring trimester |url=http://fmi-plovdiv.org/index.jsp?id=1054&ln=1 |url-status=live |archive-url=https://web.archive.org/web/20200726103659/http://fmi-plovdiv.org/index.jsp?id=1054&ln=1 |archive-date=26 July 2020 |access-date=11 May 2020 |website=Факултет по математика и информатика [Faculty of Mathematics and Informatics] |language=bg}}</ref> and 2011<ref>{{Cite web |title=Избираеми дисциплини 2010/2011 – зимен триместър |trans-title=Elective courses 2010/2011 – winter trimester |url=http://fmi.uni-plovdiv.bg/index.jsp?id=1139&ln=1 |url-status=live |archive-url=https://web.archive.org/web/20200726094625/http://fmi.uni-plovdiv.bg/index.jsp?id=1139&ln=1 |archive-date=26 July 2020 |access-date=11 May 2020 |website=Факултет по математика и информатика [Faculty of Mathematics and Informatics] |language=bg}}</ref> at Plovdiv University, Bulgaria, by Todor Arnaudov. MIT presented a course on AGI in 2018, organized by [[Lex Fridman]] and featuring a number of guest lecturers.

{{As of|2023}}, a small number of computer scientists are active in AGI research, and many contribute to a series of AGI conferences. However, a growing number of researchers are interested in open-ended learning,<ref name=":10">{{Cite journal |last1=Shevlin |first1=Henry |last2=Vold |first2=Karina |last3=Crosby |first3=Matthew |last4=Halina |first4=Marta |date=2019-10-04 |title=The limits of machine intelligence: Despite progress in machine intelligence, artificial general intelligence is still a major challenge |journal=EMBO Reports |language=en |volume=20 |issue=10 |pages=e49177 |doi=10.15252/embr.201949177 |issn=1469-221X |pmc=6776890 |pmid=31531926}}</ref><ref name=":2" /> which is the idea of allowing AI to continuously learn and innovate like humans do.

=== Feasibility ===
[[File:When-do-experts-expect-Artificial-General-Intelligence.png|thumb|Surveys about when experts expect artificial general intelligence<ref name=":22" />]]
As of 2023, the development and potential achievement of AGI remain subjects of intense debate within the AI community. While the traditional consensus held that AGI was a distant goal, recent advancements have led some researchers and industry figures to claim that early forms of AGI may already exist.<ref name=":17">{{Cite web |date=23 March 2023 |title=Microsoft Researchers Claim GPT-4 Is Showing "Sparks" of AGI |url=https://futurism.com/gpt-4-sparks-of-agi |access-date=2023-12-13 |website=Futurism}}</ref> AI pioneer [[Herbert A. Simon]] speculated in 1965 that "machines will be capable, within twenty years, of doing any work a man can do". This prediction failed to come true.
Microsoft co-founder [[Paul Allen]] believed that such intelligence is unlikely in the 21st century because it would require "unforeseeable and fundamentally unpredictable breakthroughs" and a "scientifically deep understanding of cognition".<ref>{{Cite news |last1=Allen |first1=Paul |last2=Greaves |first2=Mark |date=October 12, 2011 |title=The Singularity Isn't Near |url=http://www.technologyreview.com/view/425733/paul-allen-the-singularity-isnt-near/ |access-date=17 September 2014 |work=[[MIT Technology Review]]}}</ref> Writing in ''[[The Guardian]]'', roboticist [[Alan Winfield]] claimed the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical faster-than-light spaceflight.<ref>{{Cite news |last=Winfield |first=Alan |title=Artificial intelligence will not turn into a Frankenstein's monster |url=https://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield |url-status=live |archive-url=https://web.archive.org/web/20140917135230/http://www.theguardian.com/technology/2014/aug/10/artificial-intelligence-will-not-become-a-frankensteins-monster-ian-winfield |archive-date=17 September 2014 |access-date=17 September 2014 |work=[[The Guardian]]}}</ref>

A further challenge is the lack of clarity in defining what [[intelligence]] entails. Does it require consciousness? Must it display the ability to set goals as well as pursue them? Is it purely a matter of scale such that if model sizes increase sufficiently, intelligence will emerge? Are faculties such as planning, reasoning, and causal understanding required? Does intelligence require explicitly replicating the brain and its specific faculties? Does it require emotions?<ref>{{Cite journal |last=Deane |first=George |date=2022 |title=Machines That Feel and Think: The Role of Affective Feelings and Mental Action in (Artificial) General Intelligence |url=http://dx.doi.org/10.1162/artl_a_00368 |journal=Artificial Life |volume=28 |issue=3 |pages=289–309 |doi=10.1162/artl_a_00368 |issn=1064-5462 |pmid=35881678 |s2cid=251069071}}</ref>

Most AI researchers believe strong AI can be achieved in the future, but some thinkers, like [[Hubert Dreyfus]] and [[Roger Penrose]], deny the possibility of achieving it.{{Sfn|Clocksin|2003}}<ref name=":0">{{Cite journal |last=Fjelland |first=Ragnar |date=2020-06-17 |title=Why general artificial intelligence will not be realized |journal=Humanities and Social Sciences Communications |language=en |volume=7 |issue=1 |pages=1–9 |doi=10.1057/s41599-020-0494-4 |issn=2662-9992 |s2cid=219710554 |doi-access=free |hdl-access=free |hdl=11250/2726984}}</ref> <!-- "One problem is that while humans are complex, we are not general intelligences." [If this is relevant, it is so in some other section, and deserves a citation and more discussion of the implications.] --> [[John McCarthy (computer scientist)|John McCarthy]] is among those who believe human-level AI will be accomplished, but that the present level of progress is such that a date cannot accurately be predicted.{{Sfn|McCarthy|2007b}}

AI experts' views on the feasibility of AGI wax and wane. Four polls conducted in 2012 and 2013 suggested that the median estimate among experts for when they would be 50% confident AGI would arrive was 2040 to 2050, depending on the poll, with the mean being 2081.
Of the experts, 16.5% answered with "never" when asked the same question but with a 90% confidence instead.<ref name="new yorker doomsday">{{Cite news |last=Khatchadourian |first=Raffi |date=23 November 2015 |title=The Doomsday Invention: Will artificial intelligence bring us utopia or destruction? |url=http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom |url-status=live |archive-url=https://web.archive.org/web/20160128105955/http://www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom |archive-date=28 January 2016 |access-date=7 February 2016 |work=[[The New Yorker (magazine)|The New Yorker]]}}</ref><ref>Müller, V. C., & Bostrom, N. (2016). Future progress in artificial intelligence: A survey of expert opinion. In Fundamental issues of artificial intelligence (pp. 555–572). Springer, Cham.</ref> Further considerations of current AGI progress can be found above in [[#Tests for confirming human-level AGI|''Tests for confirming human-level AGI'']].

A report by Stuart Armstrong and Kaj Sotala of the [[Machine Intelligence Research Institute]] found that "over [a] 60-year time frame there is a strong bias towards predicting the arrival of human-level AI as between 15 and 25 years from the time the prediction was made". They analyzed 95 predictions made between 1950 and 2012 on when human-level AI would come about.<!-- "There was no difference between predictions made by experts and non-experts." see: https://aiimpacts.org/error-in-armstrong-and-sotala-2012/--><ref>Armstrong, Stuart, and Kaj Sotala. 2012. “How We’re Predicting AI—or Failing To.” In ''Beyond AI: Artificial Dreams'', edited by Jan Romportl, Pavel Ircing, Eva Žáčková, Michal Polák and Radek Schuster, 52–75. Plzeň: University of West Bohemia</ref>

In 2023, [[Microsoft]] researchers published a detailed evaluation of [[GPT-4]]. They concluded: "Given the breadth and depth of GPT-4’s capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system."<ref>{{Cite web |date=24 March 2023 |title=Microsoft Now Claims GPT-4 Shows 'Sparks' of General Intelligence |url=https://www.vice.com/en/article/microsoft-now-claims-gpt-4-shows-sparks-of-general-intelligence/}}</ref> Another study in 2023 reported that GPT-4 outperforms 99% of humans on the [[Torrance Tests of Creative Thinking|Torrance tests of creative thinking]].<ref>{{Cite web |last=Shimek |first=Cary |date=2023-07-06 |title=AI Outperforms Humans in Creativity Test |url=https://neurosciencenews.com/ai-creativity-23585/ |access-date=2023-10-20 |website=Neuroscience News}}</ref><ref>{{Cite journal |last1=Guzik |first1=Erik E. |last2=Byrge |first2=Christian |last3=Gilde |first3=Christian |date=2023-12-01 |title=The originality of machines: AI takes the Torrance Test |journal=Journal of Creativity |volume=33 |issue=3 |pages=100065 |doi=10.1016/j.yjoc.2023.100065 |issn=2713-3745 |s2cid=261087185 |doi-access=free}}</ref> [[Blaise Agüera y Arcas]] and [[Peter Norvig]] wrote in 2023 that a significant level of general intelligence has already been achieved with [[frontier model]]s.
They wrote that reluctance to accept this view stems from four main reasons: a "healthy skepticism about metrics for AGI", an "ideological commitment to alternative AI theories or techniques", a "devotion to human (or biological) exceptionalism", or a "concern about the economic implications of AGI".<ref name=":3">{{Cite journal |last=Arcas |first=Blaise Agüera y |date=2023-10-10 |title=Artificial General Intelligence Is Already Here |url=https://www.noemamag.com/artificial-general-intelligence-is-already-here/ |journal=Noema |language=en-US}}</ref>

The year 2023 also marked the emergence of large multimodal models (large language models capable of processing or generating multiple [[Modality (human–computer interaction)|modalities]] such as text, audio, and images).<ref>{{Cite web |last=Zia |first=Tehseen |date=January 8, 2024 |title=Unveiling of Large Multimodal Models: Shaping the Landscape of Language Models in 2024 |url=https://www.unite.ai/unveiling-of-large-multimodal-models-shaping-the-landscape-of-language-models-in-2024/ |access-date=2024-05-26 |website=Unite.ai}}</ref>

In 2024, OpenAI released [[OpenAI o1|o1-preview]], the first of a series of models that "spend more time thinking before they respond". According to [[Mira Murati]], this ability to think before responding represents a new, additional paradigm. It improves model outputs by spending more computing power when generating the answer, whereas the model scaling paradigm improves outputs by increasing the model size, training data, and training compute power.<ref>{{Cite web |date=September 12, 2024 |title=Introducing OpenAI o1-preview |url=https://openai.com/index/introducing-openai-o1-preview/ |website=OpenAI}}</ref><ref>{{Cite magazine |last=Knight |first=Will |title=OpenAI Announces a New AI Model, Code-Named Strawberry, That Solves Difficult Problems Step by Step |url=https://www.wired.com/story/openai-o1-strawberry-problem-reasoning/ |access-date=2024-09-17 |magazine=Wired |language=en-US |issn=1059-1028}}</ref>

An [[OpenAI]] employee, Vahid Kazemi, claimed in 2024 that the company had achieved AGI, stating, "In my opinion, we have already achieved AGI and it's even more clear with [[OpenAI o1|O1]]." Kazemi clarified that while the AI is not yet "better than any human at any task", it is "better than most humans at most tasks." He also addressed criticisms that large language models (LLMs) merely follow predefined patterns, comparing their learning process to the scientific method of observing, hypothesizing, and verifying. These statements have sparked debate, as they rely on a broad and unconventional definition of AGI, which is traditionally understood as AI that matches human intelligence across all domains. Critics argue that, while OpenAI's models demonstrate remarkable versatility, they may not fully meet this standard.
Notably, Kazemi's comments came shortly after OpenAI removed "AGI" from the terms of its partnership with [[Microsoft]], prompting speculation about the company's strategic intentions.<ref>{{Cite web |date=13 December 2024 |title=OpenAI Employee Claims AGI Has Been Achieved |url=https://orbitaltoday.com/2024/12/13/openai-employee-claims-agi-has-been-achieved/ |access-date=2024-12-27 |website=Orbital Today}}</ref>

=== Timescales ===
[[File:Performance on benchmarks compared to humans - 2024 AI index.jpg|thumb|upright=1.8|AI has surpassed humans on a variety of language understanding and visual understanding benchmarks.<ref>{{Cite web |date=2024-04-15 |title=AI Index: State of AI in 13 Charts |url=https://hai.stanford.edu/news/ai-index-state-ai-13-charts |access-date=2024-06-07 |website=hai.stanford.edu |language=en}}</ref> As of 2023, [[foundation model]]s still lack advanced reasoning and planning capabilities, but rapid progress is expected.<ref>{{Cite web |date=April 19, 2024 |title=Next-Gen AI: OpenAI and Meta's Leap Towards Reasoning Machines |url=https://www.unite.ai/next-gen-ai-openai-and-metas-leap-towards-reasoning-machines/ |access-date=2024-06-07 |website=Unite.ai}}</ref>]]
Progress in artificial intelligence has historically alternated between periods of rapid advancement and periods when progress appeared to stop.{{Sfn|Clocksin|2003}} Each hiatus was ended by fundamental advances in hardware, software, or both, which created space for further progress.{{Sfn|Clocksin|2003}}<ref>{{Cite journal |last=James |first=Alex P. |date=2022 |title=The Why, What, and How of Artificial General Intelligence Chip Development |url=https://ieeexplore.ieee.org/document/9390376 |url-status=live |journal=IEEE Transactions on Cognitive and Developmental Systems |volume=14 |issue=2 |pages=333–347 |arxiv=2012.06338 |doi=10.1109/TCDS.2021.3069871 |issn=2379-8920 |s2cid=228376556 |archive-url=https://web.archive.org/web/20220828140528/https://ieeexplore.ieee.org/document/9390376/ |archive-date=28 August 2022 |access-date=28 August 2022}}</ref><ref>{{Cite journal |last1=Pei |first1=Jing |last2=Deng |first2=Lei |last3=Song |first3=Sen |last4=Zhao |first4=Mingguo |last5=Zhang |first5=Youhui |last6=Wu |first6=Shuang |last7=Wang |first7=Guanrui |last8=Zou |first8=Zhe |last9=Wu |first9=Zhenzhi |last10=He |first10=Wei |last11=Chen |first11=Feng |last12=Deng |first12=Ning |last13=Wu |first13=Si |last14=Wang |first14=Yu |last15=Wu |first15=Yujie |date=2019 |title=Towards artificial general intelligence with hybrid Tianjic chip architecture |url=https://www.nature.com/articles/s41586-019-1424-8 |url-status=live |journal=Nature |language=en |volume=572 |issue=7767 |pages=106–111 |bibcode=2019Natur.572..106P |doi=10.1038/s41586-019-1424-8 |issn=1476-4687 |pmid=31367028 |s2cid=199056116 |archive-url=https://web.archive.org/web/20220829084912/https://www.nature.com/articles/s41586-019-1424-8 |archive-date=29 August 2022 |access-date=29 August 2022}}</ref> For example, the computer hardware available in the twentieth century was not sufficient to implement deep learning, which requires large numbers of [[graphics processing unit]]s (GPUs).<ref>{{Cite journal |last1=Pandey |first1=Mohit |last2=Fernandez |first2=Michael |last3=Gentile |first3=Francesco |last4=Isayev |first4=Olexandr |last5=Tropsha |first5=Alexander |last6=Stern |first6=Abraham C.
|last7=Cherkasov |first7=Artem |date=March 2022 |title=The transformational role of GPU computing and deep learning in drug discovery |journal=Nature Machine Intelligence |language=en |volume=4 |issue=3 |pages=211–221 |doi=10.1038/s42256-022-00463-x |issn=2522-5839 |s2cid=252081559 |doi-access=free}}</ref>

In the introduction to his 2006 book,{{Sfn|Goertzel|Pennachin|2006}} Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century. {{As of|2007}}, the consensus in the AGI research community seemed to be that the timeline discussed by [[Ray Kurzweil]] in 2005 in ''[[The Singularity Is Near]]''<ref name="K">{{Harv|Kurzweil|2005|p=260}}</ref> (i.e. between 2015 and 2045) was plausible.{{Sfn|Goertzel|2007}} Mainstream AI researchers have given a wide range of opinions on whether progress will be this rapid. A 2012 meta-analysis of 95 such opinions found a bias towards predicting that the onset of AGI would occur within 16–26 years for modern and historical predictions alike. That paper has been criticized for how it categorized opinions as expert or non-expert.<ref>{{Cite web |last=Grace |first=Katja |date=2016 |title=Error in Armstrong and Sotala 2012 |url=https://aiimpacts.org/error-in-armstrong-and-sotala-2012/ |url-status=live |archive-url=https://web.archive.org/web/20201204012302/https://aiimpacts.org/error-in-armstrong-and-sotala-2012/ |archive-date=4 December 2020 |access-date=2020-08-24 |website=AI Impacts |type=blog}}</ref>

In 2012, [[Alex Krizhevsky]], [[Ilya Sutskever]], and [[Geoffrey Hinton]] developed a neural network called [[AlexNet]], which won the [[ImageNet]] competition with a top-5 test error rate of 15.3%, significantly better than the second-best entry's rate of 26.3% (the traditional approach used a weighted sum of scores from different pre-defined classifiers).<ref name=":5">{{Cite journal |last=Butz |first=Martin V. |date=2021-03-01 |title=Towards Strong AI |journal=KI – Künstliche Intelligenz |language=en |volume=35 |issue=1 |pages=91–101 |doi=10.1007/s13218-021-00705-x |issn=1610-1987 |s2cid=256065190 |doi-access=free}}</ref> AlexNet is regarded as the initial breakthrough of the current [[deep learning]] wave.<ref name=":5"/>

In 2017, researchers Feng Liu, Yong Shi, and Ying Liu conducted intelligence tests on publicly available and freely accessible weak AI such as Google AI, Apple's Siri, and others. At the maximum, these AIs reached an IQ value of about 47, roughly that of a six-year-old child in first grade; an average adult scores about 100.
Similar tests were carried out in 2014, with the IQ score reaching a maximum value of 27.<ref>{{Cite journal |last1=Liu |first1=Feng |last2=Shi |first2=Yong |last3=Liu |first3=Ying |date=2017 |title=Intelligence Quotient and Intelligence Grade of Artificial Intelligence |journal=Annals of Data Science |volume=4 |issue=2 |pages=179–191 |arxiv=1709.10242 |doi=10.1007/s40745-017-0109-0 |s2cid=37900130}}</ref><ref>{{Cite web |last=Brien |first=Jörn |date=2017-10-05 |title=Google-KI doppelt so schlau wie Siri |trans-title=Google AI is twice as smart as Siri – but a six-year-old beats both |url=https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003 |url-status=live |archive-url=https://web.archive.org/web/20190103055657/https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003/ |archive-date=3 January 2019 |access-date=2 January 2019 |language=de}}</ref>

In 2020, [[OpenAI]] developed [[GPT-3]], a language model capable of performing many diverse tasks without specific training. According to [[Gary Grossman]] in a [[VentureBeat]] article, while there is consensus that GPT-3 is not an example of AGI, it is considered by some to be too advanced to be classified as a narrow AI system.<ref>{{Cite web |last=Grossman |first=Gary |date=September 3, 2020 |title=We're entering the AI twilight zone between narrow and general AI |url=https://venturebeat.com/2020/09/03/were-entering-the-ai-twilight-zone-between-narrow-and-general-ai/ |url-status=live |archive-url=https://web.archive.org/web/20200904191750/https://venturebeat.com/2020/09/03/were-entering-the-ai-twilight-zone-between-narrow-and-general-ai/ |archive-date=4 September 2020 |access-date=September 5, 2020 |publisher=[[VentureBeat]] |quote="Certainly, too, there are those who claim we are already seeing an early example of an AGI system in the recently announced GPT-3 natural language processing (NLP) neural network. ... So is GPT-3 the first example of an AGI system? This is debatable, but the consensus is that it is not AGI. ... If nothing else, GPT-3 tells us there is a middle ground between narrow and general AI."}}</ref> In the same year, Jason Rohrer used his GPT-3 account to develop a chatbot and provided a chatbot-development platform called "Project December". OpenAI asked for changes to the chatbot to comply with its safety guidelines; Rohrer disconnected Project December from the GPT-3 API.<ref>{{Cite news |last=Quach |first=Katyanna |title=A developer built an AI chatbot using GPT-3 that helped a man speak again to his late fiancée.
OpenAI shut it down |url=https://www.theregister.com/2021/09/08/project_december_openai_gpt_3/ |url-status=live |archive-url=https://web.archive.org/web/20211016232620/https://www.theregister.com/2021/09/08/project_december_openai_gpt_3/ |archive-date=16 October 2021 |access-date=16 October 2021 |publisher=The Register}}</ref>

In 2022, [[DeepMind]] developed [[Gato (DeepMind)|Gato]], a "general-purpose" system capable of performing more than 600 different tasks.<ref>{{Citation |last=Wiggers |first=Kyle |title=DeepMind's new AI can perform over 600 tasks, from playing games to controlling robots |date=May 13, 2022 |work=[[TechCrunch]] |url=https://techcrunch.com/2022/05/13/deepminds-new-ai-can-perform-over-600-tasks-from-playing-games-to-controlling-robots/ |access-date=12 June 2022 |archive-url=https://web.archive.org/web/20220616185232/https://techcrunch.com/2022/05/13/deepminds-new-ai-can-perform-over-600-tasks-from-playing-games-to-controlling-robots/ |archive-date=16 June 2022 |url-status=live}}</ref>

In 2023, [[Microsoft Research]] published a study on an early version of OpenAI's [[GPT-4]], contending that it exhibited more general intelligence than previous AI models and demonstrated human-level performance in tasks spanning multiple domains, such as mathematics, coding, and law. This research sparked a debate on whether GPT-4 could be considered an early, incomplete version of artificial general intelligence, emphasizing the need for further exploration and evaluation of such systems.<ref name=":2" />

In 2023, AI researcher [[Geoffrey Hinton]] stated:<ref>{{Cite news |last=Metz |first=Cade |date=2023-05-01 |title='The Godfather of A.I.' Leaves Google and Warns of Danger Ahead |url=https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html |access-date=2023-06-07 |work=The New York Times |language=en-US |issn=0362-4331}}</ref>

{{Blockquote|text=The idea that this stuff could actually get smarter than people – a few people believed that, [...]. But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.}}

He estimated in 2024 (with low confidence) that systems smarter than humans could appear within 5 to 20 years and stressed the attendant existential risks.<ref>{{cite news |date=2024-12-27 |title='Godfather of AI' shortens odds of the technology wiping out humanity over next 30 years |url=https://www.theguardian.com/technology/2024/dec/27/godfather-of-ai-raises-odds-of-the-technology-wiping-out-humanity-over-next-30-years |access-date=2025-04-18 |work=The Guardian}}</ref> In May 2023, [[Demis Hassabis]] similarly said that "The progress in the last few years has been pretty incredible", and that he sees no reason why it would slow down, expecting AGI within a decade or even a few years.<ref>{{Cite web |last=Bove |first=Tristan |title=A.I. could rival human intelligence in 'just a few years,' says CEO of Google's main A.I.
research lab |url=https://fortune.com/2023/05/03/google-deepmind-ceo-agi-artificial-intelligence/ |access-date=2024-09-04 |website=Fortune |language=en}}</ref> In March 2024, [[Nvidia]]'s CEO, [[Jensen Huang]], stated his expectation that within five years, AI would be capable of passing any test at least as well as humans.<ref>{{Cite news |last=Nellis |first=Stephen |date=March 2, 2024 |title=Nvidia CEO says AI could pass human tests in five years |url=https://www.reuters.com/technology/nvidia-ceo-says-ai-could-pass-human-tests-five-years-2024-03-01/ |work=Reuters}}</ref> In June 2024, the AI researcher [[Leopold Aschenbrenner]], a former [[OpenAI]] employee, estimated AGI by 2027 to be "strikingly plausible".<ref>{{Cite news |last=Aschenbrenner |first=Leopold |title=SITUATIONAL AWARENESS, The Decade Ahead |url=https://situational-awareness.ai/}}</ref>