=== Timescales ===
[[File:Performance on benchmarks compared to humans - 2024 AI index.jpg|thumb|upright=1.8|AI has surpassed humans on a variety of language understanding and visual understanding benchmarks.<ref>{{Cite web |date=2024-04-15 |title=AI Index: State of AI in 13 Charts |url=https://hai.stanford.edu/news/ai-index-state-ai-13-charts |access-date=2024-06-07 |website=hai.stanford.edu |language=en}}</ref> As of 2023, [[foundation model]]s still lack advanced reasoning and planning capabilities, but rapid progress is expected.<ref>{{Cite web |date=April 19, 2024 |title=Next-Gen AI: OpenAI and Meta's Leap Towards Reasoning Machines |url=https://www.unite.ai/next-gen-ai-openai-and-metas-leap-towards-reasoning-machines/ |access-date=2024-06-07 |website=Unite.ai}}</ref>]]
Progress in artificial intelligence has historically alternated between periods of rapid advancement and periods in which progress appeared to stall.{{Sfn|Clocksin|2003}} Each hiatus ended with fundamental advances in hardware, software, or both that created space for further progress.{{Sfn|Clocksin|2003}}<ref>{{Cite journal |last=James |first=Alex P. |date=2022 |title=The Why, What, and How of Artificial General Intelligence Chip Development |url=https://ieeexplore.ieee.org/document/9390376 |url-status=live |journal=IEEE Transactions on Cognitive and Developmental Systems |volume=14 |issue=2 |pages=333–347 |arxiv=2012.06338 |doi=10.1109/TCDS.2021.3069871 |issn=2379-8920 |s2cid=228376556 |archive-url=https://web.archive.org/web/20220828140528/https://ieeexplore.ieee.org/document/9390376/ |archive-date=28 August 2022 |access-date=28 August 2022}}</ref><ref>{{Cite journal |last1=Pei |first1=Jing |last2=Deng |first2=Lei |last3=Song |first3=Sen |last4=Zhao |first4=Mingguo |last5=Zhang |first5=Youhui |last6=Wu |first6=Shuang |last7=Wang |first7=Guanrui |last8=Zou |first8=Zhe |last9=Wu |first9=Zhenzhi |last10=He |first10=Wei |last11=Chen |first11=Feng |last12=Deng |first12=Ning |last13=Wu |first13=Si |last14=Wang |first14=Yu |last15=Wu |first15=Yujie |date=2019 |title=Towards artificial general intelligence with hybrid Tianjic chip architecture |url=https://www.nature.com/articles/s41586-019-1424-8 |url-status=live |journal=Nature |language=en |volume=572 |issue=7767 |pages=106–111 |bibcode=2019Natur.572..106P |doi=10.1038/s41586-019-1424-8 |issn=1476-4687 |pmid=31367028 |s2cid=199056116 |archive-url=https://web.archive.org/web/20220829084912/https://www.nature.com/articles/s41586-019-1424-8 |archive-date=29 August 2022 |access-date=29 August 2022}}</ref> For example, the computer hardware available in the twentieth century was not sufficient to implement deep learning, which requires large numbers of [[Graphics processing unit|GPUs]].<ref>{{Cite journal |last1=Pandey |first1=Mohit |last2=Fernandez |first2=Michael |last3=Gentile |first3=Francesco |last4=Isayev |first4=Olexandr |last5=Tropsha |first5=Alexander |last6=Stern |first6=Abraham C. |last7=Cherkasov |first7=Artem |date=March 2022 |title=The transformational role of GPU computing and deep learning in drug discovery |journal=Nature Machine Intelligence |language=en |volume=4 |issue=3 |pages=211–221 |doi=10.1038/s42256-022-00463-x |issn=2522-5839 |s2cid=252081559 |doi-access=free}}</ref> In the introduction to his 2006 book,{{Sfn|Goertzel|Pennachin|2006}} Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century.
{{As of|2007}}, the consensus in the AGI research community seemed to be that the timeline discussed by [[Ray Kurzweil]] in 2005 in ''[[The Singularity is Near]]''<ref name="K">{{Harv|Kurzweil|2005|p=260}}</ref> (i.e. between 2015 and 2045) was plausible.{{Sfn|Goertzel|2007}} Mainstream AI researchers have given a wide range of opinions on whether progress will be this rapid. A 2012 meta-analysis of 95 such opinions found a bias towards predicting that the onset of AGI would occur within 16–26 years, for modern and historical predictions alike. That paper has been criticized for how it categorized opinions as expert or non-expert.<ref>{{Cite web |last=Grace |first=Katja |date=2016 |title=Error in Armstrong and Sotala 2012 |url=https://aiimpacts.org/error-in-armstrong-and-sotala-2012/ |url-status=live |archive-url=https://web.archive.org/web/20201204012302/https://aiimpacts.org/error-in-armstrong-and-sotala-2012/ |archive-date=4 December 2020 |access-date=2020-08-24 |website=AI Impacts |type=blog}}</ref> In 2012, [[Alex Krizhevsky]], [[Ilya Sutskever]], and [[Geoffrey Hinton]] developed a neural network called [[AlexNet]], which won the [[ImageNet]] competition with a top-5 test error rate of 15.3%, significantly better than the second-best entry's rate of 26.3% (the traditional approach used a weighted sum of scores from different pre-defined classifiers).<ref name=":5">{{Cite journal |last=Butz |first=Martin V. |date=2021-03-01 |title=Towards Strong AI |journal=KI – Künstliche Intelligenz |language=en |volume=35 |issue=1 |pages=91–101 |doi=10.1007/s13218-021-00705-x |issn=1610-1987 |s2cid=256065190 |doi-access=free}}</ref> AlexNet was regarded as the initial ground-breaker of the current [[deep learning]] wave.<ref name=":5"/> In 2017, researchers Feng Liu, Yong Shi, and Ying Liu conducted intelligence tests on publicly available weak AI systems such as Google AI and Apple's Siri. The highest-scoring of these AIs reached an IQ value of about 47, roughly comparable to that of a six-year-old child in first grade; an average adult scores about 100. Similar tests carried out in 2014 had yielded a maximum IQ score of 27.<ref>{{Cite journal |last1=Liu |first1=Feng |last2=Shi |first2=Yong |last3=Liu |first3=Ying |date=2017 |title=Intelligence Quotient and Intelligence Grade of Artificial Intelligence |journal=Annals of Data Science |volume=4 |issue=2 |pages=179–191 |arxiv=1709.10242 |doi=10.1007/s40745-017-0109-0 |s2cid=37900130}}</ref><ref>{{Cite web |last=Brien |first=Jörn |date=2017-10-05 |title=Google-KI doppelt so schlau wie Siri |trans-title=Google AI is twice as smart as Siri – but a six-year-old beats both |url=https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003 |url-status=live |archive-url=https://web.archive.org/web/20190103055657/https://t3n.de/news/iq-kind-schlauer-google-ki-siri-864003/ |archive-date=3 January 2019 |access-date=2 January 2019 |language=de}}</ref> In 2020, [[OpenAI]] developed [[GPT-3]], a language model capable of performing many diverse tasks without specific training.
According to [[Gary Grossman]] in a [[VentureBeat]] article, while there is consensus that GPT-3 is not an example of AGI, it is considered by some to be too advanced to be classified as a narrow AI system.<ref>{{Cite web |last=Grossman |first=Gary |date=September 3, 2020 |title=We're entering the AI twilight zone between narrow and general AI |url=https://venturebeat.com/2020/09/03/were-entering-the-ai-twilight-zone-between-narrow-and-general-ai/ |url-status=live |archive-url=https://web.archive.org/web/20200904191750/https://venturebeat.com/2020/09/03/were-entering-the-ai-twilight-zone-between-narrow-and-general-ai/ |archive-date=4 September 2020 |access-date=September 5, 2020 |publisher=[[VentureBeat]] |quote="Certainly, too, there are those who claim we are already seeing an early example of an AGI system in the recently announced GPT-3 natural language processing (NLP) neural network. ... So is GPT-3 the first example of an AGI system? This is debatable, but the consensus is that it is not AGI. ... If nothing else, GPT-3 tells us there is a middle ground between narrow and general AI."}}</ref> In the same year, Jason Rohrer used his GPT-3 account to develop a chatbot and offered a chatbot-development platform called "Project December". When OpenAI asked for changes to the chatbot to comply with its safety guidelines, Rohrer disconnected Project December from the GPT-3 API.<ref>{{Cite news |last=Quach |first=Katyanna |title=A developer built an AI chatbot using GPT-3 that helped a man speak again to his late fiancée. OpenAI shut it down |url=https://www.theregister.com/2021/09/08/project_december_openai_gpt_3/ |url-status=live |archive-url=https://web.archive.org/web/20211016232620/https://www.theregister.com/2021/09/08/project_december_openai_gpt_3/ |archive-date=16 October 2021 |access-date=16 October 2021 |publisher=The Register}}</ref> In 2022, [[DeepMind]] developed [[Gato (DeepMind)|Gato]], a "general-purpose" system capable of performing more than 600 different tasks.<ref>{{Citation |last=Wiggers |first=Kyle |title=DeepMind's new AI can perform over 600 tasks, from playing games to controlling robots |date=May 13, 2022 |work=[[TechCrunch]] |url=https://techcrunch.com/2022/05/13/deepminds-new-ai-can-perform-over-600-tasks-from-playing-games-to-controlling-robots/ |access-date=12 June 2022 |archive-url=https://web.archive.org/web/20220616185232/https://techcrunch.com/2022/05/13/deepminds-new-ai-can-perform-over-600-tasks-from-playing-games-to-controlling-robots/ |archive-date=16 June 2022 |url-status=live}}</ref> In 2023, [[Microsoft Research]] published a study on an early version of OpenAI's [[GPT-4]], contending that it exhibited more general intelligence than previous AI models and demonstrated human-level performance in tasks spanning multiple domains, such as mathematics, coding, and law. This research sparked a debate on whether GPT-4 could be considered an early, incomplete version of artificial general intelligence, emphasizing the need for further exploration and evaluation of such systems.<ref name=":2" /> In 2023, AI researcher [[Geoffrey Hinton]] stated:<ref>{{Cite news |last=Metz |first=Cade |date=2023-05-01 |title='The Godfather of A.I.' Leaves Google and Warns of Danger Ahead |url=https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html |access-date=2023-06-07 |work=The New York Times |language=en-US |issn=0362-4331}}</ref>
{{Blockquote|text=The idea that this stuff could actually get smarter than people – a few people believed that, [...]. But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.}}
He estimated in 2024 (with low confidence) that systems smarter than humans could appear within 5 to 20 years and stressed the attendant existential risks.<ref>{{cite news |date=2024-12-27 |title='Godfather of AI' shortens odds of the technology wiping out humanity over next 30 years |url=https://www.theguardian.com/technology/2024/dec/27/godfather-of-ai-raises-odds-of-the-technology-wiping-out-humanity-over-next-30-years |access-date=2025-04-18 |work=The Guardian}}</ref> In May 2023, [[Demis Hassabis]] similarly said that "The progress in the last few years has been pretty incredible", and that he sees no reason why it would slow down, expecting AGI within a decade or even a few years.<ref>{{Cite web |last=Bove |first=Tristan |title=A.I. could rival human intelligence in 'just a few years,' says CEO of Google's main A.I. research lab |url=https://fortune.com/2023/05/03/google-deepmind-ceo-agi-artificial-intelligence/ |access-date=2024-09-04 |website=Fortune |language=en}}</ref> In March 2024, [[Nvidia]]'s CEO, [[Jensen Huang]], stated his expectation that within five years, AI would be capable of passing any test at least as well as humans.<ref>{{Cite news |last=Nellis |first=Stephen |date=March 2, 2024 |title=Nvidia CEO says AI could pass human tests in five years |url=https://www.reuters.com/technology/nvidia-ceo-says-ai-could-pass-human-tests-five-years-2024-03-01/ |work=Reuters}}</ref> In June 2024, the AI researcher [[Leopold Aschenbrenner]], a former [[OpenAI]] employee, estimated AGI by 2027 to be "strikingly plausible".<ref>{{Cite news |last=Aschenbrenner |first=Leopold |title=Situational Awareness: The Decade Ahead |url=https://situational-awareness.ai/}}</ref>