{{Short description|Type of AI with wide-ranging abilities}}
{{Distinguish|Generative artificial intelligence|Artificial superintelligence}}
{{Use British English|date=March 2019}}
{{Use dmy dates|date=December 2019}}
{{Artificial intelligence|Major goals}}
'''Artificial general intelligence''' ('''AGI'''), sometimes called '''human-level intelligence AI''', is a type of [[artificial intelligence]] that would match or surpass human capabilities across virtually all cognitive tasks.<ref>{{cite journal |last=Goertzel |first=Ben |title=Artificial General Intelligence: Concept, State of the Art, and Future Prospects |journal=Journal of Artificial General Intelligence |year=2014 |volume=5 |issue=1 |pages=1–48 |doi=10.2478/jagi-2014-0001 |bibcode=2014JAGI....5....1G |doi-access=free}}</ref><ref>{{cite journal |last1=Lake |first1=Brenden |last2=Ullman |first2=Tom |last3=Tenenbaum |first3=Joshua |last4=Gershman |first4=Samuel |title=Building machines that learn and think like people |journal=Behavioral and Brain Sciences |year=2017 |volume=40 |pages=e253 |doi=10.1017/S0140525X16001837 |pmid=27881212 |arxiv=1604.00289}}</ref> Some researchers argue that state-of-the-art [[large language model]]s (LLMs) already exhibit early signs of AGI-level capability, while others maintain that genuine AGI has not yet been achieved.<ref name=":2">{{cite arXiv |last1=Bubeck |first1=Sébastien |title=Sparks of Artificial General Intelligence: Early Experiments with GPT-4 |year=2023 |class=cs.CL |eprint=2303.12712}}</ref> AGI is conceptually distinct from [[artificial superintelligence]] (ASI), which would outperform the best human abilities across every domain by a wide margin.<ref>{{cite book |last=Bostrom |first=Nick |title=Superintelligence: Paths, Dangers, Strategies |year=2014 |publisher=Oxford University Press}}</ref> AGI is considered one of the definitions of [[Chinese room#Strong AI vs. AI research|strong AI]]. Unlike [[artificial narrow intelligence]] (ANI), whose competence is confined to well-defined tasks, an AGI system could generalise knowledge, transfer skills between domains, and solve novel problems without task-specific reprogramming. The concept does not, in principle, require the system to be an autonomous agent; either a static model, such as a highly capable large language model, or an embodied robot could satisfy the definition, so long as human-level breadth and proficiency are achieved.<ref>{{cite conference |title=Why AGI Might Not Need Agency |first=Shane |last=Legg |conference=Proceedings of the Conference on Artificial General Intelligence |year=2023}}</ref>

Creating AGI is a primary goal of AI research and of companies such as [[OpenAI]],<ref name="OpenAI Charter">{{Cite web |title=OpenAI Charter |url=https://openai.com/charter |access-date=2023-04-06 |website=OpenAI |language=en-US |quote="Our mission is to ensure that artificial general intelligence benefits all of humanity."}}</ref> [[Google]],<ref name=":1">{{Cite news |last=Grant |first=Nico |date=2025-02-27 |title=Google's Sergey Brin Asks Workers to Spend More Time In the Office |url=https://www.nytimes.com/2025/02/27/technology/google-sergey-brin-return-to-office.html |access-date=2025-03-01 |work=The New York Times |language=en-US |issn=0362-4331}}</ref> and [[Meta Platforms|Meta]].<ref>{{Cite web |last=Heath |first=Alex |date=2024-01-18 |title=Mark Zuckerberg's new goal is creating artificial general intelligence |url=https://www.theverge.com/2024/1/18/24042354/mark-zuckerberg-meta-agi-reorg-interview |access-date=2024-06-13 |website=The Verge |language=en |quote="Our vision is to build AI that is better than human-level at all of the human senses."}}</ref> A 2020 survey identified 72 active AGI [[research and development]] projects across 37 countries.<ref name="baum">{{Cite report |url=https://gcrinstitute.org/papers/055_agi-2020.pdf |title=A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy |last=Baum |first=Seth D. |date=2020 |publisher=Global Catastrophic Risk Institute |quote="72 AGI R&D projects were identified as being active in 2020." |access-date=28 November 2024}}</ref> The timeline for achieving AGI remains deeply contested: recent surveys of AI researchers give median forecasts ranging from the early 2030s to mid-century, while significant numbers of respondents expect it much sooner, or never.<ref>{{cite web |title=Shrinking AGI timelines: a review of expert forecasts |url=https://80000hours.org/2025/03/when-do-experts-expect-agi-to-arrive/ |website=80,000 Hours |date=2025-03-21 |access-date=2025-04-18}}</ref><ref>{{cite web |title=How the U.S. Public and AI Experts View Artificial Intelligence |url=https://www.pewresearch.org/internet/2025/04/03/how-the-us-public-and-ai-experts-view-artificial-intelligence/ |website=Pew Research Center |date=2025-04-03 |access-date=2025-04-18}}</ref><ref>{{cite web |date=2023-02-07 |title=AI timelines: What do experts in artificial intelligence expect for the future? |url=https://ourworldindata.org/ai-timelines |access-date=2025-04-18 |website=Our World in Data}}</ref> There is also debate over the exact definition of AGI, including whether modern LLMs such as [[GPT-4]] count as early forms of it.<ref name=":2" />

AGI is a common topic in [[science fiction]] and [[futures studies]].<ref>{{Cite book |last=Butler |first=Octavia E. |title=Parable of the Sower |publisher=Grand Central Publishing |date=1993 |isbn=978-0-4466-7550-5 |quote="All that you touch you change. All that you change changes you."}}</ref><ref>{{Cite book |last=Vinge |first=Vernor |title=A Fire Upon the Deep |publisher=Tor Books |date=1992 |isbn=978-0-8125-1528-2 |quote="The Singularity is coming."}}</ref> Whether AGI represents an [[Existential risk from artificial general intelligence|existential risk]] is contested.<ref name="NYT-202306302">{{Cite news |last=Morozov |first=Evgeny |date=June 30, 2023 |title=The True Threat of Artificial Intelligence |url=https://www.nytimes.com/2023/06/30/opinion/artificial-intelligence-danger.html |work=The New York Times |quote="The real threat is not AI itself but the way we deploy it."}}</ref><ref>{{Cite news |date=2023-03-23 |title=Impressed by artificial intelligence? Experts say AGI is coming next, and it has 'existential' risks |url=https://www.abc.net.au/news/2023-03-24/what-is-agi-artificial-general-intelligence-ai-experts-risks/102035132 |access-date=2023-04-06 |work=ABC News |language=en-AU |quote="AGI could pose existential risks to humanity."}}</ref><ref>{{Cite book |last=Bostrom |first=Nick |title=Superintelligence: Paths, Dangers, Strategies |publisher=Oxford University Press |date=2014 |isbn=978-0-1996-7811-2 |quote="The first superintelligence will be the last invention that humanity needs to make."}}</ref> Many AI experts [[Statement on AI risk of extinction|have stated]] that mitigating the risk of human extinction posed by AGI should be a global priority.<ref>{{Cite news |last=Roose |first=Kevin |date=May 30, 2023 |title=A.I. Poses 'Risk of Extinction,' Industry Leaders Warn |url=https://www.nytimes.com/2023/05/30/technology/ai-threat-warning.html |work=The New York Times |quote="Mitigating the risk of extinction from AI should be a global priority."}}</ref><ref>{{Cite web |title=Statement on AI Risk |url=https://www.safe.ai/statement-on-ai-risk |access-date=2024-03-01 |website=Center for AI Safety |quote="AI experts warn of risk of extinction from AI."}}</ref> Others consider AGI still too remote to pose such a risk.<ref>{{Cite news |last=Mitchell |first=Melanie |date=May 30, 2023 |title=Are AI's Doomsday Scenarios Worth Taking Seriously? |url=https://www.nytimes.com/2023/05/30/opinion/ai-risk.html |work=The New York Times |quote="We are far from creating machines that can outthink us in general ways."}}</ref><ref>{{Cite web |last=LeCun |first=Yann |date=June 2023 |title=AGI does not present an existential risk |url=https://yosinski.medium.com/agi-does-not-present-an-existential-risk-b55b6e03c0de |website=Medium |quote="There is no reason to fear AI as an existential threat."}}</ref>