In the field of artificial intelligence (AI), tasks that are hypothesized to require artificial general intelligence to solve are informally known as AI-complete or AI-hard.<ref name="Shapiro92">Shapiro, Stuart C. (1992). "Artificial Intelligence". In Stuart C. Shapiro (Ed.), Encyclopedia of Artificial Intelligence (Second Edition, pp. 54–57). New York: John Wiley. (Section 4 is on "AI-Complete Tasks".)</ref> Calling a problem AI-complete reflects the belief that it cannot be solved by a simple specific algorithm.
In the past, problems supposed to be AI-complete included computer vision, natural language understanding, and dealing with unexpected circumstances while solving any real-world problem.<ref>Template:Cite journal</ref> AI-complete tasks were notably considered useful for testing the presence of humans, as CAPTCHAs aim to do, and in computer security to thwart brute-force attacks.<ref>Luis von Ahn, Manuel Blum, Nicholas Hopper, and John Langford. "CAPTCHA: Using Hard AI Problems for Security". In Proceedings of Eurocrypt, Vol. 2656 (2003), pp. 294–311.</ref><ref>Template:Cite journal</ref>
History
The term was coined by Fanya Montalvo by analogy with NP-complete and NP-hard in complexity theory, which formally describes the most famous class of difficult problems.<ref>Template:Citation.</ref> Early uses of the term appear in Erik Mueller's 1987 PhD dissertation<ref>Mueller, Erik T. (1987, March). Daydreaming and Computation (Technical Report CSD-870017). PhD dissertation, University of California, Los Angeles. ("Daydreaming is but one more AI-complete problem: if we could solve any one artificial intelligence problem, we could solve all the others", p. 302)</ref> and in Eric Raymond's 1991 Jargon File.<ref>Raymond, Eric S. (1991, March 22). Jargon File Version 2.8.1. (Definition of "AI-complete" first added to the jargon file.)</ref>
Expert systems, popular in the 1980s, could solve very simple or restricted versions of AI-complete problems, but never in their full generality. When AI researchers attempted to "scale up" their systems to handle more complicated, real-world situations, the programs tended to become excessively brittle without commonsense knowledge or a rudimentary understanding of the situation: they would fail as unexpected circumstances outside their original problem context began to appear. Human beings dealing with new situations are helped by their awareness of the general context: they know what the things around them are, why they are there, and what they are likely to do, and they can recognize unusual situations and adjust accordingly. Expert systems lacked this adaptability and were brittle when facing new situations.<ref>Template:Citation</ref>
In May 2022, DeepMind published work in which it trained a single model, named Gato, to do several things at the same time. The model can "play Atari, caption images, chat, stack blocks with a real robot arm and much more, deciding based on its context whether to output text, joint torques, button presses, or other tokens."<ref>Template:Cite web</ref> Similarly, some tasks once considered AI-complete, like machine translation,<ref>Template:Cite magazine</ref> are among the capabilities of large language models.<ref>Template:Cite web</ref>
AI-complete problems
AI-complete problems have been hypothesized to include:
- AI peer review<ref>Template:Cite magazine</ref> (composite natural language understanding, automated reasoning, automated theorem proving, formalized logic expert system)
- Bongard problems<ref name="Sekrst2020">Template:Citation</ref>
- Computer vision (and subproblems such as object recognition)<ref>Template:Cite journal</ref>
- Natural language understanding (and subproblems such as text mining,<ref>Template:Cite book</ref> machine translation,<ref>Template:Citation</ref> and word-sense disambiguation<ref>Template:Cite journal</ref>)
- Autonomous driving<ref>Template:Cite interview</ref>
- Dealing with unexpected circumstances while solving any real-world problem,<ref>Template:Citation</ref> whether in navigation, planning, or even the kind of reasoning done by expert systems
Formalization
Computational complexity theory deals with the relative computational difficulty of computable functions. By definition, it does not cover problems whose solution is unknown or has not been characterized formally. Since many AI problems have no formalization yet, conventional complexity theory does not enable a formal definition of AI-completeness.
Research
Roman Yampolskiy<ref>Template:Citation</ref> suggests that a problem <math>C</math> is AI-Complete if it has two properties:
- It is in the set of AI problems (Human Oracle-solvable).
- Any AI problem can be converted into <math>C</math> by some polynomial time algorithm.
On the other hand, a problem <math>H</math> is AI-Hard if and only if there is an AI-Complete problem <math>C</math> that is polynomial-time Turing-reducible to <math>H</math>. A consequence of these definitions is the existence of AI-Easy problems, which are solvable in polynomial time by a deterministic Turing machine with an oracle for some problem.
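Using standard complexity-theoretic notation (assumed here for compactness, not drawn from Yampolskiy's paper), with <math>\le_p</math> denoting a polynomial-time reduction and <math>\le_T^p</math> a polynomial-time Turing reduction, the two definitions above can be sketched as:

```latex
% AI-Complete: C is an AI problem (Human Oracle-solvable) and
% every AI problem reduces to C in polynomial time
C \text{ is AI-Complete} \iff
  C \in \mathrm{AI} \;\wedge\; \forall A \in \mathrm{AI}:\ A \le_p C

% AI-Hard: some AI-Complete problem Turing-reduces to H
H \text{ is AI-Hard} \iff
  \exists C \text{ AI-Complete}:\ C \le_T^p H
```

This mirrors the relationship between NP-complete and NP-hard: membership in the class plus universal reducibility for completeness, and reducibility from a complete problem for hardness.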
Yampolskiy<ref>Template:Citation</ref> has also hypothesized that the Turing Test is a defining feature of AI-completeness.
Groppe and Jain<ref>Template:Citation</ref> classify problems that require artificial general intelligence to reach human-level machine performance as AI-complete, whereas current AI systems can solve only restricted versions of such problems. For Šekrst,<ref name="Sekrst2020" /> finding a polynomial-time solution to AI-complete problems would not necessarily amount to solving the problem of artificial general intelligence; the paper also emphasizes the lack of computational complexity research as a limiting factor toward achieving artificial general intelligence.
For Kwee-Bintoro and Velez,<ref>Template:Citation</ref> solving AI-complete problems would have strong repercussions on society.
References
<references/>