=== Evaluating approaches to AI ===

No established unifying theory or [[paradigm]] has guided AI research for most of its history.{{Efn|[[Nils Nilsson (researcher)|Nils Nilsson]] wrote in 1983: "Simply put, there is wide disagreement in the field about what AI is all about."{{Sfnp|Nilsson|1983|p=10}}}} The unprecedented success of statistical machine learning in the 2010s eclipsed all other approaches (so much so that some sources, especially in the business world, use the term "artificial intelligence" to mean "machine learning with neural networks"). This approach is mostly [[sub-symbolic]], [[soft computing|soft]] and [[artificial general intelligence|narrow]]. Critics argue that these questions may have to be revisited by future generations of AI researchers.

==== Symbolic AI and its limits ====

[[Symbolic AI]] (or "[[GOFAI]]"){{Sfnp|Haugeland|1985|pp=112–117}} simulated the high-level conscious reasoning that people use when they solve puzzles, express legal reasoning and do mathematics. Such programs were highly successful at "intelligent" tasks such as algebra or IQ tests. In the 1960s, Newell and Simon proposed the [[physical symbol systems hypothesis]]: "A physical symbol system has the necessary and sufficient means of general intelligent action."<ref>Physical symbol system hypothesis: {{Harvtxt|Newell|Simon|1976|p=116}}. Historical significance: {{Harvtxt|McCorduck|2004|p=153}}, {{Harvtxt|Russell|Norvig|2021|p=19}}.</ref>

However, the symbolic approach failed on many tasks that humans solve easily, such as learning, recognizing an object or [[Commonsense reasoning|commonsense reasoning]]. [[Moravec's paradox]] is the discovery that high-level "intelligent" tasks were easy for AI, but low-level "instinctive" tasks were extremely difficult.<ref>[[Moravec's paradox]]: {{Harvtxt|Moravec|1988|pp=15–16}}, {{Harvtxt|Minsky|1986|p=29}}, {{Harvtxt|Pinker|2007|pp=190–191}}.</ref> Philosopher [[Hubert Dreyfus]] had [[Dreyfus' critique of AI|argued]] since the 1960s that human expertise depends on unconscious instinct rather than conscious symbol manipulation, and on having a "feel" for the situation rather than explicit symbolic knowledge.<ref>[[Dreyfus' critique of AI]]: {{Harvtxt|Dreyfus|1972}}, {{Harvtxt|Dreyfus|Dreyfus|1986}}. Historical significance and philosophical implications: {{Harvtxt|Crevier|1993|pp=120–132}}, {{Harvtxt|McCorduck|2004|pp=211–239}}, {{Harvtxt|Russell|Norvig|2021|pp=981–982}}, {{Harvtxt|Fearn|2007|loc=chpt. 3}}.</ref> Although his arguments were ridiculed and ignored when first presented, AI research eventually came to agree with him.{{Efn|Daniel Crevier wrote that "time has proven the accuracy and perceptiveness of some of Dreyfus's comments. Had he formulated them less aggressively, constructive actions they suggested might have been taken much earlier."{{Sfnp|Crevier|1993|p=125}}}}<ref name="Psychological evidence of the prevalence of sub"/>

The issue is not resolved: [[sub-symbolic]] reasoning can make many of the same inscrutable mistakes that human intuition does, such as [[algorithmic bias]]. Critics such as [[Noam Chomsky]] argue that continuing research into symbolic AI will still be necessary to attain general intelligence,{{Sfnp|Langley|2011}}{{Sfnp|Katz|2012}} in part because sub-symbolic AI is a move away from [[explainable AI]]: it can be difficult or impossible to understand why a modern statistical AI program made a particular decision.
The emerging field of [[Neuro-symbolic AI|neuro-symbolic artificial intelligence]] attempts to bridge the two approaches.

==== Neat vs. scruffy ====
{{Main|Neats and scruffies}}

"Neats" hope that intelligent behavior can be described using simple, elegant principles (such as [[logic]], [[optimization]], or [[Artificial neural network|neural networks]]). "Scruffies" expect that it necessarily requires solving a large number of unrelated problems. Neats defend their programs with theoretical rigor, while scruffies rely mainly on incremental testing to see whether they work. This issue was actively discussed in the 1970s and 1980s,<ref>[[Neats vs. scruffies]], the historic debate: {{Harvtxt|McCorduck|2004|pp=421–424, 486–489}}, {{Harvtxt|Crevier|1993|p=168}}, {{Harvtxt|Nilsson|1983|pp=10–11}}, {{Harvtxt|Russell|Norvig|2021|p=24}}. A classic example of the "scruffy" approach to intelligence: {{Harvtxt|Minsky|1986}}. A modern example of neat AI and its aspirations in the 21st century: {{Harvtxt|Domingos|2015}}.</ref> but was eventually seen as irrelevant. Modern AI has elements of both.

==== Soft vs. hard computing ====
{{Main|Soft computing}}

Finding a provably correct or optimal solution is [[Intractability (complexity)|intractable]] for many important problems.<ref name="Intractability and efficiency and the combinatorial explosion"/> Soft computing is a set of techniques, including [[genetic algorithms]], [[fuzzy logic]] and neural networks, that are tolerant of imprecision, uncertainty, partial truth and approximation. Soft computing was introduced in the late 1980s, and most successful AI programs in the 21st century are examples of soft computing with neural networks.

==== Narrow vs. general AI ====
{{Main|Weak artificial intelligence|Artificial general intelligence}}

AI researchers are divided as to whether to pursue the goals of artificial general intelligence and [[superintelligence]] directly, or to solve as many specific problems as possible (narrow AI) in the hope that these solutions will lead indirectly to the field's long-term goals.{{Sfnp|Pennachin|Goertzel|2007}}{{Sfnp|Roberts|2016}} General intelligence is difficult to define and difficult to measure, and modern AI has had more verifiable successes by focusing on specific problems with specific solutions. The sub-field of artificial general intelligence studies this area exclusively.