== Philosophy ==
{{Main|Philosophy of artificial intelligence}}

Philosophical debates have historically sought to determine the nature of intelligence and how to make intelligent machines.<ref>{{Cite web |last1=Grayling |first1=Anthony |last2=Ball |first2=Brian |date=2024-08-01 |title=Philosophy is crucial in the age of AI |url=https://theconversation.com/philosophy-is-crucial-in-the-age-of-ai-235907 |access-date=2024-10-04 |website=The Conversation |language=en-US |archive-date=5 October 2024 |archive-url=https://web.archive.org/web/20241005170243/https://theconversation.com/philosophy-is-crucial-in-the-age-of-ai-235907 |url-status=live }}</ref> Another major focus has been whether machines can be conscious, and the associated ethical implications.<ref name="Jarow-2024">{{Cite web |last=Jarow |first=Oshan |date=2024-06-15 |title=Will AI ever become conscious? It depends on how you think about biology. |url=https://www.vox.com/future-perfect/351893/consciousness-ai-machines-neuroscience-mind |access-date=2024-10-04 |website=Vox |language=en-US |archive-date=21 September 2024 |archive-url=https://web.archive.org/web/20240921035218/https://www.vox.com/future-perfect/351893/consciousness-ai-machines-neuroscience-mind |url-status=live }}</ref> Many other topics in philosophy are relevant to AI, such as [[epistemology]] and [[free will]].<ref>{{Cite web |last=McCarthy |first=John |title=The Philosophy of AI and the AI of Philosophy |url=http://jmc.stanford.edu/articles/aiphil2.html |archive-url=https://web.archive.org/web/20181023181725/http://jmc.stanford.edu/articles/aiphil2.html |archive-date=2018-10-23 |access-date=2024-10-03 |website=jmc.stanford.edu}}</ref> Rapid advancements have intensified public discussions on the philosophy and [[ethics of AI]].<ref name="Jarow-2024" />

=== Defining artificial intelligence ===
{{See also|Turing test|Intelligent agent|Dartmouth workshop|Synthetic intelligence}}

[[Alan Turing]] wrote in 1950 "I propose to consider the question 'can machines think'?"{{Sfnp|Turing|1950|p=1}} He advised changing the question from whether a machine "thinks" to "whether or not it is possible for machinery to show intelligent behaviour".{{Sfnp|Turing|1950|p=1}} He devised the Turing test, which measures the ability of a machine to simulate human conversation.<ref name="Turing">Turing's original publication of the [[Turing test]] in "[[Computing machinery and intelligence]]": {{Harvtxt|Turing|1950}} Historical influence and philosophical implications: {{Harvtxt|Haugeland|1985|pp=6–9}}, {{Harvtxt|Crevier|1993|p=24}}, {{Harvtxt|McCorduck|2004|pp=70–71}}, {{Harvtxt|Russell|Norvig|2021|pp=2, 984}}</ref> Since we can only observe the behavior of the machine, it does not matter if it is "actually" thinking or literally has a "mind". Turing notes that [[Problem of other minds|we cannot determine these things about other people]] but "it is usual to have a polite convention that everyone thinks."{{Sfnp|Turing|1950|loc=Under "The Argument from Consciousness"}}

[[File:Weakness of Turing test 1.svg|thumb|The Turing test can provide some evidence of intelligence, but it penalizes non-human intelligent behavior.<ref>{{Cite web |last1=Kirk-Giannini |first1=Cameron Domenico |last2=Goldstein |first2=Simon |date=2023-10-16 |title=AI is closer than ever to passing the Turing test for 'intelligence'. What happens when it does? |url=https://theconversation.com/ai-is-closer-than-ever-to-passing-the-turing-test-for-intelligence-what-happens-when-it-does-214721 |access-date=2024-08-17 |website=The Conversation |archive-date=25 September 2024 |archive-url=https://web.archive.org/web/20240925040612/https://theconversation.com/ai-is-closer-than-ever-to-passing-the-turing-test-for-intelligence-what-happens-when-it-does-214721 |url-status=live }}</ref>]]

[[Stuart J. Russell|Russell]] and [[Peter Norvig|Norvig]] agree with Turing that intelligence must be defined in terms of external behavior, not internal structure.{{Sfnp|Russell|Norvig|2021|pp=1–4}} However, they criticize the test for requiring the machine to imitate humans. "[[Aeronautics|Aeronautical engineering]] texts", they wrote, "do not define the goal of their field as making 'machines that fly so exactly like [[pigeon]]s that they can fool other pigeons.{{' "}}{{Sfnp|Russell|Norvig|2021|p=3}} AI founder [[John McCarthy (computer scientist)|John McCarthy]] agreed, writing that "Artificial intelligence is not, by definition, simulation of human intelligence".{{Sfnp|Maker|2006}}

McCarthy defines intelligence as "the computational part of the ability to achieve goals in the world".{{Sfnp|McCarthy|1999}} Another AI founder, [[Marvin Minsky]], similarly describes it as "the ability to solve hard problems".{{Sfnp|Minsky|1986}} The leading AI textbook defines it as the study of agents that perceive their environment and take actions that maximize their chances of achieving defined goals.{{Sfnp|Russell|Norvig|2021|pp=1–4}} These definitions view intelligence in terms of well-defined problems with well-defined solutions, where both the difficulty of the problem and the performance of the program are direct measures of the "intelligence" of the machine; no further philosophical discussion is required, and may not even be possible.

Another definition has been adopted by Google,<ref>{{Cite web |title=What Is Artificial Intelligence (AI)? |url=https://cloud.google.com/learn/what-is-artificial-intelligence |url-status=live |archive-url=https://web.archive.org/web/20230731114802/https://cloud.google.com/learn/what-is-artificial-intelligence |archive-date=31 July 2023 |access-date=16 October 2023 |website=[[Google Cloud Platform]]}}</ref> a major practitioner in the field of AI. This definition treats the ability of systems to synthesize information as the manifestation of intelligence, similar to the way it is defined in biological intelligence. Some authors have suggested that, in practice, the definition of AI is vague and contested, with disagreement over whether classical algorithms should be categorised as AI,<ref>{{Cite web |title=One of the Biggest Problems in Regulating AI Is Agreeing on a Definition |url=https://carnegieendowment.org/posts/2022/10/one-of-the-biggest-problems-in-regulating-ai-is-agreeing-on-a-definition?lang=en |access-date=2024-07-31 |website=[[Carnegie Endowment for International Peace]]}}</ref> and with many companies during the early 2020s AI boom using the term as a marketing [[buzzword]], often even if they did "not actually use AI in a material way".<ref>{{Cite web |title=AI or BS? How to tell if a marketing tool really uses artificial intelligence |url=https://www.thedrum.com/opinion/2023/03/30/ai-or-bs-how-tell-if-marketing-tool-really-uses-artificial-intelligence |access-date=2024-07-31 |website=The Drum}}</ref>

=== Evaluating approaches to AI ===
No established unifying theory or [[paradigm]] has guided AI research for most of its history.{{Efn|[[Nils Nilsson (researcher)|Nils Nilsson]] wrote in 1983: "Simply put, there is wide disagreement in the field about what AI is all about."{{Sfnp|Nilsson|1983|p=10}}}} The unprecedented success of statistical machine learning in the 2010s eclipsed all other approaches (so much so that some sources, especially in the business world, use the term "artificial intelligence" to mean "machine learning with neural networks"). This approach is mostly [[sub-symbolic]], [[soft computing|soft]] and [[artificial general intelligence|narrow]]. Critics argue that these questions may have to be revisited by future generations of AI researchers.

==== Symbolic AI and its limits ====
[[Symbolic AI]] (or "[[GOFAI]]"){{Sfnp|Haugeland|1985|pp=112–117}} simulated the high-level conscious reasoning that people use when they solve puzzles, express legal reasoning and do mathematics. Symbolic programs were highly successful at "intelligent" tasks such as algebra or IQ tests. In the 1960s, Newell and Simon proposed the [[physical symbol systems hypothesis]]: "A physical symbol system has the necessary and sufficient means of general intelligent action."<ref>Physical symbol system hypothesis: {{Harvtxt|Newell|Simon|1976|p=116}} Historical significance: {{Harvtxt|McCorduck|2004|p=153}}, {{Harvtxt|Russell|Norvig|2021|p=19}}</ref>

However, the symbolic approach failed on many tasks that humans solve easily, such as learning, recognizing an object or [[Commonsense reasoning|commonsense reasoning]]. [[Moravec's paradox]] is the discovery that high-level "intelligent" tasks were easy for AI, but low-level "instinctive" tasks were extremely difficult.<ref>[[Moravec's paradox]]: {{Harvtxt|Moravec|1988|pp=15–16}}, {{Harvtxt|Minsky|1986|p=29}}, {{Harvtxt|Pinker|2007|pp=190–191}}</ref> Philosopher [[Hubert Dreyfus]] had [[Dreyfus' critique of AI|argued]] since the 1960s that human expertise depends on unconscious instinct rather than conscious symbol manipulation, and on having a "feel" for the situation, rather than explicit symbolic knowledge.<ref>[[Dreyfus' critique of AI]]: {{Harvtxt|Dreyfus|1972}}, {{Harvtxt|Dreyfus|Dreyfus|1986}} Historical significance and philosophical implications: {{Harvtxt|Crevier|1993|pp=120–132}}, {{Harvtxt|McCorduck|2004|pp=211–239}}, {{Harvtxt|Russell|Norvig|2021|pp=981–982}}, {{Harvtxt|Fearn|2007|loc=chpt. 3}}</ref> Although his arguments were ridiculed and ignored when first presented, AI research eventually came to agree with him.{{Efn|Daniel Crevier wrote that "time has proven the accuracy and perceptiveness of some of Dreyfus's comments. Had he formulated them less aggressively, constructive actions they suggested might have been taken much earlier."{{Sfnp|Crevier|1993|p=125}}}}<ref name="Psychological evidence of the prevalence of sub"/>

The issue is not resolved: [[sub-symbolic]] reasoning can make many of the same inscrutable mistakes that human intuition does, such as [[algorithmic bias]].
Critics such as [[Noam Chomsky]] argue that continuing research into symbolic AI will still be necessary to attain general intelligence,{{Sfnp|Langley|2011}}{{Sfnp|Katz|2012}} in part because sub-symbolic AI is a move away from [[explainable AI]]: it can be difficult or impossible to understand why a modern statistical AI program made a particular decision. The emerging field of [[Neuro-symbolic AI|neuro-symbolic artificial intelligence]] attempts to bridge the two approaches.

==== Neat vs. scruffy ====
{{Main|Neats and scruffies}}

"Neats" hope that intelligent behavior can be described using simple, elegant principles (such as [[logic]], [[optimization]], or [[Artificial neural network|neural networks]]). "Scruffies" expect that it necessarily requires solving a large number of unrelated problems. Neats defend their programs with theoretical rigor, while scruffies rely mainly on incremental testing to see if they work. This issue was actively discussed in the 1970s and 1980s,<ref>[[Neats vs. scruffies]], the historic debate: {{Harvtxt|McCorduck|2004|pp=421–424, 486–489}}, {{Harvtxt|Crevier|1993|p=168}}, {{Harvtxt|Nilsson|1983|pp=10–11}}, {{Harvtxt|Russell|Norvig|2021|p=24}} A classic example of the "scruffy" approach to intelligence: {{Harvtxt|Minsky|1986}} A modern example of neat AI and its aspirations in the 21st century: {{Harvtxt|Domingos|2015}}</ref> but was eventually seen as irrelevant. Modern AI has elements of both.

==== Soft vs. hard computing ====
{{Main|Soft computing}}

Finding a provably correct or optimal solution is [[Intractability (complexity)|intractable]] for many important problems.<ref name="Intractability and efficiency and the combinatorial explosion"/> Soft computing is a set of techniques, including [[genetic algorithms]], [[fuzzy logic]] and neural networks, that are tolerant of imprecision, uncertainty, partial truth and approximation. Soft computing was introduced in the late 1980s, and most successful AI programs in the 21st century are examples of soft computing with neural networks.

==== Narrow vs. general AI ====
{{Main|Weak artificial intelligence|Artificial general intelligence}}

AI researchers are divided as to whether to pursue the goals of artificial general intelligence and [[superintelligence]] directly or to solve as many specific problems as possible (narrow AI) in hopes that these solutions will lead indirectly to the field's long-term goals.{{Sfnp|Pennachin|Goertzel|2007}}{{Sfnp|Roberts|2016}} General intelligence is difficult to define and difficult to measure, and modern AI has had more verifiable successes by focusing on specific problems with specific solutions. The sub-field of artificial general intelligence studies this area exclusively.

=== Machine consciousness, sentience, and mind ===
{{Main|Philosophy of artificial intelligence|Artificial consciousness}}

There is no consensus in [[philosophy of mind]] on whether a machine can have a [[mind]], [[consciousness]] and [[philosophy of mind|mental states]] in the same sense that human beings do. This issue considers the internal experiences of the machine, rather than its external behavior. Mainstream AI research considers this issue irrelevant because it does not affect the goals of the field: to build machines that can solve problems using intelligence. [[Stuart J. Russell|Russell]] and [[Peter Norvig|Norvig]] add that "[t]he additional project of making a machine conscious in exactly the way humans are is not one that we are equipped to take on."{{Sfnp|Russell|Norvig|2021|p=986}} However, the question has become central to the philosophy of mind. It is also typically the central question at issue in [[artificial intelligence in fiction]].

==== Consciousness ====
{{Main|Hard problem of consciousness|Theory of mind}}

[[David Chalmers]] identified two problems in understanding the mind, which he named the "hard" and "easy" problems of consciousness.{{Sfnp|Chalmers|1995}} The easy problem is understanding how the brain processes signals, makes plans and controls behavior. The hard problem is explaining how this ''feels'' or why it should feel like anything at all, assuming we are right in thinking that it truly does feel like something (Dennett's consciousness illusionism says this is an illusion). While human [[Information processing (psychology)|information processing]] is easy to explain, human [[subjective experience]] is difficult to explain. For example, it is easy to imagine a color-blind person who has learned to identify which objects in their field of view are red, but it is not clear what would be required for the person to ''know what red looks like''.{{Sfnp|Dennett|1991}}

==== Computationalism and functionalism ====
{{Main|Computational theory of mind|Functionalism (philosophy of mind)}}

Computationalism is the position in the [[philosophy of mind]] that the human mind is an information processing system and that thinking is a form of computing. Computationalism argues that the relationship between mind and body is similar or identical to the relationship between software and hardware, and thus may offer a solution to the [[mind–body problem]]. This philosophical position was inspired by the work of AI researchers and cognitive scientists in the 1960s and was originally proposed by philosophers [[Jerry Fodor]] and [[Hilary Putnam]].{{Sfnp|Horst|2005}}

Philosopher [[John Searle]] characterized this position as "[[Strong AI hypothesis|strong AI]]": "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."{{Efn|name="Searle's strong AI"|Searle presented this definition of "Strong AI" in 1999.{{Sfnp|Searle|1999}} Searle's original formulation was "The appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states."{{Sfnp|Searle|1980|p=1}} Strong AI is defined similarly by [[Stuart J. Russell|Russell]] and [[Peter Norvig|Norvig]]: "Strong AI – the assertion that machines that do so are ''actually'' thinking (as opposed to ''simulating'' thinking)."{{Sfnp|Russell|Norvig|2021|p=981}}}} Searle challenges this claim with his [[Chinese room]] argument, which attempts to show that even a computer capable of perfectly simulating human behavior would not have a mind.<ref>Searle's [[Chinese room]] argument: {{Harvtxt|Searle|1980}} (Searle's original presentation of the thought experiment); {{Harvtxt|Searle|1999}}. Discussion: {{Harvtxt|Russell|Norvig|2021|p=985}}, {{Harvtxt|McCorduck|2004|pp=443–445}}, {{Harvtxt|Crevier|1993|pp=269–271}}</ref>

==== AI welfare and rights ====
It is difficult or impossible to reliably evaluate whether an advanced [[Sentient AI|AI is sentient]] (has the ability to feel), and if so, to what degree.<ref>{{Cite web |last=Leith |first=Sam |date=2022-07-07 |title=Nick Bostrom: How can we be certain a machine isn't conscious? |url=https://www.spectator.co.uk/article/nick-bostrom-how-can-we-be-certain-a-machine-isnt-conscious |access-date=2024-02-23 |website=The Spectator |archive-date=26 September 2024 |archive-url=https://web.archive.org/web/20240926155639/https://www.spectator.co.uk/article/nick-bostrom-how-can-we-be-certain-a-machine-isnt-conscious/ |url-status=live }}</ref> But if there is a significant chance that a given machine can feel and suffer, then it may be entitled to certain rights or welfare protection measures, much as animals are.<ref name="Thomson-2022">{{Cite web |last=Thomson |first=Jonny |date=2022-10-31 |title=Why don't robots have rights? |url=https://bigthink.com/thinking/why-dont-robots-have-rights |access-date=2024-02-23 |website=Big Think |archive-date=13 September 2024 |archive-url=https://web.archive.org/web/20240913055336/https://bigthink.com/thinking/why-dont-robots-have-rights/ |url-status=live }}</ref><ref name="Kateman-2023">{{Cite magazine |last=Kateman |first=Brian |date=2023-07-24 |title=AI Should Be Terrified of Humans |url=https://time.com/6296234/ai-should-be-terrified-of-humans |access-date=2024-02-23 |magazine=Time |archive-date=25 September 2024 |archive-url=https://web.archive.org/web/20240925041601/https://time.com/6296234/ai-should-be-terrified-of-humans/ |url-status=live }}</ref> [[Sapience]] (a set of capacities related to high intelligence, such as discernment or [[self-awareness]]) may provide another moral basis for AI rights.<ref name="Thomson-2022"/> [[Robot rights]] are also sometimes proposed as a practical way to integrate autonomous agents into society.<ref>{{Cite news |last=Wong |first=Jeff |date=July 10, 2023 |title=What leaders need to know about robot rights |url=https://www.fastcompany.com/90920769/what-leaders-need-to-know-about-robot-rights |work=Fast Company |ref=none}}</ref>

In 2017, the European Union considered granting "electronic personhood" to some of the most capable AI systems. Similar to the legal status of companies, it would have conferred rights but also responsibilities.<ref>{{Cite news |last=Hern |first=Alex |date=2017-01-12 |title=Give robots 'personhood' status, EU committee argues |url=https://www.theguardian.com/technology/2017/jan/12/give-robots-personhood-status-eu-committee-argues |access-date=2024-02-23 |work=The Guardian |issn=0261-3077 |archive-date=5 October 2024 |archive-url=https://web.archive.org/web/20241005171222/https://www.theguardian.com/technology/2017/jan/12/give-robots-personhood-status-eu-committee-argues |url-status=live }}</ref> Critics argued in 2018 that granting rights to AI systems would downplay the importance of [[human rights]], and that legislation should focus on user needs rather than speculative futuristic scenarios.
They also noted that robots lacked the autonomy to take part in society on their own.<ref>{{Cite web |last=Dovey |first=Dana |date=2018-04-14 |title=Experts Don't Think Robots Should Have Rights |url=https://www.newsweek.com/robots-human-rights-electronic-persons-humans-versus-machines-886075 |access-date=2024-02-23 |website=Newsweek |archive-date=5 October 2024 |archive-url=https://web.archive.org/web/20241005171333/https://www.newsweek.com/robots-human-rights-electronic-persons-humans-versus-machines-886075 |url-status=live }}</ref><ref>{{Cite web |last=Cuddy |first=Alice |date=2018-04-13 |title=Robot rights violate human rights, experts warn EU |url=https://www.euronews.com/2018/04/13/robot-rights-violate-human-rights-experts-warn-eu |access-date=2024-02-23 |website=euronews |archive-date=19 September 2024 |archive-url=https://web.archive.org/web/20240919022327/https://www.euronews.com/2018/04/13/robot-rights-violate-human-rights-experts-warn-eu |url-status=live }}</ref>

Progress in AI has increased interest in the topic. Proponents of AI welfare and rights often argue that AI sentience, if it emerges, would be particularly easy to deny. They warn that this may be a [[Moral blindness|moral blind spot]] analogous to [[slavery]] or [[factory farming]], which could lead to [[Suffering risks|large-scale suffering]] if sentient AI is created and carelessly exploited.<ref name="Kateman-2023"/><ref name="Thomson-2022"/>