===Narrow AI research===
{{Main|Artificial intelligence}}

In the 1990s and early 21st century, mainstream AI achieved commercial success and academic respectability by focusing on specific sub-problems where AI can produce verifiable results and commercial applications, such as [[speech recognition]] and [[recommendation algorithm]]s.<ref>{{Harvnb|Russell|Norvig|2003|pp=25–26}}</ref> These "applied AI" systems are now used extensively throughout the technology industry, and research in this vein is heavily funded in both academia and industry. {{As of|2018}}, development in this field was considered an emerging trend, and a mature stage was expected to be reached in more than 10 years.<ref>{{Cite web |title=Trends in the Emerging Tech Hype Cycle |url=https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |url-status=live |archive-url=https://web.archive.org/web/20190522024829/https://blogs.gartner.com/smarterwithgartner/files/2018/08/PR_490866_5_Trends_in_the_Emerging_Tech_Hype_Cycle_2018_Hype_Cycle.png |archive-date=22 May 2019 |access-date=7 May 2019 |publisher=Gartner Reports}}</ref>

At the turn of the century, many mainstream AI researchers<ref name=":4"/> hoped that strong AI could be developed by combining programs that solve various sub-problems. [[Hans Moravec]] wrote in 1988:

<blockquote>I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real-world competence and the [[commonsense knowledge (artificial intelligence)|commonsense knowledge]] that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical [[golden spike]] is driven uniting the two efforts.<ref name=":4">{{Harvnb|Moravec|1988|p=20}}</ref></blockquote>

However, even at the time, this was disputed. For example, Stevan Harnad of Princeton University concluded his 1990 paper on the [[Symbol grounding problem|symbol grounding hypothesis]] by stating:

<blockquote>The expectation has often been voiced that "top-down" (symbolic) approaches to modeling cognition will somehow meet "bottom-up" (sensory) approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up. A free-floating symbolic level like the software level of a computer will never be reached by this route (or vice versa) – nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings (thereby merely reducing ourselves to the functional equivalent of a programmable computer).<ref>{{Cite journal |last=Harnad |first=S. |date=1990 |title=The Symbol Grounding Problem |journal=Physica D |volume=42 |issue=1–3 |pages=335–346 |arxiv=cs/9906002 |bibcode=1990PhyD...42..335H |doi=10.1016/0167-2789(90)90087-6 |s2cid=3204300}}</ref></blockquote>