===Theory===
A central claim{{citation needed|date=January 2023}} of ANNs is that they embody new and powerful general principles for processing information. These principles are ill-defined. It is often claimed{{by whom|date=January 2023}} that they are [[Emergent properties|emergent]] from the network itself. This allows simple statistical association (the basic function of artificial neural networks) to be described as learning or recognition. In 1997, [[Alexander Dewdney]], a former ''[[Scientific American]]'' columnist, commented that as a result, artificial neural networks have a "something-for-nothing quality, one that imparts a peculiar aura of laziness and a distinct lack of curiosity about just how good these computing systems are. No human hand (or mind) intervenes; solutions are found as if by magic; and no one, it seems, has learned anything".<ref>{{cite book|url={{google books |plainurl=y |id=KcHaAAAAMAAJ|page=82}}|title=Yes, we have no neutrons: an eye-opening tour through the twists and turns of bad science|last=Dewdney|first=A. K.|date=1 April 1997|publisher=Wiley|isbn=978-0-471-10806-1|page=82}}</ref>

One response to Dewdney is that neural networks have been successfully used to handle many complex and diverse tasks, ranging from autonomously flying aircraft<ref>[http://www.nasa.gov/centers/dryden/news/NewsReleases/2003/03-49.html NASA – Dryden Flight Research Center – News Room: News Releases: NASA NEURAL NETWORK PROJECT PASSES MILESTONE] {{Webarchive|url=https://web.archive.org/web/20100402065100/http://www.nasa.gov/centers/dryden/news/NewsReleases/2003/03-49.html |date=2 April 2010 }}. Nasa.gov. Retrieved on 20 November 2013.</ref> to detecting credit card fraud to mastering the game of [[Go (game)|Go]]. Technology writer Roger Bridgman commented:

{{blockquote|Neural networks, for instance, are in the dock not only because they have been hyped to high heaven, (what hasn't?) but also because you could create a successful net without understanding how it worked: the bunch of numbers that captures its behaviour would in all probability be "an opaque, unreadable table...valueless as a scientific resource". In spite of his emphatic declaration that science is not technology, Dewdney seems here to pillory neural nets as bad science when most of those devising them are just trying to be good engineers. An unreadable table that a useful machine could read would still be well worth having.<ref>{{Cite web |url=http://members.fortunecity.com/templarseries/popper.html |title=Roger Bridgman's defence of neural networks |access-date=12 July 2010 |archive-url=https://web.archive.org/web/20120319163352/http://members.fortunecity.com/templarseries/popper.html |archive-date=19 March 2012 }}</ref>}}

Although it is true that analyzing what has been learned by an artificial neural network is difficult, it is much easier to do so than to analyze what has been learned by a biological neural network. Moreover, recent emphasis on the [[Explainable artificial intelligence|explainability]] of AI has contributed towards the development of methods, notably those based on [[Attention (machine learning)|attention]] mechanisms, for visualizing and explaining learned neural networks. Furthermore, researchers involved in exploring learning algorithms for neural networks are gradually uncovering generic principles that allow a learning machine to be successful.
For example, Bengio and LeCun (2007) wrote an article on local versus non-local learning and shallow versus deep architectures.<ref>{{cite web|url=http://www.iro.umontreal.ca/~lisa/publications2/index.php/publications/show/4|title=Scaling Learning Algorithms towards AI – LISA – Publications – Aigaion 2.0|website=iro.umontreal.ca |url-status=dead }}</ref> Biological brains use both shallow and deep circuits, as reported by studies of brain anatomy,<ref name="VanEssen1991">D. J. Felleman and D. C. Van Essen, "[https://archive.today/20150120022056/http://cercor.oxfordjournals.org/content/1/1/1.1.full.pdf+html Distributed hierarchical processing in the primate cerebral cortex]," ''Cerebral Cortex'', 1, pp. 1–47, 1991.</ref> displaying a wide variety of invariance. Weng<ref name="Weng2012">J. Weng, "[https://www.amazon.com/Natural-Artificial-Intelligence-Introduction-Computational/dp/0985875720 Natural and Artificial Intelligence: Introduction to Computational Brain-Mind] {{Webarchive|url=https://web.archive.org/web/20240519082645/https://www.amazon.com/Natural-Artificial-Intelligence-Introduction-Computational/dp/0985875720 |date=19 May 2024 }}," BMI Press, {{ISBN|978-0-9858757-2-5}}, 2012.</ref> argued that the brain self-wires largely according to signal statistics, and therefore a serial cascade cannot capture all major statistical dependencies.
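The attention-based explanation methods mentioned above rest on the fact that an attention layer produces an explicit weight matrix over its inputs, which can be inspected directly. The following is a minimal illustrative sketch (not drawn from any of the cited works; all names and dimensions are assumed for the example) of computing such a weight matrix with plain NumPy; each row can be rendered as a heat map showing which input positions most influenced that output position.

<syntaxhighlight lang="python">
# Minimal sketch: reading out self-attention weights as a crude explanation
# of which inputs a model attended to. Illustrative only; real explainability
# tooling operates on trained models and is considerably more involved.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention_weights(X, Wq, Wk):
    """Return the attention matrix A, where A[i, j] measures how much
    output position i attends to input position j (rows sum to 1)."""
    Q, K = X @ Wq, X @ Wk
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores, axis=-1)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))    # 5 input positions, 8 features each (toy data)
Wq = rng.normal(size=(8, 8))   # stand-ins for learned projection matrices
Wk = rng.normal(size=(8, 8))
A = self_attention_weights(X, Wq, Wk)
# Each row of A can be visualized as a heat map over the input positions.
print(np.round(A, 2))
</syntaxhighlight>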