==Training==
Neural networks are typically trained through [[empirical risk minimization]]. This method is based on the idea of optimizing the network's parameters to minimize the difference, or empirical risk, between the predicted output and the actual target values in a given dataset.<ref name=":2">{{Cite book |last1=Vapnik |first1=Vladimir N. |title=The nature of statistical learning theory |last2=Vapnik |first2=Vladimir Naumovich |date=1998 |publisher=Springer |isbn=978-0-387-94559-0 |edition=Corrected 2nd print. |location=New York Berlin Heidelberg}}</ref> Gradient-based methods such as [[backpropagation]] are usually used to estimate the parameters of the network.<ref name=":2" /> During the training phase, ANNs learn from [[Labeled data|labeled]] training data by iteratively updating their parameters to minimize a defined [[Loss functions for classification|loss function]].<ref name=":4">{{cite book |author=Ian Goodfellow and Yoshua Bengio and Aaron Courville |url=http://www.deeplearningbook.org/ |title=Deep Learning |publisher=MIT Press |year=2016 |access-date=1 June 2016 |archive-url=https://web.archive.org/web/20160416111010/http://www.deeplearningbook.org/ |archive-date=16 April 2016 |url-status=live}}</ref> This method allows the network to generalize to unseen data.

{{multiple image
 | direction = horizontal
 | total_width = 400
 | footer =
 | image1 = Simplified neural network training example.svg
 | alt1 =
 | caption1 = Simplified example of training a neural network for object detection: The network is trained on multiple images known to depict [[starfish]] and [[sea urchin]]s, which are correlated with "nodes" that represent visual [[Feature (computer vision)|features]]. The starfish match with a ringed texture and a star outline, whereas most sea urchins match with a striped texture and an oval shape. However, the instance of a ring-textured sea urchin creates a weakly weighted association between them.
 | image2 = Simplified neural network example.svg
 | alt2 =
 | caption2 = Subsequent run of the network on an input image (left):<ref>{{cite book |author=Ferrie, C. |author2=Kaiser, S. |year=2019 |title=Neural Networks for Babies |publisher=Sourcebooks |isbn=978-1-4926-7120-6}}</ref> The network correctly detects the starfish. However, the weakly weighted association between ringed texture and sea urchin also confers a weak signal to the latter from one of two intermediate nodes. In addition, a shell that was not included in the training gives a weak signal for the oval shape, also resulting in a weak signal for the sea urchin output. These weak signals may result in a [[false positive]] result for sea urchin.<br />In reality, textures and outlines would not be represented by single nodes, but rather by associated weight patterns of multiple nodes.
}}
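Formally, empirical risk minimization selects parameters <math>\theta</math> that minimize the average loss over the <math>N</math> training examples, <math>\hat{R}(\theta) = \tfrac{1}{N}\sum_{i=1}^{N} L\big(f_\theta(x_i), y_i\big)</math>, where <math>f_\theta</math> is the network's prediction function and <math>L</math> is the loss. The following sketch (not taken from a reference; all names, the toy dataset, and the hyperparameters are illustrative assumptions) shows such a training loop for a small two-layer network, using mean squared error as the loss, gradients computed by backpropagation, and plain gradient descent updates:

<syntaxhighlight lang="python">
# Minimal illustrative sketch: training a one-hidden-layer network by
# empirical risk minimization with backpropagation and gradient descent.
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled dataset (XOR): inputs X and target labels y.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Parameters of a 2-2-1 network, randomly initialized.
W1 = rng.normal(scale=1.0, size=(2, 2))
b1 = np.zeros((1, 2))
W2 = rng.normal(scale=1.0, size=(2, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 0.5
for epoch in range(20000):
    # Forward pass: compute the network's predictions.
    h = sigmoid(X @ W1 + b1)   # hidden-layer activations
    p = sigmoid(h @ W2 + b2)   # predicted outputs

    # Empirical risk: mean squared error between predictions and targets.
    loss = np.mean((p - y) ** 2)

    # Backpropagation: gradients of the loss with respect to each parameter.
    dp = 2 * (p - y) / len(X)          # dL/dp
    dz2 = dp * p * (1 - p)             # through the output sigmoid
    dW2 = h.T @ dz2
    db2 = dz2.sum(axis=0, keepdims=True)
    dh = dz2 @ W2.T
    dz1 = dh * h * (1 - h)             # through the hidden sigmoid
    dW1 = X.T @ dz1
    db1 = dz1.sum(axis=0, keepdims=True)

    # Gradient-descent update: step each parameter against its gradient.
    W1 -= learning_rate * dW1
    b1 -= learning_rate * db1
    W2 -= learning_rate * dW2
    b2 -= learning_rate * db2

# After training, predictions typically approach the targets [0, 1, 1, 0].
p = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print(np.round(p, 2))
</syntaxhighlight>

In practice, deep learning libraries compute these gradients automatically and typically use stochastic variants of gradient descent on mini-batches of the training data rather than the full-batch updates shown above.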