==Distinguishing characteristics==
Unlike statistical learning theory and most statistical theory in general, algorithmic learning theory does not assume that data are random samples, that is, that data points are independent of each other. This makes the theory suitable for domains where observations are (relatively) noise-free but not random, such as language learning<ref>{{cite book |last1=Jain |first1=Sanjay |title=Systems that Learn: An Introduction to Learning Theory |date=1999 |publisher=MIT Press |isbn=978-0-262-10077-9 }}{{pn|date=October 2024}}</ref> and automated scientific discovery.<ref>{{cite book |last1=Langley |first1=Pat |title=Scientific Discovery: Computational Explorations of the Creative Processes |date=1987 |publisher=MIT Press |isbn=978-0-262-62052-9 }}{{pn|date=October 2024}}</ref><ref>{{cite conference |last1=Schulte |first1=Oliver |title=Simultaneous discovery of conservation laws and hidden particles with Smith matrix decomposition |conference=Proceedings of the 21st International Joint Conference on Artificial Intelligence |date=11 July 2009 |pages=1481–1487 |url=http://www.aaai.org/ocs/index.php/IJCAI/IJCAI-09/paper/download/652/925 }}</ref>

The fundamental concept of algorithmic learning theory is learning in the limit: as the number of data points increases, a learning algorithm should converge to a correct hypothesis on ''every'' possible data sequence consistent with the problem space. This is a non-probabilistic version of [[Consistency (statistics)|statistical consistency]], which also requires convergence to a correct model in the limit, but allows a learner to fail on data sequences with probability measure 0.{{citation needed|date=January 2021}}

Algorithmic learning theory investigates the learning power of [[Turing machine]]s. Other frameworks consider a much more restricted class of learning algorithms than Turing machines, for example, learners required to compute hypotheses quickly, such as in [[polynomial time]]. An example of such a framework is [[probably approximately correct learning]].{{citation needed|date=January 2021}}
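As an illustration of learning in the limit (a minimal sketch, not drawn from the cited sources), the following Python learner identifies the class of languages <math>L_n = \{0, 1, \ldots, n\}</math> from positive examples by conjecturing the largest value seen so far:

<syntaxhighlight lang="python">
def learner(data_stream):
    """Identify L_n = {0, 1, ..., n} in the limit from positive data.

    Illustrative sketch: the learner conjectures that the target
    language is L_m, where m is the largest example seen so far.
    """
    conjecture = None
    for example in data_stream:
        if conjecture is None or example > conjecture:
            conjecture = example  # "mind change": revise the hypothesis
        yield conjecture          # current hypothesis: the language L_conjecture

# On any presentation of L_3 = {0, 1, 2, 3}, the conjectures stabilise at 3.
print(list(learner([1, 0, 3, 2, 3, 1, 3])))  # [1, 1, 3, 3, 3, 3, 3]
</syntaxhighlight>

Once the true maximum ''n'' has appeared in the stream, which must happen in every enumeration of <math>L_n</math>, the learner never changes its hypothesis again. It therefore converges on every possible data sequence for the class, which is exactly the learning-in-the-limit criterion described above.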