Probably approximately correct learning
{{short description|Framework for mathematical analysis of machine learning}}
{{Machine learning|Theory}}
In [[computational learning theory]], '''probably approximately correct''' ('''PAC''') '''learning''' is a framework for the mathematical analysis of [[machine learning]]. It was proposed in 1984 by [[Leslie Valiant]].<ref name="valiant">L. Valiant. ''[http://web.mit.edu/6.435/www/Valiant84.pdf A theory of the learnable.]'' Communications of the ACM, 27, 1984.</ref>

In this framework, the learner receives samples and must select a generalization function (called the ''hypothesis'') from a certain class of possible functions. The goal is that, with high probability (the "probably" part), the selected function will have low [[generalization error]] (the "approximately correct" part). The learner must be able to learn the concept given any arbitrary approximation ratio, probability of success, or [[Empirical distribution function|distribution of the samples]]. The model was later extended to treat noise (misclassified samples).

An important innovation of the PAC framework is the introduction of [[computational complexity theory]] concepts to machine learning. In particular, the learner is expected to find efficient functions (time and space requirements bounded by a [[polynomial]] in the example size), and the learner itself must implement an efficient procedure (requiring a number of examples bounded by a polynomial in the concept size, modified by the approximation and [[likelihood]] bounds).

== Definitions and terminology ==
In order to define what it means for something to be PAC-learnable, we first have to introduce some terminology.<ref>Kearns and Vazirani, pp. 1–12.</ref>

For the following definitions, two examples will be used. The first is the problem of [[character recognition]] given an array of <math>n</math> bits encoding a binary-valued image. The other example is the problem of finding an interval that will correctly classify points within the interval as positive and the points outside of the range as negative.

Let <math>X</math> be a set called the ''instance space'' or the encoding of all the samples. In the character recognition problem, the instance space is <math>X=\{0,1\}^n</math>. In the interval problem the instance space is <math>X=\mathbb{R}</math>, the set of all [[real numbers]], from which the points to be classified are drawn.

A ''concept'' is a subset <math>c \subset X</math>. One concept is the set of all patterns of bits in <math>X=\{0,1\}^n</math> that encode a picture of the letter "P". An example concept from the second example is an open interval <math>(a,b)</math> with <math>0 \leq a \leq \pi/2</math> and <math>\pi \leq b \leq \sqrt{13}</math>, which contains only the positive points. A ''[[concept class]]'' <math>C</math> is a collection of concepts over <math>X</math>. This could be the set of all subsets of the array of bits that are [[Morphological skeleton|skeletonized]] and [[Pixel connectivity#4-connected|4-connected]] (the width of the font is 1).

Let <math>\operatorname{EX}(c, D)</math> be a procedure that draws an example, <math>x</math>, using a probability distribution <math>D</math> and gives the correct label <math>c(x)</math>, that is, 1 if <math>x \in c</math> and 0 otherwise.
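The interval example can be made concrete with a short simulation. The following Python sketch is purely illustrative and is not drawn from the references: the uniform distribution <math>D</math>, the target interval, and the sample size are arbitrary choices, and the learner simply outputs the tightest interval covering the positive examples it has seen.

<syntaxhighlight lang="python">
import random

# Illustrative sketch (not from the literature): simulate the oracle EX(c, D)
# for the interval example and learn the tightest interval that covers the
# positive examples. Target interval, distribution D, and sample size are
# arbitrary choices made for this example.

def make_oracle(a, b, rng):
    """Return a procedure EX(c, D): draw x from D = Uniform(0, 1) and label it
    with the target concept c = [a, b]."""
    def ex():
        x = rng.uniform(0.0, 1.0)       # draw an example from D
        return x, int(a <= x <= b)      # the correct label c(x)
    return ex

def learn_interval(samples):
    """Output the smallest closed interval containing all positive examples,
    i.e. a hypothesis h from the same concept class C of intervals."""
    positives = [x for x, label in samples if label == 1]
    if not positives:                   # no positive example seen
        return (0.0, 0.0)               # return an essentially empty interval
    return (min(positives), max(positives))

rng = random.Random(0)
ex = make_oracle(0.3, 0.7, rng)         # hidden target concept c = [0.3, 0.7]
samples = [ex() for _ in range(200)]    # sample of size p(1/eps, 1/delta)
h = learn_interval(samples)
print(h)                                # close to (0.3, 0.7) with high probability
</syntaxhighlight>

For this tightest-fit learner, a standard argument (omitted here) shows that a sample of size <math>m \geq (2/\epsilon)\ln(2/\delta)</math> suffices, which is polynomial in <math>1/\epsilon</math> and <math>1/\delta</math> as required by the definition below.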
Now, given <math>0<\epsilon,\delta<1</math>, assume there is an algorithm <math>A</math> and a polynomial <math>p</math> in <math>1/\epsilon, 1/\delta</math> (and other relevant parameters of the class <math>C</math>) such that, given a sample of size <math>p</math> drawn according to <math>\operatorname{EX}(c, D)</math>, the algorithm <math>A</math> outputs, with probability at least <math>1-\delta</math>, a hypothesis <math>h \in C</math> whose average error on <math>X</math> under the same distribution <math>D</math> is at most <math>\epsilon</math>. Further, if the above statement for algorithm <math>A</math> is true for every concept <math>c \in C</math>, for every distribution <math>D</math> over <math>X</math>, and for all <math>0<\epsilon, \delta<1</math>, then <math>C</math> is (efficiently) '''PAC learnable''' (or ''distribution-free PAC learnable''). We can also say that <math>A</math> is a '''PAC learning algorithm''' for <math>C</math>.

== Equivalence ==
Under some regularity conditions the following are equivalent:<ref>{{cite journal |last1=Blumer |first1=Anselm |last2=Ehrenfeucht |first2=Andrzej |last3=Haussler |first3=David |last4=Warmuth |first4=Manfred |s2cid=1138467 |title=Learnability and the Vapnik-Chervonenkis Dimension |journal=Journal of the Association for Computing Machinery |date=October 1989 |volume=36 |issue=4 |pages=929–965 |doi=10.1145/76359.76371 |doi-access=free }}</ref>
# The concept class ''C'' is PAC learnable.
# The [[VC dimension]] of ''C'' is finite.
# ''C'' is a uniformly [[Glivenko–Cantelli theorem#Glivenko–Cantelli class|Glivenko–Cantelli class]].{{clarify|date=March 2018}}
# ''C'' is [[compressible (Littlestone and Warmuth)|compressible]] in the sense of Littlestone and Warmuth.

== See also ==
* [[Occam learning]]
* [[Data mining]]
* [[Error tolerance (PAC learning)]]
* [[Sample complexity]]

== References ==
<references/>

== Further reading ==
* M. Kearns, U. Vazirani. ''[https://books.google.com/books?id=vCA01wY6iywC An Introduction to Computational Learning Theory].'' MIT Press, 1994. A textbook.
* M. Mohri, A. Rostamizadeh, and A. Talwalkar. ''Foundations of Machine Learning''. MIT Press, 2018. Chapter 2 contains a detailed treatment of PAC-learnability. [https://mitpress.ublish.com/ebook/foundations-of-machine-learning--2-preview/7093/9 Readable through open access from the publisher.]
* D. Haussler. [http://www.cs.iastate.edu/~honavar/pac.pdf Overview of the Probably Approximately Correct (PAC) Learning Framework]. An introduction to the topic.
* L. Valiant. [https://web.archive.org/web/20170228150047/http://www.probablyapproximatelycorrect.com/ ''Probably Approximately Correct.''] Basic Books, 2013. In which Valiant argues that PAC learning describes how organisms evolve and learn.
* {{cite web |author1=Littlestone, N. |author2=Warmuth, M. K. |title=Relating Data Compression and Learnability |date=June 10, 1986 |url=http://www.cse.ucsc.edu/~manfred/pubs/T1.pdf |archive-url=https://web.archive.org/web/20170809095748/https://users.soe.ucsc.edu/~manfred/pubs/lrnk-olivier.pdf |archive-date=2017-08-09 |url-status=dead}}
* {{cite arXiv |eprint=1503.06960 |last1=Moran |first1=Shay |last2=Yehudayoff |first2=Amir |title=Sample compression schemes for VC classes |year=2015 |class=cs.LG}}

== External links ==
* [https://www.cs.brandeis.edu/~dylan/pac_learning/ Interactive explanation of PAC learning]

[[Category:Computational learning theory]]