== Algorithms ==

While boosting is not algorithmically constrained, most boosting algorithms consist of iteratively learning weak classifiers with respect to a distribution and adding them to a final strong classifier. When they are added, they are weighted in a way that is related to the weak learners' accuracy. After a weak learner is added, the data weights are readjusted, a process known as "re-[[weighting]]": misclassified input data gain a higher weight and examples that are classified correctly lose weight.{{NoteTag|Some boosting-based classification algorithms actually decrease the weight of repeatedly misclassified examples; for example, boost by majority and [[BrownBoost]].}} Thus, future weak learners focus more on the examples that previous weak learners misclassified.

[[File:Ensemble Boosting.svg|thumb|An illustration presenting the intuition behind the boosting algorithm, consisting of the parallel learners and weighted dataset]]

There are many boosting algorithms. The original ones, proposed by [[Robert Schapire]] (a [[Recursion (computer science)|recursive]] majority gate formulation)<ref name="Schapire90" /> and [[Yoav Freund]] (boost by majority),<ref name="Mason00">Llew Mason, Jonathan Baxter, Peter Bartlett, and Marcus Frean (2000); ''Boosting Algorithms as Gradient Descent'', in [[Sara Solla|S. A. Solla]], T. K. Leen, and K.-R. Muller, editors, ''Advances in Neural Information Processing Systems'' 12, pp. 512–518, MIT Press</ref> were not [[Adaptive algorithm|adaptive]] and could not take full advantage of the weak learners. Schapire and Freund then developed [[AdaBoost]], an adaptive boosting algorithm that won the prestigious [[Gödel Prize]].

Only algorithms that are provable boosting algorithms in the [[probably approximately correct learning]] formulation can accurately be called ''boosting algorithms''. Other algorithms that are similar in spirit{{Clarify|reason=What spirit?|date=October 2018}} to boosting algorithms are sometimes called "leveraging algorithms", although they are also sometimes incorrectly called boosting algorithms.<ref name="Mason00"/>

The main variation between many boosting algorithms is their method of [[weighting]] [[training data]] points and [[Hypothesis|hypotheses]]. [[AdaBoost]] is very popular and the most significant historically, as it was the first algorithm that could adapt to the weak learners. It is often the basis of introductory coverage of boosting in university machine learning courses.<ref>{{Cite web|url=http://math.mit.edu/~rothvoss/18.304.3PM/Presentations/1-Eric-Boosting304FinalRpdf.pdf |archive-url=https://ghostarchive.org/archive/20221009/http://math.mit.edu/~rothvoss/18.304.3PM/Presentations/1-Eric-Boosting304FinalRpdf.pdf |archive-date=2022-10-09 |url-status=live|title=Boosting (AdaBoost algorithm)|last=Emer|first=Eric|website=MIT|access-date=2018-10-10}}</ref> There are many more recent algorithms such as [[LPBoost]], TotalBoost, [[BrownBoost]], [[xgboost]], MadaBoost, [[LogitBoost]], and others. Many boosting algorithms fit into the AnyBoost framework,<ref name="Mason00"/> which shows that boosting performs [[gradient descent]] in a [[function space]] using a [[Convex function|convex]] [[Loss functions for classification|cost function]].
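The iterative re-weighting loop described above can be sketched in a few lines of code. The following is a minimal, illustrative Python/NumPy sketch using AdaBoost's weight-update rule with one-feature decision stumps as the weak learners; the function names and the choice of stump are assumptions made for the example, not part of any particular reference implementation.

<syntaxhighlight lang="python">
# Minimal sketch of boosting via AdaBoost-style re-weighting (illustrative only).
# Assumptions: labels are +1/-1 and the weak learner is a one-feature decision stump.
import numpy as np

def train_stump(X, y, w):
    """Pick the single-feature threshold stump with the lowest weighted error."""
    best = None
    for j in range(X.shape[1]):
        for thresh in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = np.where(X[:, j] >= thresh, sign, -sign)
                err = np.sum(w[pred != y])
                if best is None or err < best[0]:
                    best = (err, j, thresh, sign)
    return best

def predict_stump(stump, X):
    _, j, thresh, sign = stump
    return np.where(X[:, j] >= thresh, sign, -sign)

def adaboost(X, y, n_rounds=10):
    n = len(y)
    w = np.full(n, 1.0 / n)                    # start with uniform data weights
    ensemble = []
    for _ in range(n_rounds):
        stump = train_stump(X, y, w)           # weak learner trained on weighted data
        err = max(stump[0], 1e-10)             # its weighted error
        alpha = 0.5 * np.log((1 - err) / err)  # learner weight, derived from its accuracy
        pred = predict_stump(stump, X)
        # Re-weighting: misclassified points gain weight, correct ones lose it.
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        ensemble.append((alpha, stump))
    return ensemble

def predict(ensemble, X):
    # The strong classifier is the sign of the accuracy-weighted vote of the weak learners.
    votes = sum(alpha * predict_stump(stump, X) for alpha, stump in ensemble)
    return np.sign(votes)
</syntaxhighlight>

In this sketch, the coefficient <code>alpha</code> plays the role of the accuracy-dependent weight given to each weak learner, and the update to <code>w</code> is the re-weighting step that makes later weak learners concentrate on previously misclassified examples.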