=== Computational complexity ===
Empirical risk minimization for a classification problem with a [[0-1 loss function]] is known to be an [[NP-hard]] problem, even for a relatively simple class of functions such as [[linear classifier]]s.<ref>V. Feldman, V. Guruswami, P. Raghavendra and Yi Wu (2009). [https://arxiv.org/abs/1012.0729 ''Agnostic Learning of Monomials by Halfspaces is Hard.''] (See the paper and references therein)</ref> Nevertheless, it can be solved efficiently when the minimal empirical risk is zero, i.e., when the data is [[linearly separable]].{{Cn|date=December 2023}} In practice, machine learning algorithms cope with this issue either by employing a [[Convex optimization|convex approximation]] to the 0–1 loss function (such as the [[hinge loss]] for [[Support vector machine|SVMs]]), which is easier to optimize, or by imposing assumptions on the distribution <math>P(x, y)</math> (and thus ceasing to be agnostic learning algorithms, to which the above hardness result applies). In the case of convexification, Zhang's lemma bounds the excess risk of the original problem in terms of the excess risk of the convexified problem.<ref>{{Cite web |title=Mathematics of Machine Learning Lecture 9 Notes {{!}} Mathematics of Machine Learning {{!}} Mathematics |url=https://ocw.mit.edu/courses/18-657-mathematics-of-machine-learning-fall-2015/resources/mit18_657f15_l9/ |access-date=2023-10-28 |website=MIT OpenCourseWare |language=en}}</ref> Minimizing the latter via convex optimization therefore also controls the former.
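The convex-surrogate idea above can be illustrated with a minimal sketch (not part of the article; all data, names, and parameters below are illustrative assumptions): minimizing the convex hinge loss by subgradient descent on a linearly separable toy dataset, and checking that this also drives down the non-convex 0–1 empirical risk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linearly separable toy data: the label is the sign of the first feature.
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] > 0, 1.0, -1.0)

def zero_one_loss(w):
    """Empirical 0-1 risk: fraction of misclassified points (NP-hard to minimize in general)."""
    return np.mean(np.sign(X @ w) != y)

def hinge_loss(w):
    """Convex surrogate: mean of max(0, 1 - y * <w, x>)."""
    return np.mean(np.maximum(0.0, 1.0 - y * (X @ w)))

# Subgradient descent on the hinge loss.
w = np.zeros(2)
lr = 0.1
for _ in range(500):
    margins = y * (X @ w)
    active = margins < 1.0  # points where the hinge is "active"
    # A subgradient of the mean hinge loss: average of -y_i x_i over active points.
    grad = -(y[active, None] * X[active]).sum(axis=0) / len(X)
    w -= lr * grad

print(hinge_loss(w), zero_one_loss(w))
```

Although only the surrogate is optimized, the 0–1 risk of the resulting classifier is small as well, which is the qualitative content of the excess-risk comparison mentioned above.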