{{Short description|Mathematical relation assigning a probability event to a cost}}
In [[mathematical optimization]] and [[decision theory]], a '''loss function''' or '''cost function''' (sometimes also called an error function)<ref name="ttf2001">{{cite book|first1=Trevor |last1=Hastie |authorlink1= |first2=Robert |last2=Tibshirani |authorlink2=Robert Tibshirani|first3=Jerome H. |last3=Friedman |authorlink3=Jerome H. Friedman |title=The Elements of Statistical Learning |publisher=Springer |year=2001 |isbn=0-387-95284-5 |page=18 |url=https://web.stanford.edu/~hastie/ElemStatLearn/}}</ref> is a function that maps an [[event (probability theory)|event]] or values of one or more variables onto a [[real number]] intuitively representing some "cost" associated with the event. An [[optimization problem]] seeks to minimize a loss function. An '''objective function''' is either a loss function or its opposite (in specific domains, variously called a [[reward function]], a [[profit function]], a [[utility function]], a [[fitness function]], etc.), in which case it is to be maximized. The loss function could include terms from several levels of the hierarchy.

In statistics, typically a loss function is used for [[parameter estimation]], and the event in question is some function of the difference between estimated and true values for an instance of data. The concept, as old as [[Pierre-Simon Laplace|Laplace]], was reintroduced in statistics by [[Abraham Wald]] in the middle of the 20th century.<ref>{{cite book |first=A. |last=Wald |title=Statistical Decision Functions |publisher=Wiley |year=1950 |url=https://psycnet.apa.org/record/1951-01400-000}}</ref> In the context of [[economics]], for example, this is usually [[economic cost]] or [[Regret (decision theory)|regret]]. In [[Statistical classification|classification]], it is the penalty for an incorrect classification of an example. In [[actuarial science]], it is used in an insurance context to model benefits paid over premiums, particularly since the works of [[Harald Cramér]] in the 1920s.<ref>{{cite book |last=Cramér |first=H. |year=1930 |title=On the mathematical theory of risk |publisher=Centraltryckeriet }}</ref> In [[optimal control]], the loss is the penalty for failing to achieve a desired value. In [[financial risk management]], the function is mapped to a monetary loss.

[[File:Comparison of loss functions.png|thumb|Comparison of common loss functions ([[Mean absolute error|MAE]], [[Symmetric mean absolute percentage error|SMAE]], [[Huber loss]], and Log-Cosh Loss) used for regression]]

==Examples==

===Regret===
{{main|Regret (decision theory)}}
[[Leonard J. Savage]] argued that using non-Bayesian methods such as [[minimax]], the loss function should be based on the idea of ''[[regret (decision theory)|regret]]'', i.e., the loss associated with a decision should be the difference between the consequences of the best decision that could have been made had the underlying circumstances been known and the decision that was in fact taken before they were known.

===Quadratic loss function===
The use of a [[quadratic function|quadratic]] loss function is common, for example when using [[least squares]] techniques. It is often more mathematically tractable than other loss functions because of the properties of [[variance]]s, as well as being symmetric: an error above the target causes the same loss as the same magnitude of error below the target. If the target is ''t'', then a quadratic loss function is

:<math>\lambda(x) = C (t-x)^2 \; </math>

for some constant ''C''; the value of the constant makes no difference to a decision, and can be ignored by setting it equal to 1. This is also known as the '''squared error loss''' ('''SEL''').<ref name="ttf2001" />

Many common [[statistic]]s, including [[t-test]]s, [[Regression analysis|regression]] models, [[design of experiments]], and much else, use [[least squares]] methods applied using [[linear regression]] theory, which is based on the quadratic loss function.

The quadratic loss function is also used in [[Linear-quadratic regulator|linear-quadratic optimal control problems]]. In these problems, even in the absence of uncertainty, it may not be possible to achieve the desired values of all target variables. Often loss is expressed as a [[quadratic form]] in the deviations of the variables of interest from their desired values; this approach is [[closed-form expression|tractable]] because it results in linear [[first-order condition]]s. In the context of [[stochastic control]], the expected value of the quadratic form is used. Because errors are squared, the quadratic loss gives more weight to outliers than to typical observations, so alternatives such as the [[Huber loss|Huber]], Log-Cosh and SMAE losses are used when the data contain many large outliers.

[[File:Fitting a straight line to a data with outliers.png|thumb|Effect of using different loss functions, when the data has outliers]]
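As an illustration of this sensitivity to large errors, the following minimal Python sketch (standard library only; the function names and example values are chosen for exposition rather than taken from any particular package) compares the quadratic loss with the Huber loss on the same residuals:

<syntaxhighlight lang="python">
# Quadratic (squared-error) loss versus Huber loss for a single residual.
# Illustrative sketch; the function names are ad hoc, not from a specific library.

def quadratic_loss(t, x, c=1.0):
    """Squared-error loss C*(t - x)**2 for target t and prediction x."""
    return c * (t - x) ** 2

def huber_loss(t, x, delta=1.0):
    """Huber loss: quadratic for small residuals, linear for large ones."""
    r = abs(t - x)
    if r <= delta:
        return 0.5 * r ** 2
    return delta * (r - 0.5 * delta)

target = 0.0
for prediction in (0.5, 2.0, 10.0):              # the last value acts as an outlier
    print(prediction,
          quadratic_loss(target, prediction),    # grows with the square of the error
          huber_loss(target, prediction))        # grows only linearly for large errors
</syntaxhighlight>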
===0-1 loss function===
In [[statistics]] and [[decision theory]], a frequently used loss function is the ''0-1 loss function''

: <math>L(\hat{y}, y) = \left[ \hat{y} \ne y \right] </math>

using [[Iverson bracket]] notation, i.e. it evaluates to 1 when <math>\hat{y} \ne y</math>, and 0 otherwise.
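For illustration, a minimal Python sketch (built-in types only; the labels and the helper name are invented for exposition) of the 0-1 loss for individual predictions and the resulting misclassification rate over a small sample:

<syntaxhighlight lang="python">
# 0-1 loss: 1 if the predicted label differs from the true label, else 0.
# Illustrative sketch; the helper name and labels are ad hoc.

def zero_one_loss(y_hat, y):
    """Iverson bracket [y_hat != y]: 1 for a misclassification, 0 otherwise."""
    return int(y_hat != y)

predictions = ["spam", "ham", "spam", "ham"]
truth       = ["spam", "spam", "spam", "ham"]

losses = [zero_one_loss(p, t) for p, t in zip(predictions, truth)]
print(losses)                      # [0, 1, 0, 0]
print(sum(losses) / len(losses))   # average 0-1 loss = misclassification rate (0.25)
</syntaxhighlight>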
==Constructing loss and objective functions==
{{See also|Scoring rule}}
In many applications, objective functions, including loss functions as a particular case, are determined by the problem formulation. In other situations, the decision maker's preference must be elicited and represented by a scalar-valued function (also called a [[utility]] function) in a form suitable for optimization, a problem that [[Ragnar Frisch]] highlighted in his [[Nobel Prize]] lecture.<ref>{{cite book| first=Ragnar|last=Frisch|date=1969 |title= The Nobel Prize–Prize Lecture|chapter=From utopian theory to practical applications: the case of econometrics|url=https://www.nobelprize.org/prizes/economic-sciences/1969/frisch/lecture/|access-date=15 February 2021}}</ref> The existing methods for constructing objective functions are collected in the proceedings of two dedicated conferences.<ref name="TangianGruber1997">{{Cite book |last1=Tangian |first1=Andranik |last2=Gruber |first2=Josef |date=1997 |title= Constructing Scalar-Valued Objective Functions. Proceedings of the Third International Conference on Econometric Decision Models: Constructing Scalar-Valued Objective Functions, University of Hagen, held in Katholische Akademie Schwerte September 5–8, 1995 |series= Lecture Notes in Economics and Mathematical Systems |volume=453 |isbn= 978-3-540-63061-6 |doi= 10.1007/978-3-642-48773-6 |publisher=Springer |location=Berlin }}</ref><ref name="TangianGruber2002">{{Cite book |last1=Tangian |first1=Andranik |last2=Gruber |first2=Josef |date=2002 |title= Constructing and Applying Objective Functions. Proceedings of the Fourth International Conference on Econometric Decision Models Constructing and Applying Objective Functions, University of Hagen, held in Haus Nordhelle, August 28–31, 2000 |series= Lecture Notes in Economics and Mathematical Systems |volume=510 |publisher=Springer |location=Berlin |isbn= 978-3-540-42669-1 |doi= 10.1007/978-3-642-56038-5 }}</ref> In particular, [[Andranik Tangian]] showed that the most usable objective functions — quadratic and additive — are determined by a few [[Principle of indifference|indifference]] points. He used this property in models for constructing these objective functions from either [[ordinal utility|ordinal]] or [[cardinal utility|cardinal]] data that were elicited through computer-assisted interviews with decision makers.<ref name="Tangian2002">{{Cite journal|last=Tangian |first=Andranik |year=2002|title= Constructing a quasi-concave quadratic objective function from interviewing a decision maker|journal= European Journal of Operational Research |volume=141 |issue=3 |pages=608–640 |doi=10.1016/S0377-2217(01)00185-0 |s2cid= 39623350 }}</ref><ref name="Tangian2004additiveUtility">{{Cite journal|last=Tangian |first=Andranik |year=2004|title= A model for ordinally constructing additive objective functions|journal= European Journal of Operational Research |volume=159 |issue=2 |pages=476–512|doi = 10.1016/S0377-2217(03)00413-2 | s2cid= 31019036 }}</ref> Among other things, he constructed objective functions to optimally distribute budgets for 16 Westphalian universities<ref name="Tangian2004universityBudgets">{{Cite journal |last=Tangian |first=Andranik |year=2004 |title= Redistribution of university budgets with respect to the status quo |journal= European Journal of Operational Research |volume=157 |issue=2 |pages=409–428|doi = 10.1016/S0377-2217(03)00271-6 }}</ref> and the European subsidies for equalizing unemployment rates among 271 German regions.<ref name="Tangian2008RegionalEnemployment">{{Cite journal|last=Tangian |first=Andranik |year=2008 |title= Multi-criteria optimization of regional employment policy: A simulation analysis for Germany |journal= Review of Urban and Regional Development |volume=20 |issue=2|pages=103–122 |url= https://onlinelibrary.wiley.com/doi/10.1111/j.1467-940X.2008.00144.x |doi = 10.1111/j.1467-940X.2008.00144.x }}</ref>

==Expected loss==
{{See also|Empirical risk minimization}}
In some contexts, the value of the loss function itself is a random quantity because it depends on the outcome of a random variable ''X''.

===Statistics===
Both [[Frequentist probability|frequentist]] and [[Bayesian probability|Bayesian]] statistical theory involve making a decision based on the [[expected value]] of the loss function; however, this quantity is defined differently under the two paradigms.

====Frequentist expected loss====
We first define the expected loss in the frequentist context. It is obtained by taking the expected value with respect to the [[probability distribution]], ''P''<sub>''θ''</sub>, of the observed data, ''X''. This is also referred to as the '''risk function'''<ref>{{SpringerEOM| title=Risk of a statistical procedure |id=R/r082490 |first=M.S. |last=Nikulin}}</ref><ref>{{cite book |title=Statistical decision theory and Bayesian Analysis |first=James O. |last=Berger |author-link=James Berger (statistician) |year=1985 |edition=2nd |publisher=Springer-Verlag |location=New York |isbn=978-0-387-96098-2 |mr=0804611 |url=https://books.google.com/books?id=oY_x7dE15_AC |bibcode=1985sdtb.book.....B }}</ref><ref>{{cite book |first=Morris |last=DeGroot |author-link=Morris H. DeGroot |title=Optimal Statistical Decisions |publisher=Wiley Classics Library |year=2004 |orig-year=1970 |isbn=978-0-471-68029-1 |mr=2288194 }}</ref><ref>{{cite book |last=Robert |first=Christian P. |title=The Bayesian Choice |publisher=Springer |location=New York |year=2007|edition=2nd |doi=10.1007/0-387-71599-1 |isbn=978-0-387-95231-4 |mr=1835885 |series=Springer Texts in Statistics }}</ref> of the decision rule ''δ'' and the parameter ''θ''. Here the decision rule depends on the outcome of ''X''. The risk function is given by:

: <math>R(\theta, \delta) = \operatorname{E}_\theta L\big( \theta, \delta(X) \big) = \int_X L\big( \theta, \delta(x) \big) \, \mathrm{d} P_\theta (x) .</math>

Here, ''θ'' is a fixed but possibly unknown state of nature, ''X'' is a vector of observations stochastically drawn from a [[Statistical population|population]], <math>\operatorname{E}_\theta</math> is the expectation over all population values of ''X'', ''dP''<sub>''θ''</sub> is a [[probability measure]] over the event space of ''X'' (parametrized by ''θ'') and the integral is evaluated over the entire [[Support (measure theory)|support]] of ''X''.
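As a concrete illustration, the risk of the decision rule "report the sample mean" under squared-error loss can be approximated by simulation; for normally distributed data with variance ''σ''<sup>2</sup> and sample size ''n'' it should be close to ''σ''<sup>2</sup>/''n''. A minimal Python sketch (standard library only; the parameter values and the function name are chosen for exposition):

<syntaxhighlight lang="python">
# Monte Carlo approximation of the risk R(theta, delta) = E_theta[(theta - delta(X))^2]
# for delta(X) = sample mean, with X a vector of n i.i.d. Normal(theta, sigma^2) draws.
# Illustrative sketch; for normal data the exact risk is sigma**2 / n.
import random

def risk_of_sample_mean(theta, sigma, n, n_sim=100_000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sim):
        sample = [rng.gauss(theta, sigma) for _ in range(n)]
        estimate = sum(sample) / n          # the decision rule delta(X): the sample mean
        total += (theta - estimate) ** 2    # squared-error loss for this simulated data set
    return total / n_sim                    # average loss over simulated data sets

print(risk_of_sample_mean(theta=2.0, sigma=1.0, n=10))  # close to 1.0 / 10 = 0.1
</syntaxhighlight>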
====Bayes Risk====
In a Bayesian approach, the expectation is calculated using the [[prior distribution]] {{pi}}<sup>*</sup> of the parameter ''θ'':

:<math>\rho(\pi^*,a) = \int_\Theta \int _{\bold X} L(\theta, a(\bold x)) \, \mathrm{d} P(\bold x \vert \theta) \,\mathrm{d} \pi^* (\theta)= \int_{\bold X} \int_\Theta L(\theta,a(\bold x))\,\mathrm{d} \pi^*(\theta\vert \bold x)\,\mathrm{d}M(\bold x)</math>

where ''M''('''x''') is known as the ''predictive likelihood'', in which ''θ'' has been "integrated out", {{pi}}<sup>*</sup>(''θ'' | '''x''') is the posterior distribution, and the order of integration has been changed. One should then choose the action ''a''<sup>*</sup> which minimizes this expected loss, which is referred to as ''Bayes Risk''. In the latter equation, the inner integral over ''θ'' (for fixed data '''x''') is known as the ''posterior risk'', and minimizing it with respect to the decision ''a'' for each '''x''' also minimizes the overall Bayes Risk. This optimal decision, ''a''<sup>*</sup>, is known as the ''Bayes (decision) rule''; it minimizes the average loss over all possible states of nature ''θ'' and over all possible (probability-weighted) data outcomes. One advantage of the Bayesian approach is that one need only choose the optimal action under the actual observed data to obtain a uniformly optimal one, whereas choosing the frequentist optimal decision rule, which is a function of all possible observations, is a much more difficult problem. Equally importantly, the Bayes rule reflects consideration of loss outcomes under different states of nature, ''θ''.
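For a small illustrative case (the discrete prior, the Bernoulli likelihood and the candidate actions below are all invented for exposition), the posterior risk of each action under squared-error loss can be computed directly; as the theory predicts, the posterior mean attains the minimum:

<syntaxhighlight lang="python">
# Posterior risk for a discrete parameter: rho(a | x) = sum over theta of L(theta, a) * pi*(theta | x).
# Under squared-error loss the minimizer is the posterior mean.
# Illustrative sketch; the prior, likelihood and data are invented for exposition.

thetas = [0.2, 0.5, 0.8]          # possible values of the parameter theta
prior  = [1/3, 1/3, 1/3]          # prior pi*(theta)

# Observed data: 7 successes in 10 Bernoulli(theta) trials.
k, n = 7, 10
likelihood = [t**k * (1 - t)**(n - k) for t in thetas]

# Posterior pi*(theta | x) by Bayes' theorem.
evidence  = sum(p * l for p, l in zip(prior, likelihood))
posterior = [p * l / evidence for p, l in zip(prior, likelihood)]

def posterior_risk(a):
    """Expected squared-error loss of action a under the posterior."""
    return sum((t - a) ** 2 * w for t, w in zip(thetas, posterior))

posterior_mean = sum(t * w for t, w in zip(thetas, posterior))
for a in (0.5, 0.7, posterior_mean):
    print(round(a, 3), posterior_risk(a))   # the posterior mean has the smallest posterior risk
</syntaxhighlight>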
====Examples in statistics====
* For a scalar parameter ''θ'', a decision function whose output <math>\hat\theta</math> is an estimate of ''θ'', and a quadratic loss function ([[squared error loss]]) <math display="block"> L(\theta,\hat\theta)=(\theta-\hat\theta)^2,</math> the risk function becomes the [[mean squared error]] of the estimate, <math display="block">R(\theta,\hat\theta)= \operatorname{E}_\theta \left [ (\theta-\hat\theta)^2 \right ].</math> An [[estimator]] found by minimizing the posterior expected squared-error loss is the [[posterior distribution]]'s mean.
* In [[density estimation]], the unknown parameter is the [[probability density function|probability density]] itself. The loss function is typically chosen to be a [[Norm (mathematics)|norm]] in an appropriate [[function space]]. For example, for the [[L2 norm|''L''<sup>2</sup> norm]], <math display="block">L(f,\hat f) = \|f-\hat f\|_2^2\,,</math> the risk function becomes the [[mean integrated squared error]] <math display="block">R(f,\hat f)=\operatorname{E} \left ( \|f-\hat f\|^2 \right ).\,</math>

===Economic choice under uncertainty===
In economics, decision-making under uncertainty is often modelled using the [[von Neumann–Morgenstern utility function]] of the uncertain variable of interest, such as end-of-period wealth. Since the value of this variable is uncertain, so is the value of the utility function; it is the expected value of utility that is maximized.

==Decision rules==
A [[decision rule]] makes a choice using an optimality criterion. Some commonly used criteria are:
*'''[[Minimax]]''': Choose the decision rule with the lowest worst loss — that is, minimize the worst-case (maximum possible) loss: <math display="block"> \underset{\delta} {\operatorname{arg\,min}} \ \max_{\theta \in \Theta} \ R(\theta,\delta). </math>
*'''[[Invariant estimator|Invariance]]''': Choose the decision rule which satisfies an invariance requirement.
*Choose the decision rule with the lowest average loss (i.e. minimize the [[expected value]] of the loss function): <math display="block"> \underset{\delta} {\operatorname{arg\,min}} \operatorname{E}_{\theta \in \Theta} [R(\theta,\delta)] = \underset{\delta} {\operatorname{arg\,min}} \ \int_{\theta \in \Theta} R(\theta,\delta) \, p(\theta) \,d\theta. </math>
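The criteria above can be compared numerically. The following Python sketch (standard library only; the two estimators, the sample size and the parameter grid are chosen purely for exposition) evaluates two decision rules for a Bernoulli proportion under squared-error loss, reporting the worst-case risk used by the minimax criterion and the average risk over a uniform grid of parameter values:

<syntaxhighlight lang="python">
# Comparing two decision rules (estimators of a Bernoulli proportion theta)
# by their worst-case (minimax) risk and their average risk, under squared-error loss.
# Illustrative sketch; the rules, sample size and grid are chosen for exposition only.
from math import comb

n = 10
rules = {
    "sample mean k/n":       lambda k: k / n,
    "shrinkage (k+2)/(n+4)": lambda k: (k + 2) / (n + 4),
}

def risk(theta, rule):
    """Exact risk: expected squared error over the Binomial(n, theta) distribution of the data."""
    return sum(comb(n, k) * theta**k * (1 - theta)**(n - k) * (theta - rule(k))**2
               for k in range(n + 1))

grid = [i / 100 for i in range(101)]                      # grid of parameter values in [0, 1]
for name, rule in rules.items():
    risks = [risk(t, rule) for t in grid]
    print(name,
          "max risk:", round(max(risks), 4),              # worst-case (minimax) criterion
          "avg risk:", round(sum(risks) / len(risks), 4)) # average-risk criterion
</syntaxhighlight>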
==Selecting a loss function==
Sound statistical practice requires selecting an estimator consistent with the actual acceptable variation experienced in the context of a particular applied problem. Thus, in the applied use of loss functions, selecting which statistical method to use to model an applied problem depends on knowing the losses that will be experienced from being wrong under the problem's particular circumstances.<ref>{{cite book |last=Pfanzagl |first=J. |year=1994 |title=Parametric Statistical Theory |location=Berlin |publisher=Walter de Gruyter |isbn=978-3-11-013863-4 }}</ref>

A common example involves estimating "[[location parameter|location]]". Under typical statistical assumptions, the [[mean]] or average is the statistic for estimating location that minimizes the expected loss experienced under the [[least squares|squared-error]] loss function, while the [[median]] is the estimator that minimizes expected loss experienced under the absolute-difference loss function. Still different estimators would be optimal under other, less common circumstances.

In economics, when an agent is [[risk neutral]], the objective function is simply expressed as the expected value of a monetary quantity, such as profit, income, or end-of-period wealth. For [[Risk aversion|risk-averse]] or [[risk-loving]] agents, loss is measured as the negative of a [[utility|utility function]], and the objective function to be optimized is the expected value of utility.

Other measures of cost are possible, for example [[Mortality rate|mortality]] or [[morbidity]] in the field of [[public health]] or [[safety engineering]].

For most [[optimization algorithm]]s, it is desirable to have a loss function that is globally [[Continuous function|continuous]] and [[Differentiable function|differentiable]]. Two very commonly used loss functions are the [[mean squared error|squared loss]], <math>L(a) = a^2</math>, and the [[absolute deviation|absolute loss]], <math>L(a)=|a|</math>. However, the absolute loss has the disadvantage that it is not differentiable at <math>a=0</math>. The squared loss has the disadvantage that it tends to be dominated by [[outlier]]s—when summing over a set of <math>a</math>'s (as in <math display="inline">\sum_{i=1}^n L(a_i) </math>), the final sum tends to be the result of a few particularly large ''a''-values, rather than an expression of the average ''a''-value.
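Both points, that the mean minimizes total squared loss while the median minimizes total absolute loss, and that the squared loss follows outliers much more closely, can be checked numerically. A minimal Python sketch (standard library only; the data set and the search grid are invented for exposition):

<syntaxhighlight lang="python">
# Minimizing total squared loss recovers the mean; minimizing total absolute
# loss recovers the median. Illustrative sketch with an invented data set
# containing one large outlier.
from statistics import mean, median

data = [1.0, 2.0, 2.5, 3.0, 100.0]   # the last value is an outlier

def total_loss(c, loss):
    """Total loss of summarizing the data by the single value c."""
    return sum(loss(x - c) for x in data)

# Crude grid search over candidate summary values c.
candidates = [i / 100 for i in range(10001)]
best_squared = min(candidates, key=lambda c: total_loss(c, lambda a: a * a))
best_absolute = min(candidates, key=lambda c: total_loss(c, abs))

print(best_squared, mean(data))     # both about 21.7: the squared loss follows the outlier
print(best_absolute, median(data))  # both 2.5: the absolute loss stays at the median
</syntaxhighlight>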
The choice of a loss function is not arbitrary. It is very restrictive, and sometimes the loss function may be characterized by its desirable properties.<ref>Detailed information on mathematical principles of the loss function choice is given in Chapter 2 of the book {{cite book|title=Robust and Non-Robust Models in Statistics|first1=Lev B.|last1=Klebanov|first2=Svetlozar T.|last2=Rachev|first3=Frank J.|last3=Fabozzi|publisher=Nova Science Publishers, Inc.|location=New York|year=2009}} (and references there).</ref> Among the choice principles are, for example, the requirement of completeness of the class of symmetric statistics in the case of [[i.i.d.]] observations, the principle of complete information, and some others.

[[W. Edwards Deming]] and [[Nassim Nicholas Taleb]] argue that empirical reality, not nice mathematical properties, should be the sole basis for selecting loss functions, and real losses often are not mathematically nice and are not differentiable, continuous, symmetric, etc. For example, a person who arrives before a plane gate closure can still make the plane, but a person who arrives after cannot, a discontinuity and asymmetry which makes arriving slightly late much more costly than arriving slightly early. In drug dosing, the cost of too little drug may be lack of efficacy, while the cost of too much may be tolerable toxicity, another example of asymmetry. Traffic, pipes, beams, ecologies, climates, etc. may tolerate increased load or stress with little noticeable change up to a point, then become backed up or break catastrophically. These situations, Deming and Taleb argue, are common in real-life problems, perhaps more common than classical smooth, continuous, symmetric, differentiable cases.<ref>{{Cite book|title=Out of the Crisis|last=Deming|first=W. Edwards|publisher=The MIT Press|year=2000|isbn=9780262541152}}</ref>

==See also==
*[[Bayesian regret]]
*[[Loss functions for classification]]
*[[Discounted maximum loss]]
*[[Hinge loss]]
*[[Scoring rule]]
*[[Statistical risk]]

==References==
{{reflist}}

==Further reading==
*{{cite journal |last1=Aretz |first1=Kevin |author2=Bartram, Söhnke M. |author3=Pope, Peter F. |date=April–June 2011 |title=Asymmetric Loss Functions and the Rationality of Expected Stock Returns |journal=International Journal of Forecasting |volume=27 |issue=2 |pages=413–437 |doi=10.1016/j.ijforecast.2009.10.008 |ssrn=889323 |url=https://mpra.ub.uni-muenchen.de/47343/1/MPRA_paper_47343.pdf }}
*{{cite book |title=Statistical decision theory and Bayesian Analysis |first=James O. |last=Berger |author-link=James Berger (statistician) |year=1985 |edition=2nd |publisher=Springer-Verlag |location=New York |isbn=978-0-387-96098-2 |mr=0804611 |bibcode=1985sdtb.book.....B }}
*{{cite journal |url=https://www.researchgate.net/publication/5216117 |doi=10.1093/oxrep/16.4.43 |title=Making monetary policy: Objectives and rules |journal=Oxford Review of Economic Policy |volume=16 |issue=4 |pages=43–59 |year=2000 |last1=Cecchetti |first1=S.}}
*{{cite journal |doi=10.1016/0164-0704(87)90016-4 |title=Loss functions and public policy |journal=Journal of Macroeconomics |volume=9 |issue=4 |pages=489–504 |year=1987 |last1=Horowitz |first1=Ann R.}}
*{{cite journal |jstor=1911380 |title=Asymmetric Policymaker Utility Functions and Optimal Policy under Uncertainty |journal=Econometrica |volume=44 |issue=1 |pages=53–66 |last1=Waud |first1=Roger N. |year=1976 |doi=10.2307/1911380}}

{{Statistics|inference|collapsed}}
{{Differentiable computing}}

{{DEFAULTSORT:Loss Function}}
[[Category:Optimal decisions]]
[[Category:Loss functions|*]]