{{short description|Ambiguous term in statistics}}
{{use dmy dates|date=March 2024}}
There are two main uses of the term '''calibration''' in [[statistics]] that denote special types of [[statistical inference]] problems. Calibration can mean
:*a reverse process to [[Linear regression|regression]], where instead of a future dependent variable being predicted from known explanatory variables, a known observation of the dependent variables is used to predict a corresponding explanatory variable;<ref>{{cite book|last1=Cook|first1=Ian|last2=Upton|first2=Graham|date=2006|title=Oxford Dictionary of Statistics|location=Oxford|publisher=Oxford University Press|isbn=978-0-19-954145-4}}</ref>
:*procedures in [[statistical classification]] to determine [[class membership probabilities]], which assess the uncertainty of a given new observation belonging to each of the already established classes.

In addition, calibration is used in statistics with the usual general meaning of [[calibration]]. For example, model calibration can also refer to [[Bayesian inference]] about the value of a model's parameters, given some data set, or more generally to any type of fitting of a [[statistical model]]. As [[Philip Dawid]] puts it, "a forecaster is ''well calibrated'' if, for example, of those events to which he assigns a probability 30 percent, the long-run proportion that actually occurs turns out to be 30 percent."<ref>{{cite journal |doi=10.1080/01621459.1982.10477856 |title=The Well-Calibrated Bayesian |journal=Journal of the American Statistical Association |volume=77 |issue=379 |pages=605–610 |year=1982 |last1=Dawid |first1=A. P.}}</ref>

==In classification==
{{main article|Probabilistic classification}}
Calibration in [[Statistical classification|classification]] means transforming classifier scores into [[Probabilistic classification|class membership probabilities]]. An overview of calibration methods for [[binary classification|two-class]] and [[multiclass classification|multi-class]] classification tasks is given by Gebel (2009).<ref name="Gebel2009">{{cite thesis |type=PhD thesis |first=Martin |last=Gebel |title=Multivariate calibration of classifier scores into the probability space |publisher=University of Dortmund |year=2009 |format=PDF |url=https://d-nb.info/99741989X/34}}</ref> A classifier might separate the classes well but be poorly calibrated, meaning that its estimated class probabilities are far from the true class probabilities. In this case, a calibration step may improve the estimated probabilities.

A variety of metrics aim to measure the extent to which a classifier produces well-calibrated probabilities. Foundational work includes the Expected Calibration Error (ECE).<ref>M. P. Naeini, G. Cooper and M. Hauskrecht, Obtaining well calibrated probabilities using Bayesian binning. In: Proceedings of the AAAI Conference on Artificial Intelligence, 2015.</ref> More recent variants include the Adaptive Calibration Error (ACE) and the Test-based Calibration Error (TCE), which address limitations of the ECE metric that can arise when classifier scores concentrate on a narrow subset of the [0,1] range.<ref>J. Nixon, M. W. Dusenberry, L. Zhang, G. Jerfel and D. Tran, Measuring Calibration in Deep Learning. In: CVPR Workshops (Vol. 2, No. 7), 2019.</ref><ref>T. Matsubara, N. Tax, R. Mudd and I. Guy, TCE: A Test-Based Approach to Measuring Calibration Error. In: Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence (UAI), PMLR, 2023.</ref>
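For illustration, the following is a minimal sketch of a binned calibration error for binary classification in the spirit of ECE; the function name, the equal-width binning and the data are illustrative choices, not taken from the cited papers:

<syntaxhighlight lang="python">
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Equal-width binned calibration error for binary classification.

    probs: predicted probabilities of the positive class, shape (n,)
    labels: observed outcomes in {0, 1}, shape (n,)
    """
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    n = len(probs)
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Assign each prediction to a bin; the last bin includes its right edge.
        in_bin = (probs >= lo) & ((probs < hi) | (hi == 1.0))
        if in_bin.sum() == 0:
            continue
        avg_confidence = probs[in_bin].mean()        # mean predicted probability
        avg_frequency = labels[in_bin].mean()        # empirical frequency of positives
        # Weight each bin's gap by the fraction of predictions it contains.
        ece += (in_bin.sum() / n) * abs(avg_confidence - avg_frequency)
    return ece

# Example usage with made-up predictions and outcomes:
probs = np.array([0.9, 0.8, 0.7, 0.3, 0.2])
labels = np.array([1, 1, 0, 0, 0])
print(expected_calibration_error(probs, labels))
</syntaxhighlight>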
A further development in calibration assessment is the Estimated Calibration Index (ECI).<ref>Famiglini, Lorenzo, Andrea Campagner, and Federico Cabitza. "Towards a Rigorous Calibration Assessment Framework: Advancements in Metrics, Methods, and Use." ECAI 2023. IOS Press, 2023. 645–652. doi:10.3233/FAIA230327</ref> The ECI extends the concepts of the Expected Calibration Error (ECE) to provide a more nuanced measure of a model's calibration, particularly addressing overconfidence and underconfidence tendencies. Originally formulated for binary settings, the ECI has been adapted for multiclass settings, offering both local and global insights into model calibration. This framework aims to overcome some of the theoretical and interpretative limitations of existing calibration metrics. Through a series of experiments, Famiglini et al. demonstrate the framework's effectiveness in delivering a more accurate understanding of model calibration levels and discuss strategies for mitigating biases in calibration assessment. An online tool has been proposed to compute both ECE and ECI.<ref>{{Citation |last1=Famiglini |first1=Lorenzo |last2=Campagner |first2=Andrea |last3=Cabitza |first3=Federico |title=Towards a Rigorous Calibration Assessment Framework: Advancements in Metrics, Methods, and Use |date=2023 |work=ECAI 2023 |series=Frontiers in Artificial Intelligence and Applications |pages=645–652 |url=https://ebooks.iospress.nl/doi/10.3233/FAIA230327 |access-date=2024-03-25 |publisher=IOS Press |doi=10.3233/faia230327 |hdl=10281/456604 |hdl-access=free |isbn=978-1-64368-436-9}}</ref>

The following univariate calibration methods exist for transforming classifier scores into [[class membership probabilities]] in the two-class case:
* Assignment value approach, see Garczarek (2002)<ref>U. M. Garczarek, Classification Rules in Standardized Partition Spaces, Dissertation, Universität Dortmund, 2002. [http://eldorado.uni-dortmund.de:8080/FB5/ls7/forschung/2002/Garczarek] {{Webarchive|url=https://web.archive.org/web/20041123190402/http://eldorado.uni-dortmund.de:8080/FB5/ls7/forschung/2002/Garczarek#|date=2004-11-23}}</ref>
* Bayes approach, see Bennett (2002)<ref>P. N. Bennett, Using asymmetric distributions to improve text classifier probability estimates: A comparison of new and standard parametric methods, Technical Report CMU-CS-02-126, Carnegie Mellon, School of Computer Science, 2002.</ref>
* [[Isotonic regression]], see Zadrozny and Elkan (2002)<ref>B. Zadrozny and C. Elkan, Transforming classifier scores into accurate multiclass probability estimates. In: Proceedings of the Eighth International Conference on Knowledge Discovery and Data Mining, 694–699, Edmonton, ACM Press, 2002.</ref>
* [[Platt scaling]] (a form of [[logistic regression]]; see the sketch after these lists), see Lewis and Gale (1994)<ref>D. D. Lewis and W. A. Gale, A Sequential Algorithm for Training Text Classifiers. In: W. B. Croft and C. J. van Rijsbergen (eds.), Proceedings of the 17th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '94), 3–12. New York, Springer-Verlag, 1994.</ref> and Platt (1999)<ref>J. C. Platt, Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. In: A. J. Smola, P. Bartlett, B. Schölkopf and D. Schuurmans (eds.), Advances in Large Margin Classifiers, 61–74. Cambridge, MIT Press, 1999.</ref>
* Bayesian Binning into Quantiles (BBQ) calibration, see Naeini, Cooper and Hauskrecht (2015)<ref>M. P. Naeini, G. F. Cooper and M. Hauskrecht, Obtaining Well Calibrated Probabilities Using Bayesian Binning. In: Proceedings of the AAAI Conference on Artificial Intelligence, 2901–2907, 2015.</ref>
* Beta calibration, see Kull, Silva Filho and [[Peter Flach|Flach]] (2017)<ref>Meelis Kull, Telmo Silva Filho and Peter Flach, Beta calibration: a well-founded and easily implemented improvement on logistic calibration for binary classifiers. In: Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, PMLR 54:623–631, 2017.</ref>

The following multivariate calibration methods exist for transforming classifier scores into [[class membership probabilities]] in the case of more than two classes:
* Reduction to binary tasks and subsequent pairwise coupling, see Hastie and Tibshirani (1998)<ref>T. Hastie and R. Tibshirani, Classification by pairwise coupling. In: M. I. Jordan, M. J. Kearns and [[Sara Solla|S. A. Solla]] (eds.), Advances in Neural Information Processing Systems, volume 10, Cambridge, MIT Press, 1998. [http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.46.6032]</ref>
* Dirichlet calibration, see Gebel (2009)<ref name="Gebel2009" />
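As a concrete illustration of one method from the first list, here is a minimal sketch of Platt scaling using scikit-learn's LogisticRegression; the data are made up, and Platt's original formulation additionally applies a smoothing correction to the targets, which is omitted here:

<syntaxhighlight lang="python">
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical uncalibrated classifier scores on a held-out calibration set,
# together with the true labels for that set. Fitting the calibrator on data
# not used to train the classifier is the standard practice assumed here.
scores_cal = np.array([[-2.1], [-0.4], [0.3], [1.2], [2.5], [3.0]])
labels_cal = np.array([0, 0, 0, 1, 1, 1])

# Platt scaling: fit a one-dimensional logistic regression that maps a raw
# score s to a probability sigma(a*s + b).
platt = LogisticRegression()
platt.fit(scores_cal, labels_cal)

# Calibrated class membership probabilities for new scores.
new_scores = np.array([[-1.0], [0.5], [2.0]])
print(platt.predict_proba(new_scores)[:, 1])
</syntaxhighlight>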
===In probability prediction and forecasting===
{{See also|Scoring rule}}
In [[prediction]] and [[forecasting]], a [[Brier score]] is sometimes used to assess prediction accuracy of a set of predictions, specifically whether the magnitude of the assigned probabilities tracks the relative frequency of the observed outcomes. [[Philip E. Tetlock]] employs the term "calibration" in this sense in his 2015 book ''[[Superforecasting]]''.<ref name="Edge-II">{{cite web |url=https://www.edge.org/conversation/philip_tetlock-edge-master-class-2015-a-short-course-in-superforecasting-class-ii |title=Edge Master Class 2015: A Short Course in Superforecasting, Class II |author=<!--Staff writer(s); no by-line.--> |date=24 August 2015 |website=edge.org |publisher=Edge Foundation |accessdate=13 April 2018 |quote=Calibration is when I say there's a 70 percent likelihood of something happening, things happen 70 percent of time.}}</ref>

This differs from [[accuracy and precision]]. For example, as expressed by [[Daniel Kahneman]], "if you give all events that happen a probability of .6 and all the events that don't happen a probability of .4, your calibration is perfect but your discrimination is miserable".<ref name="Edge-II" /> In [[meteorology]], in particular as concerns [[weather forecasting]], a related mode of assessment is known as [[forecast skill]].

==In regression==
{{Cleanup|reason=unclear what it does|date=September 2023}}
The ''calibration problem'' in regression is the use of known data on the observed relationship between a dependent variable and an independent variable to make estimates of other values of the independent variable from new observations of the dependent variable.<ref>Brown, P. J. (1994) ''Measurement, Regression and Calibration'', OUP. {{ISBN|0-19-852245-2}}</ref><ref>Ng, K. H., Pooi, A. H. (2008) "Calibration Intervals in Linear Regression Models", ''Communications in Statistics – Theory and Methods'', 37 (11), 1688–1696. [http://www.informaworld.com/10.1080/03610920701826120]</ref><ref>Hardin, J. W., Schmiediche, H., Carroll, R. J. (2003) "The regression-calibration method for fitting generalized linear models with additive measurement error", ''Stata Journal'', 3 (4), 361–372. [http://www.stata-journal.com/article.html?article=st0050 link], [http://www.stata-journal.com/sjpdf.html?article=st0050 pdf]</ref> This can be known as "inverse regression";<ref>Draper, N. L., Smith, H. (1998) ''Applied Regression Analysis, 3rd Edition'', Wiley. {{ISBN|0-471-17082-8}}</ref> there is also [[sliced inverse regression]].

===Example===
One example is that of dating objects, using observable evidence such as [[tree]] rings for [[dendrochronology]] or [[carbon-14]] for [[radiometric dating]]. The observation is [[causality|caused]] by the age of the object being dated, rather than the reverse, and the aim is to use the method for estimating dates based on new observations. The [[Operational definition|problem]] is whether the model used for relating known ages with observations should aim to minimise the error in the observation or minimise the error in the date. The two approaches will produce different results, and the difference will increase if the model is then used for [[extrapolation]] at some distance from the known results.
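A minimal sketch of this difference, with made-up data for illustration: fitting the observation on the age and inverting the fitted line ("classical" calibration) generally gives a different age estimate than regressing the age on the observation directly ("inverse" calibration), and the gap grows away from the range of the known data.

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical paired data: known ages x and noisy measured observations y.
rng = np.random.default_rng(0)
x = np.linspace(10.0, 100.0, 30)                     # known ages
y = 2.0 * x + 5.0 + rng.normal(0.0, 8.0, x.size)     # observations

# Classical calibration: regress y on x, then invert the fitted line.
b1, b0 = np.polyfit(x, y, 1)          # y ≈ b0 + b1 * x
y_new = 120.0                         # a new observation to be dated
x_classical = (y_new - b0) / b1

# Inverse calibration: regress x on y and predict directly.
c1, c0 = np.polyfit(y, x, 1)          # x ≈ c0 + c1 * y
x_inverse = c0 + c1 * y_new

print(x_classical, x_inverse)         # the two estimates differ
</syntaxhighlight>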
==See also==
*{{annotated link|Calibration}}
*{{annotated link|Calibrated probability assessment}}
*[[Conformal prediction]]

==References==
{{Reflist}}

{{DEFAULTSORT:Calibration (Statistics)}}
[[Category:Regression analysis]]
[[Category:Classification algorithms|*]]
[[Category:Statistical classification]]