Ridge regression
==Bayesian interpretation==
{{main|Bayesian interpretation of regularization}}
{{Further|Minimum mean square error#Linear MMSE estimator for linear observation process}}
Although the choice of the solution to this regularized problem may at first look artificial, and the matrix <math>\Gamma</math> may seem rather arbitrary, the process can be justified from a [[Bayesian probability|Bayesian point of view]].<ref>{{cite book |first1=Edward |last1=Greenberg |first2=Charles E. Jr. |last2=Webster |title=Advanced Econometrics: A Bridge to the Literature |location=New York |publisher=John Wiley & Sons |year=1983 |pages=207–213 |isbn=0-471-09077-8 }}</ref> For an ill-posed problem, additional assumptions must be introduced in order to obtain a unique solution. Statistically, the [[prior probability]] distribution of <math>x</math> is sometimes taken to be a [[multivariate normal distribution]].<ref>{{cite journal | last1 = Huang | first1 = Yunfei. | display-authors = etal | year = 2019 | title = Traction force microscopy with optimized regularization and automated Bayesian parameter selection for comparing cells | journal = Scientific Reports | volume = 9 | number = 1| page = 537 | doi = 10.1038/s41598-018-36896-x | pmid = 30679578 | doi-access = free | pmc = 6345967 | arxiv = 1810.05848 | bibcode = 2019NatSR...9..539H }}</ref> For simplicity here, the following assumptions are made: the mean of <math>x</math> is zero; its components are independent; and the components have the same [[standard deviation]] <math>\sigma_x</math>. The data <math>b</math> are also subject to errors, which are likewise assumed to be [[statistical independence|independent]] with zero mean and standard deviation <math>\sigma_b</math>.
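Under these Gaussian assumptions, the connection can be made explicit by writing out the negative log-posterior (a standard derivation; the symbols follow the article, with <math>A</math> denoting the forward operator from the problem statement earlier in the article):

```latex
% Gaussian likelihood for b and Gaussian prior on x give, up to an
% additive constant,
-\log p(x \mid b)
  = \frac{\lVert A x - b \rVert^2}{2\sigma_b^2}
  + \frac{\lVert x \rVert^2}{2\sigma_x^2}
  + \text{const}.
% Minimizing this is the Tikhonov problem
%   \min_x \lVert A x - b \rVert^2 + \lambda \lVert x \rVert^2
% with regularization parameter
\lambda = \frac{\sigma_b^2}{\sigma_x^2},
% i.e. the case \Gamma^\top \Gamma = \lambda I.
```

So the "arbitrary" regularization weight corresponds to the ratio of noise variance to prior variance.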
Under these assumptions the Tikhonov-regularized solution is the [[maximum a posteriori|most probable]] solution given the data and the ''a priori'' distribution of <math>x</math>, according to [[Bayes' theorem]].<ref>{{cite book |author=Vogel, Curtis R. |title=Computational methods for inverse problems |publisher=Society for Industrial and Applied Mathematics |location=Philadelphia |year=2002 |isbn=0-89871-550-4 }}</ref> If the assumption of [[normal distribution|normality]] is replaced by the assumptions of [[homoscedasticity]] and uncorrelatedness of the [[errors and residuals in statistics|errors]], and if one still assumes zero mean, then the [[Gauss–Markov theorem]] entails that the solution is the minimum-variance [[Bias of an estimator|unbiased linear estimator]].<ref>{{cite book |last=Amemiya |first=Takeshi |author-link=Takeshi Amemiya |year=1985 |title=Advanced Econometrics |publisher=Harvard University Press |pages=[https://archive.org/details/advancedeconomet00amem/page/60 60–61] |isbn=0-674-00560-0 |url-access=registration |url=https://archive.org/details/advancedeconomet00amem/page/60 }}</ref>
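The equivalence between the regularized least-squares solution and the MAP estimate can be checked numerically. The sketch below is illustrative and not from the article: the problem size, seed, and the names <code>sigma_x</code>, <code>sigma_b</code>, and <code>lam</code> are all assumptions, with <math>\Gamma^\top\Gamma = \lambda I</math> and <math>\lambda = \sigma_b^2/\sigma_x^2</math>.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5))          # forward operator / design matrix
x_true = rng.normal(size=5)
sigma_x, sigma_b = 1.0, 0.5           # prior and noise standard deviations
b = A @ x_true + sigma_b * rng.normal(size=20)

# Tikhonov/ridge solution with Gamma^T Gamma = lam * I,
# taking lam = sigma_b^2 / sigma_x^2 as the noise-to-prior variance ratio.
lam = (sigma_b / sigma_x) ** 2
x_ridge = np.linalg.solve(A.T @ A + lam * np.eye(5), A.T @ b)

# MAP estimate: set the gradient of the negative log-posterior
#   ||A x - b||^2 / (2 sigma_b^2) + ||x||^2 / (2 sigma_x^2)
# to zero, which yields the normal equations below.
x_map = np.linalg.solve(A.T @ A / sigma_b**2 + np.eye(5) / sigma_x**2,
                        A.T @ b / sigma_b**2)

assert np.allclose(x_ridge, x_map)    # same solution, as the section claims
```

Scaling the MAP normal equations by <code>sigma_b**2</code> recovers the ridge normal equations exactly, which is why the two solves agree to machine precision.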