Gaussian process
===Usual covariance functions===
[[File:Gaussian process draws from prior distribution.png|thumbnail|300px|right|The effect of choosing different kernels on the prior function distribution of the Gaussian process. Left is a squared exponential kernel. Middle is Brownian. Right is quadratic.]]
There are a number of common covariance functions:<ref name="gpml"/>
*Constant: <math> K_\operatorname{C}(x,x') = C </math>
*Linear: <math> K_\operatorname{L}(x,x') = x^\mathsf{T} x'</math>
*White Gaussian noise: <math> K_\operatorname{GN}(x,x') = \sigma^2 \delta_{x,x'}</math>
*Squared exponential: <math> K_\operatorname{SE}(x,x') = \exp \left(-\tfrac{d^2}{2\ell^2} \right)</math>
*Ornstein–Uhlenbeck: <math> K_\operatorname{OU}(x,x') = \exp \left(-\tfrac{d}{\ell} \right)</math>
*Matérn: <math> K_\operatorname{Matern}(x,x') = \tfrac{2^{1-\nu}}{\Gamma(\nu)} \left(\tfrac{\sqrt{2\nu}d}{\ell} \right)^\nu K_\nu \left(\tfrac{\sqrt{2\nu}d}{\ell} \right)</math>
*Periodic: <math> K_\operatorname{P}(x,x') = \exp\left(-\tfrac{2}{\ell^2} \sin^2 (d/2) \right)</math>
*Rational quadratic: <math> K_\operatorname{RQ}(x,x') = \left(1+d^2\right)^{-\alpha}, \quad \alpha \geq 0</math>

Here <math>d = |x - x'|</math>. The parameter <math>\ell</math> is the characteristic length-scale of the process (practically, how close two points <math>x</math> and <math>x'</math> have to be to influence each other significantly), <math>\delta</math> is the [[Kronecker delta]] and <math>\sigma</math> is the [[standard deviation]] of the noise fluctuations. Moreover, <math>K_\nu</math> is the [[modified Bessel function]] of order <math>\nu</math> and <math>\Gamma(\nu)</math> is the [[gamma function]] evaluated at <math>\nu</math>. Importantly, a complicated covariance function can be defined as a linear combination of other, simpler covariance functions in order to incorporate different insights about the data set at hand.
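As an illustrative sketch (not part of the article itself), the kernels above can be evaluated on a grid of pairwise distances and used to draw prior samples, which is how figures like the one at right are typically produced. The function names and grid here are arbitrary choices, and a small diagonal jitter is assumed for numerical stability:

```python
import numpy as np

def k_se(d, ell=1.0):
    """Squared exponential: exp(-d^2 / (2 ell^2))."""
    return np.exp(-d**2 / (2.0 * ell**2))

def k_ou(d, ell=1.0):
    """Ornstein-Uhlenbeck: exp(-d / ell)."""
    return np.exp(-d / ell)

def k_rq(d, alpha=1.0):
    """Rational quadratic: (1 + d^2)^(-alpha)."""
    return (1.0 + d**2) ** (-alpha)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 5.0, 100)
d = np.abs(x[:, None] - x[None, :])            # pairwise distances d = |x - x'|
K = k_se(d, ell=1.0) + 1e-10 * np.eye(len(x))  # jitter keeps K numerically PSD
f = rng.multivariate_normal(np.zeros(len(x)), K, size=3)  # three prior draws
```

Each row of `f` is one function drawn from the zero-mean Gaussian process prior with the chosen kernel; swapping `k_se` for `k_ou` or `k_rq` changes the smoothness of the draws.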
The inferential results are dependent on the values of the hyperparameters <math>\theta</math> (e.g. <math>\ell</math> and <math>\sigma</math>) defining the model's behaviour. A popular choice for <math>\theta</math> is to provide ''[[maximum a posteriori]]'' (MAP) estimates of it with some chosen prior. If the prior is very near uniform, this is the same as maximizing the [[marginal likelihood]] of the process, with the marginalization done over the observed process values <math>y</math>.<ref name= "gpml"/> This approach is also known as ''maximum likelihood II'', ''evidence maximization'', or ''[[empirical Bayes]]''.<ref name="seegerGPML">{{cite journal |last1= Seeger| first1= Matthias |year= 2004 |title= Gaussian Processes for Machine Learning|journal= International Journal of Neural Systems|volume= 14|issue= 2|pages= 69–104 |doi=10.1142/s0129065704001899 | pmid= 15112367 | citeseerx= 10.1.1.71.1079 | s2cid= 52807317 }}</ref>
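A minimal sketch of this marginal-likelihood maximization, assuming a squared-exponential kernel with Gaussian observation noise and a flat prior on <math>\theta = (\ell, \sigma)</math> (so the MAP estimate coincides with the marginal-likelihood maximizer); the synthetic data and optimizer settings are illustrative assumptions, not from the article:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_marginal_likelihood(log_theta, x, y):
    """Negative log marginal likelihood of GP regression with a
    squared-exponential kernel; theta = (ell, sigma), optimized in log space
    so both hyperparameters stay positive."""
    ell, sigma = np.exp(log_theta)
    d = np.abs(x[:, None] - x[None, :])
    K = np.exp(-d**2 / (2.0 * ell**2)) + (sigma**2 + 1e-8) * np.eye(len(x))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # K^{-1} y via Cholesky
    return (0.5 * y @ alpha                      # data-fit term
            + np.sum(np.log(np.diag(L)))         # 0.5 * log det K
            + 0.5 * len(x) * np.log(2 * np.pi))

# Synthetic noisy observations of a smooth function (illustrative)
rng = np.random.default_rng(1)
x = np.linspace(0.0, 5.0, 40)
y = np.sin(x) + 0.1 * rng.standard_normal(len(x))

res = minimize(neg_log_marginal_likelihood, x0=np.log([1.0, 1.0]), args=(x, y))
ell_hat, sigma_hat = np.exp(res.x)  # fitted length-scale and noise level
```

The fitted <math>\hat\ell</math> and <math>\hat\sigma</math> then define the model used for prediction; with a non-uniform prior on <math>\theta</math>, its negative log density would simply be added to the objective.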