==Example: An AR(1) process==

An AR(1) process is given by:
<math display="block">X_t = \varphi X_{t-1}+\varepsilon_t\,</math>
where <math>\varepsilon_t</math> is a white noise process with zero mean and constant variance <math>\sigma_\varepsilon^2</math>. (Note: The subscript on <math>\varphi_1</math> has been dropped.) The process is [[Stationary process#Weak or wide-sense stationarity|weak-sense stationary]] if <math>|\varphi|<1</math>, since it is then obtained as the output of a stable filter whose input is white noise. (If <math>\varphi=1</math>, the variance of <math>X_t</math> grows with time ''t'' and diverges to infinity as ''t'' goes to infinity, so the process is not weak-sense stationary.)

Assuming <math>|\varphi|<1</math>, the mean <math>\operatorname{E}(X_t)</math> is identical for all values of ''t'' by the very definition of weak-sense stationarity. If the mean is denoted by <math>\mu</math>, it follows from
<math display="block">\operatorname{E}(X_t)=\varphi\operatorname{E}(X_{t-1})+\operatorname{E}(\varepsilon_t)</math>
that
<math display="block">\mu=\varphi\mu+0,</math>
and hence
:<math>\mu=0.</math>

The [[variance]] is
:<math>\operatorname{var}(X_t)=\operatorname{E}(X_t^2)-\mu^2=\frac{\sigma_\varepsilon^2}{1-\varphi^2},</math>
where <math>\sigma_\varepsilon</math> is the standard deviation of <math>\varepsilon_t</math>. This can be shown by noting that
:<math>\operatorname{var}(X_t) = \varphi^2\operatorname{var}(X_{t-1}) + \sigma_\varepsilon^2,</math>
and then observing that the value above is the stable fixed point of this recursion.

The [[autocovariance]] is given by
:<math>B_n=\operatorname{E}(X_{t+n}X_t)-\mu^2=\frac{\sigma_\varepsilon^2}{1-\varphi^2}\,\,\varphi^{|n|}.</math>
The autocovariance function decays with a decay time (also called [[time constant]]) <math>\tau</math>, where <math>\varphi=\exp(-1/\tau)</math>; equivalently, <math>\tau=-1/\ln\varphi</math>, which is approximately <math>1/(1-\varphi)</math> for <math>\varphi</math> close to one.<ref>Lai, Dihui; and Lu, Bingfeng; [https://www.soa.org/globalassets/assets/library/newsletters/predictive-analytics-and-futurism/2017/june/2017-predictive-analytics-iss15-lai-lu.pdf "Understanding Autoregressive Model for Time Series as a Deterministic Dynamic System"] {{Webarchive|url=https://web.archive.org/web/20230324041726/https://www.soa.org/globalassets/assets/library/newsletters/predictive-analytics-and-futurism/2017/june/2017-predictive-analytics-iss15-lai-lu.pdf |date=2023-03-24 }}, in ''Predictive Analytics and Futurism'', June 2017, number 15, pages 7–9</ref>

The [[spectral density]] function is the [[Fourier transform]] of the autocovariance function. In discrete terms this is the discrete-time Fourier transform:
:<math>\Phi(\omega)= \frac{1}{\sqrt{2\pi}}\,\sum_{n=-\infty}^\infty B_n e^{-i\omega n} =\frac{1}{\sqrt{2\pi}}\,\left(\frac{\sigma_\varepsilon^2}{1+\varphi^2-2\varphi\cos(\omega)}\right).</math>
This expression is periodic due to the discrete nature of the <math>X_j</math>, which manifests as the cosine term in the denominator. If we assume that the sampling time (<math>\Delta t=1</math>) is much smaller than the decay time (<math>\tau</math>), then we can use a continuum approximation to <math>B_n</math>:
:<math>B(t)\approx \frac{\sigma_\varepsilon^2}{1-\varphi^2}\,\,\varphi^{|t|},</math>
which yields a [[Cauchy distribution|Lorentzian profile]] for the spectral density:
:<math>\Phi(\omega)= \frac{1}{\sqrt{2\pi}}\,\frac{\sigma_\varepsilon^2}{1-\varphi^2}\,\frac{\gamma}{\pi(\gamma^2+\omega^2)},</math>
where <math>\gamma=1/\tau</math> is the angular frequency associated with the decay time <math>\tau</math>.
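These closed-form moments can be checked by direct simulation. The following is a minimal Python sketch (the values <math>\varphi=0.9</math>, <math>\sigma_\varepsilon=1</math> and the sample length are illustrative assumptions, not part of the derivation above); it simulates a long AR(1) path and compares the sample mean, variance, and lag-<math>n</math> autocovariance with the formulas above:

<syntaxhighlight lang="python">
import numpy as np

# Illustrative parameters (assumptions): any |phi| < 1 works.
phi, sigma_eps, T = 0.9, 1.0, 200_000
rng = np.random.default_rng(0)
eps = rng.normal(0.0, sigma_eps, T)

# Simulate X_t = phi * X_{t-1} + eps_t, starting from the stationary law.
x = np.empty(T)
x[0] = rng.normal(0.0, sigma_eps / np.sqrt(1.0 - phi**2))
for t in range(1, T):
    x[t] = phi * x[t - 1] + eps[t]

var_theory = sigma_eps**2 / (1.0 - phi**2)
print("mean:", x.mean(), "(theory: 0)")
print("var :", x.var(), "(theory:", var_theory, ")")

# Lag-n autocovariance B_n versus the closed form var_theory * phi^|n|.
for n in (1, 5, 10):
    b_n = np.mean((x[n:] - x.mean()) * (x[:-n] - x.mean()))
    print(f"B_{n}:", b_n, "(theory:", var_theory * phi**n, ")")
</syntaxhighlight>

For <math>\varphi=0.9</math> the decay time is <math>\tau=-1/\ln 0.9\approx 9.5</math> sampling intervals, so lags beyond a few tens of steps are essentially decorrelated.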
An alternative expression for <math>X_t</math> can be derived by first substituting <math>\varphi X_{t-2}+\varepsilon_{t-1}</math> for <math>X_{t-1}</math> in the defining equation. Continuing this process ''N'' times yields
:<math>X_t=\varphi^N X_{t-N}+\sum_{k=0}^{N-1}\varphi^k\varepsilon_{t-k}.</math>
As ''N'' approaches infinity, <math>\varphi^N</math> approaches zero (since <math>|\varphi|<1</math>) and
:<math>X_t=\sum_{k=0}^\infty\varphi^k\varepsilon_{t-k}.</math>
Thus <math>X_t</math> is white noise convolved with the <math>\varphi^k</math> kernel plus the constant mean (here zero). If the white noise <math>\varepsilon_t</math> is a [[Gaussian process]], then <math>X_t</math> is also a Gaussian process. In other cases, the [[central limit theorem]] indicates that <math>X_t</math> will be approximately normally distributed when <math>\varphi</math> is close to one.

For <math>\varepsilon_t = 0</math>, the process <math>X_t = \varphi X_{t-1}</math> reduces to a [[geometric progression]] (''exponential'' growth or decay), and the solution can be found analytically: <math>X_t = a\varphi^t</math>, where <math>a</math> is a constant determined by the [[initial condition]].

=== Explicit mean/difference form of AR(1) process ===

The AR(1) model is the discrete-time analogue of the continuous [[Ornstein-Uhlenbeck process]]. It is therefore sometimes useful to understand the properties of the AR(1) model cast in an equivalent form. In this form, the AR(1) model, with process parameter <math>\theta \in \mathbb{R}</math>, is given by
:<math>X_{t+1} = X_t + (1-\theta)(\mu - X_t) + \varepsilon_{t+1},</math>
where <math>|\theta| < 1</math>, <math>\mu := \operatorname{E}(X)</math> is the model mean, and <math>\{\varepsilon_t\}</math> is a white-noise process with zero mean and constant variance <math>\sigma^2</math>.

By rewriting this as
:<math>X_{t+1} = \theta X_t + (1 - \theta)\mu + \varepsilon_{t+1}</math>
and then deriving (by induction)
:<math>X_{t+n} = \theta^n X_t + (1 - \theta^n)\mu + \sum_{i=1}^{n} \theta^{n-i} \varepsilon_{t+i},</math>
one can show that
:<math>\operatorname{E}(X_{t+n} \mid X_t) = \mu\left(1-\theta^n\right) + X_t\theta^n</math>
and
:<math>\operatorname{Var}(X_{t+n} \mid X_t) = \sigma^2\,\frac{1 - \theta^{2n}}{1 - \theta^2}.</math>
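The conditional mean and variance above can likewise be verified numerically. A minimal Monte Carlo sketch follows (the values <math>\theta=0.8</math>, <math>\mu=2</math>, <math>\sigma=0.5</math>, <math>X_t=5</math>, and <math>n=10</math> are illustrative assumptions):

<syntaxhighlight lang="python">
import numpy as np

# Illustrative parameters (assumptions): |theta| < 1 is required.
theta, mu, sigma = 0.8, 2.0, 0.5
x_t, n, trials = 5.0, 10, 200_000
rng = np.random.default_rng(1)

# Iterate X_{t+1} = theta*X_t + (1 - theta)*mu + eps_{t+1},
# running many independent paths from the same starting value x_t.
x = np.full(trials, x_t)
for _ in range(n):
    x = theta * x + (1.0 - theta) * mu + rng.normal(0.0, sigma, trials)

print("E(X_{t+n} | X_t)  :", x.mean(),
      "(theory:", mu * (1 - theta**n) + x_t * theta**n, ")")
print("Var(X_{t+n} | X_t):", x.var(),
      "(theory:", sigma**2 * (1 - theta**(2 * n)) / (1 - theta**2), ")")
</syntaxhighlight>

As <math>n\to\infty</math>, the conditional mean relaxes to <math>\mu</math> and the conditional variance approaches the stationary value <math>\sigma^2/(1-\theta^2)</math>, the mean-reverting behaviour shared with the Ornstein-Uhlenbeck process.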