==Forecasting accuracy==
The forecast error (also known as a [[errors and residuals|residual]]) is the difference between the actual value and the forecast value for the corresponding period:

:<math>\ E_t = Y_t - F_t </math>

where ''E''<sub>''t''</sub> is the forecast error at period ''t'', ''Y''<sub>''t''</sub> is the actual value at period ''t'', and ''F''<sub>''t''</sub> is the forecast for period ''t''.

A good forecasting method will yield residuals that are '''uncorrelated'''. If there are [[correlation]]s between residual values, then there is information left in the residuals which should be used in computing forecasts. This can be accomplished by computing the expected value of a residual as a function of the known past residuals, and adjusting the forecast by the amount by which this expected value differs from zero.

A good forecasting method will also have '''zero mean'''. If the residuals have a mean other than zero, then the forecasts are biased and can be improved by adjusting the forecasting technique by an additive constant equal to the mean of the unadjusted residuals.

Measures of aggregate error:

===Scale-dependent errors===
The forecast error, ''E''<sub>''t''</sub>, is on the same scale as the data; these accuracy measures are therefore scale-dependent and cannot be used to make comparisons between series on different scales.

[[Mean absolute error]] (MAE) or [[average absolute deviation|mean absolute deviation]] (MAD): <math>\ MAE = MAD = \frac{\sum_{t=1}^{N} |E_t|}{N} </math>

[[Mean squared error]] (MSE) or [[mean squared prediction error]] (MSPE): <math>\ MSE = MSPE = \frac{\sum_{t=1}^N {E_t^2}}{N} </math>

[[Root-mean-square deviation|Root mean squared error]] (RMSE): <math>\ RMSE = \sqrt{\frac{\sum_{t=1}^N {E_t^2}}{N}} </math>

Average of errors: <math>\ \bar{E}= \frac{\sum_{t=1}^N {E_t}}{N} </math>

===Percentage errors===
These are more frequently used to compare forecast performance between different data sets because they are scale-independent.
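The scale-dependent measures above can be illustrated with a short sketch. The data here are hypothetical, chosen only to show how each measure is computed from the residuals ''E''<sub>''t''</sub> = ''Y''<sub>''t''</sub> − ''F''<sub>''t''</sub>:

```python
import math

# Hypothetical actual values and forecasts (illustrative data only)
actual = [112.0, 118.0, 132.0, 129.0, 121.0]
forecast = [110.0, 120.0, 128.0, 131.0, 118.0]

# Forecast errors: E_t = Y_t - F_t
errors = [y - f for y, f in zip(actual, forecast)]
n = len(errors)

mae = sum(abs(e) for e in errors) / n   # mean absolute error (MAE / MAD)
mse = sum(e * e for e in errors) / n    # mean squared error (MSE / MSPE)
rmse = math.sqrt(mse)                   # root mean squared error
mean_error = sum(errors) / n            # near zero for an unbiased forecast

print(mae, mse, rmse, mean_error)
```

A nonzero `mean_error` signals bias: subtracting it from future forecasts is the additive adjustment described above.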
Percentage errors, however, have the disadvantage of being extremely large or undefined when ''Y''<sub>''t''</sub> is close to or equal to zero.

[[Mean absolute percentage error]] (MAPE): <math>\ MAPE = 100 \cdot \frac{\sum_{t=1}^N |\frac{E_t}{Y_t}|}{N} </math>

Mean absolute percentage deviation (MAPD): <math>\ MAPD = \frac{\sum_{t=1}^{N} |E_t|}{\sum_{t=1}^{N} |Y_t|} </math>

===Scaled errors===
Hyndman and Koehler (2006) proposed using scaled errors as an alternative to percentage errors.

[[Mean absolute scaled error]] (MASE): <math>MASE = \frac{\sum_{t=1}^{N} |\frac{E_t}{\frac{1}{N-m}\sum_{t=m+1}^{N}|Y_t - Y_{t-m}|}|}{N}</math>

where ''m'' is the seasonal period, or 1 for non-seasonal data.

===Other measures===
[[Forecast skill]] (SS): <math>\ SS = 1- \frac{MSE_{forecast}}{MSE_{ref}} </math>

Business forecasters and practitioners sometimes use different terminology. They refer to the MAPD (also known as the PMAD) as the MAPE, although they compute it as a volume-weighted MAPE. For more information, see [[Calculating demand forecast accuracy]].

When comparing the accuracy of different forecasting methods on a specific data set, the measures of aggregate error are compared with each other and the method that yields the lowest error is preferred.

===Training and test sets===
When evaluating the quality of forecasts, it is invalid to look at how well a model fits the historical data; the accuracy of forecasts can only be determined by considering how well a model performs on new data that were not used when fitting the model. When choosing models, it is common to use a portion of the available data for fitting and to use the rest of the data for testing the model, as was done in the above examples.<ref name=e2.5>{{Cite web |url=https://www.otexts.org/fpp/2/5 |title=2.5 Evaluating forecast accuracy |website=OTexts |access-date=2016-05-14}}</ref>

===Cross-validation===
[[cross-validation (statistics)|Cross-validation]] is a more sophisticated version of using training and test sets.
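The simpler training/test approach of the previous subsection can be sketched as follows. This is a minimal illustration with hypothetical data, using a naive last-value forecast as the model and MAPE as the accuracy measure:

```python
# Hypothetical monthly series (illustrative data only)
series = [100.0, 104.0, 109.0, 108.0, 115.0, 120.0, 118.0, 125.0]

# Hold out the last 3 observations as the test set;
# fit only on the training portion
train, test = series[:-3], series[-3:]

# Naive "model": forecast every test period with the last training value
forecasts = [train[-1]] * len(test)

# Evaluate on the held-out data only, never on the fitting data
errors = [y - f for y, f in zip(test, forecasts)]
mape = 100 * sum(abs(e / y) for e, y in zip(errors, test)) / len(test)
print(round(mape, 2))
```

The key point is that `mape` is computed only on observations the model never saw during fitting.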
For [[cross-sectional data]], one approach to cross-validation works as follows:
# Select observation ''i'' for the test set, and use the remaining observations in the training set. Compute the error on the test observation.
# Repeat the above step for ''i'' = 1, 2, ..., ''N'', where ''N'' is the total number of observations.
# Compute the forecast accuracy measures based on the errors obtained.

This makes efficient use of the available data, as only one observation is omitted at each step.

For time series data, the training set can only include observations prior to the test set; therefore, no future observations can be used in constructing the forecast. Suppose ''k'' observations are needed to produce a reliable forecast; then the process works as follows:
# Starting with ''i'' = 1, select the observation ''k'' + ''i'' for the test set, and use the observations at times 1, 2, ..., ''k'' + ''i'' − 1 to estimate the forecasting model. Compute the error on the forecast for ''k'' + ''i''.
# Repeat the above step for ''i'' = 2, ..., ''T'' − ''k'', where ''T'' is the total number of observations.
# Compute the forecast accuracy over all errors.

This procedure is sometimes known as a "rolling forecasting origin" because the "origin" (''k'' + ''i'' − 1) at which the forecast is based rolls forward in time.<ref name=e2.5 /> Further, two-step-ahead or, in general, ''p''-step-ahead forecasts can be computed by first forecasting the value immediately after the training set, then using this value with the training set values to forecast two periods ahead, and so on.

==See also==
* [[Calculating demand forecast accuracy]]
* [[Consensus forecasts]]
* [[Forecast error]]
* [[Predictability]]
* [[Prediction interval]]s, similar to [[confidence interval]]s
* [[Reference class forecasting]]
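The "rolling forecasting origin" procedure for time series described above can be sketched as follows. The series and the value of ''k'' are hypothetical, and a naive last-value forecast stands in for the forecasting model:

```python
# Hypothetical series; naive last-value forecast stands in for a real model
series = [100.0, 104.0, 109.0, 108.0, 115.0, 120.0, 118.0, 125.0]
k = 3  # observations needed to produce a reliable forecast

abs_errors = []
# i = 1, 2, ..., T - k, where T = len(series)
for i in range(1, len(series) - k + 1):
    train = series[:k + i - 1]      # observations 1 .. k+i-1 (the rolling origin)
    target = series[k + i - 1]      # observation k+i (0-based index k+i-1)
    forecast = train[-1]            # model fitted on train only; here: naive
    abs_errors.append(abs(target - forecast))

# Aggregate accuracy over all rolling-origin errors (MAE here)
mae = sum(abs_errors) / len(abs_errors)
print(mae)
```

Each iteration moves the origin one step forward, so every forecast uses only observations that precede its target.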