Simultaneous equations model
== Structural and reduced form ==
Suppose there are ''m'' regression equations of the form
: <math> y_{it} = y_{-i,t}'\gamma_i + x_{it}'\;\!\beta_i + u_{it}, \quad i=1,\ldots,m, </math>
where ''i'' is the equation number, and {{nowrap|''t'' {{=}} 1, ..., ''T''}} is the observation index. In these equations ''x<sub>it</sub>'' is the ''k<sub>i</sub>×''1 vector of exogenous variables, ''y<sub>it</sub>'' is the dependent variable, ''y<sub>−i,t</sub>'' is the ''n<sub>i</sub>×''1 vector of all other endogenous variables which enter the ''i''<sup>th</sup> equation on the right-hand side, and ''u<sub>it</sub>'' are the error terms. The “−''i''” notation indicates that the vector ''y<sub>−i,t</sub>'' may contain any of the ''y''’s except for ''y<sub>it</sub>'' (since it is already present on the left-hand side). The regression coefficients ''β<sub>i</sub>'' and ''γ<sub>i</sub>'' are of dimensions ''k<sub>i</sub>×''1 and ''n<sub>i</sub>×''1 correspondingly. Vertically stacking the ''T'' observations corresponding to the ''i''<sup>th</sup> equation, we can write each equation in vector form as
: <math> y_i = Y_{-i}\gamma_i + X_i\beta_i + u_i, \quad i=1,\ldots,m, </math>
where ''y<sub>i</sub>'' and ''u<sub>i</sub>'' are ''T×''1 vectors, ''X<sub>i</sub>'' is a ''T×k<sub>i</sub>'' matrix of exogenous regressors, and ''Y<sub>−i</sub>'' is a ''T×n<sub>i</sub>'' matrix of endogenous regressors on the right-hand side of the ''i''<sup>th</sup> equation. Finally, we can move all endogenous variables to the left-hand side and write the ''m'' equations jointly in vector form as
: <math> Y\Gamma = X\Beta + U.\, </math>
This representation is known as the '''structural form'''. In this equation {{nowrap|''Y'' {{=}} [''y''<sub>1</sub> ''y''<sub>2</sub> ... ''y<sub>m</sub>'']}} is the ''T×m'' matrix of dependent variables. Each of the matrices ''Y<sub>−i</sub>'' is in fact an ''n<sub>i</sub>''-columned submatrix of this ''Y''.
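The construction of the structural form can be sketched numerically for a hypothetical two-equation system. All coefficient values below are illustrative assumptions, not taken from the text; the sketch only checks that the matrices Γ and Β assemble the stacked equations as described:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500  # number of observations

# Hypothetical system (coefficients chosen purely for illustration):
#   equation 1:  y1 = 0.4*y2 + 1.0*x1 + u1
#   equation 2:  y2 = 0.6*y1 + 2.0*x2 + u2
# Column i of Gamma has a one on the diagonal; its remaining entries
# are components of -gamma_i, or zeros.
Gamma = np.array([[ 1.0, -0.6],
                  [-0.4,  1.0]])
# Column i of Beta holds the components of beta_i, with zeros for the
# regressors excluded from equation i.
Beta = np.array([[1.0, 0.0],   # x1 appears only in equation 1
                 [0.0, 2.0]])  # x2 appears only in equation 2

X = rng.standard_normal((T, 2))        # T x k exogenous regressors
U = 0.1 * rng.standard_normal((T, 2))  # T x m error terms

# Solve the structural form for Y:  Y = (X*Beta + U) * Gamma^{-1}
Y = (X @ Beta + U) @ np.linalg.inv(Gamma)

# The structural form Y*Gamma = X*Beta + U now holds by construction.
assert np.allclose(Y @ Gamma, X @ Beta + U)
```

Each column of `Y @ Gamma` reproduces one stacked equation ''y<sub>i</sub>'' − ''Y<sub>−i</sub>γ<sub>i</sub>'', which is why ones sit on the diagonal of Γ and the other column entries carry the negated coefficients.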
The ''m×m'' matrix Γ, which describes the relation between the dependent variables, has a complicated structure. It has ones on the diagonal, and all other elements of each column ''i'' are either the components of the vector ''−γ<sub>i</sub>'' or zeros, depending on which columns of ''Y'' were included in the matrix ''Y<sub>−i</sub>''. The ''T×k'' matrix ''X'' contains all exogenous regressors from all equations, but without repetitions (that is, matrix ''X'' should be of full rank). Thus, each ''X<sub>i</sub>'' is a ''k<sub>i</sub>''-columned submatrix of ''X''. Matrix Β has size ''k×m'', and each of its columns consists of the components of vectors ''β<sub>i</sub>'' and zeros, depending on which of the regressors from ''X'' were included or excluded from ''X<sub>i</sub>''. Finally, {{nowrap|''U'' {{=}} [''u''<sub>1</sub> ''u''<sub>2</sub> ... ''u<sub>m</sub>'']}} is a ''T×m'' matrix of the error terms. Postmultiplying the structural equation by {{nowrap|Γ<sup> −1</sup>}}, the system can be written in the '''[[reduced form]]''' as
: <math> Y = X\Beta\Gamma^{-1} + U\Gamma^{-1} = X\Pi + V.\, </math>
This is already a simple [[general linear model]], and it can be estimated, for example, by [[ordinary least squares]]. Unfortunately, the task of decomposing the estimated matrix <math style="vertical-align:0">\scriptstyle\hat\Pi</math> into the individual factors Β and {{nowrap|Γ<sup> −1</sup>}} is quite complicated, and therefore the reduced form is suitable for prediction but not for inference.

=== Assumptions ===
Firstly, the rank of the matrix ''X'' of exogenous regressors must be equal to ''k'', both in finite samples and in the limit as {{nowrap|''T'' → ∞}} (this latter requirement means that in the limit the expression <math style="vertical-align:-.4em">\scriptstyle \frac1TX'\!X</math> should converge to a nondegenerate ''k×k'' matrix). Matrix Γ is also assumed to be non-degenerate.
Secondly, error terms are assumed to be serially [[independent and identically distributed]]. That is, if the ''t''<sup>th</sup> row of matrix ''U'' is denoted by ''u''<sub>(''t'')</sub>, then the sequence of vectors {''u''<sub>(''t'')</sub>} should be iid, with zero mean and some covariance matrix Σ (which is unknown). In particular, this implies that {{nowrap|E[''U''] {{=}} 0}}, and {{nowrap|E[''U′U''] {{=}} ''T'' Σ}}. Lastly, assumptions are required for identification.
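Under these assumptions the reduced form lends itself to direct estimation. The following sketch, built on a hypothetical two-equation system with illustrative coefficients and error covariance (none of the numbers come from the text), simulates data satisfying the reduced form, checks the rank and covariance conditions, and recovers Π by ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(1)
T, k, m = 10_000, 2, 2  # observations, exogenous regressors, equations

# Illustrative structural matrices (hypothetical values)
Gamma = np.array([[ 1.0, -0.6],
                  [-0.4,  1.0]])
Beta = np.array([[1.0, 0.0],
                 [0.0, 2.0]])
Pi = Beta @ np.linalg.inv(Gamma)  # true reduced-form coefficients

Sigma = np.array([[0.020, 0.005],
                  [0.005, 0.010]])  # illustrative error covariance
X = rng.standard_normal((T, k))
U = rng.multivariate_normal(np.zeros(m), Sigma, size=T)  # iid rows
Y = X @ Pi + U @ np.linalg.inv(Gamma)  # reduced form Y = X*Pi + V

# Rank condition: X has full column rank, and (1/T) X'X is a
# nondegenerate k x k matrix.
assert np.linalg.matrix_rank(X) == k

# With iid zero-mean rows, E[U'U] = T*Sigma, so U'U / T estimates Sigma.
assert np.allclose(U.T @ U / T, Sigma, atol=0.01)

# OLS on the reduced form: regress all columns of Y on X jointly.
Pi_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
assert np.allclose(Pi_hat, Pi, atol=0.05)
```

The estimate <math>\scriptstyle\hat\Pi</math> converges to Π as ''T'' grows, but, as noted above, splitting it back into Β and Γ<sup> −1</sup> requires the identification conditions.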