==== Linear noiseless ICA ====

The components <math>x_i</math> of the observed random vector <math>\boldsymbol{x}=(x_1,\ldots,x_m)^T</math> are generated as a sum of the independent components <math>s_k</math>, <math>k=1,\ldots,n</math>:

:<math>x_i = a_{i,1} s_1 + \cdots + a_{i,k} s_k + \cdots + a_{i,n} s_n,</math>

weighted by the mixing weights <math>a_{i,k}</math>.

The same generative model can be written in vector form as <math>\boldsymbol{x}=\sum_{k=1}^{n} s_k \boldsymbol{a}_k</math>, where the observed random vector <math>\boldsymbol{x}</math> is represented by the basis vectors <math>\boldsymbol{a}_k=(a_{1,k},\ldots,a_{m,k})^T</math>. The basis vectors <math>\boldsymbol{a}_k</math> form the columns of the mixing matrix <math>\boldsymbol{A}=(\boldsymbol{a}_1,\ldots,\boldsymbol{a}_n)</math>, and the generative formula can be written as <math>\boldsymbol{x}=\boldsymbol{A} \boldsymbol{s}</math>, where <math>\boldsymbol{s}=(s_1,\ldots,s_n)^T</math>.

Given the model and realizations (samples) <math>\boldsymbol{x}_1,\ldots,\boldsymbol{x}_N</math> of the random vector <math>\boldsymbol{x}</math>, the task is to estimate both the mixing matrix <math>\boldsymbol{A}</math> and the sources <math>\boldsymbol{s}</math>. This is done by adaptively calculating weight vectors <math>\boldsymbol{w}</math> and setting up a cost function that either maximizes the non-Gaussianity of the calculated <math>s_k = \boldsymbol{w}^T \boldsymbol{x}</math> or minimizes the mutual information. In some cases, a priori knowledge of the probability distributions of the sources can be used in the cost function.

The original sources <math>\boldsymbol{s}</math> can be recovered by multiplying the observed signals <math>\boldsymbol{x}</math> with the inverse of the mixing matrix <math>\boldsymbol{W}=\boldsymbol{A}^{-1}</math>, also known as the unmixing matrix. Here it is assumed that the mixing matrix is square (<math>n=m</math>). If the number of basis vectors is greater than the dimensionality of the observed vectors, <math>n>m</math>, the task is overcomplete but is still solvable with the [[pseudo inverse]].
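A minimal numerical sketch of this model is given below, assuming NumPy, SciPy, and the <code>FastICA</code> estimator from scikit-learn are available (the source signals, the mixing matrix <code>A</code>, and all variable names are illustrative choices, not part of the formulation above). It generates three independent non-Gaussian sources, mixes them with a square matrix (<math>n=m=3</math>), estimates the sources and mixing matrix from the observations alone, and finally applies the exact unmixing matrix <math>\boldsymbol{W}=\boldsymbol{A}^{-1}</math> using the known <math>\boldsymbol{A}</math>.

<syntaxhighlight lang="python">
import numpy as np
from scipy import signal
from sklearn.decomposition import FastICA

# Independent, non-Gaussian sources s_k (n = 3), sampled at N time points.
N = 2000
t = np.linspace(0, 8, N)
S = np.column_stack([
    np.sin(2 * t),                    # sinusoid
    np.sign(np.sin(3 * t)),           # square wave
    signal.sawtooth(2 * np.pi * t),   # sawtooth
])

# Square mixing matrix A (m = n = 3); each observation is x = A s.
A = np.array([[1.0, 1.0, 1.0],
              [0.5, 2.0, 1.0],
              [1.5, 1.0, 2.0]])
X = S @ A.T   # rows are the realizations x_1, ..., x_N

# Blind estimation of the sources and mixing matrix from X alone,
# by maximizing non-Gaussianity (FastICA).
ica = FastICA(n_components=3, random_state=0)
S_est = ica.fit_transform(X)   # estimated sources (up to permutation and scaling)
A_est = ica.mixing_            # estimated mixing matrix

# With the true A known, the unmixing matrix W = A^{-1} recovers S exactly.
W = np.linalg.inv(A)
S_exact = X @ W.T
print(np.allclose(S_exact, S))   # True
</syntaxhighlight>

Because ICA is blind to the ordering and scale of the sources, <code>S_est</code> matches <code>S</code> only up to permutation and sign/scale; the exact reconstruction in the last lines is possible only because the true <math>\boldsymbol{A}</math> is known there.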