=== Based on maximum likelihood estimation ===
'''[[Maximum likelihood]] estimation (MLE)''' is a standard statistical tool for finding the parameter values (e.g., the unmixing matrix <math>\mathbf{W}</math>) that provide the best fit of some data (e.g., the extracted signals <math>y</math>) to a given model (e.g., the assumed joint probability density function (pdf) <math>p_s</math> of the source signals).<ref name="ReferenceA"/>

The '''ML''' "model" includes a specification of a pdf, which in this case is the pdf <math>p_s</math> of the unknown source signals <math>s</math>. Using '''ML ICA''', the objective is to find an unmixing matrix that yields extracted signals <math>y = \mathbf{W}x</math> with a joint pdf as similar as possible to the joint pdf <math>p_s</math> of the unknown source signals <math>s</math>.

'''MLE''' is thus based on the assumption that if the model pdf <math>p_s</math> and the model parameters <math>\mathbf{A}</math> are correct, then a high probability should be obtained for the data <math>x</math> that were actually observed. Conversely, if <math>\mathbf{A}</math> is far from the correct parameter values, then a low probability of the observed data would be expected.

Using '''MLE''', we call the probability of the observed data for a given set of model parameter values (e.g., a pdf <math>p_s</math> and a matrix <math>\mathbf{A}</math>) the ''likelihood'' of the model parameter values given the observed data. We define a ''likelihood'' function <math>\mathbf{L(W)}</math> of <math>\mathbf{W}</math>:

:<math>\mathbf{L(W)} = p_s (\mathbf{W}x)|\det \mathbf{W}|.</math>

This equals the probability density at <math>x</math>, since <math>s = \mathbf{W}x</math>.

Thus, if we wish to find a <math>\mathbf{W}</math> that is most likely to have generated the observed mixtures <math>x</math> from the unknown source signals <math>s</math> with pdf <math>p_s</math>, then we need only find that <math>\mathbf{W}</math> which maximizes the ''likelihood'' <math>\mathbf{L(W)}</math>. The unmixing matrix that maximizes this equation is known as the '''MLE''' of the optimal unmixing matrix.

It is common practice to use the log ''likelihood'', because this is easier to evaluate. As the logarithm is a monotonic function, the <math>\mathbf{W}</math> that maximizes the function <math>\mathbf{L(W)}</math> also maximizes its logarithm <math>\ln \mathbf{L(W)}</math>. This allows us to take the logarithm of the equation above, which yields the log ''likelihood'' function

:<math>\ln \mathbf{L(W)} =\sum_{i}\sum_{t} \ln p_s(w^T_ix_t) + N\ln|\det \mathbf{W}|.</math>

If we substitute a commonly used high-[[kurtosis]] model pdf for the source signals <math>p_s = (1-\tanh(s)^2)</math>, then we have (after dividing by <math>N</math>, which does not change the maximizing <math>\mathbf{W}</math>)

:<math>\ln \mathbf{L(W)} ={1 \over N}\sum_{i}^{M} \sum_{t}^{N}\ln(1-\tanh(w^T_i x_t )^2) + \ln |\det \mathbf{W}|.</math>

The matrix <math>\mathbf{W}</math> that maximizes this function is the '''[[maximum likelihood]] estimate''' of the optimal unmixing matrix.
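As an illustration (not part of the cited reference), the sketch below maximizes this log-likelihood by plain gradient ascent in NumPy. Collecting the <math>N</math> observations as the columns of a matrix <math>X</math>, the gradient of the scaled log-likelihood is <math>-\tfrac{2}{N}\tanh(\mathbf{W}X)X^T + (\mathbf{W}^T)^{-1}</math>, which follows from <math>\tfrac{d}{dy}\ln(1-\tanh(y)^2) = -2\tanh(y)</math> and <math>\partial \ln|\det \mathbf{W}|/\partial \mathbf{W} = (\mathbf{W}^T)^{-1}</math>. The function name <code>ml_ica</code>, the step size, the iteration count, and the toy Laplacian sources are arbitrary choices for the example, not a standard implementation.

<syntaxhighlight lang="python">
import numpy as np

def ml_ica(X, n_iter=5000, lr=0.02, seed=0):
    """Gradient ascent on the average log-likelihood
        ln L(W) = (1/N) * sum_i sum_t ln(1 - tanh(w_i^T x_t)^2) + ln|det W|
    for the high-kurtosis source model p_s(s) = 1 - tanh(s)^2.

    X : array of shape (M, N) -- M observed mixtures at N time points.
    Returns W of shape (M, M) such that Y = W @ X estimates the sources.
    """
    M, N = X.shape
    rng = np.random.default_rng(seed)
    W = np.eye(M) + 0.1 * rng.standard_normal((M, M))  # start near the identity
    for _ in range(n_iter):
        Y = W @ X                                      # current source estimates y = Wx
        # gradient of the tanh term: -(2/N) * tanh(Y) @ X^T
        grad = -(2.0 / N) * np.tanh(Y) @ X.T
        # gradient of ln|det W|: (W^{-1})^T
        grad += np.linalg.inv(W).T
        W += lr * grad                                 # ascend the log-likelihood
    return W

# Example: recover two super-Gaussian (Laplacian) sources from their mixtures.
rng = np.random.default_rng(1)
S = rng.laplace(size=(2, 5000))            # unknown source signals s
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])                 # unknown mixing matrix
X = A @ S                                  # observed mixtures x = A s
W = ml_ica(X)
Y = W @ X                                  # recovered sources (up to permutation and scaling)
</syntaxhighlight>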