=== Probability of the latent variables ===

A number of related tasks ask about the probability of one or more of the latent variables, given the model's parameters and a sequence of observations <math>y(1),\dots,y(t)</math>.

==== Filtering ====

The task is to compute, given the model's parameters and a sequence of observations, the distribution over hidden states of the last latent variable at the end of the sequence, i.e. to compute <math>P(x(t) \mid y(1),\dots,y(t))</math>. This task is used when the sequence of latent variables is thought of as the underlying states that a process moves through at a sequence of points in time, with corresponding observations at each point. It is then natural to ask about the state of the process at the end. This problem can be handled efficiently using the [[forward algorithm]]. An example is when the algorithm is applied to a hidden Markov network to determine <math>\mathrm{P}\big( h_t \mid v_{1:t} \big)</math>.

==== Smoothing ====

This is similar to filtering but asks about the distribution of a latent variable somewhere in the middle of a sequence, i.e. to compute <math>P(x(k) \mid y(1),\dots,y(t))</math> for some <math>k < t</math>. From the perspective described above, this can be thought of as the probability distribution over hidden states for a point in time ''k'' in the past, relative to time ''t''. The [[forward-backward algorithm]] is an efficient method for computing the smoothed values for all hidden state variables.

==== Most likely explanation ====

This task, unlike the previous two, asks about the [[joint probability]] of the ''entire'' sequence of hidden states that generated a particular sequence of observations (see illustration on the right). It is generally applicable when HMMs are applied to different sorts of problems from those for which filtering and smoothing are used. An example is [[part-of-speech tagging]], where the hidden states represent the underlying [[part of speech|parts of speech]] corresponding to an observed sequence of words. In this case, what is of interest is the entire sequence of parts of speech, rather than simply the part of speech for a single word, as filtering or smoothing would compute. This task requires finding a maximum over all possible state sequences, and can be solved efficiently by the [[Viterbi algorithm]].
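To make the filtering and smoothing tasks above concrete, the following is a minimal illustrative sketch for a discrete HMM. The transition matrix <code>A</code>, emission matrix <code>B</code>, initial distribution <code>pi</code> and observation sequence <code>obs</code> are toy values assumed for illustration, not taken from the text above. Filtering is the normalised forward pass; smoothing combines the forward and backward passes of the forward-backward algorithm.

<syntaxhighlight lang="python">
# Illustrative sketch of filtering and smoothing in a discrete HMM.
# A, B, pi and obs are made-up toy values, not from the article.
import numpy as np

A  = np.array([[0.7, 0.3],          # A[i, j] = P(x(t+1)=j | x(t)=i)
               [0.4, 0.6]])
B  = np.array([[0.9, 0.1],          # B[i, k] = P(y=k | x=i)
               [0.2, 0.8]])
pi = np.array([0.5, 0.5])           # initial distribution P(x(1))
obs = [0, 0, 1, 0, 1]               # observed symbols y(1), ..., y(t)

def forward(A, B, pi, obs):
    """Normalised forward pass: alpha[t, i] = P(x(t)=i | y(1..t))."""
    alpha = np.zeros((len(obs), len(pi)))
    alpha[0] = pi * B[:, obs[0]]
    alpha[0] /= alpha[0].sum()
    for t in range(1, len(obs)):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        alpha[t] /= alpha[t].sum()
    return alpha

def backward(A, B, obs):
    """Backward pass: beta[t, i] proportional to P(y(t+1..T) | x(t)=i)."""
    beta = np.ones((len(obs), A.shape[0]))
    for t in range(len(obs) - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
        beta[t] /= beta[t].sum()        # rescale only to avoid underflow
    return beta

alpha = forward(A, B, pi, obs)
beta = backward(A, B, obs)

# Filtering: distribution over the last hidden state, P(x(t) | y(1..t)).
print("filtered:", alpha[-1])

# Smoothing: P(x(k) | y(1..t)) for every k, via the forward-backward algorithm.
gamma = alpha * beta
gamma /= gamma.sum(axis=1, keepdims=True)
print("smoothed:", gamma)
</syntaxhighlight>

Because the forward and backward quantities are rescaled at every step, the sketch stays numerically stable for longer sequences while leaving the normalised posteriors unchanged.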
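A corresponding sketch of the most likely explanation task uses the Viterbi algorithm under the same assumed toy parametrisation; log probabilities are used so that long sequences do not underflow.

<syntaxhighlight lang="python">
# Illustrative sketch of the Viterbi algorithm for the most likely
# hidden-state sequence; A, B, pi and obs are the same kind of toy values.
import numpy as np

A  = np.array([[0.7, 0.3],
               [0.4, 0.6]])
B  = np.array([[0.9, 0.1],
               [0.2, 0.8]])
pi = np.array([0.5, 0.5])
obs = [0, 0, 1, 0, 1]

def viterbi(A, B, pi, obs):
    """Return the state sequence maximising P(x(1..T), y(1..T))."""
    T, N = len(obs), len(pi)
    logA, logB, logpi = np.log(A), np.log(B), np.log(pi)
    delta = np.zeros((T, N))            # best log-probability ending in state j at time t
    psi = np.zeros((T, N), dtype=int)   # back-pointers to the best predecessor
    delta[0] = logpi + logB[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + logA   # scores[i, j]: come from i, move to j
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + logB[:, obs[t]]
    # Backtrack from the best final state to recover the full sequence.
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

print("most likely state sequence:", viterbi(A, B, pi, obs))
</syntaxhighlight>

Unlike the smoothed marginals, which give the most probable state at each time separately, this returns the single jointly most probable sequence of hidden states.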