Forward algorithm
==Introduction==

The forward and backward algorithms should be placed within the context of probability, as they appear to simply be names given to a set of standard mathematical procedures within a few fields. For example, neither "forward algorithm" nor "Viterbi" appears in the Cambridge encyclopedia of mathematics. The main observation to take away from these algorithms is how to organize Bayesian updates and inference to be computationally efficient in the context of directed graphs of variables (see [[Belief_propagation|sum-product networks]]).

For an HMM such as this one:

[[Image:hmm temporal bayesian net.svg|600px|center|Temporal evolution of a hidden Markov model]]

the forward algorithm computes the ''belief state'': the probability of the hidden state given the observations so far, written as <math>p(x_t | y_{1:t} )</math>. Here <math>x(t)</math> is the hidden state, abbreviated as <math>x_t</math>, and <math>y_{1:t}</math> are the observations <math>1</math> to <math>t</math>.

The backward algorithm complements the forward algorithm by taking into account the future history, if one wants to improve the estimate for past times. This is referred to as ''smoothing'', and the [[forward/backward algorithm]] computes <math>p(x_t | y_{1:T} )</math> for <math>1 < t < T</math>. Thus, the full forward/backward algorithm takes into account all evidence.

Note that a belief state can be calculated at each time step, but doing so does not, in a strict sense, produce the most likely state ''sequence'', only the most likely state at each time step, given the previous history. To obtain the most likely sequence, the [[Viterbi algorithm]] is required: it computes the most likely state sequence given the history of observations, that is, the state sequence that maximizes <math>p(x_{0:t}|y_{0:t})</math>.
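The three computations above can be sketched in plain Python for a toy two-state HMM. The model parameters below (<code>PI</code>, <code>A</code>, <code>B</code>, and the Rainy/Sunny naming) are invented for illustration and do not come from the article; the sketch shows filtering <math>p(x_t | y_{1:t})</math>, smoothing <math>p(x_t | y_{1:T})</math>, and the Viterbi path.

```python
# Toy HMM, parameters chosen for illustration only.
# States: 0 = Rainy, 1 = Sunny; observations: 0 = walk, 1 = shop, 2 = clean.
PI = [0.6, 0.4]                          # initial distribution p(x_1)
A  = [[0.7, 0.3], [0.4, 0.6]]            # A[i][j] = p(x_{t+1}=j | x_t=i)
B  = [[0.1, 0.4, 0.5], [0.6, 0.3, 0.1]]  # B[i][k] = p(y_t=k | x_t=i)

def normalize(v):
    s = sum(v)
    return [x / s for x in v]

def forward(obs):
    """Filtering: normalized alpha_t = p(x_t | y_{1:t}) for each t."""
    n = len(PI)
    alpha = normalize([PI[i] * B[i][obs[0]] for i in range(n)])
    beliefs = [alpha]
    for y in obs[1:]:
        alpha = normalize([B[j][y] * sum(alpha[i] * A[i][j] for i in range(n))
                           for j in range(n)])
        beliefs.append(alpha)
    return beliefs

def smooth(obs):
    """Forward/backward smoothing: p(x_t | y_{1:T}) for each t."""
    n = len(PI)
    beliefs = forward(obs)
    beta = [1.0] * n
    smoothed = [beliefs[-1]]              # at t = T, smoothing equals filtering
    for t in range(len(obs) - 2, -1, -1):
        y = obs[t + 1]
        beta = [sum(A[i][j] * B[j][y] * beta[j] for j in range(n))
                for i in range(n)]
        smoothed.append(normalize([beliefs[t][i] * beta[i] for i in range(n)]))
    return smoothed[::-1]

def viterbi(obs):
    """Most likely state sequence arg max p(x_{1:T} | y_{1:T})."""
    n = len(PI)
    delta = [PI[i] * B[i][obs[0]] for i in range(n)]
    back = []
    for y in obs[1:]:
        new_delta, ptr = [], []
        for j in range(n):
            best_i = max(range(n), key=lambda i: delta[i] * A[i][j])
            ptr.append(best_i)
            new_delta.append(delta[best_i] * A[best_i][j] * B[j][y])
        delta, back = new_delta, back + [ptr]
    state = max(range(n), key=lambda i: delta[i])   # best final state
    path = [state]
    for ptr in reversed(back):                      # trace pointers backwards
        state = ptr[state]
        path.append(state)
    return path[::-1]

obs = [0, 1, 2]            # walk, shop, clean
print(forward(obs)[-1])    # filtered belief p(x_3 | y_{1:3})
print(smooth(obs)[0])      # smoothed belief p(x_1 | y_{1:3})
print(viterbi(obs))        # -> [1, 0, 0]  (Sunny, Rainy, Rainy)
```

Note that Viterbi propagates a maximum where the forward pass propagates a sum; per-step argmaxes of the filtered beliefs need not form a coherent sequence, which is why the separate traceback is required.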