== Extensions ==
A generalization of the Viterbi algorithm, termed the ''max-sum algorithm'' (or ''max-product algorithm''), can be used to find the most likely assignment of all or some subset of [[latent variable]]s in a large number of [[graphical model]]s, e.g. [[Bayesian network]]s, [[Markov random field]]s and [[conditional random field]]s. The latent variables need, in general, to be connected in a way somewhat similar to a [[hidden Markov model]] (HMM), with a limited number of connections between variables and some type of linear structure among the variables. The general algorithm involves ''message passing'' and is substantially similar to the [[belief propagation]] algorithm (which is the generalization of the [[forward-backward algorithm]]).

With an algorithm called [[iterative Viterbi decoding]], one can find the subsequence of an observation that matches best (on average) to a given hidden Markov model. This algorithm was proposed by Qi Wang et al. to deal with [[turbo code]]s.<ref>{{cite journal |author1=Qi Wang |author2=Lei Wei |author3=Rodney A. Kennedy |year=2002 |title=Iterative Viterbi Decoding, Trellis Shaping, and Multilevel Structure for High-Rate Parity-Concatenated TCM |journal=IEEE Transactions on Communications |volume=50 |pages=48–55 |doi=10.1109/26.975743}}</ref> Iterative Viterbi decoding works by iteratively invoking a modified Viterbi algorithm, reestimating the score for a filler until convergence.

An alternative algorithm, the [[Lazy Viterbi algorithm]], has been proposed.<ref>{{cite conference |date=December 2002 |title=A fast maximum-likelihood decoder for convolutional codes |url=http://people.csail.mit.edu/jonfeld/pubs/lazyviterbi.pdf |conference=Vehicular Technology Conference |pages=371–375 |doi=10.1109/VETECF.2002.1040367 |conference-url=http://www.ieeevtc.org/}}</ref> For many applications of practical interest, under reasonable noise conditions, the lazy decoder (using the Lazy Viterbi algorithm) is much faster than the original [[Viterbi decoder]] (using the Viterbi algorithm). While the original Viterbi algorithm calculates every node in the [[Trellis (graph)|trellis]] of possible outcomes, the Lazy Viterbi algorithm maintains a prioritized list of nodes to evaluate in order, and the number of calculations required is typically fewer (and never more) than for the ordinary Viterbi algorithm producing the same result. However, its reliance on a dynamically ordered priority queue makes it less straightforward to parallelize in hardware. A best-first decoding sketch in this spirit is shown below.
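The priority-driven evaluation behind the lazy decoder can be illustrated with a short best-first search over the trellis. This is a minimal sketch under stated assumptions, not the published decoder: the names <code>lazy_viterbi</code>, <code>trans_p</code> and <code>emit_p</code> are illustrative, states are assumed to be comparable keys (e.g. strings), and probabilities are converted to non-negative costs (negative log-probabilities) so that nodes can be expanded from a priority queue and unlikely regions of the trellis are never visited.

<syntaxhighlight lang="python">
import heapq
from math import log

def lazy_viterbi(obs, states, start_p, trans_p, emit_p):
    """Best-first ("lazy") decoding of a most likely state sequence.

    Illustrative sketch: costs are negative log-probabilities, so the
    cheapest complete path through the trellis corresponds to a most
    probable state sequence.
    """
    T = len(obs)
    # Frontier of (cost, time, state, path); seeded with the states at t = 0.
    frontier = [(-log(start_p[s] * emit_p[s][obs[0]]), 0, s, (s,))
                for s in states
                if start_p[s] * emit_p[s][obs[0]] > 0]
    heapq.heapify(frontier)
    expanded = set()  # (time, state) pairs whose best cost is already known

    while frontier:
        cost, t, s, path = heapq.heappop(frontier)
        if (t, s) in expanded:      # a cheaper path reached this node earlier
            continue
        expanded.add((t, s))
        if t == T - 1:              # first complete path popped is optimal
            return list(path)
        for s2 in states:           # lazily expand successors of (t, s)
            p = trans_p[s][s2] * emit_p[s2][obs[t + 1]]
            if p > 0:
                heapq.heappush(frontier, (cost - log(p), t + 1, s2, path + (s2,)))
    return None                     # no state sequence explains the observations
</syntaxhighlight>

Because every edge cost <math>-\log p</math> is non-negative, this amounts to running Dijkstra's algorithm on the trellis, and the first complete path removed from the queue is an optimal one; under low-noise conditions only a small fraction of trellis nodes are ever expanded, which is the source of the speed-up described above.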