== Decoding convolutional codes ==
{{See also|Viterbi algorithm}}
[[File:Convolutional codes PSK QAM LLR.svg|thumb|right|300px|Bit error ratio curves for convolutional codes with different digital modulation schemes ([[Phase-shift keying|QPSK, 8-PSK]], [[Quadrature amplitude modulation|16-QAM, 64-QAM]]) and [[Likelihood function#Log-likelihood|LLR]] algorithms<ref>[https://www.mathworks.com/help/comm/examples/llr-vs-hard-decision-demodulation.html LLR vs. Hard Decision Demodulation (MathWorks)]</ref><ref>[https://www.mathworks.com/help/comm/ug/estimate-ber-for-hard-and-soft-decision-viterbi-decoding.html Estimate BER for Hard and Soft Decision Viterbi Decoding (MathWorks)]</ref> (exact<ref>[https://www.mathworks.com/help/comm/ug/digital-modulation.html#brc6yjx Digital modulation: Exact LLR Algorithm (MathWorks)]</ref> and approximate<ref>[https://www.mathworks.com/help/comm/ug/digital-modulation.html#brc6ymu Digital modulation: Approximate LLR Algorithm (MathWorks)]</ref>) over an additive white Gaussian noise channel.]]

Several [[algorithm]]s exist for decoding convolutional codes. For relatively small values of ''k'', the [[Viterbi algorithm]] is universally used, as it provides [[Maximum likelihood estimation|maximum likelihood]] performance and is highly parallelizable. Viterbi decoders are thus easy to implement in [[Very-large-scale integration|VLSI]] hardware and in software on CPUs with [[Single instruction, multiple data|SIMD]] instruction sets.

Longer constraint length codes are more practically decoded with any of several [[sequential decoding]] algorithms, of which the [[Robert Fano|Fano]] algorithm is the best known. Unlike Viterbi decoding, sequential decoding is not maximum likelihood, but its complexity increases only slightly with constraint length, allowing the use of strong, long-constraint-length codes. Such codes were used in the [[Pioneer program]] missions of the early 1970s to Jupiter and Saturn, but gave way to shorter, Viterbi-decoded codes, usually concatenated with large [[Reed–Solomon error correction]] codes that steepen the overall bit-error-rate curve and produce extremely low residual undetected error rates.

Both the Viterbi and sequential decoding algorithms return hard decisions: the bits that form the most likely codeword. An approximate confidence measure can be added to each bit by use of the [[Viterbi algorithm#Soft output Viterbi algorithm|Soft output Viterbi algorithm]]. [[Maximum a posteriori estimation|Maximum a posteriori]] (MAP) soft decisions for each bit can be obtained by use of the [[BCJR algorithm]].
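As an illustration of hard-decision Viterbi decoding, the following minimal sketch encodes and decodes the common rate-1/2, constraint-length-3 code with generator polynomials (7, 5) in octal. The chosen code, parameter values, and function names are illustrative assumptions rather than part of the text above.

<syntaxhighlight lang="python">
# Minimal sketch of hard-decision Viterbi decoding for a rate-1/2,
# constraint-length-3 convolutional code with generators (7, 5) octal.
# The code parameters and helper names are illustrative assumptions.

G = [0b111, 0b101]        # generator polynomials g0 = 7, g1 = 5 (octal)
K = 3                     # constraint length
N_STATES = 1 << (K - 1)   # number of trellis states (4)

def parity(x):
    return bin(x).count("1") & 1

def encode(bits):
    """Shift-register encoder: two coded bits per information bit."""
    state, out = 0, []
    for b in bits:
        reg = (b << (K - 1)) | state            # newest bit enters on the left
        out.extend(parity(reg & g) for g in G)
        state = reg >> 1                        # next state = newest K-1 bits
    return out

def viterbi_decode(received):
    """Hard-decision Viterbi decoding with Hamming-distance branch metrics."""
    INF = float("inf")
    metrics = [0.0] + [INF] * (N_STATES - 1)    # encoder starts in state 0
    paths = [[] for _ in range(N_STATES)]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metrics = [INF] * N_STATES
        new_paths = [None] * N_STATES
        for state in range(N_STATES):
            if metrics[state] == INF:
                continue
            for b in (0, 1):                    # hypothesised information bit
                reg = (b << (K - 1)) | state
                expected = [parity(reg & g) for g in G]
                nxt = reg >> 1
                m = metrics[state] + sum(x != y for x, y in zip(r, expected))
                if m < new_metrics[nxt]:        # keep the survivor path
                    new_metrics[nxt] = m
                    new_paths[nxt] = paths[state] + [b]
        metrics, paths = new_metrics, new_paths
    best = min(range(N_STATES), key=lambda s: metrics[s])
    return paths[best]

msg = [1, 0, 1, 1, 0, 0]                        # last K-1 zeros flush the encoder
coded = encode(msg)
coded[3] ^= 1                                   # single channel bit error
print(viterbi_decode(coded))                    # error corrected: [1, 0, 1, 1, 0, 0]
</syntaxhighlight>

Because the branch metric here is a Hamming distance, this is a hard-decision decoder; a soft-decision variant would replace that metric with a Euclidean or log-likelihood metric computed from the demodulator output.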