== BCH view decoders ==

The decoders described in this section use the BCH view of a codeword as a sequence of coefficients. They use a fixed generator polynomial known to both encoder and decoder.

=== Peterson–Gorenstein–Zierler decoder ===
{{main|BCH code#Peterson–Gorenstein–Zierler algorithm|l1=Peterson–Gorenstein–Zierler algorithm}}

Daniel Gorenstein and Neal Zierler developed a decoder that was described in an MIT Lincoln Laboratory report by Zierler in January 1960 and later in a paper in June 1961.<ref>{{cite journal |last1=Peterson |first1=W. |title=Encoding and error-correction procedures for the Bose-Chaudhuri codes |journal=IEEE Transactions on Information Theory |date=September 1960 |volume=6 |issue=4 |pages=459–470 |doi=10.1109/TIT.1960.1057586}}</ref><ref>{{cite journal |last1=Gorenstein |first1=Daniel |last2=Zierler |first2=Neal |title=A Class of Error-Correcting Codes in ''p''<sup>''m''</sup> Symbols |journal=Journal of the Society for Industrial and Applied Mathematics |date=June 1961 |volume=9 |issue=2 |pages=207–214 |doi=10.1137/0109020}}</ref> The Gorenstein–Zierler decoder and the related work on BCH codes are described in the book ''Error Correcting Codes'' by [[W. Wesley Peterson]] (1961).<ref name="Peterson61"/>{{page needed|date=March 2025}}

====Formulation====
The transmitted message, <math>(c_0, \ldots, c_i, \ldots,c_{n-1})</math>, is viewed as the coefficients of a polynomial
<math display="block"> s(x) = \sum_{i=0}^{n-1} c_i x^i. </math>

As a result of the Reed–Solomon encoding procedure, ''s''(''x'') is divisible by the generator polynomial
<math display="block"> g(x) = \prod_{j=1}^{n-k} (x - \alpha^j), </math>
where ''α'' is a primitive element.

Since ''s''(''x'') is a multiple of the generator ''g''(''x''), it follows that it "inherits" all of its roots:
<math display="block"> s(x) \bmod (x - \alpha^j) = g(x) \bmod (x - \alpha^j) = 0. </math>
Therefore,
<math display="block"> s(\alpha^j) = 0,\ j = 1, 2, \ldots, n - k. </math>

The transmitted polynomial is corrupted in transit by an error polynomial
<math display="block"> e(x) = \sum_{i=0}^{n-1} e_i x^i </math>
to produce the received polynomial
<math display="block"> r(x) = s(x) + e(x). </math>

Coefficient ''e<sub>i</sub>'' will be zero if there is no error at that power of ''x'', and nonzero if there is an error. If there are ''ν'' errors at distinct powers ''i<sub>k</sub>'' of ''x'', then
<math display="block"> e(x) = \sum_{k=1}^\nu e_{i_k} x^{i_k}. </math>

The goal of the decoder is to find the number of errors (''ν''), the positions of the errors (''i<sub>k</sub>''), and the error values at those positions (''e<sub>i<sub>k</sub></sub>''). From those, ''e''(''x'') can be calculated and subtracted from ''r''(''x'') to get the originally sent message ''s''(''x'').

====Syndrome decoding====
The decoder starts by evaluating the polynomial as received at points <math>\alpha^1 \dots \alpha^{n-k}</math>. We call the results of that evaluation the "syndromes" ''S''<sub>''j''</sub>. They are defined as
<math display="block"> \begin{align} S_j &= r(\alpha^j) = s(\alpha^j) + e(\alpha^j) = 0 + e(\alpha^j) \\ &= e(\alpha^j) \\ &= \sum_{k=1}^\nu e_{i_k} {(\alpha^j)}^{i_k}, \quad j = 1, 2, \ldots, n - k. \end{align} </math>

Note that <math>s(\alpha^j) = 0</math> because <math>s(x)</math> has roots at <math>\alpha^j</math>, as shown in the previous section.

The advantage of looking at the syndromes is that the message polynomial drops out. In other words, the syndromes only relate to the error and are unaffected by the actual contents of the message being transmitted. If the syndromes are all zero, the algorithm stops here and reports that the message was not corrupted in transit.
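Because the RS(7,3) worked example later in this section operates in the prime field {{math|''GF''(929)}}, syndrome computation there reduces to ordinary integer arithmetic mod 929. The following is a minimal MATLAB sketch of this step under that assumption; the received word <code>r</code> and the expected syndrome values are taken from that worked example.

<syntaxhighlight lang="matlab">
% Syndrome computation over the prime field GF(929), where field arithmetic
% is plain integer arithmetic mod p. Values are from the worked example below.
p = 929; alpha = 3;            % field modulus and primitive element
r = [474 487 191 456 123 2 3]; % received coefficients r_0 .. r_6 (low order first)
n = numel(r); k = 3;
S = zeros(1, n - k);
for j = 1:n-k
    x = mod(alpha^j, p);       % evaluation point alpha^j (exponents are small here)
    acc = 0;
    for i = n:-1:1             % Horner evaluation of r(x) at x, reducing mod p
        acc = mod(acc * x + r(i), p);
    end
    S(j) = acc;
end
S                              % expected: [732 637 762 925]
</syntaxhighlight>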
====Error locators and error values====
<!-- There's confusion between index k and the k in (n,k). Literature also has confusion? Or does it use kappa? -->
For convenience, define the '''error locators''' ''X<sub>k</sub>'' and '''error values''' ''Y<sub>k</sub>'' as
<math display="block"> X_k = \alpha^{i_k}, \quad Y_k = e_{i_k}. </math>

Then the syndromes can be written in terms of these error locators and error values as
<math display="block"> S_j = \sum_{k=1}^\nu Y_k X_k^j. </math>
This definition of the syndrome values is equivalent to the previous one, since <math>{(\alpha^j)}^{i_k} = \alpha^{j \cdot i_k} = {(\alpha^{i_k})}^j = X_k^j</math>.

The syndromes give a system of {{math|''n'' − ''k'' ≥ 2''ν''}} equations in 2''ν'' unknowns, but that system of equations is nonlinear in the ''X<sub>k</sub>'' and does not have an obvious solution. However, if the ''X<sub>k</sub>'' were known (see below), then the syndrome equations would provide a linear system of equations
<!-- Vandermonde comment. Matrix equation -->
<math display="block">\begin{bmatrix} X_1^1 & X_2^1 & \cdots & X_\nu^1 \\ X_1^2 & X_2^2 & \cdots & X_\nu^2 \\ \vdots & \vdots & \ddots & \vdots \\ X_1^{n-k} & X_2^{n-k} & \cdots & X_\nu^{n-k} \\ \end{bmatrix} \begin{bmatrix} Y_1 \\ Y_2 \\ \vdots \\ Y_\nu \end{bmatrix} = \begin{bmatrix} S_1 \\ S_2 \\ \vdots \\ S_{n-k} \end{bmatrix}, </math>
which can easily be solved for the ''Y<sub>k</sub>'' error values. Consequently, the problem is finding the ''X<sub>k</sub>'': then the leftmost matrix would be known, and both sides of the equation could be multiplied by its inverse, yielding the ''Y<sub>k</sub>''.

In the variant of this algorithm where the locations of the errors are already known (when it is being used as an [[erasure code]]), this is the end. The error locations (''X<sub>k</sub>'') are already known by some other method (for example, in an FM transmission, the sections where the bitstream was unclear or overcome with interference are probabilistically determinable from frequency analysis). In this scenario, up to <math>n - k</math> errors can be corrected.

The rest of the algorithm serves to locate the errors and will require syndrome values up to <math>2\nu</math>, instead of just the <math>\nu</math> used thus far. This is why twice as many error-correcting symbols need to be added as can be corrected without knowing their locations.
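As a sketch of this erasure case: in {{math|''GF''(929)}} the linear system above can be solved with ordinary Gauss–Jordan elimination mod ''p''. The locators <code>X</code> and syndromes <code>S</code> below are taken from the worked example later in this section (errors at positions 3 and 4); <code>mod_pow</code> is a local helper written for this sketch, not a library function, and no pivoting is done (the pivots happen to be nonzero for this data).

<syntaxhighlight lang="matlab">
% Solving the syndrome equations for the error values Y_k when the locators
% X_k are already known (the erasure case), over GF(929).
p = 929;
X = [27 81];      % known locators X_k = alpha^{i_k} (example: errors at i = 3, 4)
S = [732 637];    % the first nu syndromes S_1, S_2
nu = numel(X);
A = zeros(nu);
for j = 1:nu
    for c = 1:nu
        A(j, c) = mod_pow(X(c), j, p);   % A(j,c) = X_c^j
    end
end
% Gauss-Jordan elimination on [A | S'] mod p
M = mod([A, S(:)], p);
for col = 1:nu
    M(col, :) = mod(M(col, :) * mod_pow(M(col, col), p - 2, p), p); % scale pivot row
    for row = [1:col-1, col+1:nu]
        M(row, :) = mod(M(row, :) - M(row, col) * M(col, :), p);   % eliminate column
    end
end
Y = M(:, end)'    % expected: [74 122]

function y = mod_pow(b, e, p)   % b^e mod p by square-and-multiply (p prime)
    y = 1; b = mod(b, p);
    while e > 0
        if mod(e, 2), y = mod(y * b, p); end
        b = mod(b * b, p); e = floor(e / 2);
    end
end
</syntaxhighlight>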
====Error locator polynomial====
There is a linear recurrence relation that gives rise to a system of linear equations. Solving those equations identifies the error locations ''X<sub>k</sub>''.
<!--This is credited to Daniel Gorenstein and Neal Zierler in the book Error Correcting Codes by Peterson - 1961 -->
<!--Petersons decoder is a simplified implementation for bit oriented BCH codes where the error values only have magnitude one. -->
Define the '''error locator polynomial''' {{math|Λ(''x'')}} as
<math display="block"> \Lambda(x) = \prod_{k=1}^\nu (1 - x X_k ) = 1 + \Lambda_1 x^1 + \Lambda_2 x^2 + \cdots + \Lambda_\nu x^\nu. </math>

The zeros of {{math|Λ(''x'')}} are the reciprocals <math>X_k^{-1}</math>. This follows from the product construction above, since if <math>x = X_k^{-1}</math>, then one of the factors is zero, <math>(1 - X_k^{-1} \cdot X_k) = 1 - 1 = 0</math>, making the whole polynomial evaluate to zero:
<math display="block"> \Lambda(X_k^{-1}) = 0. </math>

Let <math>j</math> be any integer such that <math>1 \leq j \leq \nu</math>. Multiply both sides by <math>Y_k X_k^{j+\nu}</math>, and it will still be zero:
<math display="block">\begin{align} & Y_k X_k^{j+\nu} \Lambda(X_k^{-1}) = 0, \\ & Y_k X_k^{j+\nu} (1 + \Lambda_1 X_k^{-1} + \Lambda_2 X_k^{-2} + \cdots + \Lambda_\nu X_k^{-\nu}) = 0, \\ & Y_k X_k^{j+\nu} + \Lambda_1 Y_k X_k^{j+\nu} X_k^{-1} + \Lambda_2 Y_k X_k^{j+\nu} X_k^{-2} + \cdots + \Lambda_\nu Y_k X_k^{j+\nu} X_k^{-\nu} = 0, \\ & Y_k X_k^{j+\nu} + \Lambda_1 Y_k X_k^{j+\nu-1} + \Lambda_2 Y_k X_k^{j+\nu-2} + \cdots + \Lambda_\nu Y_k X_k^j = 0. \end{align} </math>

Sum for ''k'' = 1 to ''ν'', and it will still be zero:
<math display="block"> \sum_{k=1}^\nu (Y_k X_k^{j+\nu} + \Lambda_1 Y_k X_k^{j+\nu-1} + \Lambda_2 Y_k X_k^{j+\nu-2} + \cdots + \Lambda_{\nu} Y_k X_k^j) = 0. </math>

Collect each term into its own sum:
<math display="block"> \left(\sum_{k=1}^\nu Y_k X_k^{j+\nu}\right) + \left(\sum_{k=1}^\nu \Lambda_1 Y_k X_k^{j+\nu-1}\right) + \left(\sum_{k=1}^\nu \Lambda_2 Y_k X_k^{j+\nu -2}\right) + \cdots + \left(\sum_{k=1}^\nu \Lambda_\nu Y_k X_k^j\right) = 0. </math>

Extract the constant values of <math>\Lambda</math> that are unaffected by the summation:
<math display="block"> \left(\sum_{k=1}^\nu Y_k X_k^{j+\nu}\right) + \Lambda_1 \left(\sum_{k=1}^\nu Y_k X_k^{j+\nu-1}\right) + \Lambda_2 \left(\sum_{k=1}^\nu Y_k X_k^{j+\nu-2}\right) + \cdots + \Lambda_\nu \left(\sum_{k=1}^\nu Y_k X_k^j\right) = 0. </math>

These summations are now equivalent to the syndrome values, which we know and can substitute in. This therefore reduces to
<math display="block"> S_{j+\nu} + \Lambda_1 S_{j+\nu-1} + \cdots + \Lambda_{\nu-1} S_{j+1} + \Lambda_\nu S_j = 0. </math>

Subtracting <math>S_{j+\nu}</math> from both sides yields
<math display="block"> S_j \Lambda_\nu + S_{j+1} \Lambda_{\nu-1} + \cdots + S_{j+\nu-1} \Lambda_1 = -S_{j+\nu}. </math>

Recall that ''j'' was chosen to be any integer between 1 and ''ν'' inclusive, and this equivalence is true for all such values. Therefore, we have ''ν'' linear equations, not just one. This system of linear equations can therefore be solved for the coefficients Λ<sub>''i''</sub> of the error-location polynomial:
<math display="block"> \begin{bmatrix} S_1 & S_2 & \cdots & S_\nu \\ S_2 & S_3 & \cdots & S_{\nu+1} \\ \vdots & \vdots & \ddots & \vdots \\ S_\nu & S_{\nu+1} & \cdots & S_{2\nu-1} \end{bmatrix} \begin{bmatrix} \Lambda_\nu \\ \Lambda_{\nu-1} \\ \vdots \\ \Lambda_1 \end{bmatrix} = \begin{bmatrix} -S_{\nu+1} \\ -S_{\nu+2} \\ \vdots \\ -S_{2\nu} \end{bmatrix}. </math>

The above assumes that the decoder knows the number of errors ''ν'', but that number has not been determined yet. The PGZ decoder does not determine ''ν'' directly but rather searches for it by trying successive values. The decoder first assumes the largest trial value for ''ν'' and sets up the linear system for that value. If the equations can be solved (i.e., the matrix determinant is nonzero), then that trial value is the number of errors. If the linear system cannot be solved, then the trial ''ν'' is reduced by one and the next smaller system is examined.<ref>{{cite web |last=Gill |first=John |title=EE387 Notes #7, Handout #28 |date=n.d. |access-date=April 21, 2010 |publisher=Stanford University |url=http://www.stanford.edu/class/ee387/handouts/notes7.pdf |url-status=dead |archive-url=https://web.archive.org/web/20140630172526/http://web.stanford.edu/class/ee387/handouts/notes7.pdf |archive-date=June 30, 2014}}</ref>
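A minimal MATLAB sketch of this search over {{math|''GF''(929)}}, using the syndromes of the worked example below; <code>gf_solve</code> and <code>mod_pow</code> are local helper functions written for the sketch, not library routines.

<syntaxhighlight lang="matlab">
% PGZ search for the number of errors nu over GF(929): try the largest nu
% first and accept the first syndrome matrix that is invertible mod p.
p = 929; S = [732 637 762 925];      % syndromes S_1..S_4 from the example below
t = numel(S) / 2;
for nu = t:-1:1
    A = zeros(nu); b = zeros(nu, 1);
    for j = 1:nu
        A(j, :) = S(j : j + nu - 1); % A(j,c) = S_{j+c-1}
        b(j) = mod(-S(j + nu), p);   % right-hand side -S_{j+nu}
    end
    [L, ok] = gf_solve(A, b, p);     % L = [Lambda_nu ... Lambda_1]
    if ok, break; end                % solvable: nu is the number of errors
end
nu, L                                % expected: nu = 2, L = [329 821]

function [x, ok] = gf_solve(A, b, p)
% Gauss-Jordan elimination mod p with row swaps; ok = false if A is singular.
    n = size(A, 1); M = mod([A, b(:)], p); x = []; ok = true;
    for col = 1:n
        piv = find(M(col:end, col), 1);
        if isempty(piv), ok = false; return; end
        piv = piv + col - 1;
        M([col piv], :) = M([piv col], :);
        M(col, :) = mod(M(col, :) * mod_pow(M(col, col), p - 2, p), p);
        for row = [1:col-1, col+1:n]
            M(row, :) = mod(M(row, :) - M(row, col) * M(col, :), p);
        end
    end
    x = M(:, end)';
end

function y = mod_pow(b, e, p)        % b^e mod p by square-and-multiply
    y = 1; b = mod(b, p);
    while e > 0
        if mod(e, 2), y = mod(y * b, p); end
        b = mod(b * b, p); e = floor(e / 2);
    end
end
</syntaxhighlight>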
====Find the roots of the error locator polynomial====
Use the coefficients Λ<sub>''i''</sub> found in the last step to build the error locator polynomial. The roots of the error locator polynomial can be found by exhaustive search. The error locators ''X<sub>k</sub>'' are the reciprocals of those roots. The order of coefficients of the error locator polynomial can be reversed, in which case the roots of that reversed polynomial are the error locators <math>X_k</math> themselves (not their reciprocals <math>X_k^{-1}</math>). [[Chien search]] is an efficient implementation of this step.
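As an illustration of the exhaustive search, the following MATLAB sketch finds the roots of the Λ(''x'') obtained in the worked example below by simply trying every element of {{math|''GF''(929)}}:

<syntaxhighlight lang="matlab">
% Exhaustive (Chien-style) root search for Lambda(x) = 329 x^2 + 821 x + 1
% over GF(929), trying every field element in turn.
p = 929;
Lambda = [1 821 329];       % Lambda_0, Lambda_1, Lambda_2 (low order first)
roots_found = [];
for x = 0:p-1
    v = mod(Lambda(1) + Lambda(2) * x + Lambda(3) * mod(x * x, p), p);
    if v == 0
        roots_found(end + 1) = x;
    end
end
roots_found                 % expected: [562 757], i.e. 3^-4 and 3^-3;
                            % the locators X_k are their inverses, 81 and 27
</syntaxhighlight>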
====Calculate the error values====
Once the error locators ''X<sub>k</sub>'' are known, the error values can be determined. This can be done by direct solution for ''Y<sub>k</sub>'' in the [[#Error locators and error values|error equations]] matrix given above, or by using the [[Forney algorithm]].

====Calculate the error locations====
Calculate ''i<sub>k</sub>'' by taking the log base <math>\alpha</math> of ''X<sub>k</sub>''. This is generally done using a precomputed lookup table.

====Fix the errors====
Finally, ''e''(''x'') is generated from ''i<sub>k</sub>'' and ''e<sub>i<sub>k</sub></sub>'' and then is subtracted from ''r''(''x'') to get the originally sent message ''s''(''x''), with errors corrected.

==== Example ====
Consider the RS(7,3) code defined in {{math|''GF''(929)}} with {{math|''α'' {{=}} 3}} and {{math|''t'' {{=}} 4}} (this is the code used in [[PDF417]] barcodes). The generator polynomial is
<math display="block"> g(x) = (x - 3)(x - 3^2)(x - 3^3)(x - 3^4) = x^4 + 809 x^3 + 723 x^2 + 568 x + 522. </math>

If the message polynomial is {{math|''p''(''x'') {{=}} 3 ''x''<sup>2</sup> + 2 ''x'' + 1}}, then a systematic codeword is encoded as follows:
<math display="block"> s_r(x) = p(x) \, x^t \bmod g(x) = 547 x^3 + 738 x^2 + 442 x + 455, </math>
<math display="block"> s(x) = p(x) \, x^t - s_r(x) = 3 x^6 + 2 x^5 + 1 x^4 + 382 x^3 + 191 x^2 + 487 x + 474. </math>

Errors in transmission might cause this to be received instead:
<math display="block"> r(x) = s(x) + e(x) = 3 x^6 + 2 x^5 + 123 x^4 + 456 x^3 + 191 x^2 + 487 x + 474. </math>

The syndromes are calculated by evaluating ''r'' at powers of ''α'':
<math display="block"> S_1 = r(3^1) = 3 \cdot 3^6 + 2 \cdot 3^5 + 123 \cdot 3^4 + 456 \cdot 3^3 + 191 \cdot 3^2 + 487 \cdot 3 + 474 = 732, </math>
<math display="block"> S_2 = r(3^2) = 637,\quad S_3 = r(3^3) = 762,\quad S_4 = r(3^4) = 925, </math>
yielding the system
<math display="block"> \begin{bmatrix} 732 & 637 \\ 637 & 762 \end{bmatrix} \begin{bmatrix} \Lambda_2 \\ \Lambda_1 \end{bmatrix} = \begin{bmatrix} -762 \\ -925 \end{bmatrix} = \begin{bmatrix} 167 \\ 004 \end{bmatrix}. </math>

Using [[Gaussian elimination]],
<math display="block"> \begin{bmatrix} 001 & 000 \\ 000 & 001 \end{bmatrix} \begin{bmatrix} \Lambda_2 \\ \Lambda_1 \end{bmatrix} = \begin{bmatrix} 329 \\ 821 \end{bmatrix}, </math>
so
<math display="block"> \Lambda(x) = 329 x^2 + 821 x + 001, </math>
with roots ''x''<sub>1</sub> = 757 = 3<sup>−3</sup> and ''x''<sub>2</sub> = 562 = 3<sup>−4</sup>.

The coefficients can be reversed:
<math display="block"> R(x) = 001 x^2 + 821 x + 329, </math>
to produce roots 27 = 3<sup>3</sup> and 81 = 3<sup>4</sup> with positive exponents, but this form is not typically used. The logarithm of the inverted roots corresponds to the error locations (right to left, location 0 being the last term in the codeword).

To calculate the error values, apply the [[Forney algorithm]]:
<math display="block"> \Omega(x) = S(x) \Lambda(x) \bmod x^4 = 546 x + 732, </math>
<math display="block"> \Lambda'(x) = 658 x + 821, </math>
<math display="block"> e_1 = -\Omega(x_1)/\Lambda'(x_1) = 074, </math>
<math display="block"> e_2 = -\Omega(x_2)/\Lambda'(x_2) = 122. </math>

Subtracting <math>e_1 x^3 + e_2 x^4 = 74x^3 + 122x^4</math> from the received polynomial ''r''(''x'') reproduces the original codeword ''s''.
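The Forney step of this example can be reproduced with a few lines of MATLAB, again mod 929, with {{math|''S''(''x'') {{=}} ''S''<sub>1</sub> + ''S''<sub>2</sub>''x'' + ''S''<sub>3</sub>''x''<sup>2</sup> + ''S''<sub>4</sub>''x''<sup>3</sup>}}; <code>mod_pow</code> is the same local helper as in the earlier sketches.

<syntaxhighlight lang="matlab">
% Forney algorithm over GF(929) for the example above:
% Omega(x) = S(x) Lambda(x) mod x^t, then e_k = -Omega(X_k^-1)/Lambda'(X_k^-1).
p = 929; t = 4;
S      = [732 637 762 925];   % S(x), low-order coefficient first
Lambda = [1 821 329];         % Lambda(x) = 1 + 821 x + 329 x^2
Xinv   = [757 562];           % roots of Lambda(x), i.e. X_k^{-1}
prod_SL = mod(conv(S, Lambda), p);
Omega   = prod_SL(1:t);       % truncation mod x^t -> [732 546 0 0]
dLambda = mod(Lambda(2:end) .* (1:numel(Lambda)-1), p);  % formal derivative
e = zeros(1, numel(Xinv));
for k = 1:numel(Xinv)
    num  = mod(polyval(fliplr(Omega),   Xinv(k)), p);    % Omega(X_k^-1)
    den  = mod(polyval(fliplr(dLambda), Xinv(k)), p);    % Lambda'(X_k^-1)
    e(k) = mod(-num * mod_pow(den, p - 2, p), p);
end
e                             % expected: [74 122]

function y = mod_pow(b, e, p) % b^e mod p by square-and-multiply
    y = 1; b = mod(b, p);
    while e > 0
        if mod(e, 2), y = mod(y * b, p); end
        b = mod(b * b, p); e = floor(e / 2);
    end
end
</syntaxhighlight>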
=== Berlekamp–Massey decoder ===
The [[Berlekamp–Massey algorithm]] is an alternate iterative procedure for finding the error locator polynomial. During each iteration, it calculates a discrepancy based on a current instance of Λ(''x'') with an assumed number of errors ''e'':
<math display="block"> \Delta = S_{i} + \Lambda_1 \ S_{i-1} + \cdots + \Lambda_e \ S_{i-e}, </math>
and then adjusts Λ(''x'') and ''e'' so that a recalculated Δ would be zero. The article [[Berlekamp–Massey algorithm]] has a detailed description of the procedure. In the following example, ''C''(''x'') is used to represent Λ(''x'').

==== Example ====
Using the same data as the Peterson–Gorenstein–Zierler example above:
{| class="wikitable"
|-
! ''n'' !! ''S''<sub>''n''+1</sub> !! ''d'' !! ''C'' !! ''B'' !! ''b'' !! ''m''
|-
| 0 || 732 || 732 || 197 ''x'' + 1 || 1 || 732 || 1
|-
| 1 || 637 || 846 || 173 ''x'' + 1 || 1 || 732 || 2
|-
| 2 || 762 || 412 || 634 ''x''<sup>2</sup> + 173 ''x'' + 1 || 173 ''x'' + 1 || 412 || 1
|-
| 3 || 925 || 576 || 329 ''x''<sup>2</sup> + 821 ''x'' + 1 || 173 ''x'' + 1 || 412 || 2
|}
The final value of ''C'' is the error locator polynomial, Λ(''x'').
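The iteration can be written compactly in MATLAB over {{math|''GF''(929)}}; the following sketch reproduces the table above. <code>mod_pow</code> and <code>poly_sub</code> are local helpers written for this sketch.

<syntaxhighlight lang="matlab">
% Berlekamp-Massey over the prime field GF(929); polynomials are vectors of
% coefficients in ascending order (C(1) is the constant term). Ends with
% C(x) = Lambda(x).
p = 929; S = [732 637 762 925];
C = 1; B = 1; L = 0; m = 1; b = 1;
for n = 0:numel(S)-1
    d = S(n + 1);                 % discrepancy S_{n+1} + sum_i C_i S_{n+1-i}
    for i = 1:L
        d = mod(d + C(i + 1) * S(n + 1 - i), p);
    end
    if d == 0
        m = m + 1;                % discrepancy already zero; keep C
    elseif 2 * L <= n
        T = C;                    % length change needed
        C = poly_sub(C, mod(d * mod_pow(b, p - 2, p), p), m, B, p);
        L = n + 1 - L; B = T; b = d; m = 1;
    else
        C = poly_sub(C, mod(d * mod_pow(b, p - 2, p), p), m, B, p);
        m = m + 1;
    end
end
C                                 % expected: [1 821 329]

function C = poly_sub(C, coef, m, B, p)
% C(x) <- C(x) - coef * x^m * B(x), ascending coefficient order
    shifted = [zeros(1, m), mod(coef * B, p)];
    len = max(numel(C), numel(shifted));
    C = [C, zeros(1, len - numel(C))];
    shifted = [shifted, zeros(1, len - numel(shifted))];
    C = mod(C - shifted, p);
end

function y = mod_pow(b, e, p)     % b^e mod p by square-and-multiply
    y = 1; b = mod(b, p);
    while e > 0
        if mod(e, 2), y = mod(y * b, p); end
        b = mod(b * b, p); e = floor(e / 2);
    end
end
</syntaxhighlight>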
=== Sugiyama decoder <span class="anchor" id="Euclidean decoder"></span> ===
Another iterative method for calculating both the error locator polynomial and the error value polynomial is based on Sugiyama's adaptation of the [[extended Euclidean algorithm]].

Define ''S''(''x''), Λ(''x''), and Ω(''x'') for ''t'' syndromes and ''e'' errors:
<math display="block"> \begin{align} S(x) &= S_{t} x^{t-1} + S_{t-1} x^{t-2} + \cdots + S_2 x + S_1 \\[1ex] \Lambda(x) &= \Lambda_{e} x^{e} + \Lambda_{e-1} x^{e-1} + \cdots + \Lambda_{1} x + 1 \\[1ex] \Omega(x) &= \Omega_{e} x^{e} + \Omega_{e-1} x^{e-1} + \cdots + \Omega_{1} x + \Omega_{0} \end{align} </math>

The key equation is
<math display="block"> \Lambda(x) S(x) = Q(x) x^{t} + \Omega(x). </math>

For ''t'' = 6 and ''e'' = 3:
<math display="block">\begin{bmatrix} \Lambda_3 S_6 & x^8 \\ \Lambda_2 S_6 + \Lambda_3 S_5 & x^7 \\ \Lambda_1 S_6 + \Lambda_2 S_5 + \Lambda_3 S_4 & x^6 \\ S_6 + \Lambda_1 S_5 + \Lambda_2 S_4 + \Lambda_3 S_3 & x^5 \\ S_5 + \Lambda_1 S_4 + \Lambda_2 S_3 + \Lambda_3 S_2 & x^4 \\ S_4 + \Lambda_1 S_3 + \Lambda_2 S_2 + \Lambda_3 S_1 & x^3 \\ S_3 + \Lambda_1 S_2 + \Lambda_2 S_1 & x^2 \\ S_2 + \Lambda_1 S_1 & x \\ S_1 \end{bmatrix} = \begin{bmatrix} Q_2 x^8 \\ Q_1 x^7 \\ Q_0 x^6 \\ 0 \\ 0 \\ 0 \\ \Omega_2 x^2 \\ \Omega_1 x \\ \Omega_0 \end{bmatrix} </math>
The middle terms are zero due to the relationship between Λ and the syndromes.

The extended Euclidean algorithm can find a series of polynomials of the form
{{block indent | em = 1.5 | text = ''A''<sub>''i''</sub>(''x'') ''S''(''x'') + ''B''<sub>''i''</sub>(''x'') ''x''<sup>''t''</sup> = ''R''<sub>''i''</sub>(''x''),}}
where the degree of ''R'' decreases as ''i'' increases. Once the degree of ''R''<sub>''i''</sub>(''x'') < ''t''/2, then
{{block indent | em = 1.5 | text = ''A''<sub>''i''</sub>(''x'') = Λ(''x''),}}
{{block indent | em = 1.5 | text = ''B''<sub>''i''</sub>(''x'') = −''Q''(''x''),}}
{{block indent | em = 1.5 | text = ''R''<sub>''i''</sub>(''x'') = Ω(''x'').}}

''B''(''x'') and ''Q''(''x'') don't need to be saved, so the algorithm becomes:

 ''R''<sub>−1</sub> := ''x''<sup>''t''</sup>
 ''R''<sub>0</sub> := ''S''(''x'')
 ''A''<sub>−1</sub> := 0
 ''A''<sub>0</sub> := 1
 ''i'' := 0
 '''while''' degree of ''R''<sub>''i''</sub> ≥ ''t''/2
   ''i'' := ''i'' + 1
   ''Q'' := ''R''<sub>''i''−2</sub> / ''R''<sub>''i''−1</sub>
   ''R''<sub>''i''</sub> := ''R''<sub>''i''−2</sub> − ''Q'' ''R''<sub>''i''−1</sub>
   ''A''<sub>''i''</sub> := ''A''<sub>''i''−2</sub> − ''Q'' ''A''<sub>''i''−1</sub>

To set the low-order term of Λ(''x'') to 1, divide Λ(''x'') and Ω(''x'') by ''A''<sub>''i''</sub>(0):
{{block indent | em = 1.5 | text = Λ(''x'') = ''A''<sub>''i''</sub> / ''A''<sub>''i''</sub>(0),}}
{{block indent | em = 1.5 | text = Ω(''x'') = ''R''<sub>''i''</sub> / ''A''<sub>''i''</sub>(0).}}
''A''<sub>''i''</sub>(0) is the constant (low-order) term of ''A''<sub>''i''</sub>.

==== Example ====
Using the same data as the Peterson–Gorenstein–Zierler example above:
{| class="wikitable"
|-
! ''i''
! ''R''<sub>''i''</sub>
! ''A''<sub>''i''</sub>
|-
| −1
| 001 ''x''<sup>4</sup> + 000 ''x''<sup>3</sup> + 000 ''x''<sup>2</sup> + 000 ''x'' + 000
| 000
|-
| 0
| 925 ''x''<sup>3</sup> + 762 ''x''<sup>2</sup> + 637 ''x'' + 732
| 001
|-
| 1
| 683 ''x''<sup>2</sup> + 676 ''x'' + 024
| 697 ''x'' + 396
|-
| 2
| 673 ''x'' + 596
| 608 ''x''<sup>2</sup> + 704 ''x'' + 544
|}
{{block indent | em = 1.5 | text = Λ(''x'') = ''A''<sub>2</sub> / 544 = 329 ''x''<sup>2</sup> + 821 ''x'' + 001,}}
{{block indent | em = 1.5 | text = Ω(''x'') = ''R''<sub>2</sub> / 544 = 546 ''x'' + 732.}}
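A runnable MATLAB sketch of this loop over {{math|''GF''(929)}}, reproducing the table above; polynomials are vectors of descending coefficients, and <code>gf_polydiv</code>, <code>poly_sub_desc</code>, and <code>mod_pow</code> are local helpers written for the sketch.

<syntaxhighlight lang="matlab">
% Sugiyama / extended Euclidean iteration over GF(929) for the data above.
p = 929; t = 4;
Rprev = [1 0 0 0 0];        % R_{-1} = x^t
Rcur  = [925 762 637 732];  % R_0 = S(x) = S_4 x^3 + S_3 x^2 + S_2 x + S_1
Aprev = 0; Acur = 1;
while numel(Rcur) - 1 >= t / 2          % while deg R_i >= t/2
    [Q, Rnext] = gf_polydiv(Rprev, Rcur, p);
    Anext = poly_sub_desc(Aprev, mod(conv(Q, Acur), p), p); % A_i = A_{i-2} - Q A_{i-1}
    Rprev = Rcur; Rcur = Rnext; Aprev = Acur; Acur = Anext;
end
a0inv  = mod_pow(Acur(end), p - 2, p);  % 1 / A_i(0)
Lambda = mod(Acur * a0inv, p)           % expected: [329 821 1]
Omega  = mod(Rcur * a0inv, p)           % expected: [546 732]

function [Q, R] = gf_polydiv(N, D, p)
% Polynomial long division N = Q*D + R over GF(p), descending coefficients.
    Q = zeros(1, max(numel(N) - numel(D) + 1, 1)); R = N;
    dinv = mod_pow(D(1), p - 2, p);
    while numel(R) >= numel(D) && any(R)
        shift = numel(R) - numel(D);
        c = mod(R(1) * dinv, p);
        Q(end - shift) = c;
        R = mod(R - [mod(c * D, p), zeros(1, shift)], p);
        idx = find(R, 1);
        if isempty(idx), R = 0; else, R = R(idx:end); end
    end
end

function c = poly_sub_desc(a, b, p)
% c = a - b (mod p) for descending coefficient vectors of unequal length.
    n = max(numel(a), numel(b));
    c = mod([zeros(1, n - numel(a)), a] - [zeros(1, n - numel(b)), b], p);
    idx = find(c, 1);
    if isempty(idx), c = 0; else, c = c(idx:end); end
end

function y = mod_pow(b, e, p)           % b^e mod p by square-and-multiply
    y = 1; b = mod(b, p);
    while e > 0
        if mod(e, 2), y = mod(y * b, p); end
        b = mod(b * b, p); e = floor(e / 2);
    end
end
</syntaxhighlight>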
=== Decoder using discrete Fourier transform ===
A discrete Fourier transform can be used for decoding.<ref>{{cite book |last1=Lin |first1=Shu |last2=Costello |first2=Daniel J. |title=Error control coding: fundamentals and applications |date=2004 |publisher=Pearson/Prentice Hall |location=Upper Saddle River, NJ |isbn=978-0130426727 |pages=255–262 |edition=2.}}</ref> To avoid conflict with syndrome names, let ''c''(''x'') = ''s''(''x'') denote the encoded codeword. ''r''(''x'') and ''e''(''x'') are the same as above. Define ''C''(''x''), ''E''(''x''), and ''R''(''x'') as the discrete Fourier transforms of ''c''(''x''), ''e''(''x''), and ''r''(''x''). Since ''r''(''x'') = ''c''(''x'') + ''e''(''x''), and since a discrete Fourier transform is a linear operator, ''R''(''x'') = ''C''(''x'') + ''E''(''x'').

Transform ''r''(''x'') to ''R''(''x'') using a discrete Fourier transform. Since the calculation for a discrete Fourier transform is the same as the calculation for syndromes, ''t'' coefficients of ''R''(''x'') and ''E''(''x'') are the same as the syndromes:
<math display="block">R_j = E_j = S_j = r(\alpha^j) \qquad \text{for } 1 \le j \le t.</math>

Use <math>R_1</math> through <math>R_t</math> as syndromes (they're the same) and generate the error locator polynomial using the methods from any of the above decoders. Let ''v'' = number of errors. Generate ''E''(''x'') using the known coefficients <math>E_1</math> to <math>E_t</math>, the error locator polynomial, and these formulas:
<math display="block">\begin{align} E_0 &= - \frac{1}{\Lambda_v}(E_{v} + \Lambda_1 E_{v-1} + \cdots + \Lambda_{v-1} E_{1}) \\ E_j &= -(\Lambda_1 E_{j-1} + \Lambda_2 E_{j-2} + \cdots + \Lambda_v E_{j-v}) & \text{for } t < j < n \end{align}</math>

Then calculate ''C''(''x'') = ''R''(''x'') − ''E''(''x'') and take the inverse transform (polynomial interpolation) of ''C''(''x'') to produce ''c''(''x'').
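The heart of this decoder is the linear recurrence that extends ''E''(''x'') beyond the known syndromes. As a small MATLAB illustration of that step only, the sketch below extends the worked example's four syndromes by two further terms mod 929; a complete transform decoder additionally requires ''α'' to be an ''n''-th root of unity so that the length-''n'' inverse transform exists, which the RS(7,3) example's parameters do not provide.

<syntaxhighlight lang="matlab">
% Extending the error transform E(x) with the LFSR recurrence
% E_j = -(Lambda_1 E_{j-1} + ... + Lambda_v E_{j-v}) over GF(929),
% using Lambda(x) and the syndromes from the worked example.
p = 929;
Lam = [821 329];          % Lambda_1, Lambda_2
E = [732 637 762 925];    % E_1..E_4 = S_1..S_4
for j = 5:6               % two further coefficients, as an illustration
    acc = 0;
    for i = 1:numel(Lam)
        acc = mod(acc + Lam(i) * E(j - i), p);
    end
    E(j) = mod(-acc, p);
end
E                         % E_5 = 629 and E_6 = 502 for this data
</syntaxhighlight>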
===Decoding beyond the error-correction bound===
The [[Singleton bound]] states that the minimum distance {{mvar|d}} of a linear block code of size ({{mvar|n}}, {{mvar|k}}) is upper-bounded by {{math|''n'' − ''k'' + 1}}. The distance {{mvar|d}} was usually understood to limit the error-correction capability to {{math|⌊(''d'' − 1) / 2⌋}} errors. The Reed–Solomon code achieves this bound with equality, and can thus correct up to {{math|⌊(''n'' − ''k'') / 2⌋}} errors. However, this error-correction bound is not exact.

In 1999, [[Madhu Sudan]] and [[Venkatesan Guruswami]] at MIT published "Improved Decoding of Reed–Solomon and Algebraic-Geometry Codes", introducing an algorithm that allowed for the correction of errors beyond half the minimum distance of the code.<ref>{{Citation |first1=V. |last1=Guruswami |first2=M. |last2=Sudan |title=Improved decoding of Reed–Solomon codes and algebraic geometry codes |journal=[[IEEE Transactions on Information Theory]] |volume=45 |issue=6 |pages=1757–1767 |date=September 1999 |doi=10.1109/18.782097 |citeseerx=10.1.1.115.292}}</ref> It applies to Reed–Solomon codes and more generally to [[algebraic geometric code]]s. This algorithm produces a list of codewords (it is a [[list-decoding]] algorithm) and is based on interpolation and factorization of polynomials over {{math|''GF''(2{{sup|''m''}})}} and its extensions.

In 2023, building on three exciting{{according to whom?|date=March 2025}} works,<ref>{{Cite book |last1=Brakensiek |first1=Joshua |last2=Gopi |first2=Sivakanth |last3=Makam |first3=Visu |chapter=Generic Reed-Solomon Codes Achieve List-Decoding Capacity |date=2023-06-02 |title=Proceedings of the 55th Annual ACM Symposium on Theory of Computing |chapter-url=https://doi.org/10.1145/3564246.3585128 |series=STOC 2023 |location=New York, NY, USA |publisher=Association for Computing Machinery |pages=1488–1501 |doi=10.1145/3564246.3585128 |isbn=978-1-4503-9913-5 |arxiv=2206.05256}}</ref><ref>{{Cite book |last1=Guo |first1=Zeyu |last2=Zhang |first2=Zihan |chapter=Randomly Punctured Reed-Solomon Codes Achieve the List Decoding Capacity over Polynomial-Size Alphabets |title=2023 IEEE 64th Annual Symposium on Foundations of Computer Science (FOCS) |chapter-url=https://doi.org/10.1109/FOCS57990.2023.00019 |series=FOCS 2023, Santa Cruz, CA, USA, 2023 |date=2023 |pages=164–176 |doi=10.1109/FOCS57990.2023.00019 |isbn=979-8-3503-1894-4 |arxiv=2304.01403}}</ref><ref>{{Citation |last1=Alrabiah |first1=Omar |title=Randomly punctured Reed--Solomon codes achieve list-decoding capacity over linear-sized fields |date=2023-08-18 |arxiv=2304.09445 |last2=Guruswami |first2=Venkatesan |last3=Li |first3=Ray}}</ref> coding theorists showed that Reed–Solomon codes defined over random evaluation points can actually achieve [[list decoding]] capacity (up to {{math|''n'' − ''k''}} errors) over linear-sized alphabets with high probability. However, this result is combinatorial rather than algorithmic.{{Citation needed|date=March 2025}}

===Soft-decoding===
The algebraic decoding methods described above are hard-decision methods, which means that for every symbol a hard decision is made about its value. In contrast, a soft-decision decoder could associate with each symbol an additional value corresponding to the channel [[demodulator]]'s confidence in the correctness of the symbol. The advent of [[low-density parity-check code|LDPC]] and [[turbo code]]s, which employ iterated [[soft-decision decoding|soft-decision]] belief-propagation decoding methods to achieve error-correction performance close to the [[Shannon limit|theoretical limit]], has spurred interest in applying soft-decision decoding to conventional algebraic codes. In 2003, Ralf Koetter and [[Alexander Vardy]] presented a polynomial-time soft-decision algebraic list-decoding algorithm for Reed–Solomon codes, which was based upon the work by Sudan and Guruswami.<ref>{{cite journal |first1=Ralf |last1=Koetter |first2=Alexander |last2=Vardy |title=Algebraic soft-decision decoding of Reed–Solomon codes |journal=[[IEEE Transactions on Information Theory]] |volume=49 |issue=11 |year=2003 |pages=2809–2825 |doi=10.1109/TIT.2003.819332 |citeseerx=10.1.1.13.2021}}</ref> In 2016, Steven J. Franke and Joseph H. Taylor published a novel soft-decision decoder.<ref>{{cite journal |first1=Steven J. |last1=Franke |first2=Joseph H. |last2=Taylor |title=Open Source Soft-Decision Decoder for the JT65 (63,12) Reed–Solomon Code |journal=QEX |issue=May/June |year=2016 |pages=8–17 |url=http://physics.princeton.edu/pulsar/K1JT/FrankeTaylor_QEX_2016.pdf |access-date=2017-06-07 |archive-date=2017-03-09 |archive-url=https://web.archive.org/web/20170309134627/http://www.physics.princeton.edu/pulsar/k1jt/FrankeTaylor_QEX_2016.pdf |url-status=live}}</ref>

=== MATLAB example ===
==== Encoder ====
Here we present a simple [[MATLAB]] implementation of an encoder.

<syntaxhighlight lang="matlab" line="1">
function encoded = rsEncoder(msg, m, prim_poly, n, k)
    % RSENCODER Encode message with the Reed-Solomon algorithm
    % m is the number of bits per symbol
    % prim_poly: primitive polynomial p(x) of the field, e.g. 301 for Data Matrix
    % k is the size of the message
    % n is the total size (k + redundant)
    % Example: msg = uint8('Test')
    %          enc_msg = rsEncoder(msg, 8, 301, 12, numel(msg));

    % Get the alpha
    alpha = gf(2, m, prim_poly);

    % Get the Reed-Solomon generating polynomial g(x)
    g_x = genpoly(k, n, alpha);

    % Multiply the information by X^(n-k), or just pad with zeros at the end,
    % to get space to add the redundant information
    msg_padded = gf([msg zeros(1, n - k)], m, prim_poly);

    % Get the remainder of the division of the extended message by the
    % Reed-Solomon generating polynomial g(x)
    [~, remainder] = deconv(msg_padded, g_x);

    % Now return the message with the redundant information
    encoded = msg_padded - remainder;
end

% Find the Reed-Solomon generating polynomial g(x); this is the same as the
% rsgenpoly function in MATLAB's Communications Toolbox
function g = genpoly(k, n, alpha)
    g = 1;
    % A multiplication on the Galois field is just a convolution
    for j = 1 : n - k
        g = conv(g, [1 alpha .^ j]);
    end
end
</syntaxhighlight>
==== Decoder ====
Now the decoding part:

<syntaxhighlight lang="matlab" line="1">
function [decoded, error_pos, error_mag, g, S] = rsDecoder(encoded, m, prim_poly, n, k)
    % RSDECODER Decode a Reed-Solomon encoded message
    % Example:
    % [dec, ~, ~, ~, ~] = rsDecoder(enc_msg, 8, 301, 12, numel(msg))
    max_errors = floor((n - k) / 2);
    orig_vals = encoded.x;
    % Initialize the error vector
    errors = zeros(1, n);
    g = [];
    S = [];

    % Get the alpha
    alpha = gf(2, m, prim_poly);

    % Find the syndromes (check whether dividing the received message by the
    % generator polynomial leaves a zero remainder)
    Synd = polyval(encoded, alpha .^ (1:n - k));
    Syndromes = trim(Synd);

    % If all syndromes are zero (perfectly divisible), there are no errors
    if isempty(Syndromes.x)
        decoded = orig_vals(1:k);
        error_pos = [];
        error_mag = [];
        g = [];
        S = Synd;
        return;
    end

    % Prepare for the Euclidean algorithm (used to find the error-locating
    % polynomials)
    r0 = [1, zeros(1, 2 * max_errors)];
    r0 = gf(r0, m, prim_poly);
    r0 = trim(r0);
    size_r0 = length(r0);
    r1 = Syndromes;
    f0 = gf([zeros(1, size_r0 - 1) 1], m, prim_poly);
    f1 = gf(zeros(1, size_r0), m, prim_poly);
    g0 = f1;
    g1 = f0;

    % Do the Euclidean algorithm on the polynomials r0(x) and Syndromes(x) in
    % order to find the error-locating polynomial
    while true
        % Do a long division
        [quotient, remainder] = deconv(r0, r1);
        % Add some zeros
        quotient = pad(quotient, length(g1));
        % Find quotient*g1 and pad
        c = conv(quotient, g1);
        c = trim(c);
        c = pad(c, length(g0));
        % Update g as g0 - quotient*g1
        g = g0 - c;
        % Check if the degree of remainder(x) is less than max_errors
        if all(remainder(1:end - max_errors) == 0)
            break;
        end
        % Update r0, r1, g0, g1 and remove leading zeros
        r0 = trim(r1);
        r1 = trim(remainder);
        g0 = g1;
        g1 = g;
    end

    % Remove leading zeros
    g = trim(g);

    % Find the zeros of the error polynomial on this Galois field
    evalPoly = polyval(g, alpha .^ (n - 1 : -1 : 0));
    error_pos = gf(find(evalPoly == 0), m);

    % If no error position is found, return the received message as-is;
    % there is nothing more we can do
    if isempty(error_pos)
        decoded = orig_vals(1:k);
        error_mag = [];
        return;
    end

    % Prepare a linear system to solve the error polynomial and find the
    % error magnitudes
    size_error = length(error_pos);
    Syndrome_Vals = Syndromes.x;
    b(:, 1) = Syndrome_Vals(1:size_error);
    for idx = 1 : size_error
        e = alpha .^ (idx * (n - error_pos.x));
        err = e.x;
        er(idx, :) = err;
    end

    % Solve the linear system
    error_mag = (gf(er, m, prim_poly) \ gf(b, m, prim_poly))';
    % Put the error magnitudes on the error vector
    errors(error_pos.x) = error_mag.x;
    % Bring this vector to the Galois field
    errors_gf = gf(errors, m, prim_poly);

    % Now, to fix the errors, just add the error vector to the received code
    decoded_gf = encoded(1:k) + errors_gf(1:k);
    decoded = decoded_gf.x;
end

% Remove leading zeros from Galois array
function gt = trim(g)
    gx = g.x;
    gt = gf(gx(find(gx, 1) : end), g.m, g.prim_poly);
end

% Add leading zeros
function xpad = pad(x, k)
    len = length(x);
    if len < k
        xpad = [zeros(1, k - len) x];
    else
        xpad = x;   % already long enough; return unchanged
    end
end
</syntaxhighlight>