=== Peterson–Gorenstein–Zierler decoder ===
{{main|BCH code#Peterson–Gorenstein–Zierler algorithm|l1=Peterson–Gorenstein–Zierler algorithm}}

Daniel Gorenstein and Neal Zierler developed a decoder that was described in an MIT Lincoln Laboratory report by Zierler in January 1960 and later in a paper in June 1961.<ref>{{cite journal |last1=Peterson |first1=W. |title=Encoding and error-correction procedures for the Bose-Chaudhuri codes |journal=IEEE Transactions on Information Theory |date=September 1960 |volume=6 |issue=4 |pages=459–470 |doi=10.1109/TIT.1960.1057586}}</ref><ref>{{cite journal |last1=Gorenstein |first1=Daniel |last2=Zierler |first2=Neal |title=A Class of Error-Correcting Codes in ''p''<sup>''m''</sup> Symbols |journal=Journal of the Society for Industrial and Applied Mathematics |date=June 1961 |volume=9 |issue=2 |pages=207–214 |doi=10.1137/0109020}}</ref> The Gorenstein–Zierler decoder and the related work on BCH codes are described in the book ''Error Correcting Codes'' by [[W. Wesley Peterson]] (1961).<ref name="Peterson61"/>{{page needed|date=March 2025}}

====Formulation====
The transmitted message, <math>(c_0, \ldots, c_i, \ldots, c_{n-1})</math>, is viewed as the coefficients of a polynomial
<math display="block"> s(x) = \sum_{i=0}^{n-1} c_i x^i. </math>
As a result of the Reed–Solomon encoding procedure, ''s''(''x'') is divisible by the generator polynomial
<math display="block"> g(x) = \prod_{j=1}^{n-k} (x - \alpha^j), </math>
where ''α'' is a primitive element. Since ''s''(''x'') is a multiple of the generator ''g''(''x''), it inherits all of the roots of ''g''(''x''):
<math display="block"> s(x) \bmod (x - \alpha^j) = g(x) \bmod (x - \alpha^j) = 0. </math>
Therefore,
<math display="block"> s(\alpha^j) = 0,\ j = 1, 2, \ldots, n - k. </math>
The transmitted polynomial is corrupted in transit by an error polynomial
<math display="block"> e(x) = \sum_{i=0}^{n-1} e_i x^i </math>
to produce the received polynomial
<math display="block"> r(x) = s(x) + e(x). </math>
Coefficient ''e<sub>i</sub>'' is zero if there is no error at that power of ''x'' and nonzero if there is an error. If there are ''ν'' errors at distinct powers ''i<sub>k</sub>'' of ''x'', then
<math display="block"> e(x) = \sum_{k=1}^\nu e_{i_k} x^{i_k}. </math>
The goal of the decoder is to find the number of errors (''ν''), the positions of the errors (''i<sub>k</sub>''), and the error values at those positions (''e<sub>i<sub>k</sub></sub>''). From those, ''e''(''x'') can be calculated and subtracted from ''r''(''x'') to recover the originally sent message ''s''(''x'').

====Syndrome decoding====
The decoder starts by evaluating the received polynomial at the points <math>\alpha^1, \ldots, \alpha^{n-k}</math>. The results of that evaluation are called the "syndromes" ''S''<sub>''j''</sub>. They are defined as
<math display="block"> \begin{align} S_j &= r(\alpha^j) = s(\alpha^j) + e(\alpha^j) = 0 + e(\alpha^j) \\ &= e(\alpha^j) \\ &= \sum_{k=1}^\nu e_{i_k} {(\alpha^j)}^{i_k}, \quad j = 1, 2, \ldots, n - k. \end{align} </math>
Note that <math>s(\alpha^j) = 0</math> because <math>s(x)</math> has roots at <math>\alpha^j</math>, as shown in the previous section. The advantage of working with the syndromes is that the message polynomial drops out: the syndromes depend only on the error and are unaffected by the actual contents of the message being transmitted. If the syndromes are all zero, the algorithm stops here and reports that the message was not corrupted in transit.
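Because the worked example below uses the prime field {{math|''GF''(929)}}, where field arithmetic is ordinary integer arithmetic modulo 929, this step is short to express in code. The following Python sketch illustrates it under that assumption; the helper names and the coefficient ordering (index ''i'' holds the coefficient of ''x''<sup>''i''</sup>) are choices made for this illustration, not part of any standard library.

<syntaxhighlight lang="python">
# Syndrome computation over the prime field GF(929) with alpha = 3, the
# parameters of the PDF417 example below.  Illustrative sketch only.
P = 929       # field size (prime, so all field arithmetic is "mod P")
ALPHA = 3     # primitive element

def poly_eval(coeffs, x):
    """Evaluate sum(coeffs[i] * x**i) in GF(P) using Horner's rule."""
    result = 0
    for c in reversed(coeffs):
        result = (result * x + c) % P
    return result

def syndromes(received, num_checks):
    """Return [S_1, ..., S_{n-k}] = [r(alpha^1), ..., r(alpha^{n-k})]."""
    return [poly_eval(received, pow(ALPHA, j, P)) for j in range(1, num_checks + 1)]

# Received word from the example below:
# r(x) = 3x^6 + 2x^5 + 123x^4 + 456x^3 + 191x^2 + 487x + 474
r = [474, 487, 191, 456, 123, 2, 3]
print(syndromes(r, 4))   # [732, 637, 762, 925]; all zeros would mean no corruption detected
</syntaxhighlight>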
====Error locators and error values====
<!-- There's confusion between index k and the k in (n,k). Literature also has confusion? Or does it use kappa? -->
For convenience, define the '''error locators''' ''X<sub>k</sub>'' and '''error values''' ''Y<sub>k</sub>'' as
<math display="block"> X_k = \alpha^{i_k}, \quad Y_k = e_{i_k}. </math>
Then the syndromes can be written in terms of these error locators and error values as
<math display="block"> S_j = \sum_{k=1}^\nu Y_k X_k^j. </math>
This definition of the syndrome values is equivalent to the previous one, since <math>{(\alpha^j)}^{i_k} = \alpha^{j \cdot i_k} = {(\alpha^{i_k})}^j = X_k^j</math>.

The syndromes give a system of {{math|''n'' − ''k'' ≥ 2''ν''}} equations in 2''ν'' unknowns, but that system of equations is nonlinear in the ''X<sub>k</sub>'' and does not have an obvious solution. However, if the ''X<sub>k</sub>'' were known (see below), then the syndrome equations would provide a linear system of equations
<!-- Vandermonde comment. Matrix equation -->
<math display="block">\begin{bmatrix} X_1^1 & X_2^1 & \cdots & X_\nu^1 \\ X_1^2 & X_2^2 & \cdots & X_\nu^2 \\ \vdots & \vdots & \ddots & \vdots \\ X_1^{n-k} & X_2^{n-k} & \cdots & X_\nu^{n-k} \\ \end{bmatrix} \begin{bmatrix} Y_1 \\ Y_2 \\ \vdots \\ Y_\nu \end{bmatrix} = \begin{bmatrix} S_1 \\ S_2 \\ \vdots \\ S_{n-k} \end{bmatrix}, </math>
which can easily be solved for the ''Y<sub>k</sub>'' error values. Consequently, the problem is finding the ''X<sub>k</sub>'', because then the leftmost matrix would be known, and both sides of the equation could be multiplied by its inverse, yielding the ''Y<sub>k</sub>''.

In the variant of this algorithm where the locations of the errors are already known (when it is being used as an [[erasure code]]), this is the end. The error locations (''X<sub>k</sub>'') are already known by some other method (for example, in an FM transmission, the sections where the bitstream was unclear or overcome with interference are probabilistically determinable from frequency analysis). In this scenario, up to <math>n - k</math> errors can be corrected; a sketch of this case follows below. The rest of the algorithm serves to locate the errors and requires syndrome values up to <math>2\nu</math>, instead of just the <math>\nu</math> used thus far. This is why twice as many error-correcting symbols need to be added as can be corrected without knowing their locations.
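As a concrete illustration of the erasure case, the following Python sketch solves the square system formed by the first ''ν'' syndrome equations for the error values ''Y<sub>k</sub>'', again over the prime field {{math|''GF''(929)}} of the example below. The Gaussian-elimination helper <code>gf_solve</code> is written out for this illustration (it is not a standard library routine) and assumes the matrix is invertible.

<syntaxhighlight lang="python">
# Solving for the error values Y_k given known locators X_k (erasure case),
# over GF(929).  Illustrative sketch; gf_solve assumes the matrix is invertible.
P = 929

def gf_solve(A, b):
    """Solve A y = b over GF(P) by Gaussian elimination on the augmented matrix."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if M[r][col] % P != 0)
        M[col], M[pivot] = M[pivot], M[col]
        inv = pow(M[col][col], -1, P)                  # modular inverse of the pivot
        M[col] = [(v * inv) % P for v in M[col]]
        for row in range(n):
            if row != col and M[row][col] % P != 0:
                f = M[row][col]
                M[row] = [(a - f * c) % P for a, c in zip(M[row], M[col])]
    return [M[row][n] for row in range(n)]

def error_values(X, S):
    """Given locators X = [X_1, ..., X_nu] and syndromes S, return [Y_1, ..., Y_nu]."""
    nu = len(X)
    A = [[pow(Xk, j, P) for Xk in X] for j in range(1, nu + 1)]
    return gf_solve(A, S[:nu])

# Numbers from the worked example below: errors at x^4 and x^3, so the
# locators are X_1 = 3^4 = 81 and X_2 = 3^3 = 27:
print(error_values([81, 27], [732, 637]))   # [122, 74]
</syntaxhighlight>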
====Error locator polynomial====
There is a linear recurrence relation that gives rise to a system of linear equations. Solving those equations identifies the error locations ''X<sub>k</sub>''.
<!--This is credited to Daniel Gorenstein and Neal Zierler in the book Error Correcting Codes by Peterson - 1961 -->
<!--Petersons decoder is a simplified implementation for bit oriented BCH codes where the error values only have magnitude one. -->
Define the '''error locator polynomial''' {{math|Λ(''x'')}} as
<math display="block"> \Lambda(x) = \prod_{k=1}^\nu (1 - x X_k ) = 1 + \Lambda_1 x^1 + \Lambda_2 x^2 + \cdots + \Lambda_\nu x^\nu. </math>
The zeros of {{math|Λ(''x'')}} are the reciprocals <math>X_k^{-1}</math>. This follows from the product construction above, since if <math>x = X_k^{-1}</math>, then one of the multiplied terms will be zero, <math>(1 - X_k^{-1} \cdot X_k) = 1 - 1 = 0</math>, making the whole polynomial evaluate to zero:
<math display="block"> \Lambda(X_k^{-1}) = 0. </math>
Let <math>j</math> be any integer such that <math>1 \leq j \leq \nu</math>. Multiply both sides by <math>Y_k X_k^{j+\nu}</math>, and it will still be zero:
<math display="block">\begin{align} & Y_k X_k^{j+\nu} \Lambda(X_k^{-1}) = 0, \\ & Y_k X_k^{j+\nu} (1 + \Lambda_1 X_k^{-1} + \Lambda_2 X_k^{-2} + \cdots + \Lambda_\nu X_k^{-\nu}) = 0, \\ & Y_k X_k^{j+\nu} + \Lambda_1 Y_k X_k^{j+\nu} X_k^{-1} + \Lambda_2 Y_k X_k^{j+\nu} X_k^{-2} + \cdots + \Lambda_\nu Y_k X_k^{j+\nu} X_k^{-\nu} = 0, \\ & Y_k X_k^{j+\nu} + \Lambda_1 Y_k X_k^{j+\nu-1} + \Lambda_2 Y_k X_k^{j+\nu-2} + \cdots + \Lambda_\nu Y_k X_k^j = 0. \end{align} </math>
Sum for ''k'' = 1 to ''ν'', and it will still be zero:
<math display="block"> \sum_{k=1}^\nu (Y_k X_k^{j+\nu} + \Lambda_1 Y_k X_k^{j+\nu-1} + \Lambda_2 Y_k X_k^{j+\nu-2} + \cdots + \Lambda_{\nu} Y_k X_k^j) = 0. </math>
Collect each term into its own sum:
<math display="block"> \left(\sum_{k=1}^\nu Y_k X_k^{j+\nu}\right) + \left(\sum_{k=1}^\nu \Lambda_1 Y_k X_k^{j+\nu-1}\right) + \left(\sum_{k=1}^\nu \Lambda_2 Y_k X_k^{j+\nu -2}\right) + \cdots + \left(\sum_{k=1}^\nu \Lambda_\nu Y_k X_k^j\right) = 0. </math>
Extract the constant values of <math>\Lambda</math> that are unaffected by the summation:
<math display="block"> \left(\sum_{k=1}^\nu Y_k X_k^{j+\nu}\right) + \Lambda_1 \left(\sum_{k=1}^\nu Y_k X_k^{j+\nu-1}\right) + \Lambda_2 \left(\sum_{k=1}^\nu Y_k X_k^{j+\nu-2}\right) + \cdots + \Lambda_\nu \left(\sum_{k=1}^\nu Y_k X_k^j\right) = 0. </math>
These summations are now equivalent to the syndrome values, which are known and can be substituted in. This therefore reduces to
<math display="block"> S_{j+\nu} + \Lambda_1 S_{j+\nu-1} + \cdots + \Lambda_{\nu-1} S_{j+1} + \Lambda_\nu S_j = 0. </math>
Subtracting <math>S_{j+\nu}</math> from both sides yields
<math display="block"> S_j \Lambda_\nu + S_{j+1} \Lambda_{\nu-1} + \cdots + S_{j+\nu-1} \Lambda_1 = -S_{j+\nu}. </math>
Recall that ''j'' was chosen to be any integer between 1 and ''ν'' inclusive, and this equivalence is true for all such values. Therefore, there are ''ν'' linear equations, not just one. This system of linear equations can therefore be solved for the coefficients Λ<sub>''i''</sub> of the error locator polynomial:
<math display="block"> \begin{bmatrix} S_1 & S_2 & \cdots & S_\nu \\ S_2 & S_3 & \cdots & S_{\nu+1} \\ \vdots & \vdots & \ddots & \vdots \\ S_\nu & S_{\nu+1} & \cdots & S_{2\nu-1} \end{bmatrix} \begin{bmatrix} \Lambda_\nu \\ \Lambda_{\nu-1} \\ \vdots \\ \Lambda_1 \end{bmatrix} = \begin{bmatrix} -S_{\nu+1} \\ -S_{\nu+2} \\ \vdots \\ -S_{\nu+\nu} \end{bmatrix}. </math>
The above assumes that the decoder knows the number of errors ''ν'', but that number has not been determined yet. The PGZ decoder does not determine ''ν'' directly but rather searches for it by trying successive values. The decoder first assumes the largest value for a trial ''ν'' and sets up the linear system for that value. If the equations can be solved (i.e., the matrix determinant is nonzero), then that trial value is the number of errors. If the linear system cannot be solved, then the trial ''ν'' is reduced by one and the next smaller system is examined.<ref>{{cite web |last= Gill |first= John |title= EE387 Notes #7, Handout #28 |date= n.d. |access-date= April 21, 2010 |publisher= Stanford University |url= http://www.stanford.edu/class/ee387/handouts/notes7.pdf |url-status= dead |archive-url= https://web.archive.org/web/20140630172526/http://web.stanford.edu/class/ee387/handouts/notes7.pdf |archive-date= June 30, 2014 }}</ref>
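The following Python sketch illustrates this trial-and-error search, again over the prime field {{math|''GF''(929)}} of the example below. The helper <code>gf_solve_or_none</code>, written for this illustration, solves a linear system over the field and returns <code>None</code> when the matrix is singular, which is the signal to retry with a smaller trial ''ν''.

<syntaxhighlight lang="python">
# PGZ search for the number of errors nu over GF(929).  Illustrative sketch;
# the helper names are not from any library.
P = 929

def gf_solve_or_none(A, b):
    """Solve A x = b over GF(P); return None if A is singular."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col] % P != 0), None)
        if pivot is None:
            return None                     # singular matrix: trial nu is too large
        M[col], M[pivot] = M[pivot], M[col]
        inv = pow(M[col][col], -1, P)
        M[col] = [(v * inv) % P for v in M[col]]
        for row in range(n):
            if row != col and M[row][col] % P != 0:
                f = M[row][col]
                M[row] = [(a - f * c) % P for a, c in zip(M[row], M[col])]
    return [M[row][n] for row in range(n)]

def find_error_locator(S):
    """Try nu = (n-k)//2, (n-k)//2 - 1, ...; S is [S_1, ..., S_{n-k}]."""
    for nu in range(len(S) // 2, 0, -1):
        A = [[S[j + k] for k in range(nu)] for j in range(nu)]   # syndrome matrix
        b = [(-S[nu + j]) % P for j in range(nu)]                # right-hand side
        sol = gf_solve_or_none(A, b)
        if sol is not None:
            return nu, sol      # sol = [Lambda_nu, ..., Lambda_1]
    return 0, []                # syndromes consistent with zero errors

# Syndromes from the worked example below:
nu, lam_rev = find_error_locator([732, 637, 762, 925])
print(nu, lam_rev)              # 2 [329, 821] -> Lambda(x) = 1 + 821x + 329x^2
</syntaxhighlight>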
====Find the roots of the error locator polynomial====
Use the coefficients Λ<sub>''i''</sub> found in the last step to build the error locator polynomial. The roots of the error locator polynomial can be found by exhaustive search. The error locators ''X<sub>k</sub>'' are the reciprocals of those roots. The order of coefficients of the error locator polynomial can be reversed, in which case the roots of that reversed polynomial are the error locators <math>X_k</math> themselves (not their reciprocals <math>X_k^{-1}</math>). [[Chien search]] is an efficient implementation of this step.

====Calculate the error values====
Once the error locators ''X<sub>k</sub>'' are known, the error values can be determined. This can be done by direct solution for ''Y<sub>k</sub>'' in the [[#Error locators and error values|error equations]] matrix given above, or using the [[Forney algorithm]].

====Calculate the error locations====
Calculate ''i<sub>k</sub>'' by taking the log base <math>\alpha</math> of ''X<sub>k</sub>''. This is generally done using a precomputed lookup table.

====Fix the errors====
Finally, ''e''(''x'') is generated from ''i<sub>k</sub>'' and ''e<sub>i<sub>k</sub></sub>'' and then is subtracted from ''r''(''x'') to get the originally sent message ''s''(''x''), with errors corrected.
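The root search and the position calculation can be combined in a few lines. The following Python sketch finds the roots of {{math|Λ(''x'')}} by exhaustive search (the brute-force equivalent of a Chien search) and converts each root to an error position with a precomputed log table. It assumes the prime field {{math|''GF''(929)}} of the example below, and the names are illustrative choices.

<syntaxhighlight lang="python">
# Locating errors from Lambda(x) over GF(929) with alpha = 3.  The exhaustive
# search below is the brute-force equivalent of a Chien search.  Sketch only.
P = 929
ALPHA = 3

def poly_eval(coeffs, x):
    """Evaluate sum(coeffs[i] * x**i) in GF(P) using Horner's rule."""
    result = 0
    for c in reversed(coeffs):
        result = (result * x + c) % P
    return result

# Precomputed discrete-log table: log_table[alpha^i mod P] = i.
log_table = {}
v = 1
for i in range(P - 1):
    log_table[v] = i
    v = v * ALPHA % P

def error_locations(lam):
    """lam = [1, Lambda_1, ..., Lambda_nu]; return the error positions i_k."""
    positions = []
    for x in range(1, P):                     # try every nonzero field element
        if poly_eval(lam, x) == 0:            # x is a root of Lambda ...
            X_k = pow(x, -1, P)               # ... so its reciprocal is a locator
            positions.append(log_table[X_k])  # i_k = log_alpha(X_k)
    return positions

# Lambda(x) = 1 + 821x + 329x^2 from the example below:
print(error_locations([1, 821, 329]))         # [4, 3]: errors at x^4 and x^3
</syntaxhighlight>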
==== Example ====
Consider the Reed–Solomon code defined in {{math|''GF''(929)}} with {{math|''α'' {{=}} 3}} and {{math|''t'' {{=}} 4}} (this is used in [[PDF417]] barcodes) for an RS(7,3) code. The generator polynomial is
<math display="block"> g(x) = (x - 3)(x - 3^2)(x - 3^3)(x - 3^4) = x^4 + 809 x^3 + 723 x^2 + 568 x + 522. </math>
If the message polynomial is {{math|''p''(''x'') {{=}} 3 ''x''<sup>2</sup> + 2 ''x'' + 1}}, then a systematic codeword is encoded as follows:
<math display="block"> s_r(x) = p(x) \, x^t \bmod g(x) = 547 x^3 + 738 x^2 + 442 x + 455, </math>
<math display="block"> s(x) = p(x) \, x^t - s_r(x) = 3 x^6 + 2 x^5 + 1 x^4 + 382 x^3 + 191 x^2 + 487 x + 474. </math>
Errors in transmission might cause this to be received instead:
<math display="block"> r(x) = s(x) + e(x) = 3 x^6 + 2 x^5 + 123 x^4 + 456 x^3 + 191 x^2 + 487 x + 474. </math>
The syndromes are calculated by evaluating ''r'' at powers of ''α'':
<math display="block"> S_1 = r(3^1) = 3 \cdot 3^6 + 2 \cdot 3^5 + 123 \cdot 3^4 + 456 \cdot 3^3 + 191 \cdot 3^2 + 487 \cdot 3 + 474 = 732, </math>
<math display="block"> S_2 = r(3^2) = 637,\quad S_3 = r(3^3) = 762,\quad S_4 = r(3^4) = 925, </math>
yielding the system
<math display="block"> \begin{bmatrix} 732 & 637 \\ 637 & 762 \end{bmatrix} \begin{bmatrix} \Lambda_2 \\ \Lambda_1 \end{bmatrix} = \begin{bmatrix} -762 \\ -925 \end{bmatrix} = \begin{bmatrix} 167 \\ 004 \end{bmatrix}. </math>
Using [[Gaussian elimination]],
<math display="block"> \begin{bmatrix} 001 & 000 \\ 000 & 001 \end{bmatrix} \begin{bmatrix} \Lambda_2 \\ \Lambda_1 \end{bmatrix} = \begin{bmatrix} 329 \\ 821 \end{bmatrix}, </math>
so
<math display="block"> \Lambda(x) = 329 x^2 + 821 x + 001, </math>
with roots ''x''<sub>1</sub> = 757 = 3<sup>−3</sup> and ''x''<sub>2</sub> = 562 = 3<sup>−4</sup>. The coefficients can be reversed, giving
<math display="block"> R(x) = 001 x^2 + 821 x + 329, </math>
with roots 27 = 3<sup>3</sup> and 81 = 3<sup>4</sup> with positive exponents, although this form is not typically used. The logarithms of the inverted roots give the error locations (counting from the right, location 0 is the last term in the codeword). To calculate the error values, apply the [[Forney algorithm]]:
<math display="block"> \Omega(x) = S(x) \Lambda(x) \bmod x^4 = 546 x + 732, </math>
<math display="block"> \Lambda'(x) = 658 x + 821, </math>
<math display="block"> e_1 = -\Omega(x_1)/\Lambda'(x_1) = 074, </math>
<math display="block"> e_2 = -\Omega(x_2)/\Lambda'(x_2) = 122. </math>
Subtracting <math>e_1 x^3 + e_2 x^4 = 74 x^3 + 122 x^4</math> from the received polynomial ''r''(''x'') reproduces the original codeword ''s''(''x'').
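The example can be checked end to end with a short Python script that recomputes the syndromes, applies the Forney algorithm with the {{math|Λ(''x'')}} found above, and subtracts the resulting error values. This is a sketch for illustration: {{math|Λ(''x'')}} is taken as given from the PGZ step rather than recomputed, and the helper names are arbitrary.

<syntaxhighlight lang="python">
# End-to-end check of the worked example over GF(929) with alpha = 3.
P, ALPHA = 929, 3

def poly_eval(coeffs, x):
    """Evaluate sum(coeffs[i] * x**i) in GF(P) using Horner's rule."""
    result = 0
    for c in reversed(coeffs):
        result = (result * x + c) % P
    return result

# Received word r(x) = 3x^6 + 2x^5 + 123x^4 + 456x^3 + 191x^2 + 487x + 474
r = [474, 487, 191, 456, 123, 2, 3]
nk = 4                                         # n - k check symbols

S = [poly_eval(r, pow(ALPHA, j, P)) for j in range(1, nk + 1)]
print(S)                                       # [732, 637, 762, 925]

lam = [1, 821, 329]                            # Lambda(x) = 1 + 821x + 329x^2

# Forney algorithm: Omega(x) = S(x) Lambda(x) mod x^(n-k)
omega = [0] * nk
for i, s_i in enumerate(S):
    for j, l_j in enumerate(lam):
        if i + j < nk:
            omega[i + j] = (omega[i + j] + s_i * l_j) % P
print(omega)                                   # [732, 546, 0, 0] -> Omega(x) = 546x + 732
dlam = [k * lam[k] % P for k in range(1, len(lam))]   # Lambda'(x) = 821 + 658x

# Discrete-log table for converting locators to positions
log_table, v = {}, 1
for i in range(P - 1):
    log_table[v] = i
    v = v * ALPHA % P

for x in range(1, P):                          # exhaustive root search
    if poly_eval(lam, x) == 0:                 # x = X_k^{-1}
        pos = log_table[pow(x, -1, P)]         # error position i_k
        e_k = -poly_eval(omega, x) * pow(poly_eval(dlam, x), -1, P) % P
        r[pos] = (r[pos] - e_k) % P            # subtract the error value

print(r)   # [474, 487, 191, 382, 1, 2, 3] = coefficients of the original s(x)
</syntaxhighlight>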