====Error locator polynomial====

There is a linear recurrence relation that gives rise to a system of linear equations. Solving that system identifies the error locations ''X<sub>k</sub>''.

<!--This is credited to Daniel Gorenstein and Neal Zierler in the book Error Correcting Codes by Peterson - 1961 --> <!--Petersons decoder is a simplified implementation for bit oriented BCH codes where the error values only have magnitude one. -->

Define the '''error locator polynomial''' {{math|Λ(''x'')}} as
<math display="block"> \Lambda(x) = \prod_{k=1}^\nu (1 - x X_k ) = 1 + \Lambda_1 x^1 + \Lambda_2 x^2 + \cdots + \Lambda_\nu x^\nu. </math>

The zeros of {{math|Λ(''x'')}} are the reciprocals <math>X_k^{-1}</math>. This follows from the product construction above: if <math>x = X_k^{-1}</math>, then one of the factors is zero, <math>(1 - X_k^{-1} \cdot X_k) = 1 - 1 = 0</math>, making the whole polynomial evaluate to zero:
<math display="block"> \Lambda(X_k^{-1}) = 0. </math>

Let <math>j</math> be any integer such that <math>1 \leq j \leq \nu</math>. Multiply both sides by <math>Y_k X_k^{j+\nu}</math>, and it will still be zero:
<math display="block">\begin{align} & Y_k X_k^{j+\nu} \Lambda(X_k^{-1}) = 0, \\ & Y_k X_k^{j+\nu} (1 + \Lambda_1 X_k^{-1} + \Lambda_2 X_k^{-2} + \cdots + \Lambda_\nu X_k^{-\nu}) = 0, \\ & Y_k X_k^{j+\nu} + \Lambda_1 Y_k X_k^{j+\nu} X_k^{-1} + \Lambda_2 Y_k X_k^{j+\nu} X_k^{-2} + \cdots + \Lambda_\nu Y_k X_k^{j+\nu} X_k^{-\nu} = 0, \\ & Y_k X_k^{j+\nu} + \Lambda_1 Y_k X_k^{j+\nu-1} + \Lambda_2 Y_k X_k^{j+\nu-2} + \cdots + \Lambda_\nu Y_k X_k^j = 0. \end{align} </math>

Sum for ''k'' = 1 to ''ν'', and it will still be zero:
<math display="block"> \sum_{k=1}^\nu (Y_k X_k^{j+\nu} + \Lambda_1 Y_k X_k^{j+\nu-1} + \Lambda_2 Y_k X_k^{j+\nu-2} + \cdots + \Lambda_{\nu} Y_k X_k^j) = 0. </math>

Collect each term into its own sum:
<math display="block"> \left(\sum_{k=1}^\nu Y_k X_k^{j+\nu}\right) + \left(\sum_{k=1}^\nu \Lambda_1 Y_k X_k^{j+\nu-1}\right) + \left(\sum_{k=1}^\nu \Lambda_2 Y_k X_k^{j+\nu -2}\right) + \cdots + \left(\sum_{k=1}^\nu \Lambda_\nu Y_k X_k^j\right) = 0. </math>

Extract the constant values of <math>\Lambda</math> that are unaffected by the summation:
<math display="block"> \left(\sum_{k=1}^\nu Y_k X_k^{j+\nu}\right) + \Lambda_1 \left(\sum_{k=1}^\nu Y_k X_k^{j+\nu-1}\right) + \Lambda_2 \left(\sum_{k=1}^\nu Y_k X_k^{j+\nu-2}\right) + \cdots + \Lambda_\nu \left(\sum_{k=1}^\nu Y_k X_k^j\right) = 0. </math>

These summations are equivalent to the syndrome values, which are known and can be substituted in. The equation therefore reduces to
<math display="block"> S_{j+\nu} + \Lambda_1 S_{j+\nu-1} + \cdots + \Lambda_{\nu-1} S_{j+1} + \Lambda_\nu S_j = 0. </math>

Subtracting <math>S_{j+\nu}</math> from both sides yields
<math display="block"> S_j \Lambda_\nu + S_{j+1} \Lambda_{\nu-1} + \cdots + S_{j+\nu-1} \Lambda_1 = -S_{j+\nu}. </math>

Recall that ''j'' was chosen to be any integer between 1 and ''ν'' inclusive, and this equivalence holds for every such value. Therefore, we have ''ν'' linear equations, not just one.
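For example, with <math>\nu = 2</math> errors, taking <math>j = 1</math> and <math>j = 2</math> gives the pair of equations
<math display="block">\begin{align} S_1 \Lambda_2 + S_2 \Lambda_1 &= -S_3, \\ S_2 \Lambda_2 + S_3 \Lambda_1 &= -S_4. \end{align}</math>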
This system of linear equations can therefore be solved for the coefficients Λ<sub>''i''</sub> of the error locator polynomial:
<math display="block"> \begin{bmatrix} S_1 & S_2 & \cdots & S_\nu \\ S_2 & S_3 & \cdots & S_{\nu+1} \\ \vdots & \vdots & \ddots & \vdots \\ S_\nu & S_{\nu+1} & \cdots & S_{2\nu-1} \end{bmatrix} \begin{bmatrix} \Lambda_\nu \\ \Lambda_{\nu-1} \\ \vdots \\ \Lambda_1 \end{bmatrix} = \begin{bmatrix} -S_{\nu+1} \\ -S_{\nu+2} \\ \vdots \\ -S_{2\nu} \end{bmatrix}. </math>

The above assumes that the decoder knows the number of errors ''ν'', but that number has not been determined yet. The PGZ decoder does not determine ''ν'' directly but rather searches for it by trying successive values. The decoder first assumes the largest value for a trial ''ν'' and sets up the linear system for that value. If the equations can be solved (i.e., the matrix determinant is nonzero), then that trial value is the number of errors. If the linear system cannot be solved, then the trial ''ν'' is reduced by one and the next smaller system is examined.<ref>{{cite web |last= Gill |first= John |title= EE387 Notes #7, Handout #28 |date= n.d. |access-date= April 21, 2010 |publisher= Stanford University |url= http://www.stanford.edu/class/ee387/handouts/notes7.pdf |url-status= dead |archive-url= https://web.archive.org/web/20140630172526/http://web.stanford.edu/class/ee387/handouts/notes7.pdf |archive-date= June 30, 2014 }}</ref>
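As a concrete (if simplified) illustration of this search, the following Python sketch works over the prime field GF(929), where field arithmetic reduces to ordinary integer arithmetic modulo 929; a practical decoder over a field such as GF(2<sup>8</sup>) would substitute its own field operations. It assumes the syndromes ''S''<sub>1</sub>, ..., ''S''<sub>2''t''</sub> have already been computed, and the helper names are invented for this example rather than drawn from any decoder library.

<syntaxhighlight lang="python">
# Illustrative sketch of the PGZ trial-and-error search for the number of
# errors nu, over the prime field GF(929). The syndromes S_1 .. S_2t are
# assumed given; the function names are invented for this example.

p = 929  # field modulus (prime), so every nonzero element is invertible


def solve_mod_p(A, b):
    """Solve A x = b over GF(p) by Gauss-Jordan elimination.
    Returns None if A is singular (determinant zero)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for col in range(n):
        # find a row at or below the diagonal with a nonzero pivot
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return None  # singular: reject this trial nu
        M[col], M[pivot] = M[pivot], M[col]
        inv = pow(M[col][col], p - 2, p)  # inverse via Fermat's little theorem
        M[col] = [v * inv % p for v in M[col]]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [(vr - f * vc) % p for vr, vc in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]


def pgz_error_locator(S, t):
    """Try nu = t, t-1, ..., 1 and return (nu, [Lambda_1, ..., Lambda_nu]),
    or None if every trial system is singular. S[j] holds syndrome S_{j+1}."""
    for nu in range(t, 0, -1):
        # A[i][j] = S_{i+j+1} and b[i] = -S_{nu+i+1}: the system derived
        # above, whose unknown vector is (Lambda_nu, ..., Lambda_1)
        A = [[S[i + j] for j in range(nu)] for i in range(nu)]
        b = [(-S[nu + i]) % p for i in range(nu)]
        x = solve_mod_p(A, b)
        if x is not None:
            return nu, x[::-1]  # reverse to (Lambda_1, ..., Lambda_nu)
    return None


# Demo: two errors with values (123, 456) at locators (9, 243) in GF(929),
# so S_j = 123*9^j + 456*243^j mod 929. The 3x3 trial system is singular,
# the 2x2 system is not, and the solver recovers
# Lambda(x) = (1 - 9x)(1 - 243x) = 1 + 677x + 329x^2 over GF(929).
t = 3
S = [(123 * pow(9, j, p) + 456 * pow(243, j, p)) % p for j in range(1, 2 * t + 1)]
print(pgz_error_locator(S, t))  # -> (2, [677, 329])
</syntaxhighlight>

In the demo run the first trial, ''ν'' = 3, is rejected because only two errors occurred and the 3×3 syndrome matrix is singular; the next trial, ''ν'' = 2, yields a nonsingular system whose solution matches the coefficients of {{math|Λ(''x'')}} built directly from the two error locators.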