== Decoding ==
There are many algorithms for decoding BCH codes. The most common ones follow this general outline:
# Calculate the syndromes ''s<sub>j</sub>'' for the received vector <!-- there are d syndromes -->
# Determine the number of errors ''t'' and the error locator polynomial ''Λ(x)'' from the syndromes <!-- Gill uses ν for actual errors; some references use ''t'' as max number of correctable errors. -->
# Calculate the roots of the error location polynomial to find the error locations ''X<sub>i</sub>''
# Calculate the error values ''Y<sub>i</sub>'' at those error locations <!-- also known as e_i -->
# Correct the errors

During some of these steps, the decoding algorithm may determine that the received vector has too many errors and cannot be corrected. For example, if an appropriate value of ''t'' is not found, then the correction fails. In a truncated (not primitive) code, an error location may be out of range. If the received vector has more errors than the code can correct, the decoder may unknowingly produce an apparently valid message that is not the one that was sent.

===Calculate the syndromes===
The received vector <math>R</math> is the sum of the correct codeword <math>C</math> and an unknown error vector <math>E.</math> The syndrome values are formed by considering <math>R</math> as a polynomial and evaluating it at <math>\alpha^c, \ldots, \alpha^{c+d-2}.</math> Thus the syndromes are<ref>{{Harvnb|Lidl|Pilz|1999|p=229}}</ref>

:<math>s_j = R\left(\alpha^j\right) = C\left(\alpha^j\right) + E\left(\alpha^j\right)</math>

for <math>j = c</math> to <math>c + d - 2.</math>

Since the <math>\alpha^{j}</math> are zeros of <math>g(x),</math> of which <math>C(x)</math> is a multiple, <math>C\left(\alpha^j\right) = 0.</math> Examining the syndrome values thus isolates the error vector, so one can begin to solve for it.

If there is no error, <math>s_j = 0</math> for all <math>j.</math> If the syndromes are all zero, then the decoding is done.
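As a concrete illustration, the following minimal Python sketch computes the syndromes of a binary BCH code over GF(2<sup>4</sup>) with primitive polynomial ''x''<sup>4</sup> + ''x'' + 1, the field used in the worked examples below. The helper names are illustrative, not from any standard library.

<syntaxhighlight lang="python">
def gf16_tables():
    """Exp/log tables for GF(2^4) with primitive element alpha = 2."""
    exp, log = [0] * 30, [0] * 16
    x = 1
    for i in range(15):
        exp[i] = x
        log[x] = i
        x <<= 1
        if x & 0x10:              # reduce modulo x^4 + x + 1
            x ^= 0x13
    for i in range(15, 30):       # duplicate so summed exponents need no reduction
        exp[i] = exp[i - 15]
    return exp, log

EXP, LOG = gf16_tables()

def syndromes(received, d, c=1):
    """s_j = R(alpha^j) for j = c, ..., c+d-2; received[i] is the coefficient of x^i."""
    synd = []
    for j in range(c, c + d - 1):
        s = 0
        for i, bit in enumerate(received):
            if bit:
                s ^= EXP[(i * j) % 15]    # addition in GF(2^m) is bitwise xor
        synd.append(s)
    return synd
</syntaxhighlight>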
===Calculate the error location polynomial===
If there are nonzero syndromes, then there are errors. The decoder needs to determine how many errors there are and where they are located.

If there is a single error, write it as <math>E(x) = e\,x^i,</math> where <math>i</math> is the location of the error and <math>e</math> is its magnitude. Then the first two syndromes are

:<math>\begin{align}
s_c &= e\,\alpha^{c\,i} \\
s_{c+1} &= e\,\alpha^{(c+1)\,i} = \alpha^i s_c
\end{align}</math>

so together they allow us to calculate <math>e</math> and provide some information about <math>i</math> (completely determining it in the case of Reed–Solomon codes).

If there are two or more errors,

:<math>E(x) = e_1 x^{i_1} + e_2 x^{i_2} + \cdots \, </math>

It is not immediately obvious how to begin solving the resulting syndromes for the unknowns <math>e_k</math> and <math>i_k.</math> The first step is to find a locator polynomial

:<math>\Lambda(x) = \prod_{j=1}^t \left(x\alpha^{i_j} - 1\right)</math>

compatible with the computed syndromes and with <math>t</math> as small as possible. Three popular algorithms for this task are:
# [[#Peterson–Gorenstein–Zierler algorithm|Peterson–Gorenstein–Zierler algorithm]]
# [[Berlekamp–Massey algorithm]]
# [[Reed–Solomon error correction#Euclidean decoder|Sugiyama Euclidean algorithm]]

====Peterson–Gorenstein–Zierler algorithm====
<!-- this confuses t (max number of errors that can be corrected) with ν (actual number of errors) -->
Peterson's algorithm implements step 2 of the generalized BCH decoding procedure: it calculates the coefficients <math>\lambda_1, \lambda_2, \dots, \lambda_v</math> of the error locator polynomial

:<math>\Lambda(x) = 1 + \lambda_1 x + \lambda_2 x^2 + \cdots + \lambda_v x^v.</math>

The Peterson–Gorenstein–Zierler algorithm proceeds as follows.<ref>{{Harvnb|Gorenstein|Peterson|Zierler|1960}}</ref> Suppose we have at least 2''t'' syndromes ''s''<sub>''c''</sub>, …, ''s''<sub>''c''+2''t''−1</sub>. Let ''v'' = ''t''.
{{ordered list
| Start by generating the <math>S_{v\times v}</math> matrix with syndrome values as elements:
:<math>S_{v \times v}=\begin{bmatrix}s_c&s_{c+1}&\dots&s_{c+v-1}\\
s_{c+1}&s_{c+2}&\dots&s_{c+v}\\
\vdots&\vdots&\ddots&\vdots\\
s_{c+v-1}&s_{c+v}&\dots&s_{c+2v-2}\end{bmatrix}.
</math>
| Generate the <math>C_{v \times 1}</math> vector with elements
:<math>C_{v \times 1}=\begin{bmatrix}s_{c+v}\\
s_{c+v+1}\\
\vdots\\
s_{c+2v-1}\end{bmatrix}.
</math>
| Let <math>\Lambda_{v \times 1}</math> denote the vector of unknown polynomial coefficients:
:<math>\Lambda_{v \times 1} = \begin{bmatrix}\lambda_{v}\\
\lambda_{v-1}\\
\vdots\\
\lambda_{1}\end{bmatrix}.
</math>
| Form the matrix equation
:<math>S_{v \times v} \Lambda_{v \times 1} = -C_{v \times 1\,} .</math>
| If the determinant of <math>S_{v \times v}</math> is nonzero, invert the matrix and solve for the unknown coefficients <math>\Lambda.</math>
| If <math>\det\left(S_{v \times v}\right) = 0,</math> then: if <math>v = 0,</math> declare an empty error locator polynomial and stop the procedure; otherwise, set <math>v \leftarrow v - 1</math> and continue from the beginning of the procedure with the smaller matrix <math>S_{v \times v}.</math>
| Once the coefficients <math>\Lambda</math> are known, they determine the error locator polynomial.
| Stop the procedure.
}}
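A minimal Python sketch of this procedure, reusing the <code>EXP</code>/<code>LOG</code> tables from the syndrome sketch above (helper names are illustrative; in characteristic 2 the sign of <math>C_{v \times 1}</math> is immaterial):

<syntaxhighlight lang="python">
def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def gf_inv(a):
    return EXP[15 - LOG[a]]          # multiplicative inverse, a != 0

def solve(S, C):
    """Gauss-Jordan elimination over GF(2^4); returns None if S is singular."""
    n = len(C)
    aug = [row[:] + [C[i]] for i, row in enumerate(S)]
    for col in range(n):
        piv = next((r for r in range(col, n) if aug[r][col]), None)
        if piv is None:
            return None
        aug[col], aug[piv] = aug[piv], aug[col]
        inv = gf_inv(aug[col][col])
        aug[col] = [gf_mul(inv, x) for x in aug[col]]
        for r in range(n):
            if r != col and aug[r][col]:
                f = aug[r][col]
                aug[r] = [x ^ gf_mul(f, y) for x, y in zip(aug[r], aug[col])]
    return [row[n] for row in aug]

def pgz(synd, t):
    """Return [lambda_1, ..., lambda_v], or [] if no errors are located."""
    for v in range(t, 0, -1):
        S = [[synd[i + j] for j in range(v)] for i in range(v)]
        C = [synd[v + i] for i in range(v)]   # -C equals C in characteristic 2
        sol = solve(S, C)
        if sol is not None:
            return sol[::-1]                  # sol is [lambda_v, ..., lambda_1]
    return []
</syntaxhighlight>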
===Factor error locator polynomial===
Now that the polynomial <math>\Lambda(x)</math> is known, its roots can be found in the form

:<math>\Lambda(x) = \left(\alpha^{i_1} x - 1\right)\left(\alpha^{i_2} x - 1\right) \cdots \left(\alpha^{i_v} x - 1\right)</math>

by brute force, for example using the [[Chien search]] algorithm. The exponents of the primitive element <math>\alpha</math> yield the positions where errors occur in the received word; hence the name "error locator" polynomial. The zeros of Λ(''x'') are ''α''<sup>−''i''<sub>1</sub></sup>, …, ''α''<sup>−''i''<sub>''v''</sub></sup>.

===Calculate error values===
Once the error locations are known, the next step is to determine the error values at those locations. The error values are then used to correct the received values at those locations to recover the original codeword.

For a binary BCH code (with all characters readable), this is trivial: just flip the bits of the received word at those positions to obtain the corrected codeword. In the more general case, the error weights <math>e_j</math> can be determined by solving the linear system

:<math>\begin{align}
s_c & = e_1 \alpha^{c\,i_1} + e_2 \alpha^{c\,i_2} + \cdots \\
s_{c+1} & = e_1 \alpha^{(c + 1)\,i_1} + e_2 \alpha^{(c + 1)\,i_2} + \cdots \\
& {}\ \vdots
\end{align}</math>

==== Forney algorithm ====
However, there is a more efficient method known as the [[Forney algorithm]]. Let

:<math>S(x) = s_c + s_{c+1}x + s_{c+2}x^2 + \cdots + s_{c+d-2}x^{d-2},</math>

:<math>\Lambda(x) = \sum_{i=0}^v\lambda_i x^i = \lambda_0 \prod_{k=1}^{v} \left(\alpha^{i_k}x - 1\right), \qquad v \leqslant d-1, \quad \lambda_0 \neq 0,</math>

and define the error evaluator polynomial<ref name="Gill-Forney">{{Harvnb|Gill|n.d.|p=47}}</ref>

:<math>\Omega(x) \equiv S(x) \Lambda(x) \bmod{x^{d-1}}.</math>

Finally, let

:<math>\Lambda'(x) = \sum_{i=1}^v i \cdot \lambda_i x^{i-1},</math>

where

:<math>i \cdot x := \sum_{k=1}^i x.</math>

Then, if the syndromes can be explained by an error word that is nonzero only at the positions <math>i_k,</math> the error values are

:<math>e_k = -{\alpha^{i_k}\Omega\left(\alpha^{-i_k}\right) \over \alpha^{c\cdot i_k}\Lambda'\left(\alpha^{-i_k}\right)}.</math>

For narrow-sense BCH codes, ''c'' = 1, so the expression simplifies to

:<math>e_k = -{\Omega\left(\alpha^{-i_k}\right) \over \Lambda'\left(\alpha^{-i_k}\right)}.</math>
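For the narrow-sense case ''c'' = 1, a minimal sketch of the root search and the Forney formula, again reusing the GF(2<sup>4</sup>) helpers above (function names are illustrative):

<syntaxhighlight lang="python">
def poly_eval(poly, x):
    """Evaluate poly at x by Horner's rule; poly[i] is the coefficient of x^i."""
    y = 0
    for coef in reversed(poly):
        y = gf_mul(y, x) ^ coef
    return y

def error_locations(lam, n=15):
    """Brute-force search: positions i with Lambda(alpha^{-i}) = 0."""
    return [i for i in range(n) if poly_eval(lam, EXP[(15 - i) % 15]) == 0]

def omega(synd, lam, d):
    """Error evaluator Omega(x) = S(x) Lambda(x) mod x^{d-1}."""
    out = [0] * (d - 1)
    for i, s in enumerate(synd):
        for j, l in enumerate(lam):
            if i + j < d - 1:
                out[i + j] ^= gf_mul(s, l)
    return out

def formal_derivative(poly):
    """In characteristic 2 the even-i terms of sum i*lambda_i x^(i-1) vanish."""
    return [poly[i] if i % 2 == 1 else 0 for i in range(1, len(poly))]

def forney(om, lam, locs):
    """Error value at each located position (narrow-sense case, c = 1)."""
    dlam = formal_derivative(lam)
    return [gf_mul(poly_eval(om, EXP[(15 - i) % 15]),
                   gf_inv(poly_eval(dlam, EXP[(15 - i) % 15])))
            for i in locs]
</syntaxhighlight>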
==== Explanation of Forney algorithm computation ====
The Forney algorithm is based on [[Lagrange polynomial|Lagrange interpolation]] and techniques of [[generating function]]s.

Consider <math>S(x)\Lambda(x),</math> and for the sake of simplicity suppose <math>\lambda_k = 0</math> for <math>k > v,</math> and <math>s_k = 0</math> for <math>k > c + d - 2.</math> Then

:<math>S(x)\Lambda(x) = \sum_{j=0}^{\infty}\sum_{i=0}^j s_{j-i+c}\lambda_i x^j.</math>

:<math>\begin{align}
S(x)\Lambda(x) &= S(x) \left \{ \lambda_0\prod_{\ell=1}^v \left (\alpha^{i_\ell}x-1 \right ) \right \} \\
&= \left \{ \sum_{i=0}^{d-2}\sum_{j=1}^v e_j\alpha^{(c+i)\cdot i_j} x^i \right \} \left \{ \lambda_0\prod_{\ell=1}^v \left (\alpha^{i_\ell}x-1 \right ) \right \} \\
&= \left \{ \sum_{j=1}^v e_j \alpha^{c i_j}\sum_{i=0}^{d-2} \left (\alpha^{i_j} \right )^i x^i \right \} \left \{ \lambda_0\prod_{\ell=1}^v \left (\alpha^{i_\ell}x-1 \right ) \right \} \\
&= \left \{ \sum_{j=1}^v e_j \alpha^{c i_j} \frac{\left (x \alpha^{i_j} \right )^{d-1}-1}{x \alpha^{i_j}-1} \right \} \left \{ \lambda_0 \prod_{\ell=1}^v \left (\alpha^{i_\ell}x-1 \right ) \right \} \\
&= \lambda_0 \sum_{j=1}^v e_j\alpha^{c i_j} \frac{ \left (x\alpha^{i_j} \right)^{d-1}-1}{x\alpha^{i_j}-1} \prod_{\ell=1}^v \left (\alpha^{i_\ell}x-1 \right ) \\
&= \lambda_0 \sum_{j=1}^v e_j\alpha^{c i_j} \left ( \left (x\alpha^{i_j} \right)^{d-1}-1 \right ) \prod_{\ell\in\{1,\cdots,v\}\setminus\{j\}} \left (\alpha^{i_\ell}x-1 \right )
\end{align}</math>

We want to compute the unknowns <math>e_j,</math> and we can simplify the context by removing the <math>\left(x\alpha^{i_j}\right)^{d-1}</math> terms. This leads to the error evaluator polynomial

:<math>\Omega(x) \equiv S(x) \Lambda(x) \bmod{x^{d-1}}.</math>

Thanks to <math>v\leqslant d-1</math> we have

:<math>\Omega(x) = -\lambda_0\sum_{j=1}^v e_j\alpha^{c i_j} \prod_{\ell\in\{1,\cdots,v\}\setminus\{j\}} \left(\alpha^{i_\ell}x - 1\right).</math>

Thanks to the structure of <math>\Lambda</math> (the Lagrange interpolation trick), the sum degenerates to a single summand at <math>x = \alpha^{-i_k}</math>:

:<math>\Omega \left(\alpha^{-i_k}\right) = -\lambda_0 e_k\alpha^{c\cdot i_k}\prod_{\ell\in\{1,\cdots,v\}\setminus\{k\}} \left(\alpha^{i_\ell}\alpha^{-i_k} - 1\right).</math>

To obtain <math>e_k,</math> we only need to get rid of the product. It could be computed directly from the already-computed roots <math>\alpha^{-i_j}</math> of <math>\Lambda,</math> but there is a simpler form. Since the [[formal derivative]] is

:<math>\Lambda'(x) = \lambda_0\sum_{j=1}^v \alpha^{i_j}\prod_{\ell\in\{1,\cdots,v\}\setminus\{j\}} \left(\alpha^{i_\ell}x - 1\right),</math>

we again get a single summand in

:<math>\Lambda'\left(\alpha^{-i_k}\right) = \lambda_0\alpha^{i_k}\prod_{\ell\in\{1,\cdots,v\}\setminus\{k\}} \left(\alpha^{i_\ell}\alpha^{-i_k} - 1\right).</math>

So finally

:<math>e_k = -\frac{\alpha^{i_k}\Omega \left(\alpha^{-i_k}\right)}{\alpha^{c\cdot i_k}\Lambda' \left(\alpha^{-i_k}\right)}.</math>

This formula is advantageous when one computes the formal derivative of <math>\Lambda</math> in the form

:<math>\Lambda(x) = \sum_{i=0}^v \lambda_i x^i,</math>

yielding

:<math>\Lambda'(x) = \sum_{i=1}^v i \cdot \lambda_i x^{i-1},</math>

where

:<math>i\cdot x := \sum_{k=1}^i x.</math>

=== Decoding based on extended Euclidean algorithm ===
An alternate process of finding both the error locator polynomial <math>\Lambda</math> and the error evaluator polynomial <math>\Omega</math> is based on Yasuo Sugiyama's adaptation of the [[extended Euclidean algorithm]].<ref>Yasuo Sugiyama, Masao Kasahara, Shigeichi Hirasawa, and Toshihiko Namekawa. A method for solving key equation for decoding Goppa codes. Information and Control, 27:87–99, 1975.</ref> Correction of unreadable characters can easily be incorporated into the algorithm as well.

Let <math>k_1, ..., k_k</math> be the positions of unreadable characters. One creates the polynomial localising these positions, <math>\Gamma(x) = \prod_{i=1}^k\left(x\alpha^{k_i} - 1\right).</math> Set the values at the unreadable positions to 0 and compute the syndromes.

As we have already defined for the Forney formula, let <math>S(x)=\sum_{i=0}^{d-2}s_{c+i}x^i.</math>

Let us run the extended Euclidean algorithm on the polynomials <math>S(x)\Gamma(x)</math> and <math>x^{d-1}.</math> The goal is not to find their greatest common divisor, but a polynomial <math>r(x)</math> of degree at most <math>\lfloor (d+k-3)/2\rfloor</math> and polynomials <math>a(x), b(x)</math> such that <math>r(x)=a(x)S(x)\Gamma(x)+b(x)x^{d-1}.</math> The low degree of <math>r(x)</math> guarantees that <math>a(x)</math> satisfies the defining conditions for <math>\Lambda</math> extended by <math>\Gamma.</math>

Defining <math>\Xi(x)=a(x)\Gamma(x)</math> and using <math>\Xi</math> in the place of <math>\Lambda(x)</math> in the Forney formula will give us the error values. The main advantage of the algorithm is that it meanwhile computes <math>\Omega(x)=S(x)\Xi(x)\bmod x^{d-1}=r(x),</math> required in the Forney formula.
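A minimal sketch of this key-equation solver, reusing the GF(2<sup>4</sup>) helpers above; polynomials are coefficient lists with the lowest power first, and the names are illustrative:

<syntaxhighlight lang="python">
def poly_deg(p):
    """Degree of p, or -1 for the zero polynomial."""
    return max((i for i, c in enumerate(p) if c), default=-1)

def poly_add(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) ^ (b[i] if i < len(b) else 0)
            for i in range(n)]

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] ^= gf_mul(x, y)
    return out

def poly_divmod(a, b):
    """Quotient and remainder of a divided by b over GF(2^4)."""
    rem, db = a[:], poly_deg(b)
    quot = [0] * max(len(rem) - db, 1)
    inv = gf_inv(b[db])
    for i in range(poly_deg(rem) - db, -1, -1):
        f = gf_mul(rem[i + db], inv)
        if f:
            quot[i] = f
            for j in range(db + 1):
                rem[i + j] ^= gf_mul(f, b[j])
    return quot, rem

def sugiyama(s_gamma, d, k=0):
    """Euclid on S(x)Gamma(x) and x^{d-1} until deg r <= (d+k-3)//2.

    Returns (a, r) with r = a S Gamma mod x^{d-1}, so Xi = a Gamma and
    Omega = r, up to a common scalar that cancels in the Forney formula."""
    r_prev, a_prev = s_gamma[:], [1]
    r, a = [0] * (d - 1) + [1], [0]           # x^{d-1}
    if poly_deg(r_prev) < poly_deg(r):
        r_prev, r, a_prev, a = r, r_prev, a, a_prev
    while poly_deg(r) > (d + k - 3) // 2:
        quot, rem = poly_divmod(r_prev, r)
        r_prev, r = r, rem
        a_prev, a = a, poly_add(a_prev, poly_mul(quot, a))
    return a, r
</syntaxhighlight>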
==== Explanation of the decoding process ====
The goal is to find a codeword which differs from the received word as little as possible on the readable positions. Expressing the received word as the sum of the nearest codeword and an error word, we are trying to find an error word with the minimal number of non-zeros on readable positions.

The syndrome <math>s_i</math> restricts the error word by the condition

:<math>s_i=\sum_{j=0}^{n-1}e_j\alpha^{ij}.</math>

We could write these conditions separately, or we could create the polynomial

:<math>S(x)=\sum_{i=0}^{d-2}s_{c+i}x^i</math>

and compare the coefficients of the powers <math>0</math> to <math>d-2</math>:

:<math>S(x) \stackrel{\{0,\cdots,\,d-2\}}{=} E(x)=\sum_{i=0}^{d-2}\sum_{j=0}^{n-1}e_j\alpha^{ij}\alpha^{cj}x^i.</math>

Suppose there is an unreadable letter at position <math>k_1.</math> We can then replace the set of syndromes <math>\{s_c,\cdots,s_{c+d-2}\}</math> by the set of syndromes <math>\{t_c,\cdots,t_{c+d-3}\}</math> defined by the equation <math>t_i=\alpha^{k_1}s_i-s_{i+1}.</math> If all restrictions imposed by the original set of syndromes <math>\{s_c,\cdots,s_{c+d-2}\}</math> hold for an error word, then

:<math>t_i=\alpha^{k_1}s_i-s_{i+1}=\alpha^{k_1}\sum_{j=0}^{n-1}e_j\alpha^{ij}-\sum_{j=0}^{n-1}e_j\alpha^j\alpha^{ij}=\sum_{j=0}^{n-1}e_j\left(\alpha^{k_1} - \alpha^j\right)\alpha^{ij}.</math>

The new set of syndromes restricts the error vector

:<math>f_j=e_j\left(\alpha^{k_1} - \alpha^j\right)</math>

in the same way the original set of syndromes restricted the error vector <math>e_j.</math> Except for the coordinate <math>k_1,</math> where <math>f_{k_1}=0,</math> an <math>f_j</math> is zero if, and only if, <math>e_j = 0.</math> For the purpose of locating error positions, we can change the set of syndromes in a similar way to reflect all unreadable characters. This shortens the set of syndromes by <math>k.</math>

In polynomial formulation, the replacement of the syndrome set <math>\{s_c,\cdots,s_{c+d-2}\}</math> by the syndrome set <math>\{t_c,\cdots,t_{c+d-3}\}</math> leads to

:<math>T(x) = \sum_{i=0}^{d-3}t_{c+i}x^i=\alpha^{k_1}\sum_{i=0}^{d-3}s_{c+i}x^i-\sum_{i=1}^{d-2}s_{c+i}x^{i-1}.</math>

Therefore,

:<math>xT(x) \stackrel{\{1,\cdots,\,d-2\}}{=} \left(x\alpha^{k_1} - 1\right)S(x).</math>

After the replacement of <math>S(x)</math> by <math>S(x)\Gamma(x),</math> one would require the equation to hold for the coefficients of the powers <math>k,\cdots,d-2.</math>

One can view the search for error positions as eliminating the influence of given positions, similarly as for unreadable characters. If we find <math>v</math> positions such that eliminating their influence leads to a set of syndromes consisting of all zeros, then there exists an error vector with errors only on these coordinates. If <math>\Lambda(x)</math> denotes the polynomial eliminating the influence of these coordinates, we obtain

:<math>S(x)\Gamma(x)\Lambda(x) \stackrel{\{k+v, \cdots, d-2\}}{=} 0.</math>

In the Euclidean algorithm, we try to correct at most <math>\tfrac{1}{2}(d-1-k)</math> errors (on readable positions), because with a larger error count there could be several codewords at the same distance from the received word. Therefore, for the <math>\Lambda(x)</math> we are looking for, the equation must hold for the coefficients of the powers starting from

:<math>k + \left\lfloor \frac{1}{2} (d-1-k) \right\rfloor.</math>

In the Forney formula, <math>\Lambda(x)</math> can be multiplied by a scalar giving the same result.

It could happen that the Euclidean algorithm finds a <math>\Lambda(x)</math> of degree higher than <math>\tfrac{1}{2}(d-1-k)</math> whose number of distinct roots equals its degree; the Forney formula would then be able to correct errors at all of its roots, but correcting so many errors could be risky (especially with no other restrictions on the received word).
Usually, after getting a <math>\Lambda(x)</math> of higher degree, we decide not to correct the errors. Correction could fail in the case that <math>\Lambda(x)</math> has roots of higher multiplicity, or that its number of roots is smaller than its degree. Failure can also be detected by the Forney formula returning an error value outside the transmitted alphabet.

===Correct the errors===
Using the error values and error locations, correct the errors and form a corrected code vector by subtracting the error values at the error locations.

===Decoding examples===

==== Decoding of binary code without unreadable characters ====
Consider a BCH code in GF(2<sup>4</sup>) with <math>d=7</math> and <math>g(x) = x^{10} + x^8 + x^5 + x^4 + x^2 + x + 1</math>. (This is used in [[QR code]]s.) Let the message to be transmitted be <nowiki>[1 1 0 1 1]</nowiki>, or in polynomial notation, <math>M(x) = x^4 + x^3 + x + 1.</math>

The "checksum" symbols are calculated by dividing <math>x^{10} M(x)</math> by <math>g(x)</math> and taking the remainder, resulting in <math>x^9 + x^4 + x^2</math> or <nowiki>[ 1 0 0 0 0 1 0 1 0 0 ]</nowiki>. These are appended to the message, so the transmitted codeword is <nowiki>[ 1 1 0 1 1 1 0 0 0 0 1 0 1 0 0 ]</nowiki>.

Now, imagine that there are two bit-errors in the transmission, so the received codeword is [ 1 {{color|red|0}} 0 1 1 1 0 0 0 {{color|red|1}} 1 0 1 0 0 ]. In polynomial notation:

:<math>R(x) = C(x) + x^{13} + x^5 = x^{14} + x^{11} + x^{10} + x^9 + x^5 + x^4 + x^2</math>

In order to correct the errors, first calculate the syndromes. Taking <math>\alpha = 0010,</math> we have <math>s_1 = R(\alpha^1) = 1011,</math> <math>s_2 = 1001,</math> <math>s_3 = 1011,</math> <math>s_4 = 1101,</math> <math>s_5 = 0001,</math> and <math>s_6 = 1001.</math>

Next, apply the Peterson procedure by row-reducing the following augmented matrix:

:<math>\left [ S_{3 \times 3} | C_{3 \times 1} \right ] = \begin{bmatrix}s_1&s_2&s_3&s_4\\
s_2&s_3&s_4&s_5\\
s_3&s_4&s_5&s_6\end{bmatrix} = \begin{bmatrix}1011&1001&1011&1101\\
1001&1011&1101&0001\\
1011&1101&0001&1001\end{bmatrix} \Rightarrow \begin{bmatrix}0001&0000&1000&0111\\
0000&0001&1011&0001\\
0000&0000&0000&0000
\end{bmatrix}</math>

Due to the zero row, {{math|''S''<sub>3×3</sub>}} is singular, which is no surprise since only two errors were introduced into the codeword. However, the upper-left corner of the matrix is identical to {{closed-closed|''S''<sub>2×2</sub> <nowiki>|</nowiki> ''C''<sub>2×1</sub>}}, which gives rise to the solution <math>\lambda_2 = 1000,</math> <math>\lambda_1 = 1011.</math> The resulting error locator polynomial is <math>\Lambda(x) = 1000 x^2 + 1011 x + 0001,</math> which has zeros at <math>0100 = \alpha^{-13}</math> and <math>0111 = \alpha^{-5}.</math> The exponents of <math>\alpha</math> correspond to the error locations. There is no need to calculate the error values in this example, as the only possible value is 1.
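Using the illustrative sketches defined earlier (the coefficient of ''x''<sup>''i''</sup> is stored at list index ''i'', so the word reads right-to-left), this example can be reproduced as follows:

<syntaxhighlight lang="python">
# R(x) = x^14 + x^11 + x^10 + x^9 + x^5 + x^4 + x^2: the transmitted
# codeword with bit errors at x^13 and x^5.
R = [0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1]

synd = syndromes(R, d=7)      # [0b1011, 0b1001, 0b1011, 0b1101, 0b0001, 0b1001]
lam = [1] + pgz(synd, t=3)    # Lambda(x) = 1 + lambda_1 x + lambda_2 x^2
locs = error_locations(lam)   # [5, 13]
for i in locs:                # binary code: just flip the located bits
    R[i] ^= 1
assert syndromes(R, d=7) == [0] * 6
</syntaxhighlight>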
==== Decoding with unreadable characters ====
Suppose the same scenario, but the received word has two unreadable characters: [ 1 {{color|red|0}} 0 ? 1 1 ? 0 0 {{color|red|1}} 1 0 1 0 0 ]. We replace the unreadable characters by zeros while creating the polynomial reflecting their positions, <math>\Gamma(x) = \left(\alpha^8x - 1\right)\left(\alpha^{11}x - 1\right).</math> We compute the syndromes <math>s_1=\alpha^{-7}, s_2=\alpha^{1}, s_3=\alpha^{4}, s_4=\alpha^{2}, s_5=\alpha^{5},</math> and <math>s_6=\alpha^{-7}.</math> (We use log notation, which is independent of GF(2<sup>4</sup>) isomorphisms. For checking the computations, we can use the same representation for addition as in the previous example: the hexadecimal descriptions of the powers of <math>\alpha</math> are consecutively 1,2,4,8,3,6,C,B,5,A,7,E,F,D,9, with addition based on bitwise xor.)

Let us construct the syndrome polynomial

:<math>S(x)=\alpha^{-7}+\alpha^{1}x+\alpha^{4}x^2+\alpha^{2}x^3+\alpha^{5}x^4+\alpha^{-7}x^5,</math>

and compute

:<math>S(x)\Gamma(x)=\alpha^{-7}+\alpha^{4}x+\alpha^{-1}x^2+\alpha^{6}x^3+\alpha^{-1}x^4+\alpha^{5}x^5+\alpha^{7}x^6+\alpha^{-3}x^7.</math>

Run the extended Euclidean algorithm:

:<math>\begin{align}
&\begin{pmatrix}S(x)\Gamma(x)\\ x^6\end{pmatrix} \\[6pt]
={} &\begin{pmatrix}\alpha^{-7} +\alpha^{4}x+ \alpha^{-1}x^2+ \alpha^{6}x^3+ \alpha^{-1}x^4+ \alpha^{5}x^5 +\alpha^{7}x^6+ \alpha^{-3}x^7 \\ x^6\end{pmatrix} \\[6pt]
={} &\begin{pmatrix}\alpha^{7}+ \alpha^{-3}x & 1\\ 1 & 0\end{pmatrix} \begin{pmatrix}x^6\\ \alpha^{-7} +\alpha^{4}x +\alpha^{-1}x^2 +\alpha^{6}x^3 +\alpha^{-1}x^4 +\alpha^{5}x^5 +2\alpha^{7}x^6 +2\alpha^{-3}x^7\end{pmatrix} \\[6pt]
={} &\begin{pmatrix}\alpha^{7}+ \alpha^{-3}x & 1\\ 1 & 0\end{pmatrix} \begin{pmatrix}\alpha^4 + \alpha^{-5}x & 1\\ 1 & 0\end{pmatrix} \\
&\qquad \begin{pmatrix}\alpha^{-7}+ \alpha^{4}x+ \alpha^{-1}x^2+ \alpha^{6}x^3+ \alpha^{-1}x^4+ \alpha^{5}x^5\\ \alpha^{-3} +\left(\alpha^{-7}+ \alpha^{3}\right)x+ \left(\alpha^{3}+ \alpha^{-1}\right)x^2+ \left(\alpha^{-5}+ \alpha^{-6}\right)x^3+ \left(\alpha^3+ \alpha^{1}\right)x^4+ 2\alpha^{-6}x^5+ 2x^6\end{pmatrix} \\[6pt]
={} &\begin{pmatrix}\left(1+ \alpha^{-4}\right)+ \left(\alpha^{1}+ \alpha^{2}\right)x+ \alpha^{7}x^2 & \alpha^{7}+ \alpha^{-3}x \\ \alpha^4+ \alpha^{-5}x & 1\end{pmatrix} \begin{pmatrix}\alpha^{-7}+ \alpha^{4}x+ \alpha^{-1}x^2+ \alpha^{6}x^3+ \alpha^{-1}x^4+ \alpha^{5}x^5\\ \alpha^{-3}+ \alpha^{-2}x+ \alpha^{0}x^2+ \alpha^{-2}x^3+ \alpha^{-6}x^4\end{pmatrix} \\[6pt]
={} &\begin{pmatrix}\alpha^{-3}+ \alpha^{5}x+ \alpha^{7}x^2 & \alpha^{7}+ \alpha^{-3}x \\ \alpha^4+ \alpha^{-5}x & 1\end{pmatrix} \begin{pmatrix}\alpha^{-5}+ \alpha^{-4}x & 1\\ 1 & 0 \end{pmatrix} \\
&\qquad \begin{pmatrix}\alpha^{-3}+ \alpha^{-2}x+ \alpha^{0}x^2+ \alpha^{-2}x^3+ \alpha^{-6}x^4\\ \left(\alpha^{7}+ \alpha^{-7}\right)+ \left(2\alpha^{-7}+ \alpha^{4}\right)x+ \left(\alpha^{-5}+ \alpha^{-6}+ \alpha^{-1}\right)x^2+ \left(\alpha^{-7}+ \alpha^{-4}+ \alpha^{6}\right)x^3+ \left(\alpha^{4}+ \alpha^{-6}+ \alpha^{-1}\right)x^4+ 2\alpha^{5}x^5\end{pmatrix} \\[6pt]
={} &\begin{pmatrix}\alpha^{7}x+ \alpha^{5}x^2+ \alpha^{3}x^3 & \alpha^{-3}+ \alpha^{5}x+ \alpha^{7}x^2\\ \alpha^{3}+ \alpha^{-5}x+ \alpha^{6}x^2 & \alpha^4+ \alpha^{-5}x\end{pmatrix} \begin{pmatrix}\alpha^{-3}+ \alpha^{-2}x+ \alpha^{0}x^2+ \alpha^{-2}x^3+ \alpha^{-6}x^4\\ \alpha^{-4}+ \alpha^{4}x+ \alpha^{2}x^2+ \alpha^{-5}x^3\end{pmatrix}.
\end{align}</math>

We have reached a polynomial of degree at most 3, and as

:<math>\begin{pmatrix}-\left(\alpha^4+ \alpha^{-5}x\right) & \alpha^{-3}+ \alpha^{5}x+ \alpha^{7}x^2\\ \alpha^{3}+ \alpha^{-5}x+ \alpha^{6}x^2 & -\left(\alpha^{7}x+ \alpha^{5}x^2+ \alpha^{3}x^3\right)\end{pmatrix} \begin{pmatrix}\alpha^{7}x+ \alpha^{5}x^2+ \alpha^{3}x^3 & \alpha^{-3} + \alpha^{5}x + \alpha^{7}x^2\\ \alpha^{3} + \alpha^{-5}x + \alpha^{6}x^2 & \alpha^4 + \alpha^{-5}x\end{pmatrix} = \begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix},</math>

we get

:<math>\begin{pmatrix}-\left(\alpha^4+ \alpha^{-5}x\right) & \alpha^{-3}+ \alpha^{5}x+ \alpha^{7}x^2\\ \alpha^{3}+ \alpha^{-5}x+ \alpha^{6}x^2 & -\left(\alpha^{7}x+ \alpha^{5}x^2+ \alpha^{3}x^3\right)\end{pmatrix} \begin{pmatrix}S(x)\Gamma(x)\\ x^6\end{pmatrix} = \begin{pmatrix}\alpha^{-3}+ \alpha^{-2}x+ \alpha^{0}x^2+ \alpha^{-2}x^3+ \alpha^{-6}x^4\\ \alpha^{-4}+ \alpha^{4}x+ \alpha^{2}x^2+ \alpha^{-5}x^3\end{pmatrix}.</math>

Therefore,

:<math>S(x)\Gamma(x)\left(\alpha^{3} + \alpha^{-5}x + \alpha^{6}x^2\right) - \left(\alpha^{7}x + \alpha^{5}x^2 + \alpha^{3}x^3\right)x^6 = \alpha^{-4} + \alpha^{4}x + \alpha^{2}x^2 + \alpha^{-5}x^3.</math>

Let <math>\Lambda(x) = \alpha^{3}+ \alpha^{-5}x+ \alpha^{6}x^2</math> (it does not matter that <math>\lambda_0\neq 1</math>). Find the roots of <math>\Lambda</math> by brute force: they are <math>\alpha^2</math> and <math>\alpha^{10}</math> (after finding, for example, <math>\alpha^2,</math> we can divide <math>\Lambda</math> by the corresponding monomial <math>\left(x - \alpha^2\right),</math> and the root of the resulting monomial can be found easily). Let

:<math>\begin{align}
\Xi(x) &= \Gamma(x)\Lambda(x) = \alpha^3 + \alpha^4x^2 + \alpha^2x^3 + \alpha^{-5}x^4 \\
\Omega(x) &= S(x)\Xi(x) \equiv \alpha^{-4} + \alpha^4x + \alpha^2x^2 + \alpha^{-5}x^3 \bmod{x^6}
\end{align}</math>

Let us look for the error values using the formula

:<math>e_j = -\frac{\Omega \left(\alpha^{-i_j} \right)}{\Xi' \left(\alpha^{-i_j} \right)},</math>

where <math>\alpha^{-i_j}</math> are the roots of <math>\Xi(x).</math> Here <math>\Xi'(x)=\alpha^{2}x^2.</math> We get

:<math>\begin{align}
e_1 &=-\frac{\Omega(\alpha^4)}{\Xi'(\alpha^{4})} = \frac{\alpha^{-4}+\alpha^{-7}+\alpha^{-5}+\alpha^{7}}{\alpha^{-5}} =\frac{\alpha^{-5}}{\alpha^{-5}}=1 \\
e_2 &=-\frac{\Omega(\alpha^7)}{\Xi'(\alpha^{7})} = \frac{\alpha^{-4}+\alpha^{-4}+\alpha^{1}+\alpha^{1}}{\alpha^{1}}=0 \\
e_3 &=-\frac{\Omega(\alpha^{10})}{\Xi'(\alpha^{10})} = \frac{\alpha^{-4}+\alpha^{-1}+\alpha^{7}+\alpha^{-5}}{\alpha^{7}}=\frac{\alpha^{7}}{\alpha^{7}}=1 \\
e_4 &=-\frac{\Omega(\alpha^{2})}{\Xi'(\alpha^{2})} = \frac{\alpha^{-4}+\alpha^{6}+\alpha^{6}+\alpha^{1}}{\alpha^{6}}=\frac{\alpha^{6}}{\alpha^{6}}=1
\end{align}</math>

The fact that <math>e_3=e_4=1</math> should not be surprising. The corrected code is therefore [ 1 {{color|green|1}} 0 {{color|green|1}} 1 1 {{color|green|0}} 0 0 {{color|green|0}} 1 0 1 0 0].
==== Decoding with unreadable characters with a small number of errors ====
Let us show the algorithm's behaviour for a case with a small number of errors. Let the received word be [ 1 {{color|red|0}} 0 ? 1 1 ? 0 0 0 1 0 1 0 0 ].

Again, replace the unreadable characters by zeros while creating the polynomial reflecting their positions, <math>\Gamma(x) = \left(\alpha^{8}x - 1\right)\left(\alpha^{11}x - 1\right).</math> Compute the syndromes <math>s_1 = \alpha^{4}, s_2 = \alpha^{-7}, s_3 = \alpha^{1}, s_4 = \alpha^{1}, s_5 = \alpha^{0},</math> and <math>s_6 = \alpha^{2}.</math> Create the syndrome polynomial

:<math>\begin{align}
S(x) &= \alpha^{4} + \alpha^{-7}x + \alpha^{1}x^2 + \alpha^{1}x^3 + \alpha^{0}x^4 + \alpha^{2}x^5, \\
S(x)\Gamma(x) &= \alpha^{4} + \alpha^{7}x + \alpha^{5}x^2 + \alpha^{3}x^3 + \alpha^{1}x^4 + \alpha^{-1}x^5 + \alpha^{-1}x^6 + \alpha^{6}x^7.
\end{align}</math>

Let us run the extended Euclidean algorithm:

:<math>\begin{align}
\begin{pmatrix} S(x)\Gamma(x) \\ x^6 \end{pmatrix}
&= \begin{pmatrix} \alpha^{4} + \alpha^{7}x + \alpha^{5}x^2 + \alpha^{3}x^3 + \alpha^{1}x^4 + \alpha^{-1}x^5 + \alpha^{-1}x^6 + \alpha^{6}x^7 \\ x^6 \end{pmatrix} \\
&= \begin{pmatrix} \alpha^{-1} + \alpha^{6}x & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} x^6 \\ \alpha^{4} + \alpha^{7}x + \alpha^{5}x^2 + \alpha^{3}x^3 + \alpha^{1}x^4 + \alpha^{-1}x^5 + 2\alpha^{-1}x^6 + 2\alpha^{6}x^7 \end{pmatrix} \\
&= \begin{pmatrix} \alpha^{-1} + \alpha^{6}x & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} \alpha^{3} + \alpha^{1}x & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} \alpha^{4} + \alpha^{7}x + \alpha^{5}x^2 + \alpha^{3}x^3 + \alpha^{1}x^4 + \alpha^{-1}x^5 \\ \alpha^{7} + \left(\alpha^{-5} + \alpha^{5}\right)x + 2\alpha^{-7}x^2 + 2\alpha^{6}x^3 + 2\alpha^{4}x^4 + 2\alpha^{2}x^5 + 2x^6 \end{pmatrix} \\
&= \begin{pmatrix} \left(1 + \alpha^{2}\right) + \left(\alpha^{0} + \alpha^{-6}\right)x + \alpha^{7}x^2 & \alpha^{-1} + \alpha^{6}x \\ \alpha^{3} + \alpha^{1}x & 1 \end{pmatrix} \begin{pmatrix} \alpha^{4} + \alpha^{7}x + \alpha^{5}x^2 + \alpha^{3}x^3 + \alpha^{1}x^4 + \alpha^{-1}x^5 \\ \alpha^{7} + \alpha^{0}x \end{pmatrix}
\end{align}</math>

We have reached a polynomial of degree at most 3, and as

:<math>\begin{pmatrix} -1 & \alpha^{-1} + \alpha^{6}x \\ \alpha^{3} + \alpha^{1}x & -\left(\alpha^{-7} + \alpha^{7}x + \alpha^{7}x^2\right) \end{pmatrix} \begin{pmatrix} \alpha^{-7} + \alpha^{7}x + \alpha^{7}x^2 & \alpha^{-1} + \alpha^{6}x \\ \alpha^{3} + \alpha^{1}x & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},</math>

we get

:<math>\begin{pmatrix} -1 & \alpha^{-1} + \alpha^{6}x \\ \alpha^{3} + \alpha^{1}x & -\left(\alpha^{-7} + \alpha^{7}x + \alpha^{7}x^2\right) \end{pmatrix}\begin{pmatrix} S(x)\Gamma(x) \\ x^6 \end{pmatrix} = \begin{pmatrix} \alpha^{4} + \alpha^{7}x + \alpha^{5}x^2 + \alpha^{3}x^3 + \alpha^{1}x^4 + \alpha^{-1}x^5 \\ \alpha^{7} + \alpha^{0}x \end{pmatrix}.</math>

Therefore,

:<math>S(x)\Gamma(x)\left(\alpha^{3} + \alpha^{1}x\right) - \left(\alpha^{-7} + \alpha^{7}x + \alpha^{7}x^2\right)x^6 = \alpha^{7} + \alpha^{0}x.</math>

Let <math>\Lambda(x) = \alpha^{3} + \alpha^{1}x</math> (again, it does not matter that <math>\lambda_0 \neq 1</math>). The root of <math>\Lambda(x)</math> is <math>\alpha^{3-1} = \alpha^{2}.</math> Let

:<math>\begin{align}
\Xi(x) &= \Gamma(x)\Lambda(x) = \alpha^{3} + \alpha^{-7}x + \alpha^{-4}x^2 + \alpha^{5}x^3, \\
\Omega(x) &= S(x)\Xi(x) \equiv \alpha^{7} + \alpha^{0}x \bmod{x^6}
\end{align}</math>

Let us look for the error values using the formula <math>e_j = -\Omega\left(\alpha^{-i_j}\right)/\Xi'\left(\alpha^{-i_j}\right),</math> where <math>\alpha^{-i_j}</math> are the roots of the polynomial <math>\Xi(x).</math> Here

:<math>\Xi'(x) = \alpha^{-7} + \alpha^{5}x^2.</math>

We get

:<math>\begin{align}
e_1 &= -\frac{\Omega\left(\alpha^4\right)}{\Xi'\left(\alpha^{4}\right)} = \frac{\alpha^{7} + \alpha^{4}}{\alpha^{-7} + \alpha^{-2}} = \frac{\alpha^{3}}{\alpha^{3}} = 1 \\
e_2 &= -\frac{\Omega\left(\alpha^7\right)}{\Xi'\left(\alpha^{7}\right)} = \frac{\alpha^{7} + \alpha^{7}}{\alpha^{-7} + \alpha^{4}} = 0 \\
e_3 &= -\frac{\Omega\left(\alpha^2\right)}{\Xi'\left(\alpha^2\right)} = \frac{\alpha^{7} + \alpha^{2}}{\alpha^{-7} + \alpha^{-6}} = \frac{\alpha^{-3}}{\alpha^{-3}} = 1
\end{align}</math>

The fact that <math>e_3 = 1</math> should not be surprising. The corrected code is therefore [ 1 {{color|green|1}} 0 {{color|green|1}} 1 1 {{color|green|0}} 0 0 0 1 0 1 0 0].
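The first erasure example above can likewise be reproduced with the illustrative sketches from the earlier sections; the erasure polynomial <math>\Gamma</math> is built from the erased positions, and for this binary code the Forney values come out as 0 or 1:

<syntaxhighlight lang="python">
# First erasure example: erasures at x^11 and x^8 (replaced by zeros),
# bit errors at x^13 and x^5.
R = [0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1]
erasures = [8, 11]

gamma = [1]
for pos in erasures:                        # Gamma(x) = prod (x alpha^pos - 1);
    gamma = poly_mul(gamma, [1, EXP[pos]])  # -1 equals 1 in characteristic 2

synd = syndromes(R, d=7)                    # S(x) coefficients: s_{1+i} at x^i
a, om = sugiyama(poly_mul(synd, gamma), d=7, k=len(erasures))
xi = poly_mul(a, gamma)                     # combined locator Xi(x) = a(x) Gamma(x)
locs = error_locations(xi)                  # [5, 8, 11, 13]
for i, e in zip(locs, forney(om, xi, locs)):
    R[i] ^= e                               # e = 0 at position 8, 1 elsewhere
assert syndromes(R, d=7) == [0] * 6
</syntaxhighlight>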