Euclidean algorithm
== Algorithmic efficiency ==
[[Image:Euclidean Algorithm Running Time.svg|thumb|alt="A set of colored lines radiating outwards from the origin of an ''x''-''y'' coordinate system. Each line corresponds to a set of number pairs requiring the same number of steps in the Euclidean algorithm."|Number of steps in the Euclidean algorithm for gcd(''x'',''y''). Lighter (red and yellow) points indicate relatively few steps, whereas darker (violet and blue) points indicate more steps. The largest dark area follows the line ''y'' = ''Φx'', where ''Φ'' is the [[golden ratio]].]]

The computational efficiency of Euclid's algorithm has been studied thoroughly.<ref>{{harvnb|Knuth|1997}}, pp. 339–364</ref> This efficiency can be described by the number of division steps the algorithm requires, multiplied by the computational expense of each step. The first known analysis of Euclid's algorithm is due to [[Antoine André Louis Reynaud|A. A. L. Reynaud]] in 1811,<ref>{{cite book | last = Reynaud | first = A.-A.-L. | year = 1811 | title = Traité d'arithmétique à l'usage des élèves qui se destinent à l'École Polytechnique | url = https://archive.org/details/bub_gb_YySjvK7oudIC | edition = 6th | publisher = Courcier | location = Paris | at = Note 60, p. 34 }} As cited by {{harvtxt|Shallit|1994}}.</ref> who showed that the number of division steps on input (''u'', ''v'') is bounded by ''v''; he later improved this bound to ''v''/2 + 2. In 1841, [[Pierre Joseph Étienne Finck|P. J. E. Finck]] showed<ref>{{cite book | last = Finck | first = P.-J.-E. | title = Traité élémentaire d'arithmétique à l'usage des candidats aux écoles spéciales | language = fr | publisher = Derivaux | year = 1841}}</ref> that the number of division steps is at most 2 log<sub>2</sub> ''v'' + 1, and hence Euclid's algorithm runs in time polynomial in the size of the input.{{sfn|Shallit|1994}} [[Émile Léger]], in 1837, studied the worst case, which occurs when the inputs are consecutive [[Fibonacci numbers]].{{sfn|Shallit|1994}} Finck's analysis was refined by [[Gabriel Lamé]] in 1844,<ref>{{cite journal | author-link = Gabriel Lamé | last = Lamé | first = G. | year = 1844 | title = Note sur la limite du nombre des divisions dans la recherche du plus grand commun diviseur entre deux nombres entiers | language = fr | journal = Comptes Rendus de l'Académie des Sciences | volume = 19 | pages = 867–870}}</ref> who showed that the number of steps required for completion is never more than five times the number ''h'' of base-10 digits of the smaller number ''b''.<ref>{{cite journal | last = Grossman | first = H. | year = 1924 | title = On the Number of Divisions in Finding a G.C.D | journal = The American Mathematical Monthly | volume = 31 | issue = 9 | page = 443 | doi = 10.2307/2298146 | jstor = 2298146}}</ref><ref>{{cite book | last = Honsberger | first = R. | author-link = Ross Honsberger | year = 1976 | title = Mathematical Gems II | publisher = The [[Mathematical Association of America]] | isbn = 0-88385-302-7 | pages = 54–57}}</ref> In the [[uniform cost model]] (suitable for analyzing the complexity of gcd calculation on numbers that fit into a single machine word), each step of the algorithm takes [[constant time]], and Lamé's analysis implies that the total running time is also ''O''(''h'').
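As an illustration (not part of the published analyses cited above), a minimal Python sketch can count the division steps and check Lamé's bound on a worst-case input, the consecutive Fibonacci numbers 21 and 13:

```python
def gcd_steps(a, b):
    """Euclidean algorithm; returns (gcd, number of division steps)."""
    steps = 0
    while b:
        a, b = b, a % b  # one division step: replace (a, b) by (b, a mod b)
        steps += 1
    return a, steps

# Worst case for 6 steps: the consecutive Fibonacci numbers F_8 = 21, F_7 = 13
g, n = gcd_steps(21, 13)   # g = 1, n = 6

# Lamé's theorem: the step count never exceeds five times the number
# of base-10 digits of the smaller input b.
assert n <= 5 * len(str(13))
```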
However, in a model of computation suitable for computation with larger numbers, the computational expense of a single remainder computation in the algorithm can be as large as ''O''(''h''<sup>2</sup>).<ref name="Knuth-257-261"/> In this case the total time for all of the steps of the algorithm can be analyzed using a [[telescoping series]], showing that it is also ''O''(''h''<sup>2</sup>). Modern algorithmic techniques based on the [[Schönhage–Strassen algorithm]] for fast integer multiplication can be used to speed this up, leading to [[quasilinear time|quasilinear algorithms]] for the GCD.<ref name="Crandall_2001" /><ref name="Moller08"/>

=== Number of steps ===
The number of steps to calculate the GCD of two natural numbers, ''a'' and ''b'', may be denoted by ''T''(''a'', ''b'').<ref name="Knuth, p. 344">{{harvnb|Knuth|1997}}, p. 344</ref> If ''g'' is the GCD of ''a'' and ''b'', then ''a'' = ''mg'' and ''b'' = ''ng'' for two coprime numbers ''m'' and ''n''. Then

: {{math|1=''T''(''a'', ''b'') = ''T''(''m'', ''n'')}}

as may be seen by dividing all the steps in the Euclidean algorithm by ''g''.<ref>{{Harvnb|Ore|1948|p=45}}</ref> By the same argument, the number of steps remains the same if ''a'' and ''b'' are multiplied by a common factor ''w'': ''T''(''a'', ''b'') = ''T''(''wa'', ''wb''). Therefore, the number of steps ''T'' may vary dramatically between neighboring pairs of numbers, such as ''T''(''a'', ''b'') and ''T''(''a'', ''b'' + 1), depending on the size of the two GCDs.

The recursive nature of the Euclidean algorithm gives another equation

: {{math|1=''T''(''a'', ''b'') = 1 + ''T''(''b'', ''r''<sub>0</sub>) = 2 + ''T''(''r''<sub>0</sub>, ''r''<sub>1</sub>) = … = ''N'' + ''T''(''r''<sub>''N''−2</sub>, ''r''<sub>''N''−1</sub>) = ''N'' + 1}}

where ''T''(''x'', 0) = 0 by assumption.<ref name="Knuth, p. 344"/>

==== Worst-case ====
If the Euclidean algorithm requires ''N'' steps for a pair of natural numbers ''a'' > ''b'' > 0, the smallest values of ''a'' and ''b'' for which this is true are the [[Fibonacci number]]s ''F''<sub>''N''+2</sub> and ''F''<sub>''N''+1</sub>, respectively.<ref name="Knuth, p. 343">{{harvnb|Knuth|1997}}, p. 343</ref> More precisely, if the Euclidean algorithm requires ''N'' steps for the pair ''a'' > ''b'', then one has ''a'' ≥ ''F''<sub>''N''+2</sub> and ''b'' ≥ ''F''<sub>''N''+1</sub>. This can be shown by [[mathematical induction|induction]].<ref>{{Harvnb|Mollin|2008|p=21}}</ref> If ''N'' = 1, ''b'' divides ''a'' with no remainder; the smallest natural numbers for which this is true are ''b'' = 1 and ''a'' = 2, which are ''F''<sub>2</sub> and ''F''<sub>3</sub>, respectively. Now assume that the result holds for all values of ''N'' up to ''M'' − 1. The first step of the ''M''-step algorithm is ''a'' = ''q''<sub>0</sub>''b'' + ''r''<sub>0</sub>, and the Euclidean algorithm requires ''M'' − 1 steps for the pair ''b'' > ''r''<sub>0</sub>. By the induction hypothesis, one has ''b'' ≥ ''F''<sub>''M''+1</sub> and ''r''<sub>0</sub> ≥ ''F''<sub>''M''</sub>. Therefore, ''a'' = ''q''<sub>0</sub>''b'' + ''r''<sub>0</sub> ≥ ''b'' + ''r''<sub>0</sub> ≥ ''F''<sub>''M''+1</sub> + ''F''<sub>''M''</sub> = ''F''<sub>''M''+2</sub>, which is the desired inequality. This proof, published by [[Gabriel Lamé]] in 1844, represents the beginning of [[computational complexity theory]],<ref>{{Harvnb|LeVeque|1996|p=35}}</ref> and also the first practical application of the Fibonacci numbers.<ref name="Knuth, p. 343"/>

This result suffices to show that the number of steps in Euclid's algorithm can never be more than five times the number of its digits (base 10).<ref>{{Harvnb|Mollin|2008|pp=21–22}}</ref> For if the algorithm requires ''N'' steps, then ''b'' is greater than or equal to ''F''<sub>''N''+1</sub>, which in turn is greater than or equal to ''φ''<sup>''N''−1</sup>, where ''φ'' is the [[golden ratio]]. Since ''b'' ≥ ''φ''<sup>''N''−1</sup>, then ''N'' − 1 ≤ log<sub>''φ''</sub>''b''. Since log<sub>10</sub>''φ'' > 1/5, (''N'' − 1)/5 < log<sub>10</sub>''φ'' log<sub>''φ''</sub>''b'' = log<sub>10</sub>''b''. Thus, ''N'' ≤ 5 log<sub>10</sub>''b'', so the Euclidean algorithm always requires [[Big O notation|''O''(''h'')]] divisions, where ''h'' is the number of digits in the smaller number ''b''.

==== Average ====
The average number of steps taken by the Euclidean algorithm has been defined in three different ways. The first definition is the average time ''T''(''a'') required to calculate the GCD of a given number ''a'' and a smaller natural number ''b'' chosen with equal probability from the integers 0 to ''a'' − 1<ref name="Knuth, p. 344"/>

: <math>T(a) = \frac 1 a \sum_{0 \leq b<a} T(a, b). </math>

However, since ''T''(''a'', ''b'') fluctuates dramatically with the GCD of the two numbers, the averaged function ''T''(''a'') is likewise "noisy".<ref>{{harvnb|Knuth|1997}}, p. 353</ref> To reduce this noise, a second average ''τ''(''a'') is taken over all numbers coprime with ''a''

: <math>\tau(a) = \frac 1 {\varphi(a)} \sum_{\begin{smallmatrix} 0 \leq b<a \\ \gcd(a, b) = 1 \end{smallmatrix}} T(a, b). </math>

There are ''φ''(''a'') coprime integers less than ''a'', where ''φ'' is [[Euler's totient function]]. This tau average grows smoothly with ''a''<ref>{{harvnb|Knuth|1997}}, p. 357</ref><ref>{{cite journal | last = Tonkov | first = T. | year = 1974 | title = On the average length of finite continued fractions | journal = Acta Arithmetica | volume = 26 | issue = 1 | pages = 47–57 | doi = 10.4064/aa-26-1-47-57 | doi-access = free }}</ref>

: <math>\tau(a) = \frac{12}{\pi^2}\ln 2 \ln a + C + O(a^{-1/6-\varepsilon})</math>

with the residual error being of order ''a''<sup>−(1/6)+''ε''</sup>, where ''ε'' is [[infinitesimal]]. The constant ''C'' in this formula is called [[Porter's constant]]<ref name="Knuth_1976">{{cite journal | last = Knuth | first = D. E. | author-link = Donald Knuth | year = 1976 | title = Evaluation of Porter's Constant | journal = Computers and Mathematics with Applications | volume = 2 | issue = 2 | pages = 137–139 | doi = 10.1016/0898-1221(76)90025-0 | doi-access = free }}</ref> and equals

: <math>C= -\frac 1 2 + \frac{6 \ln 2}{\pi^2}\left(4\gamma -\frac{24}{\pi^2}\zeta'(2) + 3\ln 2 - 2\right) \approx 1.467</math>

where {{math|''γ''}} is the [[Euler–Mascheroni constant]] and {{math|''ζ''{{′}}}} is the [[derivative]] of the [[Riemann zeta function]].<ref>{{cite journal | last = Porter | first = J. W. | year = 1975 | title = On a Theorem of Heilbronn | journal = [[Mathematika]] | volume = 22 | issue = 1 | pages = 20–28 | doi = 10.1112/S0025579300004459}}</ref><ref name="Knuth_1976"/> The leading coefficient (12/π<sup>2</sup>) ln 2 was determined by two independent methods.<ref>{{cite journal | last = Dixon | first = J. D. | year = 1970 | title = The Number of Steps in the Euclidean Algorithm | journal = J. Number Theory | volume = 2 | issue = 4 | pages = 414–422 | doi = 10.1016/0022-314X(70)90044-2 | bibcode = 1970JNT.....2..414D | doi-access = free }}</ref><ref>{{cite book | last = Heilbronn | first = H. A. | author-link = Hans Heilbronn | year = 1969 | chapter = On the Average Length of a Class of Finite Continued Fractions | title = Number Theory and Analysis | editor = Paul Turán | publisher = Plenum | location = New York | pages = 87–96 | lccn = 76016027}}</ref>

Since the first average can be calculated from the tau average by summing over the divisors ''d'' of ''a''<ref>{{harvnb|Knuth|1997}}, p. 354</ref>

: <math> T(a) = \frac 1 a \sum_{d \mid a} \varphi(d) \tau(d) </math>

it can be approximated by the formula<ref name="Norton_1990">{{cite journal | last = Norton | first = G. H. | year = 1990 | title = On the Asymptotic Analysis of the Euclidean Algorithm | journal = Journal of Symbolic Computation | volume = 10 | issue = 1 | pages = 53–58 | doi = 10.1016/S0747-7171(08)80036-3 | doi-access = free }}</ref>

: <math>T(a) \approx C + \frac{12}{\pi^2} \ln 2\, \biggl({\ln a} - \sum_{d \mid a} \frac{\Lambda(d)} d\biggr)</math>

where Λ(''d'') is the [[von Mangoldt function|Mangoldt function]].<ref>{{harvnb|Knuth|1997}}, p. 355</ref>

A third average ''Y''(''n'') is defined as the mean number of steps required when both ''a'' and ''b'' are chosen randomly (with uniform distribution) from 1 to ''n''<ref name="Norton_1990" />

: <math>Y(n) = \frac 1 {n^2} \sum_{a=1}^n \sum_{b=1}^n T(a, b) = \frac 1 n \sum_{a=1}^n T(a). </math>

Substituting the approximate formula for ''T''(''a'') into this equation yields an estimate for ''Y''(''n'')<ref>{{harvnb|Knuth|1997}}, p. 356</ref>

: <math>Y(n) \approx \frac{12}{\pi^2} \ln 2 \ln n + 0.06.</math>

=== Computational expense per step ===
In each step ''k'' of the Euclidean algorithm, the quotient ''q''<sub>''k''</sub> and remainder ''r''<sub>''k''</sub> are computed for a given pair of integers ''r''<sub>''k''−2</sub> and ''r''<sub>''k''−1</sub>

: {{math|1=''r''<sub>''k''−2</sub> = ''q''<sub>''k''</sub> ''r''<sub>''k''−1</sub> + ''r''<sub>''k''</sub>.}}

The computational expense per step is associated chiefly with finding ''q''<sub>''k''</sub>, since the remainder ''r''<sub>''k''</sub> can be calculated quickly from ''r''<sub>''k''−2</sub>, ''r''<sub>''k''−1</sub>, and ''q''<sub>''k''</sub>

: {{math|1=''r''<sub>''k''</sub> = ''r''<sub>''k''−2</sub> − ''q''<sub>''k''</sub> ''r''<sub>''k''−1</sub>.}}

The computational expense of dividing ''h''-bit numbers scales as {{math|''O''(''h''(''ℓ'' + 1))}}, where {{mvar|ℓ}} is the length of the quotient.<ref name="Knuth-257-261">{{harvnb|Knuth|1997}}, pp. 257–261</ref>

For comparison, Euclid's original subtraction-based algorithm can be much slower. A single integer division is equivalent to ''q'' subtractions, where ''q'' is the quotient. If the ratio of ''a'' and ''b'' is very large, the quotient is large and many subtractions will be required. On the other hand, it has been shown that the quotients are very likely to be small integers. The probability of a given quotient ''q'' is approximately {{math|ln {{abs|''u''/(''u'' − 1)}}}} where {{math|1=''u'' = (''q'' + 1)<sup>2</sup>}}.<ref>{{harvnb|Knuth|1997}}, p. 352</ref> For illustration, the probability of a quotient of 1, 2, 3, or 4 is roughly 41.5%, 17.0%, 9.3%, and 5.9%, respectively. Since the operation of subtraction is faster than division, particularly for large numbers,<ref>{{cite book | last = Wagon | first = S. | author-link = Stan Wagon | year = 1999 | title = Mathematica in Action | publisher = Springer-Verlag | location = New York | isbn = 0-387-98252-3 | pages = 335–336}}</ref> the subtraction-based Euclid's algorithm is competitive with the division-based version.<ref>{{Harvnb|Cohen|1993|p=14}}</ref> This is exploited in the [[binary GCD algorithm|binary version]] of Euclid's algorithm.<ref>{{Harvnb|Cohen|1993|pp=14–15, 17–18}}</ref>

Combining the estimated number of steps with the estimated computational expense per step shows that the running time of Euclid's algorithm grows quadratically (''h''<sup>2</sup>) with the average number of digits ''h'' in the initial two numbers ''a'' and ''b''. Let {{math|''h''<sub>0</sub>, ''h''<sub>1</sub>, ..., ''h''<sub>''N''−1</sub>}} represent the number of digits in the successive remainders {{math|''r''<sub>0</sub>, ''r''<sub>1</sub>, ..., ''r''<sub>''N''−1</sub>}}. Since the number of steps ''N'' grows linearly with ''h'', the running time is bounded by

: <math> O\Big(\sum_{i<N}h_i(h_i-h_{i+1}+2)\Big)\subseteq O\Big(h\sum_{i<N}(h_i-h_{i+1}+2) \Big) \subseteq O(h(h_0+2N))\subseteq O(h^2).</math>

=== Alternative methods ===
Euclid's algorithm is widely used in practice, especially for small numbers, due to its simplicity.<ref>{{cite book | last = Sorenson | first = Jonathan P. | contribution = An analysis of the generalized binary GCD algorithm | mr = 2076257 | pages = 327–340 | publisher = American Mathematical Society | location = Providence, RI | series = Fields Institute Communications | title = High primes and misdemeanours: lectures in honour of the 60th birthday of Hugh Cowie Williams | volume = 41 | year = 2004 | url = https://books.google.com/books?id=udr3tHHwBl0C&pg=PA327 | quote = The algorithms that are used the most in practice today [for computing greatest common divisors] are probably the binary algorithm and Euclid's algorithm for smaller numbers, and either Lehmer's algorithm or Lebealean's version of the ''k''-ary GCD algorithm for larger numbers. | isbn = 9780821887592 }}</ref> For comparison, the efficiency of alternatives to Euclid's algorithm may be determined.

One inefficient approach to finding the GCD of two natural numbers ''a'' and ''b'' is to calculate all their common divisors; the GCD is then the largest common divisor. The common divisors can be found by dividing both numbers by successive integers from 2 to the smaller number ''b''. The number of steps of this approach grows linearly with ''b'', or exponentially in the number of digits. Another inefficient approach is to find the prime factors of one or both numbers. As noted [[#Greatest common divisor|above]], the GCD equals the product of the prime factors shared by the two numbers ''a'' and ''b''.<ref name="Schroeder_21" /> Present methods for [[integer factorization|prime factorization]] are also inefficient; many modern cryptography systems even rely on that inefficiency.<ref name="Schroeder_216" />

The [[binary GCD algorithm]] is an efficient alternative that substitutes division with faster operations by exploiting the [[binary numeral system|binary]] representation used by computers.<ref>{{harvnb|Knuth|1997}}, pp. 321–323</ref><ref>{{cite journal | last = Stein | first = J. | year = 1967 | title = Computational problems associated with Racah algebra | journal = Journal of Computational Physics | volume = 1 | issue = 3 | pages = 397–405 | doi = 10.1016/0021-9991(67)90047-2 | bibcode = 1967JCoPh...1..397S }}</ref> Although it also scales as [[big-O notation|''O''(''h''<sup>2</sup>)]], it is generally faster than the Euclidean algorithm on real computers.<ref name="Crandall_2001">{{harvnb|Crandall|Pomerance|2001}}, pp. 77–79, 81–85, 425–431</ref> Additional efficiency can be gleaned by examining only the leading digits of the two numbers ''a'' and ''b''.<ref>{{harvnb|Knuth|1997}}, p. 328</ref><ref>{{cite journal | last = Lehmer | first = D. H. | author-link = Derrick Henry Lehmer | year = 1938 | title = Euclid's Algorithm for Large Numbers | journal = The American Mathematical Monthly | volume = 45 | issue = 4 | pages = 227–233 | doi = 10.2307/2302607 | jstor = 2302607}}</ref> The binary algorithm can be extended to other bases (''k''-ary algorithms),<ref>{{cite journal | last = Sorenson | first = J. | year = 1994 | title = Two fast GCD algorithms | journal = J. Algorithms | volume = 16 | issue = 1 | pages = 110–144 | doi = 10.1006/jagm.1994.1006}}</ref> with up to fivefold increases in speed.<ref>{{cite journal | last = Weber | first = K. | year = 1995 | title = The accelerated GCD algorithm | journal = ACM Trans. Math. Softw. | volume = 21 | issue = 1 | pages = 111–122 | doi = 10.1145/200979.201042 | s2cid = 14934919 | doi-access = free }}</ref> [[Lehmer's GCD algorithm]] uses the same general principle as the binary algorithm to speed up GCD computations in arbitrary bases.

A recursive approach for very large integers (with more than 25,000 digits) leads to [[quasilinear time|quasilinear]] integer GCD algorithms,<ref>{{cite book | last1 = Aho | first1 = A. | author1-link = Alfred Aho | last2 = Hopcroft | first2 = J. | author2-link = John Hopcroft | last3 = Ullman | first3 = J. | author3-link = Jeffrey Ullman | year = 1974 | title = The Design and Analysis of Computer Algorithms | publisher = Addison–Wesley | location = New York | pages = [https://archive.org/details/designanalysisof00ahoarich/page/300 300–310] | isbn = 0-201-00029-6 | url = https://archive.org/details/designanalysisof00ahoarich/page/300 }}</ref> such as those of Schönhage,<ref>{{cite journal | last = Schönhage | first = A. | author-link = Arnold Schönhage | year = 1971 | title = Schnelle Berechnung von Kettenbruchentwicklungen | language = de | journal = Acta Informatica | volume = 1 | issue = 2 | pages = 139–144 | doi = 10.1007/BF00289520 | s2cid = 34561609 }}</ref><ref>{{cite book | last = Cesari | first = G. | year = 1998 | chapter = Parallel implementation of Schönhage's integer GCD algorithm | title = Algorithmic Number Theory: Proc. ANTS-III, Portland, OR | editor = G. Buhler | publisher = Springer-Verlag | location = New York | pages = 64–76 | volume = 1423 | series = Lecture Notes in Computer Science}}</ref> and Stehlé and Zimmermann.<ref>{{cite book | last1 = Stehlé | first1 = D. | last2 = Zimmermann | first2 = P. | year = 2005 | chapter = [[Gal's accurate tables]] method revisited | title = Proceedings of the 17th IEEE Symposium on Computer Arithmetic (ARITH-17) | publisher = [[IEEE Computer Society Press]] | location = Los Alamitos, CA}}</ref> These algorithms exploit the 2×2 matrix form of the Euclidean algorithm given [[#Matrix method|above]]. These quasilinear methods generally scale as {{math|''O''(''h'' (log ''h''){{sup|2}} log log ''h'')}}.<ref name="Crandall_2001" /><ref name="Moller08">{{cite journal | last = Möller | first = N. | year = 2008 | title = On Schönhage's algorithm and subquadratic integer gcd computation | journal = Mathematics of Computation | volume = 77 | issue = 261 | pages = 589–607 | doi = 10.1090/S0025-5718-07-02017-0 | url = http://www.lysator.liu.se/~nisse/archive/sgcd.pdf | bibcode = 2008MaCom..77..589M | doi-access = free }}</ref>
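For illustration, a minimal Python sketch of the binary GCD idea described above (a textbook rendering, not any particular published implementation): it replaces division with shifts and subtractions, exploiting the facts that gcd(2''a'', 2''b'') = 2 gcd(''a'', ''b''), gcd(2''a'', ''b'') = gcd(''a'', ''b'') for odd ''b'', and gcd(''a'', ''b'') = gcd(''a'', ''b'' − ''a'').

```python
def binary_gcd(a, b):
    """Binary GCD of nonnegative integers, using only shifts and subtraction."""
    if a == 0:
        return b
    if b == 0:
        return a
    # Factor out the powers of two common to both numbers.
    shift = 0
    while ((a | b) & 1) == 0:
        a >>= 1
        b >>= 1
        shift += 1
    # Discard remaining factors of two in a (they cannot be shared with b).
    while (a & 1) == 0:
        a >>= 1
    # Invariant: a is odd.
    while b:
        while (b & 1) == 0:
            b >>= 1       # b's factors of two are not shared with odd a
        if a > b:
            a, b = b, a   # keep a <= b
        b -= a            # b - a is even, since both a and b are odd
    return a << shift     # restore the common factors of two

# Example: gcd(48, 18) = 6
print(binary_gcd(48, 18))
```

Each pass either halves a number or replaces it by a smaller one, so the number of iterations, like the step count of the division-based algorithm, is linear in the bit length ''h''.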