{{Short description|Identity in linear algebra}}
In [[mathematics]], specifically [[linear algebra]], the '''Cauchy–Binet formula''', named after [[Augustin-Louis Cauchy]] and [[Jacques Philippe Marie Binet]], is an [[Identity (mathematics)|identity]] for the [[determinant]] of the [[matrix multiplication|product]] of two rectangular [[matrix (mathematics)|matrices]] of transpose shapes (so that the product is well-defined and [[Square matrix|square]]). It generalizes the statement that the determinant of a product of square matrices equals the product of their determinants. The formula is valid for matrices with entries from any [[commutative ring]].

== Statement ==
Let ''A'' be an ''m''×''n'' matrix and ''B'' an ''n''×''m'' matrix. Write [''n''] for the set {1, ..., ''n''}, and <math>\tbinom{[n]}m</math> for the set of ''m''-[[combination]]s of [''n''] (i.e., subsets of [''n''] of size ''m''; there are [[binomial coefficient|<math>\tbinom nm</math>]] of them). For <math>S\in\tbinom{[n]}m</math>, write ''A''<sub>[''m''],''S''</sub> for the ''m''×''m'' matrix whose columns are the columns of ''A'' at indices from ''S'', and ''B''<sub>''S'',[''m'']</sub> for the ''m''×''m'' matrix whose rows are the rows of ''B'' at indices from ''S''. The Cauchy–Binet formula then states
: <math>\det(AB) = \sum_{S\in\tbinom{[n]}m} \det(A_{[m],S})\det(B_{S,[m]}).</math>

Example: taking ''m'' = 2 and ''n'' = 3, and matrices <math>A = \begin{pmatrix}1&1&2\\ 3&1&-1\end{pmatrix}</math> and <math>B = \begin{pmatrix}1&1\\3&1\\0&2\end{pmatrix}</math>, the Cauchy–Binet formula gives the determinant
:<math> \det(AB)= \left|\begin{matrix}1&1\\3&1\end{matrix}\right| \cdot \left|\begin{matrix}1&1\\3&1\end{matrix}\right| + \left|\begin{matrix}1&2\\1&-1\end{matrix}\right| \cdot \left|\begin{matrix}3&1\\0&2\end{matrix}\right| + \left|\begin{matrix}1&2\\3&-1\end{matrix}\right| \cdot \left|\begin{matrix}1&1\\0&2\end{matrix}\right|.</math>
Indeed <math>AB =\begin{pmatrix}4&6\\6&2\end{pmatrix}</math>, and its determinant is <math>-28</math>, which equals <math>(-2) \times (-2) + (-3) \times 6 + (-7) \times 2</math> from the right hand side of the formula.

== Special cases ==
If ''n'' < ''m'' then <math>\tbinom{[n]}m</math> is the empty set, and the formula says that det(''AB'') = 0 (its right hand side is an [[empty sum]]); indeed in this case the [[rank (linear algebra)|rank]] of the ''m''×''m'' matrix ''AB'' is at most ''n'', which implies that its determinant is zero. If ''n'' = ''m'', the case where ''A'' and ''B'' are square matrices, <math>\tbinom{[n]}m=\{[n]\}</math> (a [[singleton (mathematics)|singleton]] set), so the sum only involves ''S'' = [''n''], and the formula states that det(''AB'') = det(''A'')det(''B'').

For ''m'' = 0, ''A'' and ''B'' are [[empty matrix|empty matrices]] (but of different shapes if ''n'' > 0), as is their product ''AB''; the summation involves a single term ''S'' = Ø, and the formula states 1 = 1, with both sides given by the determinant of the 0×0 matrix. For ''m'' = 1, the summation ranges over the collection <math>\tbinom{[n]}1</math> of the ''n'' different singletons taken from [''n''], and both sides of the formula give <math>\textstyle\sum_{j=1}^nA_{1,j}B_{j,1}</math>, the [[dot product]] of the pair of [[Tuple|vector]]s represented by the matrices. The smallest value of ''m'' for which the formula states a non-trivial equality is ''m'' = 2; it is discussed in the article on the [[Binet–Cauchy identity]].

=== In the case ''n'' = 3 ===
Let <math>\boldsymbol{a}, \boldsymbol{b}, \boldsymbol{c}, \boldsymbol{d}, \boldsymbol{x}, \boldsymbol{y}, \boldsymbol{z}, \boldsymbol{w}</math> be three-dimensional vectors.
: <math> \begin{align}
& 1 = 1 & (m = 0)\\[10pt]
& \boldsymbol{a}\cdot \boldsymbol{x} = a_1 x_1 + a_2 x_2 + a_3 x_3 & (m = 1)\\[10pt]
& \begin{vmatrix} \boldsymbol{a}\cdot \boldsymbol{x} & \boldsymbol{a}\cdot \boldsymbol{y}\\ \boldsymbol{b}\cdot \boldsymbol{x} & \boldsymbol{b}\cdot \boldsymbol{y} \end{vmatrix} \\[4pt]
= {} & \begin{vmatrix} a_2 & a_3\\ b_2 & b_3 \end{vmatrix} \begin{vmatrix} x_2 & y_2\\ x_3 & y_3 \end{vmatrix} + \begin{vmatrix} a_3 & a_1\\ b_3 & b_1 \end{vmatrix} \begin{vmatrix} x_3 & y_3\\ x_1 & y_1 \end{vmatrix} + \begin{vmatrix} a_1 & a_2\\ b_1 & b_2 \end{vmatrix} \begin{vmatrix} x_1 & y_1\\ x_2 & y_2 \end{vmatrix}\\[4pt]
= {} & (\boldsymbol{a}\times\boldsymbol{b})\cdot(\boldsymbol{x}\times\boldsymbol{y}) & (m = 2)\\[10pt]
& \begin{vmatrix} \boldsymbol{a}\cdot \boldsymbol{x} & \boldsymbol{a}\cdot \boldsymbol{y} & \boldsymbol{a}\cdot \boldsymbol{z}\\ \boldsymbol{b}\cdot \boldsymbol{x} & \boldsymbol{b}\cdot \boldsymbol{y} & \boldsymbol{b}\cdot \boldsymbol{z}\\ \boldsymbol{c}\cdot \boldsymbol{x} & \boldsymbol{c}\cdot \boldsymbol{y} & \boldsymbol{c}\cdot \boldsymbol{z} \end{vmatrix} = \begin{vmatrix} a_1 & a_2 & a_3\\ b_1 & b_2 & b_3\\ c_1 & c_2 & c_3 \end{vmatrix} \begin{vmatrix} x_1 & y_1 & z_1 \\ x_2 & y_2 & z_2 \\ x_3 & y_3 & z_3 \end{vmatrix}\\[4pt]
= {} & [\boldsymbol{a}\cdot (\boldsymbol{b} \times \boldsymbol{c})] [\boldsymbol{x}\cdot (\boldsymbol{y} \times \boldsymbol{z})] & (m = 3)\\[10pt]
& \begin{vmatrix} \boldsymbol{a}\cdot \boldsymbol{x} & \boldsymbol{a}\cdot \boldsymbol{y} & \boldsymbol{a}\cdot \boldsymbol{z} & \boldsymbol{a}\cdot \boldsymbol{w} \\ \boldsymbol{b}\cdot \boldsymbol{x} & \boldsymbol{b}\cdot \boldsymbol{y} & \boldsymbol{b}\cdot \boldsymbol{z} & \boldsymbol{b}\cdot \boldsymbol{w} \\ \boldsymbol{c}\cdot \boldsymbol{x} & \boldsymbol{c}\cdot \boldsymbol{y} & \boldsymbol{c}\cdot \boldsymbol{z} & \boldsymbol{c}\cdot \boldsymbol{w} \\ \boldsymbol{d}\cdot \boldsymbol{x} & \boldsymbol{d}\cdot \boldsymbol{y} & \boldsymbol{d}\cdot \boldsymbol{z} & \boldsymbol{d}\cdot \boldsymbol{w} \end{vmatrix} = 0 & (m = 4)
\end{align} </math>
In the case ''m'' > 3, the right-hand side always equals 0.

== A simple proof ==
The following simple proof relies on two facts that can be proven in several different ways:<ref>{{cite book |last1=Tao |first1=Terence |author-link=Terence Tao |title=Topics in random matrix theory |date=2012 |publisher=American Mathematical Society |url=https://terrytao.files.wordpress.com/2011/08/matrix-book.pdf |location=Providence, RI |isbn=978-0-8218-7430-1 |doi=10.1090/gsm/132 |series=Graduate Studies in Mathematics |volume=132 |page=253}}</ref>
# For any <math>1 \leq k\leq n</math> the coefficient of <math>z^{n-k}</math> in the polynomial <math>\det(zI_n+X)</math> is the sum of the <math>k\times k</math> principal minors of <math>X</math>.
# If <math>m \leq n</math> and <math>A</math> is an <math>m\times n</math> matrix and <math>B</math> an <math>n\times m</math> matrix, then
#: <math>\det(zI_n+BA)=z^{n-m}\det(zI_m+AB).</math>
Now compare the coefficient of <math>z^{n-m}</math> on both sides of the equation <math>\det(zI_n+BA)=z^{n-m}\det(zI_m+AB)</math>: the left hand side gives the sum of the <math>m\times m</math> principal minors of <math>BA</math>, while the right hand side gives the constant term of <math>\det(zI_m+AB)</math>, which is simply <math>\det(AB)</math>. This is exactly what the Cauchy–Binet formula states, i.e.
: <math> \begin{align} &\det(AB)= \sum_{S\in\tbinom{[n]}m} \det((BA)_{S,S})=\sum_{S\in\tbinom{[n]}m} \det(B_{S,[m]}) \det(A_{[m],S}) \\[5pt] = {} & \sum_{S\in\tbinom{[n]}m}\det(A_{[m],S}) \det(B_{S,[m]}). \end{align} </math>

== Proof ==
Various proofs can be given for the Cauchy−Binet formula. The proof below is based on formal manipulations only, and avoids using any particular interpretation of determinants, which may be taken to be defined by the [[Leibniz formula (determinant)|Leibniz formula]].
Only their multilinearity with respect to rows and columns, and their alternating property (vanishing in the presence of equal rows or columns) are used; in particular the multiplicative property of determinants for square matrices is not used, but is rather established (the case ''n'' = ''m''). The proof is valid for arbitrary commutative coefficient rings.

The formula can be proved in two steps:
# use the fact that both sides are [[Multilinear map|multilinear]] (more precisely 2''m''-linear) in the ''rows'' of ''A'' and the ''columns'' of ''B'', to reduce to the case that each row of ''A'' and each column of ''B'' has only one non-zero entry, which is 1;
# handle that case using the functions [''m''] → [''n''] that map respectively the row numbers of ''A'' to the column number of their nonzero entry, and the column numbers of ''B'' to the row number of their nonzero entry.

For step 1, observe that for each row of ''A'' or column of ''B'', and for each ''m''-combination ''S'', the values of det(''AB'') and det(''A''<sub>[''m''],''S''</sub>)det(''B''<sub>''S'',[''m'']</sub>) indeed depend linearly on the row or column. For the latter this is immediate from the multilinear property of the determinant; for the former one must in addition check that taking a linear combination for the row of ''A'' or column of ''B'' while leaving the rest unchanged only affects the corresponding row or column of the product ''AB'', and by the same linear combination. Thus one can work out both sides of the Cauchy−Binet formula by linearity for every row of ''A'' and then also every column of ''B'', writing each of the rows and columns as a linear combination of standard basis vectors.
The resulting multiple summations are huge, but they have the same form for both sides: corresponding terms involve the same scalar factor (each is a product of entries of ''A'' and of ''B''), and these terms only differ by involving two different expressions in terms of constant matrices of the kind described above, which expressions should be equal according to the Cauchy−Binet formula. This achieves the reduction of the first step.

Concretely, the multiple summations can be grouped into two summations, one over all functions ''f'': [''m''] → [''n''] that for each row index of ''A'' gives a corresponding column index, and one over all functions ''g'': [''m''] → [''n''] that for each column index of ''B'' gives a corresponding row index. The matrices associated to ''f'' and ''g'' are
:<math>L_f=\bigl((\delta_{f(i),j})_{i\in[m],j\in[n]}\bigr) \quad\text{and} \quad R_g=\bigl((\delta_{j,g(k)})_{j\in[n],k\in[m]}\bigr)</math>
where "<math>\delta</math>" is the [[Kronecker delta]], and the Cauchy−Binet formula to prove has been rewritten as
: <math> \begin{align} & \sum_{f:[m]\to[n]}\sum_{g:[m]\to[n]}p(f,g)\det(L_fR_g) \\[5pt] = {} & \sum_{f:[m]\to[n]}\sum_{g:[m]\to[n]} p(f,g) \sum_{S\in\tbinom{[n]}m} \det((L_f)_{[m],S}) \det((R_g)_{S,[m]}), \end{align} </math>
where ''p''(''f'',''g'') denotes the scalar factor <math>\textstyle(\prod_{i=1}^mA_{i,f(i)})(\prod_{k=1}^mB_{g(k),k})</math>. It remains to prove the Cauchy−Binet formula for ''A'' = ''L''<sub>''f''</sub> and ''B'' = ''R''<sub>''g''</sub>, for all ''f'',''g'': [''m''] → [''n''].

For this step 2, if ''f'' fails to be injective then ''L''<sub>''f''</sub> and ''L''<sub>''f''</sub>''R''<sub>''g''</sub> both have two identical rows, and if ''g'' fails to be injective then ''R''<sub>''g''</sub> and ''L''<sub>''f''</sub>''R''<sub>''g''</sub> both have two identical columns; in either case both sides of the identity are zero.
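The double summation over functions ''f'' and ''g'' can be checked numerically for small sizes. A minimal sketch in Python with NumPy (reusing the 2×3 example from the Statement section; the variable names are illustrative, not part of the article):

```python
from itertools import product

import numpy as np

# The 2x3 example from the Statement section.
A = np.array([[1.0, 1.0, 2.0], [3.0, 1.0, -1.0]])
B = np.array([[1.0, 1.0], [3.0, 1.0], [0.0, 2.0]])
m, n = A.shape

# Expand det(AB) by multilinearity into the double sum over all functions
# f, g: [m] -> [n] of p(f, g) * det(L_f R_g), as in the text.
total = 0.0
for f in product(range(n), repeat=m):        # f(i): column index of the 1 in row i of L_f
    for g in product(range(n), repeat=m):    # g(k): row index of the 1 in column k of R_g
        L = np.zeros((m, n))
        L[list(range(m)), list(f)] = 1.0     # L_f
        R = np.zeros((n, m))
        R[list(g), list(range(m))] = 1.0     # R_g
        p = np.prod([A[i, f[i]] for i in range(m)]) * \
            np.prod([B[g[k], k] for k in range(m)])   # scalar factor p(f, g)
        total += p * np.linalg.det(L @ R)

lhs = np.linalg.det(A @ B)
assert abs(total - lhs) < 1e-9   # both equal det(AB)
```

Each term det(''L''<sub>''f''</sub>''R''<sub>''g''</sub>) is the determinant of a 0/1 matrix with entry δ<sub>''f''(''i''),''g''(''k'')</sub>, so the loop mirrors the reduction described above term by term.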
Supposing now that both ''f'' and ''g'' are injective maps [''m''] → [''n''], the factor <math>\det((L_f)_{[m],S})</math> on the right is zero unless ''S'' = ''f''([''m'']), while the factor <math>\det((R_g)_{S,[m]})</math> is zero unless ''S'' = ''g''([''m'']). So if the images of ''f'' and ''g'' are different, the right hand side has only null terms, and the left hand side is zero as well since ''L''<sub>''f''</sub>''R''<sub>''g''</sub> has a null row (for ''i'' with <math>f(i)\notin g([m])</math>). In the remaining case where the images of ''f'' and ''g'' are the same, say ''f''([''m'']) = ''S'' = ''g''([''m'']), we need to prove that
:<math>\det(L_fR_g)=\det((L_f)_{[m],S})\det((R_g)_{S,[m]}).\,</math>
Let ''h'' be the unique increasing bijection [''m''] → ''S'', and ''π'',''σ'' the permutations of [''m''] such that <math>f=h\circ\pi^{-1}</math> and <math>g=h\circ\sigma</math>; then <math>(L_f)_{[m],S}</math> is the [[permutation matrix]] for {{pi}}, <math>(R_g)_{S,[m]}</math> is the permutation matrix for ''σ'', and ''L''<sub>''f''</sub>''R''<sub>''g''</sub> is the permutation matrix for <math>\pi\circ\sigma</math>. Since the determinant of a permutation matrix equals the [[signature (permutation)|signature]] of the permutation, the identity follows from the fact that signatures are multiplicative.

Using multilinearity with respect to both the rows of ''A'' and the columns of ''B'' in the proof is not necessary; one could use just one of them, say the former, and use that a matrix product ''L''<sub>''f''</sub>''B'' either consists of a permutation of the rows of ''B''<sub>''f''([''m'']),[''m'']</sub> (if ''f'' is injective), or has at least two equal rows.
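As a sanity check, the formula and the worked 2×3 example from the Statement section can be verified numerically by brute force over column subsets. A minimal sketch in Python with NumPy (the helper name is illustrative):

```python
from itertools import combinations

import numpy as np

def cauchy_binet_rhs(A, B):
    """Right-hand side: sum of det(A[:, S]) * det(B[S, :]) over all m-subsets S of [n]."""
    m, n = A.shape
    return sum(
        np.linalg.det(A[:, S]) * np.linalg.det(B[S, :])
        for S in map(list, combinations(range(n), m))
    )

# The 2x3 example from the Statement section.
A = np.array([[1.0, 1.0, 2.0], [3.0, 1.0, -1.0]])
B = np.array([[1.0, 1.0], [3.0, 1.0], [0.0, 2.0]])

lhs = np.linalg.det(A @ B)      # det of the 2x2 product AB; equals -28
rhs = cauchy_binet_rhs(A, B)    # (-2)(-2) + (-3)(6) + (-7)(2) = -28
assert abs(lhs - rhs) < 1e-9
```

The three subsets ''S'' = {1,2}, {2,3}, {1,3} produce exactly the three minor products displayed in the Statement section.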
== Relation to the generalized Kronecker delta ==
As we have seen, the Cauchy–Binet formula is equivalent to the following:
:<math> \det(L_fR_g)=\sum_{S\in\tbinom{[n]}m} \det((L_f)_{[m],S})\det((R_g)_{S,[m]}), </math>
where
:<math> L_f=\bigl((\delta_{f(i),j})_{i\in[m],j\in[n]}\bigr) \quad\text{and} \quad R_g=\bigl((\delta_{j,g(k)})_{j\in[n],k\in[m]}\bigr). </math>
In terms of the [[Kronecker delta#Generalizations|generalized Kronecker delta]], we can derive a formula equivalent to the Cauchy–Binet formula:
:<math> \delta^{f(1) \dots f(m)}_{g(1) \dots g(m)} = \sum_{k:[m]\to[n] \atop k(1)<\dots<k(m)} \delta^{f(1) \dots f(m)}_{k(1) \dots k(m)} \delta^{k(1) \dots k(m)}_{g(1) \dots g(m)}. </math>

== Geometric interpretations ==
If ''A'' is a real ''m''×''n'' matrix, then det(''A'' ''A''<sup>T</sup>) is equal to the square of the ''m''-dimensional volume of the [[Parallelepiped#Parallelotope|parallelotope]] spanned in '''R'''<sup>''n''</sup> by the ''m'' rows of ''A''. Binet's formula states that this is equal to the sum of the squares of the volumes that arise if the parallelotope is orthogonally projected onto the ''m''-dimensional coordinate planes (of which there are <math>\tbinom nm</math>).

In the case ''m'' = 1 the parallelotope is reduced to a single vector and its volume is its length. The above statement then says that the square of the length of a vector is the sum of the squares of its coordinates; this is indeed the case by [[Euclidean distance|the definition]] of that length, which is based on the [[Pythagorean theorem]].
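The volume interpretation can be illustrated numerically. A sketch in Python with NumPy, using a hypothetical 2×3 matrix whose rows span a parallelogram in '''R'''<sup>3</sup>:

```python
from itertools import combinations

import numpy as np

# Rows of A span a parallelogram (m = 2) in R^3.
A = np.array([[1.0, 2.0, 0.0], [0.0, 1.0, 3.0]])
m, n = A.shape

# Squared m-volume of the parallelotope spanned by the rows of A.
vol_sq = np.linalg.det(A @ A.T)

# Squared volumes of its orthogonal projections onto the coordinate m-planes.
proj_sq = [np.linalg.det(A[:, S]) ** 2
           for S in map(list, combinations(range(n), m))]

# Cauchy-Binet with B = A^T: the squared volume is the sum of the
# squared projected volumes.
assert abs(vol_sq - sum(proj_sq)) < 1e-9
```

Here the three projections land on the ''xy''-, ''xz''-, and ''yz''-planes, so the identity is a two-dimensional analogue of the Pythagorean theorem.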
In [[tensor algebra]], given an [[inner product space]] <math>V</math> of dimension ''n'', the Cauchy–Binet formula defines an induced inner product on the [[Exterior algebra#Inner product|exterior algebra]] <math>\wedge^m V</math>, namely:
:<math>\langle v_1\wedge\cdots \wedge v_m, w_1\wedge\cdots \wedge w_m\rangle =\det\left( \langle v_i,w_j\rangle \right)_{i,j=1}^m .</math>

== Generalization ==
The Cauchy–Binet formula can be extended in a straightforward way to a general formula for the [[minor (linear algebra)|minors]] of the product of two matrices. Context for the formula is given in the article on [[Minor (linear algebra)#Other applications|minors]], but the idea is that both the formula for ordinary [[matrix multiplication]] and the Cauchy–Binet formula for the determinant of the product of two matrices are special cases of the following general statement about the minors of a product of two matrices. Suppose that '''A''' is an ''m'' × ''n'' matrix, '''B''' is an ''n'' × ''p'' matrix, ''I'' is a [[subset]] of {1,...,''m''} with ''k'' elements and ''J'' is a subset of {1,...,''p''} with ''k'' elements. Then
:<math>[\mathbf{AB}]_{I,J} = \sum_{K} [\mathbf{A}]_{I,K} [\mathbf{B}]_{K,J}\,</math>
where the sum extends over all subsets ''K'' of {1,...,''n''} with ''k'' elements. Here the notation <math>[\mathbf{M}]_{I,J}</math> means the determinant of the matrix formed by taking only the rows of {{mvar|M}} with index in {{mvar|I}} and the columns with index in {{mvar|J}}.

== Continuous version ==
A continuous version of the Cauchy–Binet formula, known as the '''Andréief identity''',<ref>C. Andréief, "Note sur une relation entre les intégrales définies des produits des fonctions", ''Mémoires de la Société des Sciences Physiques et Naturelles de Bordeaux'' (3) 2 (1886), 1–14.</ref> appears commonly in random matrix theory.<ref>{{cite book|last=Mehta|first=M.L.|title=Random Matrices|publisher=Elsevier/Academic Press|location=Amsterdam|year=2004|edition=3rd|isbn=0-12-088409-7}}</ref> It is stated as follows: let <math>\left\{f_j(x)\right\}_{j=1}^{N}</math> and <math>\left\{g_j(x)\right\}_{j=1}^{N}</math> be two sequences of integrable functions, supported on <math>I</math>. Then
:<math>\int_I \cdots \int_I \det \left[f_{j}(x_k)\right]_{j,k=1}^N \det \left[g_{j}(x_k)\right]_{j,k=1}^N dx_1 \cdots dx_N = N!\, \det \left[\int_I f_j(x)g_k(x) dx\right]_{j,k=1}^{N}.</math>
{{Math proof|title=Proof|proof=
Let <math>S_N</math> be the [[symmetric group]] on <math>N</math> letters, let <math>(-1)^{|s|}</math> denote the sign of a permutation <math>s</math>, and let <math>\langle f, g \rangle = \int_I f(x) g(x) dx</math> be the "inner product". Then
<math display="block">\begin{align} \text{left side} &= \sum_{s, s' \in S_N} (-1)^{|s| + |s'|} \int_{I^N} \prod_{j} f_{s(j)}(x_j) \prod_k g_{s'(k)}(x_k)\\ &=\sum_{s, s' \in S_N} (-1)^{|s| + |s'|} \int_{I^N} \prod_{j} f_{s(j)}(x_j) g_{s'(j)}(x_j)\\ &=\sum_{s, s' \in S_N} (-1)^{|s| + |s'|} \prod_{j} \int_{I} f_{s(j)}(x_j) g_{s'(j)}(x_j) d x_j\\ &=\sum_{s, s' \in S_N} (-1)^{|s| + |s'|} \prod_{j}\langle f_{s(j)}, g_{s'(j)} \rangle\\ &=\sum_{s' \in S_N}\sum_{s \in S_N} (-1)^{|s \circ s'^{-1}|} \prod_{j} \langle f_{(s\circ s'^{-1})(j)}, g_{j} \rangle\\ &= \text{right side} \end{align}</math>
where the penultimate step uses that the signature is multiplicative, so <math>(-1)^{|s|+|s'|} = (-1)^{|s \circ s'^{-1}|}</math>, and reindexes the product by <math>j \mapsto s'^{-1}(j)</math>.
}}
Forrester<ref>{{cite arXiv |eprint=1806.10411 |title=Meet Andréief, Bordeaux 1886, and Andreev, Kharkov 1882–83 |last=Forrester |first=Peter J. |date=2018 |class=math-ph}}</ref> describes how to recover the usual Cauchy–Binet formula as a discretisation of the above identity.
{{Math proof|title=Proof|proof=
Pick <math>t_1 < \cdots < t_m</math> in <math>[0, 1]</math>, and pick functions <math>f_1, \ldots, f_n</math> and <math>g_1, \ldots, g_n</math> such that <math>f_j(t_k) = A_{j, k}</math> and <math>g_j(t_k) = B_{j, k}</math>. Now plugging <math>f_j(x_k) = \sum_l A_{j,l} \delta(x_k - t_l)</math> and <math>g_j(x_k) = \sum_l B_{j,l} \delta(x_k - t_l)</math> into the Andréief identity, and simplifying both sides, we get:
<math display="block">\sum_{l_1, \ldots, l_n \in [1:m]} \det [f_j(t_{l_k})] \det [g_j(t_{l_k})] = n! \det \left[\sum_l f_j(t_{l}) g_k(t_l)\right]</math>
The right side is <math>n! \det(AB)</math>, and the left side is <math>n! \sum_{S\subset [1:m], |S| = n} \det(A_{[1:m],S})\det(B_{S,[1:m]})</math>.
}}
It is occasionally called the '''[[Konstantin Andreev|Andréief]]–[[Eduard Heine|Heine]] identity''', though the credit to Heine seems unhistorical, as pre-2010 sources generally credit only Andréief.<ref>{{Cite web |title=What's a Heine reference for the "Andréief-Heine identity" |url=https://hsm.stackexchange.com/questions/17734/whats-a-heine-reference-for-the-andr%c3%a9ief-heine-identity/17735#17735 |access-date=2025-04-20 |website=History of Science and Mathematics Stack Exchange |language=en}}</ref>

== References ==
{{Reflist}}
* Joel G. Broida & S. Gill Williamson (1989) ''A Comprehensive Introduction to Linear Algebra'', §4.6 Cauchy–Binet theorem, pp. 208–214, [[Addison-Wesley]] {{ISBN|0-201-50065-5}}.
* Jin Ho Kwak & Sungpyo Hong (2004) ''Linear Algebra'', 2nd edition, Example 2.15 Binet–Cauchy formula, pp. 66–67, [[Birkhäuser]] {{ISBN|0-8176-4294-3}}.
* [[Igor Shafarevich|I. R. Shafarevich]] & A. O. Remizov (2012) ''Linear Algebra and Geometry'', §2.9 (p. 68) & §10.5 (p. 377), [[Springer Science+Business Media|Springer]] {{ISBN|978-3-642-30993-9}}.
* [[Madan Lal Mehta|M.L. Mehta]] (2004) ''Random Matrices'', 3rd ed., [[Elsevier]] {{ISBN|9780120884094}}.

{{DEFAULTSORT:Cauchy-Binet formula}}
[[Category:Determinants]]
[[Category:Augustin-Louis Cauchy]]