Vandermonde matrix
== Determinant ==

The [[determinant]] of a [[square matrix|square]] Vandermonde matrix is called a ''[[Vandermonde polynomial]]'' or ''Vandermonde determinant''. Its value is the [[polynomial]]

:<math>\det(V) = \prod_{0 \le i < j \le n} (x_j - x_i),</math>

which is non-zero if and only if all <math>x_i</math> are distinct.

The Vandermonde determinant was formerly sometimes called the ''discriminant'', but in current terminology the [[discriminant]] of a polynomial <math>p(x)=(x-x_0)\cdots(x-x_n)</math> is the ''square'' of the Vandermonde determinant of its [[root of a function|roots]] <math>x_i</math>. The Vandermonde determinant is an [[alternating form]] in the <math>x_i</math>: exchanging two <math>x_i</math> changes the sign, so <math>\det(V)</math> depends on the order of the <math>x_i</math>. By contrast, the discriminant <math>\det(V)^2</math> does not depend on the order, and [[Galois theory]] implies that the discriminant is a [[polynomial function]] of the coefficients of <math>p(x)</math>.

The determinant formula is proved below in three ways. The first uses polynomial properties, especially the [[unique factorization domain|unique factorization property]] of [[multivariate polynomial]]s. Although conceptually simple, it involves non-elementary concepts of [[abstract algebra]]. The second proof is based on the [[linear algebra]] concepts of [[change of basis]] in a vector space and the determinant of a [[linear map]]; in the process, it computes the [[LU decomposition]] of the Vandermonde matrix. The third proof is more elementary but more complicated, using only [[elementary matrix|elementary row and column operations]].

===First proof: Polynomial properties===

The first proof relies on properties of polynomials. By the [[Leibniz formula (determinant)|Leibniz formula]], <math>\det(V)</math> is a polynomial in the <math>x_i</math>, with [[integer]] coefficients.
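The product formula stated above can be spot-checked numerically. The following sketch, assuming NumPy is available, compares a directly computed determinant with the product of differences; the sample points <code>[2, 3, 5, 7]</code> are an arbitrary illustration choice.

```python
# Spot-check det(V) = prod_{0 <= i < j <= n} (x_j - x_i) at sample points.
# The points chosen here are arbitrary; any distinct values would do.
import numpy as np
from itertools import combinations

x = np.array([2.0, 3.0, 5.0, 7.0])   # n + 1 = 4 distinct points
n = len(x) - 1

# V[i, j] = x_i ** j: the square Vandermonde matrix (columns in increasing degree)
V = np.vander(x, increasing=True)

det_direct = np.linalg.det(V)
det_formula = np.prod([x[j] - x[i] for i, j in combinations(range(n + 1), 2)])

print(det_direct, det_formula)   # both equal 240 for these points
```

For these points the pairwise differences are 1, 3, 5, 2, 4, 2, whose product is 240, matching the determinant up to floating-point rounding.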
All entries of the <math>i</math>-th column (numbering the columns from 0 to <math>n</math>) have [[total degree]] <math>i</math>. Thus, again by the Leibniz formula, all terms of the determinant have total degree

:<math>0 + 1 + 2 + \cdots + n = \frac{n(n+1)}2;</math>

that is, the determinant is a [[homogeneous polynomial]] of this degree.

If, for <math>i \neq j</math>, one substitutes <math>x_i</math> for <math>x_j</math>, one gets a matrix with two equal rows, which thus has a zero determinant. So, considering the determinant as [[univariate]] in <math>x_i,</math> the [[factor theorem]] implies that <math>x_j-x_i</math> is a divisor of <math>\det(V)</math> for all <math>i</math> and <math>j</math>.

This will now be strengthened to show that the product of all these divisors divides <math>\det(V).</math> Indeed, let <math>p</math> be a polynomial with <math>x_i-x_j</math> as a factor; then <math>p=(x_i-x_j)\,q</math> for some polynomial <math>q.</math> If <math>x_k-x_l</math> is another factor of <math>p,</math> then <math>p</math> becomes zero after the substitution of <math>x_k</math> for <math>x_l.</math> If <math>\{x_i,x_j\}\neq \{x_k, x_l\},</math> the factor <math>q</math> becomes zero after this substitution, since the factor <math>x_i-x_j</math> remains nonzero. So, by the factor theorem, <math>x_k-x_l</math> divides <math>q,</math> and <math>(x_i-x_j)\,(x_k-x_l)</math> divides <math>p.</math> Iterating this process starting from <math>\det(V),</math> one gets that <math>\det(V)</math> is divisible by the product of all <math>x_i-x_j</math> with <math>i<j;</math> that is,

:<math>\det(V)=Q\prod_{0\le i<j\le n} (x_j-x_i),</math>

where <math>Q</math> is a polynomial. As the product of all <math>x_j-x_i</math> and <math>\det(V)</math> have the same degree <math>n(n+1)/2</math>, the polynomial <math>Q</math> is, in fact, a constant.
This constant is one, because the product of the diagonal entries of <math>V</math> is <math>x_1 x_2^2\cdots x_n^n</math>, which is also the [[monomial]] obtained by taking the first term of each factor in <math>\textstyle \prod_{0\le i<j\le n} (x_j-x_i).</math> This proves that <math>Q=1</math> and finishes the proof:

:<math>\det(V)=\prod_{0\le i<j\le n} (x_j-x_i).</math>

===Second proof: linear maps===

Let {{mvar|F}} be a [[field (mathematics)|field]] containing all <math>x_i,</math> and <math>P_n</math> the {{mvar|F}}-[[vector space]] of the polynomials of degree less than or equal to {{mvar|n}} with coefficients in {{mvar|F}}. Let

:<math>\varphi:P_n\to F^{n+1}</math>

be the [[linear map]] defined by

:<math>p(x) \mapsto (p(x_0), p(x_1), \ldots, p(x_n)).</math>

The Vandermonde matrix is the matrix of <math>\varphi</math> with respect to the [[canonical basis|canonical bases]] of <math>P_n</math> and <math>F^{n+1}.</math>

[[Change of basis|Changing the basis]] of <math>P_n</math> amounts to multiplying the Vandermonde matrix on the right by a change-of-basis matrix {{mvar|M}}. This does not change the determinant if the determinant of {{mvar|M}} is 1.

The polynomials <math>1</math>, <math>x-x_0</math>, <math>(x-x_0)(x-x_1)</math>, …, <math>(x-x_0) (x-x_1) \cdots (x-x_{n-1})</math> are [[monic polynomial|monic]] of respective degrees 0, 1, …, {{mvar|n}}. Their matrix on the [[monomial basis]] is an [[upper-triangular matrix]] {{mvar|U}} (if the monomials are ordered in increasing degrees), with all diagonal entries equal to one. This matrix is thus a change-of-basis matrix of determinant one. The matrix of <math>\varphi</math> on this new basis is

:<math>\begin{bmatrix} 1 & 0 & 0 & \ldots & 0 \\ 1 & x_1-x_0 & 0 & \ldots & 0 \\ 1 & x_2-x_0 & (x_2-x_0)(x_2-x_1) & \ldots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_n-x_0 & (x_n-x_0)(x_n-x_1) & \ldots & (x_n-x_0)(x_n-x_1)\cdots (x_n-x_{n-1}) \end{bmatrix}.</math>
Thus the Vandermonde determinant equals the determinant of this matrix, which is the product of its diagonal entries. This proves the desired equality. Moreover, one gets the [[LU decomposition]] of {{mvar|V}} as <math>V=LU^{-1}</math>.

===Third proof: row and column operations===

The third proof is based on the fact that adding to a column of a matrix a scalar multiple of another column does not change the determinant. So, by subtracting from each column (except the first) the preceding column multiplied by <math>x_0</math>, the determinant is not changed. (These subtractions must be done starting from the last column, so that one always subtracts a column that has not yet been changed.) This gives the matrix

:<math>\begin{bmatrix} 1&0&0&0&\cdots&0\\ 1&x_1-x_0&x_1(x_1-x_0)&x_1^2(x_1-x_0)&\cdots&x_1^{n-1}(x_1-x_0)\\ 1&x_2-x_0&x_2(x_2-x_0)&x_2^2(x_2-x_0)&\cdots&x_2^{n-1}(x_2-x_0)\\ \vdots&\vdots&\vdots&\vdots&\ddots&\vdots\\ 1&x_n-x_0&x_n(x_n-x_0)&x_n^2(x_n-x_0)&\cdots&x_n^{n-1}(x_n-x_0)\\ \end{bmatrix},</math>

which has the same determinant as {{mvar|V}}. Applying the [[Laplace expansion|Laplace expansion formula]] along the first row, we obtain <math>\det(V)=\det(B)</math>, with

:<math>B = \begin{bmatrix} x_1-x_0&x_1(x_1-x_0)&x_1^2(x_1-x_0)&\cdots&x_1^{n-1}(x_1-x_0)\\ x_2-x_0&x_2(x_2-x_0)&x_2^2(x_2-x_0)&\cdots&x_2^{n-1}(x_2-x_0)\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ x_n-x_0&x_n(x_n-x_0)&x_n^2(x_n-x_0)&\cdots&x_n^{n-1}(x_n-x_0)\\ \end{bmatrix}.</math>

As all the entries in the <math>i</math>-th row of <math>B</math> (numbering the rows from 0) have a factor of <math>x_{i+1}-x_0</math>, one can take these factors out and obtain

:<math>\det(V)=(x_1-x_0)(x_2-x_0)\cdots(x_n-x_0)\begin{vmatrix} 1&x_1&x_1^2&\cdots&x_1^{n-1}\\ 1&x_2&x_2^2&\cdots&x_2^{n-1}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 1&x_n&x_n^2&\cdots&x_n^{n-1}\\ \end{vmatrix}=\prod_{1\le i\le n}(x_i-x_0)\det(V'),</math>

where <math>V'</math> is the Vandermonde matrix in <math>x_1,\ldots, x_n</math>.
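The reduction just derived can be turned into a short routine: peel off the factors <math>x_i-x_0</math> and recurse on the smaller Vandermonde matrix. This is an illustrative sketch (the function name and sample points are choices made here, not part of the article), assuming NumPy for the cross-check.

```python
# Compute the Vandermonde determinant via the row/column reduction:
# det(V) = prod_{i >= 1} (x_i - x_0) * det(V'), recursing on x_1, ..., x_n.
import numpy as np

def vandermonde_det(xs):
    """Determinant of the Vandermonde matrix of the points xs."""
    if len(xs) <= 1:
        return 1.0                                # 1x1 (or empty) case: det = 1
    x0, rest = xs[0], xs[1:]
    factor = np.prod([xi - x0 for xi in rest])    # prod_{i >= 1} (x_i - x_0)
    return factor * vandermonde_det(rest)         # recurse on V' in x_1..x_n

xs = [2.0, 3.0, 5.0, 7.0]
print(vandermonde_det(xs))                            # 240.0
print(np.linalg.det(np.vander(xs, increasing=True)))  # matches up to rounding
```

Each recursive call performs one round of the iteration described next, so the function terminates with the full product of differences.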
Iterating this process on this smaller Vandermonde matrix, one eventually gets the desired expression of <math>\det(V)</math> as the product of all <math>x_j-x_i</math> such that <math>i<j</math>.

===Rank of the Vandermonde matrix===

* An {{math|''m'' × ''n''}} rectangular Vandermonde matrix such that {{math|''m'' ≤ ''n''}} has [[rank of a matrix|rank]] {{math|''m''}} [[if and only if]] all {{math|''x''<sub>''i''</sub>}} are distinct.
* An {{math|''m'' × ''n''}} rectangular Vandermonde matrix such that {{math|''m'' ≥ ''n''}} has [[rank of a matrix|rank]] {{math|''n''}} if and only if there are {{mvar|n}} of the {{math|''x''<sub>''i''</sub>}} that are distinct.
* A square Vandermonde matrix is [[invertible matrix|invertible]] if and only if the {{math|''x''<sub>''i''</sub>}} are distinct. An explicit formula for the inverse is known (see below).<ref>{{Cite book | title = Inverse of the Vandermonde matrix with applications | last = Turner | first = L. Richard | date = August 1966 | url =https://ntrs.nasa.gov/api/citations/19660023042/downloads/19660023042.pdf }}</ref><ref name="MS">{{Cite journal | volume = 65 | issue = 2 | pages = 95–100 | last = Macon | first = N. |author2=A. Spitzbart | title = Inverses of Vandermonde Matrices | journal = The American Mathematical Monthly | date = February 1958 | jstor = 2308881 | doi = 10.2307/2308881}}</ref>
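The rank statements above can be illustrated numerically. This is a sketch assuming NumPy; the particular point sets are arbitrary choices meant to exercise the distinct and repeated cases.

```python
# Illustrate the rank statements: a square Vandermonde matrix is invertible
# iff the points are distinct, and a tall m x n one (m >= n) has full column
# rank iff n of the points are distinct.
import numpy as np

distinct = np.array([0.0, 1.0, 2.0, 3.0])
repeated = np.array([0.0, 1.0, 1.0, 3.0])

V_good = np.vander(distinct, increasing=True)   # 4x4, distinct points
V_bad = np.vander(repeated, increasing=True)    # 4x4, two equal rows

print(np.linalg.matrix_rank(V_good))   # 4: invertible
print(np.linalg.matrix_rank(V_bad))    # 3: rows for the repeated point coincide

# 4x3 case with only three distinct points: still full column rank 3
V_tall = np.vander(np.array([0.0, 1.0, 2.0, 2.0]), N=3, increasing=True)
print(np.linalg.matrix_rank(V_tall))   # 3
```

A repeated point duplicates a row, so the square matrix drops rank, while the tall matrix keeps full column rank as long as enough points remain distinct.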