{{Short description|Algebraic structure}}{{Ring theory sidebar|Commutative}} {{CS1 config|mode=cs2}} In [[mathematics]], especially in the field of [[algebra]], a '''polynomial ring''' or '''polynomial algebra''' is a [[ring (mathematics)|ring]] formed from the [[set (mathematics)|set]] of [[polynomial]]s in one or more [[indeterminate (variable)|indeterminate]]s (traditionally also called [[variable (mathematics)|variables]]) with [[coefficient]]s in another [[ring (mathematics)|ring]], often a [[field (mathematics)|field]]. Often, the term "polynomial ring" refers implicitly to the special case of a polynomial ring in one indeterminate over a field. The importance of such polynomial rings lies in the large number of properties that they have in common with the ring of [[Integer#Algebraic_properties|integers]]. Polynomial rings occur and are often fundamental in many parts of mathematics such as [[number theory]], [[commutative algebra]], and [[algebraic geometry]]. In [[ring theory]], many classes of rings, such as [[unique factorization domain]]s, [[regular ring]]s, [[group ring]]s, [[formal power series|rings of formal power series]], [[Ore polynomial]]s, and [[graded ring]]s, have been introduced for generalizing some properties of polynomial rings. A closely related notion is that of the [[ring of polynomial functions]] on a [[vector space]], and, more generally, that of the [[ring of regular functions]] on an [[algebraic variety]]. == Definition (univariate case)== Let {{math|''K''}} be a [[field (mathematics)|field]] or (more generally) a [[commutative ring]]. The '''polynomial ring''' in {{math|''X''}} over {{math|''K''}}, which is denoted {{math|''K''[''X'']}}, can be defined in several equivalent ways. One of them is to define {{math|''K''[''X'']}} as the set of expressions, called '''polynomials''' in {{math|''X''}}, of the form<ref>{{harvnb|Herstein|1975|p=153}}</ref> :<math>p = p_0 + p_1 X + p_2 X^2 + \cdots + p_{m - 1} X^{m - 1} + p_m X^m,</math> where {{math|''p''<sub>0</sub>, ''p''<sub>1</sub>, …, ''p''<sub>''m''</sub>}}, the '''coefficients''' of {{math|''p''}}, are elements of {{math|''K''}}, {{math|''p{{sub|m}}'' ≠ 0}} if {{math|''m'' > 0}}, and {{math|''X'', ''X''{{i sup|2}}, …,}} are symbols, which are considered as "powers" of {{math|''X''}}, and follow the usual rules of [[exponentiation]]: {{math|1=''X''{{i sup|0}} = 1}}, {{math|1=''X''{{i sup|1}} = ''X''}}, and <math> X^k\, X^l = X^{k+l}</math> for any [[nonnegative integer]]s {{math|''k''}} and {{math|''l''}}. The symbol {{math|''X''}} is called an indeterminate<ref>Herstein, Hall p. 73</ref> or variable.<ref>{{harvnb|Lang|2002|p=97}}</ref> (The term "variable" comes from the terminology of [[polynomial function]]s. However, here, {{mvar|X}} has no value (other than itself), and cannot vary, being a ''constant'' in the polynomial ring.) Two polynomials are equal when the corresponding coefficients of each {{math|''X''{{i sup|''k''}}}} are equal. One can think of the ring {{math|''K''[''X'']}} as arising from {{math|''K''}} by adding one new element {{math|''X''}} that is external to {{math|''K''}}, commutes with all elements of {{math|''K''}}, and has no other specific properties. This can be used for an equivalent definition of polynomial rings. The polynomial ring in {{math|''X''}} over {{math|''K''}} is equipped with an addition, a multiplication and a [[scalar multiplication]] that make it a [[Commutative algebra (structure)|commutative algebra]]. 
These operations are defined according to the ordinary rules for manipulating algebraic expressions. Specifically, if :<math>p = p_0 + p_1 X + p_2 X^2 + \cdots + p_m X^m,</math> and :<math>q = q_0 + q_1 X + q_2 X^2 + \cdots + q_n X^n,</math> then :<math>p + q = r_0 + r_1 X + r_2 X^2 + \cdots + r_k X^k,</math> and :<math>pq = s_0 + s_1 X + s_2 X^2 + \cdots + s_l X^l,</math> where {{math|1=''k'' = max(''m'', ''n''), ''l'' = ''m'' + ''n''}}, :<math>r_i = p_i + q_i</math> and :<math>s_i = p_0 q_i + p_1 q_{i-1} + \cdots + p_i q_0.</math> In these formulas, the polynomials {{math|''p''}} and {{math|''q''}} are extended by adding "dummy terms" with zero coefficients, so that all {{math|''p''<sub>''i''</sub>}} and {{math|''q''<sub>''i''</sub>}} that appear in the formulas are defined. Specifically, if {{math|''m'' < ''n''}}, then {{math|1=''p''<sub>''i''</sub> = 0}} for {{math|''m'' < ''i'' ≤ ''n''}}. The scalar multiplication is the special case of the multiplication where {{math|1=''p'' = ''p''<sub>0</sub>}} is reduced to its ''constant term'' (the term that is independent of {{math|''X''}}); that is :<math>p_0\left(q_0 + q_1 X + \dots + q_n X^n\right) = p_0 q_0 + \left(p_0 q_1\right)X + \cdots + \left(p_0 q_n\right)X^n.</math> It is straightforward to verify that these three operations satisfy the axioms of a commutative algebra over {{mvar|K}}. Therefore, polynomial rings are also called ''polynomial algebras''. Another equivalent definition, although less intuitive, is often preferred because it is easier to make completely rigorous; it consists in defining a polynomial as an infinite [[sequence]] {{math|(''p''<sub>0</sub>, ''p''<sub>1</sub>, ''p''<sub>2</sub>, …)}} of elements of {{math|''K''}}, having the property that only a finite number of the elements are nonzero, or equivalently, a sequence for which there is some {{math|''m''}} so that {{nowrap|1=''p''<sub>''n''</sub> = 0}} for {{math|''n'' > ''m''}}. In this case, {{math|''p''{{sub|0}}}} and {{mvar|X}} are considered as alternate notations for the sequences {{math|(''p''{{sub|0}}, 0, 0, …)}} and {{math|(0, 1, 0, 0, …)}}, respectively. A straightforward use of the operation rules shows that the expression :<math>p_0 + p_1 X + p_2 X^2 + \cdots + p_m X^m</math> is then an alternate notation for the sequence :{{math|(''p''<sub>0</sub>, ''p''<sub>1</sub>, ''p''<sub>2</sub>, …, ''p''<sub>''m''</sub>, 0, 0, …)}}. ===Terminology=== Let :<math>p = p_0 + p_1 X + p_2 X^2 + \cdots + p_{m - 1} X^{m - 1} + p_m X^m,</math> be a nonzero polynomial with <math>p_m\ne 0.</math> The ''constant term'' of {{math|''p''}} is <math>p_0.</math> It is zero in the case of the zero polynomial. 
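The coefficient-sequence definition translates directly into a program. The following is a minimal illustrative sketch (the function names are ad hoc, not taken from any library): a polynomial over the rationals is stored as the Python list of its coefficients <code>[p0, p1, …, pm]</code>, and the addition and multiplication formulas above are applied term by term; the <code>degree</code> helper anticipates the terminology defined next.

<syntaxhighlight lang="python">
from fractions import Fraction

def trim(p):
    """Remove trailing zero coefficients; the zero polynomial is the empty list."""
    while p and p[-1] == 0:
        p.pop()
    return p

def poly_add(p, q):
    """Coefficient-wise sum: r_i = p_i + q_i (missing coefficients count as 0)."""
    r = [Fraction(0)] * max(len(p), len(q))
    for i, c in enumerate(p):
        r[i] += c
    for i, c in enumerate(q):
        r[i] += c
    return trim(r)

def poly_mul(p, q):
    """Cauchy product: s_i = p_0*q_i + p_1*q_{i-1} + ... + p_i*q_0."""
    if not p or not q:
        return []
    s = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            s[i + j] += a * b
    return trim(s)

def degree(p):
    """Degree of p, with the convention that the zero polynomial has degree -1."""
    return len(p) - 1

# (1 + 2X)(3 + X^2) = 3 + 6X + X^2 + 2X^3
print(poly_mul([Fraction(1), Fraction(2)], [Fraction(3), Fraction(0), Fraction(1)]))
</syntaxhighlight>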
The ''degree'' of {{math|''p''}}, written {{math|deg(''p'')}}, is <math>m,</math> the largest {{math|''k''}} such that the coefficient of {{math|''X''{{sup|''k''}}}} is not zero.<ref>{{harvnb|Herstein|1975|p=154}}</ref> The ''leading coefficient'' of {{math|''p''}} is <math>p_m.</math><ref>{{harvnb|Lang|2002|p=100}}</ref> In the special case of the zero polynomial, all of whose coefficients are zero, the leading coefficient is undefined, and the degree has been variously left undefined,<ref>{{citation|title=Calculus Single Variable|first1=Howard|last1=Anton|first2=Irl C.|last2=Bivens|first3=Stephen|last3=Davis|publisher=Wiley |year=2012|isbn=9780470647707|page=31|url=https://books.google.com/books?id=U2uv84cpJHQC&pg=RA1-PA31}}.</ref> defined to be {{math|−1}},<ref>{{citation|title=Rational Algebraic Curves: A Computer Algebra Approach|volume=22|series=Algorithms and Computation in Mathematics|first1=J. Rafael|last1=Sendra|first2=Franz|last2=Winkler|first3=Sonia|last3=Pérez-Diaz|publisher=Springer|year=2007|isbn=9783540737247|page=250|url=https://books.google.com/books?id=puWxs7KG2D0C&pg=PA250}}.</ref> or defined to be {{math|−∞}}.<ref>{{citation|title=Elementary Matrix Theory|publisher=Dover|first=Howard Whitley|last=Eves|author-link=Howard Eves|year=1980|isbn=9780486150277|page=183|url=https://books.google.com/books?id=ayVxeUNbZRAC&pg=PA183}}.</ref> A ''constant polynomial'' is either the zero polynomial, or a polynomial of degree zero. A nonzero polynomial is [[monic polynomial|monic]] if its leading coefficient is <math>1.</math> Given two polynomials {{mvar|p}} and {{mvar|q}}, if the degree of the zero polynomial is defined to be <math>-\infty,</math> one has :<math>\deg(p+q) \le \max (\deg(p), \deg (q)),</math> and, over a [[field (mathematics)|field]], or more generally an [[integral domain]],<ref>{{harvnb|Herstein|1975|pp=155,162}}</ref> :<math>\deg(pq) = \deg(p) + \deg(q).</math> It follows immediately that, if {{math|''K''}} is an integral domain, then so is {{math|''K''[''X'']}}.<ref>{{harvnb|Herstein|1975|p=162}}</ref> It follows also that, if {{math|''K''}} is an integral domain, a polynomial is a [[unit (ring theory)|unit]] (that is, it has a [[multiplicative inverse]]) if and only if it is constant and is a unit in {{mvar|K}}. Two polynomials are [[associated element|associated]] if either one is the product of the other by a unit. Over a field, every nonzero polynomial is associated to a unique monic polynomial. Given two polynomials, {{mvar|p}} and {{mvar|q}}, one says that {{mvar|p}} ''divides'' {{mvar|q}}, {{mvar|p}} is a ''divisor'' of {{mvar|q}}, or {{mvar|q}} is a ''multiple'' of {{mvar|p}}, if there is a polynomial {{mvar|r}} such that {{math|1=''q'' = ''pr''}}. A polynomial is [[irreducible polynomial|irreducible]] if it is not the product of two non-constant polynomials, or, equivalently, if its divisors are either constant polynomials or have the same degree as the polynomial itself. === Polynomial evaluation === {{further|Polynomial evaluation}} Let {{mvar|K}} be a field or, more generally, a [[commutative ring]], and {{mvar|R}} a ring containing {{mvar|K}}. For any polynomial {{mvar|P}} in {{math|''K''[''X'']}} and any element {{mvar|a}} in {{mvar|R}}, the substitution of {{mvar|X}} with {{mvar|a}} in {{mvar|P}} defines an element of {{math|''R''}}, which is [[Polynomial notation|denoted]] {{math|''P''(''a'')}}. This element is obtained by carrying out in {{mvar|R}}, after the substitution, the operations indicated by the expression of the polynomial. 
This computation is called the '''evaluation''' of {{math|''P''}} at {{math|''a''}}. For example, if we have :<math>P = X^2 - 1,</math> we have :<math>\begin{align} P(3) &= 3^2-1 = 8, \\ P(X^2+1) &= \left(X^2 + 1\right)^2 - 1 = X^4 + 2X^2 \end{align}</math> (in the first example {{math|1=''R'' = ''K''}}, and in the second one {{math|1=''R'' = ''K''[''X'']}}). Substituting {{math|''X''}} for itself results in :<math>P = P(X),</math> explaining why the sentences "Let {{mvar|P}} be a polynomial" and "Let {{math|''P''(''X'')}} be a polynomial" are equivalent. The ''polynomial function'' defined by a polynomial {{mvar|P}} is the function from {{mvar|K}} into {{mvar|K}} that is defined by <math>x\mapsto P(x).</math> If {{mvar|K}} is an infinite field, two different polynomials define different polynomial functions, but this property is false for finite fields. For example, if {{mvar|K}} is a field with {{mvar|q}} elements, then the polynomials {{math|0}} and {{math|''X''<sup>''q''</sup> − ''X''}} both define the zero function. For every {{math|''a''}} in {{math|''R''}}, the evaluation at {{mvar|a}}, that is, the map <math>P \mapsto P(a)</math> defines an [[algebra homomorphism]] from {{math|''K''[''X'']}} to {{math|''R''}}, which is the unique homomorphism from {{math|''K''[''X'']}} to {{math|''R''}} that fixes {{mvar|K}}, and maps {{mvar|X}} to {{mvar|a}}. In other words, {{math|''K''[''X'']}} has the following [[universal property]]: :For every ring {{mvar|R}} containing {{mvar|K}}, and every element {{mvar|a}} of {{mvar|R}}, there is a unique algebra homomorphism from {{math|''K''[''X'']}} to {{mvar|R}} that fixes {{mvar|K}}, and maps {{mvar|X}} to {{mvar|a}}. As for all universal properties, this defines the pair {{math|(''K''[''X''], ''X'')}} up to a unique isomorphism, and can therefore be taken as a definition of {{math|''K''[''X'']}}. The [[Image (mathematics)|image]] of the map <math>P \mapsto P(a)</math>, that is, the subset of {{mvar|R}} obtained by substituting {{mvar|a}} for {{mvar|X}} in elements of {{math|''K''[''X'']}}, is denoted {{math|''K''[''a'']}}.<ref>Knapp, Anthony W. (2006), ''Basic Algebra'', [[Birkhäuser]], p. 121.</ref> For example, <math>\Z[\sqrt{2}]=\{P(\sqrt{2})\mid P(X)\in\Z[X]\}</math>, and the simplification rules for the powers of a square root imply <math>\Z[\sqrt{2}]= \{a+b\sqrt 2 \mid a\in \Z, b\in \Z\}.</math> == Univariate polynomials over a field == If {{mvar|K}} is a [[field (mathematics)|field]], the polynomial ring {{math|''K''[''X'']}} has many properties that are similar to those of the [[ring (mathematics)|ring]] of integers <math>\Z.</math> Most of these similarities result from the similarity between the [[long division|long division of integers]] and the [[polynomial long division|long division of polynomials]]. Most of the properties of {{math|''K''[''X'']}} that are listed in this section do not remain true if {{mvar|K}} is not a field, or if one considers polynomials in several indeterminates. Like for integers, the [[Euclidean division of polynomials]] has a property of uniqueness. That is, given two polynomials {{mvar|a}} and {{math|''b'' ≠ 0}} in {{math|''K''[''X'']}}, there is a unique pair {{math|(''q'', ''r'')}} of polynomials such that {{math|1=''a'' = ''bq'' + ''r''}}, and either {{math|1=''r'' = 0}} or {{math|deg(''r'') < deg(''b'')}}. This makes {{math|''K''[''X'']}} a [[Euclidean domain]]. 
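When polynomials are represented by their coefficient lists, this Euclidean division is easy to program. The following is a minimal illustrative sketch (not a library routine), with coefficients assumed to lie in a field and represented by <code>Fraction</code>; it returns the quotient and the remainder, the remainder having degree less than that of the divisor.

<syntaxhighlight lang="python">
from fractions import Fraction

def poly_divmod(a, b):
    """Euclidean division in K[X]: return (q, r) with a = b*q + r and deg r < deg b.
    Polynomials are lists of coefficients [c0, c1, ...] over a field."""
    if not b:
        raise ZeroDivisionError("division by the zero polynomial")
    r = list(a)
    q = [Fraction(0)] * max(len(a) - len(b) + 1, 0)
    while len(r) >= len(b):
        shift = len(r) - len(b)
        c = r[-1] / b[-1]            # cancel the leading term of r
        q[shift] = c
        for i, coeff in enumerate(b):
            r[i + shift] -= c * coeff
        while r and r[-1] == 0:      # keep the representation trimmed
            r.pop()
    return q, r

# Divide X^3 - 1 by X - 1: quotient X^2 + X + 1, remainder 0.
a = [Fraction(-1), Fraction(0), Fraction(0), Fraction(1)]
b = [Fraction(-1), Fraction(1)]
print(poly_divmod(a, b))
</syntaxhighlight>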
However, most other Euclidean domains (except integers) do not have any property of uniqueness for the division nor an easy algorithm (such as long division) for computing the Euclidean division. The Euclidean division is the basis of the [[Euclidean algorithm for polynomials]] that computes a [[polynomial greatest common divisor]] of two polynomials. Here, "greatest" means "having a maximal degree" or, equivalently, being maximal for the [[preorder]] defined by the degree. Given a greatest common divisor of two polynomials, the other greatest common divisors are obtained by multiplication by a nonzero constant (that is, all greatest common divisors of {{mvar|a}} and {{mvar|b}} are associated). In particular, two polynomials that are not both zero have a unique greatest common divisor that is monic (leading coefficient equal to {{val|1}}). The [[extended Euclidean algorithm]] allows computing (and proving) [[Bézout's identity]]. In the case of {{math|''K''[''X'']}}, it may be stated as follows. Given two polynomials {{mvar|p}} and {{mvar|q}} of respective degrees {{mvar|m}} and {{mvar|n}}, if their monic greatest common divisor {{mvar|g}} has the degree {{mvar|d}}, then there is a unique pair {{math|(''a'', ''b'')}} of polynomials such that :<math>ap + bq = g,</math> and :<math>\deg (a) \le n-d, \quad \deg(b) < m-d.</math> (For making this true in the limiting case where {{math|1=''m'' = ''d''}} or {{math|1=''n'' = ''d''}}, one has to define as negative the degree of the zero polynomial. Moreover, the equality <math>\deg (a)= n-d</math> can occur only if {{mvar|p}} and {{math|q}} are associated.) The uniqueness property is rather specific to {{math|''K''[''X'']}}. In the case of the integers the same property is true, if degrees are replaced by absolute values, but, for having uniqueness, one must require {{math|''a'' > 0}}. [[Euclid's lemma]] applies to {{math|''K''[''X'']}}. That is, if {{mvar|a}} divides {{mvar|bc}}, and is [[coprime]] with {{mvar|b}}, then {{mvar|a}} divides {{mvar|c}}. Here, ''coprime'' means that the monic greatest common divisor is {{val|1}}. ''Proof:'' By hypothesis and Bézout's identity, there are {{mvar|e}}, {{mvar|p}}, and {{mvar|q}} such that {{math|1=''ae'' = ''bc''}} and {{math|1=1 = ''ap'' + ''bq''}}. So <math>c=c(ap+bq)=cap+aeq=a(cp+eq).</math> The [[unique factorization]] property results from Euclid's lemma. In the case of integers, this is the [[fundamental theorem of arithmetic]]. In the case of {{math|''K''[''X'']}}, it may be stated as: ''every non-constant polynomial can be expressed in a unique way as the product of a constant, and one or several irreducible monic polynomials; this decomposition is unique up to the order of the factors.'' In other terms {{math|''K''[''X'']}} is a [[unique factorization domain]]. If {{mvar|K}} is the field of complex numbers, the [[fundamental theorem of algebra]] asserts that a univariate polynomial is irreducible if and only if its degree is one. In this case the unique factorization property can be restated as: ''every non-constant univariate polynomial over the complex numbers can be expressed in a unique way as the product of a constant, and one or several polynomials of the form'' {{math|''X'' − ''r''}}; ''this decomposition is unique up to the order of the factors.'' For each factor, {{mvar|r}} is a [[root of a function|root]] of the polynomial, and the number of occurrences of a factor is the [[multiplicity (mathematics)|multiplicity]] of the corresponding root. 
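Computer algebra systems implement the extended Euclidean algorithm for polynomials directly. As an illustration, here is a short session with the third-party SymPy library (assumed to be installed; the two polynomials are arbitrary examples), in which <code>gcdex</code> returns the Bézout coefficients together with the monic greatest common divisor.

<syntaxhighlight lang="python">
from sympy import symbols, gcdex, expand

X = symbols('X')

p = X**4 - 2*X**3 - 6*X**2 + 12*X + 15
q = X**3 + X**2 - 4*X - 4

# gcdex returns (a, b, g) with a*p + b*q = g, where g is the monic gcd of p and q.
a, b, g = gcdex(p, q, X)
print(g)                        # X + 1
print(expand(a*p + b*q - g))    # 0, which verifies Bezout's identity
</syntaxhighlight>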
===Derivation=== {{main|Formal derivative|Derivation (differential algebra)}} The [[formal derivative|(formal) derivative]] of the polynomial :<math>a_0+a_1X+a_2X^2+\cdots+a_nX^n</math> is the polynomial :<math>a_1+2a_2X+\cdots+na_nX^{n-1}.</math> In the case of polynomials with [[real number|real]] or [[complex number|complex]] coefficients, this is the standard [[derivative]]. The above formula defines the derivative of a polynomial even if the coefficients belong to a ring on which no notion of [[limit (mathematics)|limit]] is defined. The derivative makes the polynomial ring a [[differential algebra]]. The existence of the derivative is one of the main properties of a polynomial ring that is not shared with integers, and makes some computations easier on a polynomial ring than on integers. ====Square-free factorization==== {{main|Square-free polynomial}} A polynomial with coefficients in a field or integral domain is ''square-free'' if it does not have a [[multiple root]] in an [[algebraically closed field]] containing its coefficients. In particular, a polynomial of degree {{mvar|n}} with real or complex coefficients is square-free if and only if it has {{mvar|n}} distinct complex roots. Equivalently, a polynomial over a field is square-free if and only if the [[Polynomial greatest common divisor|greatest common divisor]] of the polynomial and its derivative is {{math|1}}. A ''square-free factorization'' of a polynomial is an expression for that polynomial as a product of powers of [[pairwise relatively prime]] square-free factors. Over the real numbers (or any other field of [[characteristic 0]]), such a factorization can be computed efficiently by [[Yun's algorithm]]. Less efficient algorithms are known for [[Factorization_of_polynomials_over_finite_fields#Square-free_factorization|square-free factorization of polynomials over finite fields]]. ====Lagrange interpolation==== {{main|Lagrange polynomial}} Given a finite set of ordered pairs <math>(x_j, y_j)</math> with entries in a field and distinct values <math>x_j</math>, among the polynomials <math>f(x)</math> that interpolate these points (so that <math>f(x_j) = y_j</math> for all <math>j</math>), there is a unique polynomial of smallest degree. This is the ''Lagrange interpolation polynomial'' <math>L(x)</math>. If there are <math>k</math> ordered pairs, the degree of <math>L(x)</math> is at most <math>k - 1</math>. The polynomial <math>L(x)</math> can be computed explicitly in terms of the input data <math>(x_j, y_j)</math>. ====Polynomial decomposition==== {{main|Polynomial decomposition}} A ''decomposition'' of a polynomial is a way of expressing it as a [[function composition|composition]] of other polynomials of degree larger than 1. A polynomial that cannot be decomposed is ''indecomposable''. [[Ritt's polynomial decomposition theorem]] asserts that if <math>f = g_1 \circ g_2 \circ \cdots \circ g_m = h_1 \circ h_2 \circ \cdots\circ h_n</math> are two different decompositions of the polynomial <math>f</math>, then <math>m = n</math> and the degrees of the indecomposables in one decomposition are the same as the degrees of the indecomposables in the other decomposition (though not necessarily in the same order). === Factorization === {{main|Polynomial factorization}} Except for factorization, all previous properties of {{math|''K''[''X'']}} are [[effective proof|effective]], since their proofs, as sketched above, are associated with [[algorithm]]s for testing the property and computing the polynomials whose existence is asserted. 
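For instance, the square-free criterion above (the greatest common divisor of the polynomial and its derivative is a constant) takes only a few lines with the third-party SymPy library (assumed to be installed; the polynomials are arbitrary examples).

<syntaxhighlight lang="python">
from sympy import symbols, diff, gcd, sqf_list

X = symbols('X')

def is_square_free(p):
    """Over a field of characteristic 0, p is square-free iff gcd(p, p') is a constant."""
    return gcd(p, diff(p, X)).is_constant()

print(is_square_free(X**3 - X))      # True:  X(X - 1)(X + 1) has no repeated factor
print(is_square_free(X**3 - X**2))   # False: X^2 divides it

# sqf_list gives a square-free factorization as (constant, [(factor, multiplicity), ...]).
print(sqf_list(X**3 - X**2))
</syntaxhighlight>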
Moreover, these algorithms are efficient, as their [[computational complexity]] is a [[quadratic time|quadratic]] function of the input size. The situation is completely different for factorization: the proof of the unique factorization does not give any hint of a method for factorizing. Already for the integers, there is no known algorithm running on a classical (non-quantum) computer for factorizing them in [[polynomial time]]. This is the basis of the [[RSA cryptosystem]], widely used for secure Internet communications. In the case of {{math|''K''[''X'']}}, the factors, and the methods for computing them, depend strongly on {{mvar|K}}. Over the complex numbers, the irreducible factors (those that cannot be factorized further) are all of degree one, while, over the real numbers, there are irreducible polynomials of degree 2, and, over the [[rational number]]s, there are irreducible polynomials of any degree. For example, the polynomial <math>X^4-2</math> is irreducible over the rational numbers, is factored as <math>(X - \sqrt[4]2)(X+\sqrt[4]2)(X^2+\sqrt 2)</math> over the real numbers, and as <math>(X-\sqrt[4]2)(X+\sqrt[4]2)(X-i\sqrt[4]2)(X+i\sqrt[4]2)</math> over the complex numbers. The existence of a factorization algorithm depends also on the ground field. In the case of the real or complex numbers, the [[Abel–Ruffini theorem]] shows that the roots of some polynomials, and thus the irreducible factors, cannot be computed exactly. Therefore, a factorization algorithm can compute only approximations of the factors. Various algorithms have been designed for computing such approximations; see [[Root finding of polynomials]]. There is an example of a field {{math|''K''}} such that there exist exact algorithms for the arithmetic operations of {{math|''K''}}, but there cannot exist any algorithm for deciding whether a polynomial of the form <math>X^p - a</math> is [[irreducible polynomial|irreducible]] or is a product of polynomials of lower degree.<ref>{{citation |author1=Fröhlich, A.|author2=Shepherson, J. C.|title = On the factorisation of polynomials in a finite number of steps|journal = Mathematische Zeitschrift|volume = 62|issue=1|year = 1955|issn = 0025-5874|doi=10.1007/BF01180640|pages=331–334|s2cid=119955899 }}</ref> On the other hand, over the rational numbers and over finite fields, the situation is better than for [[integer factorization]], as there are [[factorization of polynomials|factorization algorithm]]s that have a [[polynomial complexity]]. They are implemented in most general purpose [[computer algebra system]]s. ===Minimal polynomial=== {{main|Minimal polynomial (field theory)}} If {{math|''θ''}} is an element of an [[associative algebra|associative {{mvar|K}}-algebra]] {{math|''L''}}, the [[#Polynomial evaluation|polynomial evaluation]] at {{math|''θ''}} is the unique [[algebra homomorphism]] {{math|''φ''}} from {{math|''K''[''X'']}} into {{math|''L''}} that maps {{math|''X''}} to {{math|''θ''}} and does not affect the elements of {{math|''K''}} itself (it is the [[identity function|identity map]] on {{math|''K''}}). It consists of ''substituting'' {{math|''X''}} with {{math|''θ''}} in every polynomial. That is, : <math> \varphi\left(a_m X^m + a_{m - 1} X^{m - 1} + \cdots + a_1 X + a_0\right) = a_m \theta^m + a_{m - 1} \theta^{m - 1} + \cdots + a_1 \theta + a_0. </math> The image of this ''evaluation homomorphism'' is the subalgebra generated by {{mvar|θ}}, which is necessarily commutative. 
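For example, {{mvar|θ}} may be a square matrix, the algebra {{mvar|L}} being a matrix algebra over {{mvar|K}}; evaluating a polynomial at {{mvar|θ}} then simply substitutes the matrix for {{mvar|X}}, the constant term being multiplied by the identity matrix. The following is a small illustrative sketch with the third-party SymPy library (assumed to be installed; the matrix is an arbitrary example).

<syntaxhighlight lang="python">
from sympy import Matrix, Poly, eye, symbols, zeros

X = symbols('X')

theta = Matrix([[0, 2],
                [1, 0]])     # an arbitrary 2x2 rational matrix

def evaluate(p, theta):
    """Evaluation homomorphism K[X] -> L at theta: substitute theta for X,
    the constant term c being sent to c times the identity matrix."""
    n = theta.shape[0]
    result = zeros(n, n)
    power = eye(n)                                  # theta**0
    for c in reversed(Poly(p, X).all_coeffs()):     # coefficients from degree 0 upwards
        result = result + c * power
        power = power * theta
    return result

print(evaluate(X**2 - 2, theta))   # the zero matrix: X^2 - 2 annihilates theta
print(evaluate(X + 3, theta))      # theta + 3*I
</syntaxhighlight>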
If {{math|''φ''}} is injective, the subalgebra generated by {{mvar|θ}} is isomorphic to {{math|''K''[''X'']}}. In this case, this subalgebra is often denoted by {{math|''K''[''θ'']}}. The notation ambiguity is generally harmless, because of the isomorphism. {{anchor|minimal polynomial}} If the evaluation homomorphism is not injective, this means that its [[kernel (algebra)|kernel]] is a nonzero [[ideal (ring theory)|ideal]], consisting of all polynomials that become zero when {{mvar|X}} is substituted with {{mvar|θ}}. This ideal consists of all multiples of some monic polynomial, that is called the '''minimal polynomial''' of {{mvar|θ}}. The term ''minimal'' is motivated by the fact that its degree is minimal among the degrees of the elements of the ideal. There are two main cases where minimal polynomials are considered. In [[field theory (mathematics)|field theory]] and [[number theory]], an element {{mvar|θ}} of an [[extension field]] {{mvar|L}} of {{mvar|K}} is [[algebraic element|algebraic]] over {{mvar|K}} if it is a root of some polynomial with coefficients in {{mvar|K}}. The [[minimal polynomial (field theory)|minimal polynomial]] over {{mvar|K}} of {{mvar|θ}} is thus the monic polynomial of minimal degree that has {{mvar|θ}} as a root. Because {{mvar|L}} is a field, this minimal polynomial is necessarily [[irreducible polynomial|irreducible]] over {{mvar|K}}. For example, the minimal polynomial (over the reals as well as over the rationals) of the [[complex number]] {{mvar|i}} is <math>X^2 + 1</math>. The [[cyclotomic polynomial]]s are the minimal polynomials of the [[roots of unity]]. In [[linear algebra]], the {{math|''n''×''n''}} [[square matrices]] over {{mvar|K}} form an [[associative algebra|associative {{mvar|K}}-algebra]] of finite dimension (as a vector space). Therefore the evaluation homomorphism cannot be injective, and every matrix has a [[minimal polynomial (linear algebra)|minimal polynomial]] (not necessarily irreducible). By [[Cayley–Hamilton theorem]], the evaluation homomorphism maps to zero the [[characteristic polynomial]] of a matrix. It follows that the minimal polynomial divides the characteristic polynomial, and therefore that the degree of the minimal polynomial is at most {{mvar|n}}. === Quotient ring=== In the case of {{math|''K''[''X'']}}, the [[quotient ring]] by an ideal can be built, as in the general case, as a set of [[equivalence class]]es. However, as each equivalence class contains exactly one polynomial of minimal degree, another construction is often more convenient. Given a polynomial {{mvar|p}} of degree {{mvar|d}}, the ''quotient ring'' of {{math|''K''[''X'']}} by the [[ideal (ring theory)|ideal]] generated by {{mvar|p}} can be identified with the [[vector space]] of the polynomials of degrees less than {{mvar|d}}, with the "multiplication modulo {{mvar|p}}" as a multiplication, the ''multiplication modulo'' {{mvar|p}} consisting of the remainder under the division by {{mvar|p}} of the (usual) product of polynomials. This quotient ring is variously denoted as <math>K[X]/pK[X],</math> <math>K[X]/\langle p \rangle,</math> <math>K[X]/(p),</math> or simply <math>K[X]/p.</math> The ring <math>K[X]/(p)</math> is a field if and only if {{mvar|p}} is an [[irreducible polynomial]]. 
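Concretely, arithmetic in this quotient ring amounts to computing remainders: elements are represented by polynomials of degree less than {{math|deg(''p'')}}, and products are reduced modulo {{mvar|p}}. A minimal illustrative sketch with the third-party SymPy library (assumed to be installed), using {{math|1=''p'' = ''X''{{sup|2}} + 1}} over the rationals:

<syntaxhighlight lang="python">
from sympy import symbols, rem, expand

X = symbols('X')
p = X**2 + 1                  # irreducible over Q, so Q[X]/(p) is a field

def reduce_mod_p(f):
    """Representative of smallest degree of the class of f in K[X]/(p)."""
    return rem(f, p, X)

def mul_mod_p(f, g):
    """Multiplication in the quotient: remainder of the ordinary product."""
    return reduce_mod_p(expand(f * g))

# In Q[X]/(X^2 + 1) the class of X behaves like the imaginary unit i:
print(mul_mod_p(X, X))              # -1
print(mul_mod_p(1 + 2*X, 3 - X))    # 5*X + 5, matching (1 + 2i)(3 - i) = 5 + 5i
</syntaxhighlight>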
In fact, if {{mvar|p}} is irreducible, every nonzero polynomial {{mvar|q}} of lower degree is coprime with {{mvar|p}}, and [[Bézout's identity]] allows computing {{mvar|r}} and {{mvar|s}} such that {{math|1=''sp'' + ''qr'' = 1}}; so, {{mvar|r}} is the [[multiplicative inverse]] of {{mvar|q}} modulo {{mvar|p}}. Conversely, if {{mvar|p}} is reducible, then there exist polynomials {{mvar|a, b}} of degrees lower than {{math|deg(''p'')}} such that {{math|1=''ab'' = ''p''}} ; so {{mvar|a, b}} are nonzero [[zero divisor]]s modulo {{mvar|p}}, and cannot be invertible. For example, the standard definition of the field of the complex numbers can be summarized by saying that it is the quotient ring :<math>\mathbb C =\mathbb R[X]/(X^2+1),</math> and that the image of {{mvar|X}} in <math>\mathbb C</math> is denoted by {{mvar|i}}. In fact, by the above description, this quotient consists of all polynomials of degree one in {{mvar|i}}, which have the form {{math|''a'' + ''bi''}}, with {{mvar|a}} and {{mvar|b}} in <math>\mathbb R.</math> The remainder of the Euclidean division that is needed for multiplying two elements of the quotient ring is obtained by replacing {{math|''i''{{sup|2}}}} by {{math|−1}} in their product as polynomials (this is exactly the usual definition of the product of complex numbers). Let {{math|''θ''}} be an [[algebraic element]] in a {{mvar|K}}-algebra {{mvar|A}}. By ''algebraic'', one means that {{math|''θ''}} has a minimal polynomial {{mvar|p}}. The [[first ring isomorphism theorem]] asserts that the substitution homomorphism induces an [[isomorphism]] of <math>K[X]/(p)</math> onto the image {{math|''K''[''θ'']}} of the substitution homomorphism. In particular, if {{mvar|A}} is a [[simple extension]] of {{mvar|K}} generated by {{math|''θ''}}, this allows identifying {{mvar|A}} and <math>K[X]/(p).</math> This identification is widely used in [[algebraic number theory]]. === Modules === The [[structure theorem for finitely generated modules over a principal ideal domain]] applies to ''K''[''X''], when ''K'' is a field. This means that every finitely generated module over ''K''[''X''] may be decomposed into a [[direct sum]] of a [[free module]] and finitely many modules of the form <math>K[X]/\left\langle P^k \right\rangle</math>, where ''P'' is an [[irreducible polynomial]] over ''K'' and ''k'' a positive integer. ==Definition (multivariate case)== {{Anchor|multivariable}} Given {{mvar|n}} symbols <math>X_1, \dots, X_n,</math> called [[indeterminate (variable)|indeterminates]], a [[monomial]] (also called ''power product'') :<math>X_1^{\alpha_1}\cdots X_n^{\alpha_n}</math> is a formal product of these indeterminates, possibly raised to a nonnegative power. As usual, exponents equal to one and factors with a zero exponent can be omitted. In particular, <math>X_1^0\cdots X_n^0 =1.</math> The [[tuple]] of exponents {{math|1=''α'' = (''α''<sub>1</sub>, …, ''α''<sub>''n''</sub>)}} is called the ''multidegree'' or ''exponent vector'' of the monomial. For a less cumbersome notation, the abbreviation :<math>X^\alpha=X_1^{\alpha_1}\cdots X_n^{\alpha_n}</math> is often used. The ''degree'' of a monomial {{math|''X''<sup>''α''</sup>}}, frequently denoted {{math|deg ''α''}} or {{math|{{abs|''α''}}}}, is the sum of its exponents: :<math> \deg \alpha = \sum_{i=1}^n \alpha_i. 
</math> A ''polynomial'' in these indeterminates, with coefficients in a field {{mvar|K}}, or more generally a [[ring (mathematics)|ring]], is a finite [[linear combination]] of monomials :<math> p = \sum_\alpha p_\alpha X^\alpha</math> with coefficients in {{mvar|K}}. The ''degree'' of a nonzero polynomial is the maximum of the degrees of its monomials with nonzero coefficients. The set of polynomials in <math>X_1, \dots, X_n,</math> denoted <math>K[X_1,\dots, X_n],</math> is thus a [[vector space]] (or a [[free module]], if {{mvar|K}} is a ring) that has the monomials as a basis. <math>K[X_1,\dots, X_n]</math> is naturally equipped (see below) with a multiplication that makes it a [[ring (mathematics)|ring]] and an [[associative algebra]] over {{mvar|K}}, called ''the polynomial ring in {{mvar|n}} indeterminates'' over {{mvar|K}} (the definite article ''the'' reflects that it is uniquely defined up to the name and the order of the indeterminates). If the ring {{mvar|K}} is [[commutative ring|commutative]], <math>K[X_1,\dots, X_n]</math> is also a commutative ring. ===Operations in {{math|''K''[''X''{{sub|1}}, ..., ''X''{{sub|''n''}}]}}=== ''Addition'' and ''scalar multiplication'' of polynomials are those of a [[vector space]] or [[free module]] equipped with a specific basis (here the basis of the monomials). Explicitly, let <math>p=\sum_{\alpha\in I}p_\alpha X^\alpha,\quad q=\sum_{\beta\in J}q_\beta X^\beta,</math> where {{mvar|I}} and {{mvar|J}} are finite sets of exponent vectors. The scalar multiplication of {{mvar|p}} and a scalar <math>c\in K</math> is :<math>cp = \sum_{\alpha\in I}cp_\alpha X^\alpha.</math> The addition of {{mvar|p}} and {{mvar|q}} is :<math>p+q = \sum_{\alpha\in I\cup J}(p_\alpha+q_\alpha) X^\alpha,</math> where <math>p_\alpha=0</math> if <math>\alpha \not\in I,</math> and <math>q_\beta=0</math> if <math>\beta \not\in J.</math> Moreover, if one has <math>p_\alpha+q_\alpha=0</math> for some <math>\alpha \in I \cap J,</math> the corresponding zero term is removed from the result. The multiplication is :<math>pq = \sum_{\gamma\in I+J}\left(\sum_{\alpha, \beta\mid \alpha+\beta=\gamma} p_\alpha q_\beta\right) X^\gamma,</math> where <math>I+J</math> is the set of the sums of one exponent vector in {{mvar|I}} and one other in {{mvar|J}} (usual sum of vectors). In particular, the product of two monomials is a monomial whose exponent vector is the sum of the exponent vectors of the factors. The verification of the axioms of an [[associative algebra]] is straightforward. ===Polynomial expression=== {{main|Algebraic expression}} {{Unreferenced section|date=January 2021}} A '''polynomial expression''' is an [[expression (mathematics)|expression]] built with scalars (elements of {{mvar|K}}), indeterminates, and the operators of addition, multiplication, and exponentiation to nonnegative integer powers. As all these operations are defined in <math>K[X_1,\dots, X_n],</math> a polynomial expression represents a polynomial, that is, an element of <math>K[X_1,\dots, X_n].</math> The definition of a polynomial as a linear combination of monomials is a particular polynomial expression, which is often called the ''canonical form'', ''normal form'', or ''expanded form'' of the polynomial. 
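In a program, the canonical form is conveniently stored as a map from exponent vectors to nonzero coefficients. The sketch below is illustrative only (the representation and function name are not from any library); it implements the multiplication formula above with Python dictionaries keyed by exponent tuples.

<syntaxhighlight lang="python">
from collections import defaultdict

# A polynomial in X1, X2 is stored as {exponent_vector: coefficient},
# e.g. 2*X1*X2**3 + 5  ->  {(1, 3): 2, (0, 0): 5}.

def multiply(p, q):
    """Product of two polynomials given as dicts on the same indeterminates:
    exponent vectors add, and coefficients of equal monomials are summed."""
    result = defaultdict(int)
    for alpha, a in p.items():
        for beta, b in q.items():
            gamma = tuple(i + j for i, j in zip(alpha, beta))
            result[gamma] += a * b
    # Remove monomials whose coefficient became zero.
    return {e: c for e, c in result.items() if c != 0}

p = {(1, 0): 1, (0, 1): 1}     # X1 + X2
q = {(1, 0): 1, (0, 1): -1}    # X1 - X2
print(multiply(p, q))          # {(2, 0): 1, (0, 2): -1}, that is, X1^2 - X2^2
</syntaxhighlight>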
Given a polynomial expression, one can compute the ''expanded'' form of the represented polynomial by ''expanding'' with the [[distributive law]] all the products that have a sum among their factors, and then using [[commutativity]] (except for the product of two scalars), and [[associativity]] for transforming the terms of the resulting sum into products of a scalar and a monomial; then one gets the canonical form by regrouping the [[like terms]]. The distinction between a polynomial expression and the polynomial that it represents is relatively recent, and mainly motivated by the rise of [[computer algebra]], where, for example, the test whether two polynomial expressions represent the same polynomial may be a nontrivial computation. === Categorical characterization === {{anchor|free commutative algebra|free commutative ring}} If {{mvar|K}} is a commutative ring, the polynomial ring {{math|''K''[''X''<sub>1</sub>, …, ''X''<sub>''n''</sub>]}} has the following [[universal property]]: for every [[commutative algebra (structure)|commutative {{mvar|K}}-algebra]] {{mvar|A}}, and every {{mvar|n}}-[[tuple]] {{math|(''x''<sub>1</sub>, …, ''x''<sub>''n''</sub>)}} of elements of {{mvar|A}}, there is a unique [[algebra homomorphism]] from {{math|''K''[''X''<sub>1</sub>, …, ''X''<sub>''n''</sub>]}} to {{mvar|A}} that maps each <math>X_i</math> to the corresponding <math>x_i.</math> This homomorphism is the ''evaluation homomorphism'' that consists in substituting <math>X_i</math> with <math>x_i</math> in every polynomial. As it is the case for every universal property, this characterizes the pair <math>(K[X_1, \dots, X_n], (X_1, \dots, X_n))</math> up to a unique [[isomorphism]]. This may also be interpreted in terms of [[adjoint functor]]s. More precisely, let {{math|SET}} and {{math|ALG}} be respectively the [[category (mathematics)|categories]] of sets and commutative {{mvar|K}}-algebras (here, and in the following, the morphisms are trivially defined). There is a [[forgetful functor]] <math>\mathrm F: \mathrm{ALG}\to \mathrm{SET}</math> that maps algebras to their underlying sets. On the other hand, the map <math>X\mapsto K[X]</math> defines a functor <math>\mathrm{POL}: \mathrm{SET}\to \mathrm{ALG}</math> in the other direction. (If {{mvar|X}} is infinite, {{math|''K''[''X'']}} is the set of all polynomials in a finite number of elements of {{mvar|X}}.) The universal property of the polynomial ring means that {{math|F}} and {{math|POL}} are [[adjoint functors]]. That is, there is a bijection :<math>\operatorname{Hom}_{\mathrm {SET}}(X,\operatorname{F}(A))\cong \operatorname{Hom}_{\mathrm {ALG}}(K[X], A). </math> This may be expressed also by saying that polynomial rings are '''free commutative algebras''', since they are [[free object]]s in the category of commutative algebras. Similarly, a polynomial ring with integer coefficients is the '''free commutative ring''' over its set of variables, since commutative rings and commutative algebras over the integers are the same thing. ==Graded structure== Every polynomial ring is a [[graded ring]]: one can write the polynomial ring <math>R = K[X_1, \ldots, X_n]</math> as a [[direct sum]] <math display=block> R = \bigoplus_{i = 0}^\infty R_i</math> where <math>R_i</math> is the subspace consisting of all [[homogeneous polynomial]]s of degree <math>i</math> (along with the zero polynomial); then for any elements <math>f \in R_i</math> and <math>g \in R_j</math>, their product <math>fg</math> belongs to <math>R_{i + j}</math>. 
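The homogeneous components of a polynomial can be read off its expanded form by grouping the monomials of equal total degree. A short illustration with the third-party SymPy library (assumed to be installed; the polynomial is an arbitrary example):

<syntaxhighlight lang="python">
from collections import defaultdict
from sympy import Add, Poly, symbols

x, y = symbols('x y')
f = x**3 + 2*x*y**2 + x*y + 5

# Group the monomials of f by total degree: f = f_0 + f_2 + f_3 with each f_i homogeneous.
components = defaultdict(list)
for (i, j), coeff in Poly(f, x, y).terms():
    components[i + j].append(coeff * x**i * y**j)

for d in sorted(components):
    print(d, Add(*components[d]))
# prints the homogeneous parts: degree 0 is 5, degree 2 is x*y, degree 3 is x**3 + 2*x*y**2
</syntaxhighlight>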
== Univariate over a ring vs. multivariate == A polynomial in <math>K[X_1, \ldots, X_n]</math> can be considered as a univariate polynomial in the indeterminate <math>X_n</math> over the ring <math>K[X_1, \ldots, X_{n-1}],</math> by regrouping the terms that contain the same power of <math>X_n,</math> that is, by using the identity :<math>\sum_{(\alpha_1, \ldots, \alpha_n)\in I} c_{\alpha_1, \ldots, \alpha_n} X_1^{\alpha_1} \cdots X_n^{\alpha_n}=\sum_i\left(\sum_{(\alpha_1, \ldots, \alpha_{n-1})\mid (\alpha_1, \ldots, \alpha_{n-1}, i)\in I} c_{\alpha_1, \ldots, \alpha_{n-1}, i} X_1^{\alpha_1} \cdots X_{n-1}^{\alpha_{n-1}}\right)X_n^i,</math> which results from the distributivity and associativity of ring operations. This means that one has an [[algebra isomorphism]] :<math>K[X_1, \ldots, X_n]\cong (K[X_1, \ldots, X_{n-1}])[X_n]</math> that maps each indeterminate to itself. (This isomorphism is often written as an equality, which is justified by the fact that polynomial rings are defined up to a ''unique'' isomorphism.) In other words, a multivariate polynomial ring can be considered as a univariate polynomial ring over a smaller polynomial ring. This is commonly used for proving properties of multivariate polynomial rings, by [[mathematical induction|induction]] on the number of indeterminates. The main such properties are listed below. === Properties that pass from {{math|''R''}} to {{math|''R''[''X'']}} === In this section, {{mvar|R}} is a commutative ring, {{mvar|K}} is a field, {{mvar|X}} denotes a single indeterminate, and, as usual, <math>\mathbb Z</math> is the ring of integers. Here is the list of the main ring properties that remain true when passing from {{mvar|R}} to {{math|''R''[''X'']}}. * If {{mvar|R}} is an [[integral domain]] then the same holds for {{math|''R''[''X'']}} (since the leading coefficient of a product of polynomials is, if not zero, the product of the leading coefficients of the factors). **In particular, <math>K[X_1,\ldots,X_n]</math> and <math>\mathbb Z[X_1,\ldots,X_n]</math> are integral domains. * If {{mvar|R}} is a [[unique factorization domain]] then the same holds for {{math|''R''[''X'']}}. This results from [[Gauss's lemma (polynomial)|Gauss's lemma]] and the unique factorization property of <math>L[X],</math> where {{mvar|L}} is the field of fractions of {{mvar|R}}. **In particular, <math>K[X_1,\ldots,X_n]</math> and <math>\mathbb Z[X_1,\ldots,X_n]</math> are unique factorization domains. * If {{mvar|R}} is a [[Noetherian ring]], then the same holds for {{math|''R''[''X'']}}. **In particular, <math>K[X_1,\ldots,X_n]</math> and <math>\mathbb Z[X_1,\ldots,X_n]</math> are Noetherian rings; this is [[Hilbert's basis theorem]]. * If {{mvar|R}} is a Noetherian ring, then <math>\dim R[X] = 1+\dim R,</math> where "<math>\dim</math>" denotes the [[Krull dimension]]. **In particular, <math>\dim K[X_1,\ldots,X_n] = n</math> and <math>\dim \mathbb Z[X_1,\ldots,X_n] = n+1.</math> * If {{mvar|R}} is a [[regular ring]], then the same holds for {{math|''R''[''X'']}}; in this case, one has <math display="block">\operatorname{gl}\, \dim R[X]= \dim R[X]= 1 + \operatorname{gl}\, \dim R=1+\dim R,</math> where "<math>\operatorname{gl}\, \dim</math>" denotes the [[global dimension]]. **In particular, <math>K[X_1,\ldots,X_n]</math> and <math>\mathbb Z[X_1,\ldots,X_n]</math> are regular rings, <math>\operatorname{gl}\, \dim \mathbb Z[X_1,\ldots,X_n] = n+1,</math> and <math>\operatorname{gl}\, \dim K[X_1,\ldots,X_n] = n.</math> The latter equality is [[Hilbert's syzygy theorem]]. 
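The isomorphism <math>K[X_1, \ldots, X_n]\cong (K[X_1, \ldots, X_{n-1}])[X_n]</math> described at the beginning of this section is also how computer algebra systems let one switch between the two points of view. With the third-party SymPy library (assumed to be installed), the same expression can be read as a bivariate polynomial over the integers or as a univariate polynomial in {{mvar|y}} whose coefficients are themselves polynomials in {{mvar|x}}:

<syntaxhighlight lang="python">
from sympy import Poly, symbols

x, y = symbols('x y')
f = x**2*y + x*y**2 + y + 1

# f as an element of Z[x, y]:
print(Poly(f, x, y))   # Poly(x**2*y + x*y**2 + y + 1, x, y, domain='ZZ')

# The same f regrouped as a univariate polynomial in y over the ring Z[x]:
print(Poly(f, y))      # Poly(x*y**2 + (x**2 + 1)*y + 1, y, domain='ZZ[x]')
</syntaxhighlight>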
==Several indeterminates over a field== Polynomial rings in several variables over a field are fundamental in [[invariant theory]] and [[algebraic geometry]]. Some of their properties, such as those described above, can be reduced to the case of a single indeterminate, but this is not always the case. In particular, because of the geometric applications, many interesting properties must be invariant under [[affine transformation|affine]] or [[projective transformation|projective]] transformations of the indeterminates. This often implies that one cannot select one of the indeterminates for a recurrence on the indeterminates. [[Bézout's theorem]], [[Hilbert's Nullstellensatz]] and the [[Jacobian conjecture]] are among the most famous properties that are specific to multivariate polynomials over a field. === Hilbert's Nullstellensatz === {{Main|Hilbert's Nullstellensatz}} The Nullstellensatz (German for "zero-locus theorem") is a theorem, first proved by [[David Hilbert]], which extends to the multivariate case some aspects of the [[fundamental theorem of algebra]]. It is foundational for [[algebraic geometry]], as it establishes a strong link between the algebraic properties of <math>K[X_1, \ldots, X_n]</math> and the geometric properties of [[algebraic varieties]], which are (roughly speaking) sets of points defined by [[implicit equation|implicit polynomial equations]]. The Nullstellensatz has three main versions, each being a corollary of any other. Two of these versions are given below. For the third version, the reader is referred to the main article on the Nullstellensatz. The first version generalizes the fact that a nonzero univariate polynomial has a [[complex number|complex]] zero if and only if it is not a constant. The statement is: ''a set of polynomials {{mvar|S}} in <math>K[X_1, \ldots, X_n]</math> has a common zero in an [[algebraically closed field]] containing {{mvar|K}}, if and only if'' {{math|1}} ''does not belong to the [[ideal (ring theory)|ideal]] generated by {{mvar|S}}, that is, if'' {{math|1}} ''is not a [[linear combination]] of elements of {{mvar|S}} with polynomial coefficients''. The second version generalizes the fact that the [[irreducible polynomial|irreducible univariate polynomial]]s over the complex numbers are [[associate elements|associate]] to a polynomial of the form <math>X-\alpha.</math> The statement is: ''If {{mvar|K}} is algebraically closed, then the [[maximal ideal]]s of <math>K[X_1, \ldots, X_n]</math> have the form <math>\langle X_1 - \alpha_1, \ldots, X_n - \alpha_n \rangle.</math>'' ===Bézout's theorem=== {{main|Bézout's theorem}} Bézout's theorem may be viewed as a multivariate generalization of the version of the [[fundamental theorem of algebra]] that asserts that a univariate polynomial of degree {{mvar|n}} has {{mvar|n}} complex roots, if they are counted with their multiplicities. In the case of [[bivariate polynomial]]s, it states that two polynomials of degrees {{mvar|d}} and {{mvar|e}} in two variables, which have no common factors of positive degree, have exactly {{mvar|de}} common zeros in an [[algebraically closed field]] containing the coefficients, if the zeros are counted with their multiplicity and include the [[point at infinity|zeros at infinity]]. For stating the general case, and not considering "zero at infinity" as special zeros, it is convenient to work with [[homogeneous polynomial]]s, and consider zeros in a [[projective space]]. 
In this context, a ''projective zero'' of a homogeneous polynomial <math>P(X_0, \ldots, X_n)</math> is, up to a scaling, an {{math|(''n'' + 1)}}-[[tuple]] <math>(x_0, \ldots, x_n)</math> of elements of {{mvar|K}} that is different from {{math|(0, …, 0)}}, and such that <math>P(x_0, \ldots, x_n) = 0 </math>. Here, "up to a scaling" means that <math>(x_0, \ldots, x_n)</math> and <math>(\lambda x_0, \ldots, \lambda x_n)</math> are considered as the same zero for any nonzero <math>\lambda\in K.</math> In other words, a zero is a set of [[homogeneous coordinates]] of a point in a projective space of dimension {{mvar|n}}. Then, Bézout's theorem states: Given {{mvar|n}} homogeneous polynomials of degrees <math>d_1, \ldots, d_n</math> in {{math|''n'' + 1}} indeterminates, which have only a finite number of common projective zeros in an [[algebraically closed extension]] of {{mvar|K}}, the sum of the [[multiplicity (mathematics)#Intersection multiplicity|multiplicities]] of these zeros is the product <math>d_1 \cdots d_n.</math> ===Jacobian conjecture=== {{main|Jacobian conjecture}} {{expand section|date=June 2020}} ==Generalizations== Polynomial rings can be generalized in a great many ways, including polynomial rings with generalized exponents, power series rings, [[noncommutative polynomial ring]]s, [[skew polynomial ring]]s, and polynomial [[Rig (mathematics)|rig]]s. === Infinitely many variables === One slight generalization of polynomial rings is to allow for infinitely many indeterminates. Each monomial still involves only a finite number of indeterminates (so that its degree remains finite), and each polynomial is still a (finite) linear combination of monomials. Thus, any individual polynomial involves only finitely many indeterminates, and any finite computation involving polynomials remains inside some subring of polynomials in finitely many indeterminates. This generalization has the same property as usual polynomial rings of being the [[free commutative algebra]]; the only difference is that it is a [[free object]] over an infinite set. One can also consider a strictly larger ring, by defining as a generalized polynomial an infinite (or finite) formal sum of monomials with a bounded degree. This ring is larger than the usual polynomial ring, as it includes infinite sums of variables. However, it is smaller than the [[power series ring#Power series in several variables|ring of power series in infinitely many variables]]. Such a ring is used for constructing the [[ring of symmetric functions]] over an infinite set. ===Generalized exponents=== {{Main|Monoid ring}} A simple generalization only changes the set from which the exponents on the variable are drawn. The formulas for addition and multiplication make sense as long as one can add exponents: {{nowrap|1=''X''{{i sup|''i''}} ⋅ ''X''{{i sup|''j''}} = ''X''{{i sup|''i''+''j''}}}}. A set for which addition makes sense (is closed and associative) is called a [[monoid]]. The set of functions from a monoid ''N'' to a ring ''R'' which are nonzero at only finitely many places can be given the structure of a ring known as ''R''[''N''], the '''monoid ring''' of ''N'' with coefficients in ''R''. The addition is defined component-wise, so that if {{nowrap|1=''c'' = ''a'' + ''b''}}, then {{nowrap|1=''c''<sub>''n''</sub> = ''a''<sub>''n''</sub> + ''b''<sub>''n''</sub>}} for every ''n'' in ''N''. 
The multiplication is defined as the Cauchy product, so that if {{nowrap|1=''c'' = ''a'' ⋅ ''b''}}, then for each ''n'' in ''N'', ''c''<sub>''n''</sub> is the sum of all ''a''<sub>''i''</sub>''b''<sub>''j''</sub> where ''i'', ''j'' range over all pairs of elements of ''N'' which sum to ''n''. When ''N'' is commutative, it is convenient to denote the function ''a'' in ''R''[''N''] as the formal sum: :<math>\sum_{n \in N} a_n X^n</math> and then the formulas for addition and multiplication are the familiar: :<math>\left(\sum_{n \in N} a_n X^n\right) + \left(\sum_{n \in N} b_n X^n\right) = \sum_{n \in N} \left(a_n + b_n\right)X^n</math> and :<math>\left(\sum_{n \in N} a_n X^n\right) \cdot \left(\sum_{n \in N} b_n X^n\right) = \sum_{n \in N} \left( \sum_{i+j=n} a_i b_j\right)X^n</math> where the latter sum is taken over all ''i'', ''j'' in ''N'' that sum to ''n''. Some authors such as {{harv|Lang|2002|loc=II,§3}} go so far as to take this monoid definition as the starting point, and regular single variable polynomials are the special case where ''N'' is the monoid of non-negative integers. Polynomials in several variables simply take ''N'' to be the direct product of several copies of the monoid of non-negative integers.<!-- Quite tempting to say, ''N'' = '''N'''<sup>''n''</sup>. --> Several interesting examples of rings and groups are formed by taking ''N'' to be the additive monoid of non-negative rational numbers, {{harv|Osborne|2000|loc=§4.4}}. See also [[Puiseux series]]. ===Power series=== {{Main|Formal power series}} Power series generalize the choice of exponent in a different direction by allowing infinitely many nonzero terms. This requires various hypotheses on the monoid ''N'' used for the exponents, to ensure that the sums in the Cauchy product are finite sums. Alternatively, a topology can be placed on the ring, and then one restricts to convergent infinite sums. For the standard choice of ''N'', the non-negative integers, there is no trouble, and the ring of formal power series is defined as the set of functions from ''N'' to a ring ''R'' with addition component-wise, and multiplication given by the Cauchy product. The ring of power series can also be seen as the [[Completion of a ring|ring completion]] of the polynomial ring with respect to the ideal generated by {{mvar|x}}. ===Noncommutative polynomial rings=== {{Main|Free algebra}} For polynomial rings of more than one variable, the products ''X''⋅''Y'' and ''Y''⋅''X'' are simply defined to be equal. A more general notion of polynomial ring is obtained when the distinction between these two formal products is maintained. Formally, the polynomial ring in ''n'' noncommuting variables with coefficients in the ring ''R'' is the [[monoid ring]] ''R''[''N''], where the monoid ''N'' is the [[free monoid]] on ''n'' letters, also known as the set of all strings over an alphabet of ''n'' symbols, with multiplication given by concatenation. Neither the coefficients nor the variables need commute amongst themselves, but the coefficients and variables commute with each other. Just as the polynomial ring in ''n'' variables with coefficients in the commutative ring ''R'' is the free commutative ''R''-algebra of rank ''n'', the noncommutative polynomial ring in ''n'' variables with coefficients in the commutative ring ''R'' is the free associative, unital ''R''-algebra on ''n'' generators, which is noncommutative when ''n'' > 1. 
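Since the underlying monoid is the set of words over the alphabet of the variables, a noncommutative polynomial can be stored as a map from words to coefficients, multiplication of words being concatenation. A minimal illustrative sketch (the representation is ad hoc, not a library interface):

<syntaxhighlight lang="python">
from collections import defaultdict

# An element of the free algebra R<X, Y> is stored as {word: coefficient},
# a word being a string over the alphabet of variables, with '' the unit monomial.

def nc_multiply(p, q):
    """Product in the monoid ring R[N], N the free monoid on the variables:
    words multiply by concatenation, and coefficients commute with everything."""
    result = defaultdict(int)
    for u, a in p.items():
        for v, b in q.items():
            result[u + v] += a * b
    return {w: c for w, c in result.items() if c != 0}

p = {'X': 1, 'Y': 1}        # X + Y
print(nc_multiply(p, p))    # {'XX': 1, 'XY': 1, 'YX': 1, 'YY': 1}
# (X + Y)^2 = X^2 + XY + YX + Y^2: the cross terms XY and YX remain distinct.
</syntaxhighlight>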
===Differential and skew-polynomial rings=== {{Main|Ore extension}} Other generalizations of polynomials are differential and skew-polynomial rings. A '''differential polynomial ring''' is a ring of [[differential operator]]s formed from a ring ''R'' and a [[Derivation (abstract algebra)|derivation]] ''δ'' of ''R'' into ''R''. This derivation operates on ''R'', and will be denoted ''X'', when viewed as an operator. The elements of ''R'' also operate on ''R'' by multiplication. The [[function composition|composition of operators]] is denoted as the usual multiplication. It follows that the relation {{nowrap|1=''δ''(''ab'') = ''aδ''(''b'') + ''δ''(''a'')''b''}} may be rewritten as : <math>X\cdot a = a\cdot X +\delta(a).</math> This relation may be extended to define a skew multiplication between two polynomials in ''X'' with coefficients in ''R'', which make them a [[noncommutative ring]]. The standard example, called a [[Weyl algebra]], takes ''R'' to be a (usual) polynomial ring ''k''[''Y'' ], and ''δ'' to be the standard polynomial derivative <math>\tfrac{\partial}{\partial Y}</math>. Taking ''a'' = ''Y'' in the above relation, one gets the [[canonical commutation relation]], ''X''⋅''Y'' − ''Y''⋅''X'' = 1. Extending this relation by associativity and distributivity allows explicitly constructing the [[Weyl algebra]]. {{harv|Lam|2001|loc=§1,ex1.9}}. The '''skew-polynomial ring''' is defined similarly for a ring ''R'' and a ring [[endomorphism]] ''f'' of ''R'', by extending the multiplication from the relation ''X''⋅''r'' = ''f''(''r'')⋅''X'' to produce an associative multiplication that distributes over the standard addition. More generally, given a homomorphism ''F'' from the monoid '''N''' of the positive integers into the [[endomorphism ring]] of ''R'', the formula ''X''<sup>''n''</sup>⋅''r'' = ''F''(''n'')(''r'')⋅''X''<sup>''n''</sup> allows constructing a skew-polynomial ring. {{harv|Lam|2001|loc=§1,ex 1.11}} Skew polynomial rings are closely related to [[crossed product]] algebras. === Polynomial rigs === {{See also|Formal power series#On a semiring}} The definition of a polynomial ring can be generalised by relaxing the requirement that the algebraic structure ''R'' be a [[Field (mathematics)|field]] or a [[Ring (mathematics)|ring]] to the requirement that ''R'' only be a [[semifield]] or [[Rig (mathematics)|rig]]; the resulting polynomial structure/extension ''R''[''X''] is a '''polynomial rig'''. For example, the set of all multivariate polynomials with [[natural number]] coefficients is a polynomial rig. == See also == * [[Additive polynomial]] * [[Laurent polynomial]] ==Notes== {{Reflist}} ==References== *{{citation |last=Hall |first=F. M. |title=An Introduction to Abstract Algebra |volume=2 |publisher=Cambridge University Press |year=1969 |isbn=0521084849 |section=Section 3.6 |url-access=registration |url=https://archive.org/details/introductiontoab0000hall_v1 }} *{{citation |last=Herstein |first=I. N. |title=Topics in Algebra |publisher=Wiley |year=1975 |isbn=0471010901 |section=Section 3.9|url=https://archive.org/details/topicsinalgebra00hers|url-access=registration |quote=polynomial ring. 
}} *{{Citation |last=Lam |first=Tsit-Yuen |author-link=Tsit Yuen Lam |title=A First Course in Noncommutative Rings |publisher=[[Springer-Verlag]] |isbn=978-0-387-95325-0 |year=2001}} *{{citation | last=Lang | first=Serge | author-link=Serge Lang | title=Algebra | publisher=Springer-Verlag | location=New York | series=[[Graduate Texts in Mathematics]] | edition=Revised third | year=2002 | isbn=978-0-387-95385-4 | mr= 1878556 | volume=211 | chapter= | pages= | page= }} *{{Citation |last=Osborne |first=M. Scott |title=Basic homological algebra |publisher=[[Springer-Verlag]] |series=Graduate Texts in Mathematics |isbn=978-0-387-98934-1 |mr=1757274 |year=2000 |volume=196 |doi=10.1007/978-1-4612-1278-2}} {{Authority control}} [[Category:Commutative algebra]] [[Category:Invariant theory]] [[Category:Ring theory]] [[Category:Polynomials]] [[Category:Free algebraic structures]]