Permanent (mathematics)
{{Short description|Polynomial of the elements of a matrix}} In [[linear algebra]], the '''permanent''' of a [[square matrix]] is a function of the matrix similar to the [[determinant]]. The permanent, as well as the determinant, is a polynomial in the entries of the matrix.<ref>{{cite journal|author=Marcus, Marvin|author-link=Marvin Marcus|author2=Minc, Henryk|title=Permanents|journal=Amer. Math. Monthly|volume=72|issue=6|year=1965|pages=577–591|url=https://maa.org/programs/maa-awards/writing-awards/permanents|doi=10.2307/2313846|jstor=2313846}}</ref> Both are special cases of a more general function of a matrix called the [[immanant]]. == Definition == The permanent of an {{math|''n''×''n''}} matrix {{math|1=''A'' = (''a''<sub>''i,j''</sub>)}} is defined as <math display="block"> \operatorname{perm}(A)=\sum_{\sigma\in S_n}\prod_{i=1}^n a_{i,\sigma(i)}.</math> The sum here extends over all elements σ of the [[symmetric group]] ''S''<sub>''n''</sub>; i.e. over all [[permutation]]s of the numbers 1, 2, ..., ''n''. For example, <math display="block">\operatorname{perm}\begin{pmatrix}a&b \\ c&d\end{pmatrix}=ad+bc,</math> and <math display="block">\operatorname{perm}\begin{pmatrix}a&b&c \\ d&e&f \\ g&h&i \end{pmatrix}=aei + bfg + cdh + ceg + bdi + afh.</math> The definition of the permanent of ''A'' differs from that of the [[determinant]] of ''A'' in that the [[signature (permutation)|signatures]] of the permutations are not taken into account. The permanent of a matrix A is denoted per ''A'', perm ''A'', or Per ''A'', sometimes with parentheses around the argument. Minc uses Per(''A'') for the permanent of rectangular matrices, and per(''A'') when ''A'' is a square matrix.<ref>{{harvtxt|Minc|1978}}</ref> Muir and Metzler use the notation <math>\overset{+}{|}\quad \overset{+}{|}</math>.<ref>{{harvtxt|Muir|Metzler|1960}}</ref> The word, ''permanent'', originated with Cauchy in 1812 as “fonctions symétriques permanentes” for a related type of function,<ref>{{Citation| last=Cauchy | first=A. L.| title=Mémoire sur les fonctions qui ne peuvent obtenir que deux valeurs égales et de signes contraires par suite des transpositions opérées entre les variables qu'elles renferment. |url=https://gallica.bnf.fr/ark:/12148/bpt6k90193x/f97 |journal=Journal de l'École Polytechnique |volume=10 |pages=91–169 |year=1815}}</ref> and was used by Muir and Metzler<ref>{{harvtxt|Muir|Metzler|1960}}</ref> in the modern, more specific, sense.<ref>{{harvnb|van Lint|Wilson|2001|loc=p. 108}}</ref> == Properties == If one views the permanent as a map that takes ''n'' vectors as arguments, then it is a [[multilinear map]] and it is symmetric (meaning that any order of the vectors results in the same permanent). Furthermore, given a square matrix <math>A = \left(a_{ij}\right)</math> of order ''n'':<ref>{{harvnb|Ryser|1963|loc=pp. 25 – 26}}</ref> * perm(''A'') is invariant under arbitrary permutations of the rows and/or columns of ''A''. This property may be written symbolically as perm(''A'') = perm(''PAQ'') for any appropriately sized [[permutation matrix|permutation matrices]] ''P'' and ''Q'', * multiplying any single row or column of ''A'' by a [[Scalar (mathematics)|scalar]] ''s'' changes perm(''A'') to ''s''⋅perm(''A''), * perm(''A'') is invariant under [[Transpose|transposition]], that is, perm(''A'') = perm(''A''<sup>T</sup>). * If <math>A = \left(a_{ij}\right)</math> and <math>B=\left(b_{ij}\right)</math> are square matrices of order ''n'' then,<ref>{{harvnb|Percus|1971|loc=p. 
2}}</ref> <math display="block">\operatorname{perm}\left(A + B\right) = \sum_{s,t} \operatorname{perm} \left(a_{ij}\right)_{i \in s, j \in t} \operatorname{perm} \left(b_{ij}\right)_{i \in \bar{s}, j \in \bar{t}},</math> where ''s'' and ''t'' are subsets of the same size of {1,2,...,''n''} and <math>\bar{s}, \bar{t}</math> are their respective complements in that set. * If <math>A</math> is a [[triangular matrix]], i.e. <math>a_{ij}=0</math>, whenever <math>i>j</math> or, alternatively, whenever <math>i<j</math>, then its permanent (and determinant as well) equals the product of the diagonal entries: <math display="block">\operatorname{perm}\left(A\right) = a_{11} a_{22} \cdots a_{nn} = \prod_{i=1}^n a_{ii}.</math> == Relation to determinants == Laplace's [[expansion by minors]] for computing the determinant along a row, column or diagonal extends to the permanent by ignoring all signs.<ref name="Percus 1971 loc=p. 12">{{harvnb|Percus|1971|loc=p. 12}}</ref> For every <math display="inline">i</math>, <math display="block">\operatorname{perm}(B)=\sum_{j=1}^n B_{i,j} M_{i,j},</math> where <math>B_{i,j}</math> is the entry of the ''i''th row and the ''j''th column of ''B'', and <math display="inline">M_{i,j}</math> is the permanent of the submatrix obtained by removing the ''i''th row and the ''j''th column of ''B''. For example, expanding along the first column, <math display="block">\begin{align} \operatorname{perm} \left ( \begin{matrix} 1 & 1 & 1 & 1\\2 & 1 & 0 & 0\\3 & 0 & 1 & 0\\4 & 0 & 0 & 1 \end{matrix} \right ) = {} & 1 \cdot \operatorname{perm} \left( \begin{matrix} 1&0&0\\ 0&1&0\\ 0&0&1 \end{matrix}\right) + 2\cdot \operatorname{perm} \left(\begin{matrix}1&1&1\\0&1&0\\0&0&1\end{matrix}\right) \\ & {} + \ 3\cdot \operatorname{perm} \left(\begin{matrix}1&1&1\\1&0&0\\0&0&1\end{matrix}\right) + 4 \cdot \operatorname{perm} \left(\begin{matrix}1&1&1\\1&0&0\\0&1&0\end{matrix}\right) \\ = {} & 1(1) + 2(1) + 3(1) + 4(1) = 10, \end{align} </math> while expanding along the last row gives, <math display="block">\begin{align} \operatorname{perm} \left ( \begin{matrix} 1 & 1 & 1 & 1\\2 & 1 & 0 & 0\\3 & 0 & 1 & 0\\4 & 0 & 0 & 1 \end{matrix} \right ) = {} & 4 \cdot \operatorname{perm} \left(\begin{matrix}1&1&1\\1&0&0\\0&1&0\end{matrix}\right) + 0\cdot \operatorname{perm} \left(\begin{matrix}1&1&1\\2&0&0\\3&1&0\end{matrix}\right) \\ & {} + \ 0\cdot \operatorname{perm} \left(\begin{matrix}1&1&1\\2&1&0\\3&0&0\end{matrix}\right) + 1 \cdot \operatorname{perm} \left( \begin{matrix} 1&1&1\\ 2&1&0\\ 3&0&1\end{matrix}\right) \\ = {} & 4(1) + 0 + 0 + 1(6) = 10. \end{align} </math> On the other hand, the basic multiplicative property of determinants is not valid for permanents.<ref name="Ryser 1963 loc=p. 26">{{harvnb|Ryser|1963|loc=p. 26}}</ref> A simple example shows that this is so. <math display="block">\begin{align} 4 &= \operatorname{perm} \left ( \begin{matrix} 1 & 1 \\ 1 & 1 \end{matrix} \right )\operatorname{perm} \left ( \begin{matrix} 1 & 1 \\ 1 & 1 \end{matrix} \right ) \\ &\neq \operatorname{perm}\left ( \left ( \begin{matrix} 1 & 1 \\ 1 & 1 \end{matrix} \right ) \left ( \begin{matrix} 1 & 1 \\ 1 & 1 \end{matrix} \right ) \right ) = \operatorname{perm} \left ( \begin{matrix} 2 & 2 \\ 2 & 2 \end{matrix} \right )= 8. 
\end{align} </math> Unlike the determinant, the permanent has no easy geometrical interpretation; it is mainly used in [[combinatorics]], in treating boson [[Green's function (many-body theory)|Green's functions]] in [[quantum field theory]], and in determining state probabilities of [[boson sampling]] systems.<ref>{{cite arXiv |last=Aaronson |first=Scott |date=14 Nov 2010 |title=The Computational Complexity of Linear Optics |eprint=1011.3245|class=quant-ph }}</ref> However, it has two [[graph-theoretic]] interpretations: as the sum of weights of [[Vertex cycle cover|cycle cover]]s of a [[directed graph]], and as the sum of weights of perfect matchings in a [[bipartite graph]]. == Applications == ===Symmetric tensors=== The permanent arises naturally in the study of the symmetric tensor power of [[Hilbert spaces]].<ref>{{cite book|last1=Bhatia|first1=Rajendra|title=Matrix Analysis|date=1997|publisher=Springer-Verlag|location=New York|isbn=978-0-387-94846-1|pages=16–19}}</ref> In particular, for a Hilbert space <math>H</math>, let <math>\vee^k H</math> denote the <math>k</math>th symmetric tensor power of <math>H</math>, which is the space of [[symmetric tensor]]s. Note in particular that <math>\vee^k H</math> is spanned by the [[Symmetric tensor#Symmetric product|symmetric products]] of elements in <math>H</math>. For <math>x_1,x_2,\dots,x_k \in H</math>, we define the symmetric product of these elements by <math display="block"> x_1 \vee x_2 \vee \cdots \vee x_k = (k!)^{-1/2} \sum_{\sigma \in S_k} x_{\sigma(1)} \otimes x_{\sigma(2)} \otimes \cdots \otimes x_{\sigma(k)} </math> If we consider <math>\vee^k H</math> (as a subspace of <math>\otimes^kH</math>, the ''k''th [[Tensor product of Hilbert spaces|tensor power]] of <math>H</math>) and define the inner product on <math>\vee^kH</math> accordingly, we find that for <math>x_j,y_j \in H</math> <math display="block">\langle x_1 \vee x_2 \vee \cdots \vee x_k, y_1 \vee y_2 \vee \cdots \vee y_k \rangle = \operatorname{perm}\left[\langle x_i,y_j \rangle\right]_{i,j = 1}^k</math> Applying the [[Cauchy–Schwarz inequality]], we find that <math>\operatorname{perm} \left[\langle x_i,x_j \rangle\right]_{i,j = 1}^k \geq 0</math>, and that <math display="block">\left|\operatorname{perm} \left[\langle x_i,y_j \rangle\right]_{i,j = 1}^k \right|^2 \leq \operatorname{perm} \left[\langle x_i,x_j \rangle\right]_{i,j = 1}^k \cdot \operatorname{perm} \left[\langle y_i,y_j \rangle\right]_{i,j = 1}^k </math> ===Cycle covers=== {{main|Vertex cycle cover}} Any square matrix <math>A = (a_{ij})_{i,j=1}^n</math> can be viewed as the [[adjacency matrix]] of a weighted [[directed graph]] on vertex set <math>V=\{1,2,\dots,n\}</math>, with <math>a_{ij}</math> representing the weight of the arc from vertex ''i'' to vertex ''j''. A [[Vertex cycle cover|cycle cover]] of a weighted directed graph is a collection of vertex-disjoint [[directed cycle]]s in the digraph that covers all vertices in the graph. Thus, each vertex ''i'' in the digraph has a unique "successor" <math>\sigma(i)</math> in the cycle cover, and so <math>\sigma</math> represents a [[permutation]] on ''V''. Conversely, any permutation <math>\sigma</math> on ''V'' corresponds to a cycle cover with arcs from each vertex ''i'' to vertex <math>\sigma(i)</math>. 
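The correspondence between permutations and cycle covers can be made concrete with a short Python sketch (illustrative only; it is not drawn from the cited sources, and the helper name <code>cycle_cover</code> is arbitrary). It enumerates the permutations of a small vertex set and prints the vertex-disjoint cycles that each one induces.

<syntaxhighlight lang="python">
from itertools import permutations

def cycle_cover(sigma):
    """Decompose a permutation, given as a tuple of images of 0..n-1,
    into its disjoint cycles, i.e. the cycles of the corresponding cover."""
    n = len(sigma)
    seen, cycles = set(), []
    for start in range(n):
        if start in seen:
            continue
        cycle, v = [], start
        while v not in seen:
            seen.add(v)
            cycle.append(v)
            v = sigma[v]
        cycles.append(tuple(cycle))
    return cycles

# Each permutation of {0, 1, 2} yields exactly one cycle cover of the
# complete digraph on three vertices, and vice versa.
for sigma in permutations(range(3)):
    print(sigma, "->", cycle_cover(sigma))
</syntaxhighlight>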
If the weight of a cycle-cover is defined to be the product of the weights of the arcs in each cycle, then <math display="block"> \operatorname{weight}(\sigma) = \prod_{i=1}^n a_{i,\sigma(i)},</math> implying that <math display="block"> \operatorname{perm}(A)=\sum_\sigma \operatorname{weight}(\sigma).</math> Thus the permanent of ''A'' is equal to the sum of the weights of all cycle-covers of the digraph. ===Perfect matchings=== A square matrix <math>A = (a_{ij})</math> can also be viewed as the [[adjacency matrix]] of a [[bipartite graph]] which has [[vertex (graph theory)|vertices]] <math>x_1, x_2, \dots, x_n</math> on one side and <math>y_1, y_2, \dots, y_n</math> on the other side, with <math>a_{ij}</math> representing the weight of the edge from vertex <math>x_i</math> to vertex <math>y_j</math>. If the weight of a [[perfect matching]] <math>\sigma</math> that matches <math>x_i</math> to <math>y_{\sigma(i)}</math> is defined to be the product of the weights of the edges in the matching, then <math display="block"> \operatorname{weight}(\sigma) = \prod_{i=1}^n a_{i,\sigma(i)}.</math> Thus the permanent of ''A'' is equal to the sum of the weights of all perfect matchings of the graph. == Permanents of (0, 1) matrices == === Enumeration === The answers to many counting questions can be computed as permanents of matrices that only have 0 and 1 as entries. Let Ω(''n'',''k'') be the class of all (0, 1)-matrices of order ''n'' with each row and column sum equal to ''k''. Every matrix ''A'' in this class has perm(''A'') > 0.<ref name="Ryser 1963 loc=p. 124">{{harvnb|Ryser|1963|loc=p. 124}}</ref> The incidence matrices of [[projective plane]]s are in the class Ω(''n''<sup>2</sup> + ''n'' + 1, ''n'' + 1) for ''n'' an integer > 1. The permanents corresponding to the smallest projective planes have been calculated. For ''n'' = 2, 3, and 4 the values are 24, 3852 and 18,534,400 respectively.<ref name="Ryser 1963 loc=p. 124"/> Let ''Z'' be the incidence matrix of the projective plane with ''n'' = 2, the [[Fano plane]]. Remarkably, perm(''Z'') = 24 = |det (''Z'')|, the absolute value of the determinant of ''Z''. This is a consequence of ''Z'' being a [[circulant matrix]] and the theorem:<ref>{{harvnb|Ryser|1963|loc=p. 125}}</ref> :If ''A'' is a circulant matrix in the class Ω(''n'',''k'') then if ''k'' > 3, perm(''A'') > |det (''A'')| and if ''k'' = 3, perm(''A'') = |det (''A'')|. Furthermore, when ''k'' = 3, by permuting rows and columns, ''A'' can be put into the form of a direct sum of ''e'' copies of the matrix ''Z'' and consequently, ''n'' = 7''e'' and perm(''A'') = 24<sup>e</sup>. Permanents can also be used to calculate the number of [[permutation]]s with restricted (prohibited) positions. For the standard ''n''-set {1, 2, ..., ''n''}, let <math>A = (a_{ij})</math> be the (0, 1)-matrix where ''a''<sub>''ij''</sub> = 1 if ''i'' → ''j'' is allowed in a permutation and ''a''<sub>''ij''</sub> = 0 otherwise. Then perm(''A'') is equal to the number of permutations of the ''n''-set that satisfy all the restrictions.<ref name="Percus 1971 loc=p. 
12"/> Two well-known special cases of this are the solution of the [[derangement]] problem and the [[ménage problem]]: the number of permutations of an ''n''-set with no fixed points (derangements) is given by <math display="block">\operatorname{perm}(J - I) = \operatorname{perm}\left (\begin{matrix} 0 & 1 & 1 & \dots & 1 \\ 1 & 0 & 1 & \dots & 1 \\ 1 & 1 & 0 & \dots & 1 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & 1 & 1 & \dots & 0 \end{matrix} \right) = n! \sum_{i=0}^n \frac{(-1)^i}{i!},</math> where ''J'' is the ''n''×''n'' all 1's matrix and ''I'' is the identity matrix, and the [[ménage number]]s are given by <math display="block">\begin{align} \operatorname{perm}(J - I - I') & = \operatorname{perm}\left (\begin{matrix} 0 & 0 & 1 & \dots & 1 \\ 1 & 0 & 0 & \dots & 1 \\ 1 & 1 & 0 & \dots & 1 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 1 & 1 & \dots & 0 \end{matrix} \right) \\ & = \sum_{k=0}^n (-1)^k \frac{2n}{2n-k} {2n-k\choose k} (n-k)!, \end{align}</math> where ''I''' is the (0, 1)-matrix with nonzero entries in positions (''i'', ''i'' + 1) and (''n'', 1). The permanent of the ''n''×''n'' all 1's matrix is the number of possible arrangements of ''n'' mutually non-attacking [[Rook (chess)|rooks]] on an ''n''×''n'' board.<ref>{{Cite journal |last1=Shevelev |first1=V.S. |year=1990 |title=On a representation of rook polinomials |journal=Russian Mathematical Surveys |volume=45 |issue=4 |pages=183–185 |url=https://www.mathnet.ru/eng/rm4775 |doi=10.1070/RM1990v045n04ABEH002387 }}</ref> === Bounds === The [[Bregman–Minc inequality]], conjectured by [[Henryk Minc|H. Minc]] in 1963<ref>{{citation|first=Henryk|last=Minc|title=Upper bounds for permanents of (0,1)-matrices|journal=Bulletin of the American Mathematical Society|volume=69|issue=6|year=1963|pages=789–791|doi=10.1090/s0002-9904-1963-11031-9|doi-access=free}}</ref> and proved by [[Lev M. Bregman|L. M. Brégman]] in 1973,<ref>{{harvnb|van Lint|Wilson|2001|loc=p. 101}}</ref> gives an upper bound for the permanent of an ''n'' × ''n'' (0, 1)-matrix. If ''A'' has ''r''<sub>''i''</sub> ones in row ''i'' for each 1 ≤ ''i'' ≤ ''n'', the inequality states that <math display="block">\operatorname{perm} A \leq \prod_{i=1}^n (r_i)!^{1/r_i}.</math> == Van der Waerden's conjecture == In 1926, [[Bartel Leendert van der Waerden|Van der Waerden]] conjectured that the minimum permanent among all {{nowrap|''n'' × ''n''}} [[doubly stochastic matrix|doubly stochastic matrices]] is ''n''<nowiki>!</nowiki>/''n''<sup>''n''</sup>, achieved by the matrix for which all entries are equal to 1/''n''.<ref>{{citation | last = van der Waerden | first = B. L. | author-link = Bartel Leendert van der Waerden | journal = Jber. Deutsch. Math.-Verein. | page = 117 | title = Aufgabe 45 | volume = 35 | year = 1926}}.</ref> Proofs of this conjecture were published in 1980 by B. Gyires<ref>{{citation |last=Gyires |first=B. |title=The common source of several inequalities concerning doubly stochastic matrices |journal=Publicationes Mathematicae Institutum Mathematicum Universitatis Debreceniensis |volume=27 |issue=3–4 |pages=291–304 |year=1980 |doi=10.5486/PMD.1980.27.3-4.15 |mr=604006 |doi-access=free}}.</ref> and in 1981 by G. P. Egorychev<ref>{{citation | last = Egoryčev | first = G. P. | language = ru | location = Krasnoyarsk | mr = 602332 | page = 12 | publisher = Akad. Nauk SSSR Sibirsk. Otdel. Inst. Fiz. | title = Reshenie problemy van-der-Vardena dlya permanentov | year = 1980}}. {{citation | last = Egorychev | first = G. P. 
| issue = 6 | journal = Akademiya Nauk SSSR | language = ru | mr = 638007 | pages = 65–71, 225 | title = Proof of the van der Waerden conjecture for permanents | volume = 22 | year = 1981| doi = 10.1007/BF00968054 | bibcode = 1981SibMJ..22..854E }}. {{citation | last = Egorychev | first = G. P. | doi = 10.1016/0001-8708(81)90044-X | doi-access = free | issue = 3 | journal = [[Advances in Mathematics]] | mr = 642395 | pages = 299–305 | title = The solution of van der Waerden's problem for permanents | volume = 42 | year = 1981}}.</ref> and D. I. Falikman;<ref>{{citation | last = Falikman | first = D. I. | issue = 6 | journal = Akademiya Nauk Soyuza SSR | language = ru | mr = 625097 | pages = 931–938, 957 | title = Proof of the van der Waerden conjecture on the permanent of a doubly stochastic matrix | volume = 29 | year = 1981}}.</ref> Egorychev's proof is an application of the [[Alexandrov–Fenchel inequality]].<ref name=CMC487>Brualdi (2006) p.487</ref> For this work, Egorychev and Falikman won the [[Fulkerson Prize]] in 1982.<ref>[https://mathopt.org/?nav=fulkerson Fulkerson Prize], Mathematical Optimization Society, retrieved 2012-08-19.</ref> == Computation == {{main|Computing the permanent|Sharp-P-completeness of 01-permanent}} The naïve approach of computing permanents directly from the definition is computationally infeasible even for relatively small matrices. One of the fastest known algorithms is due to [[H. J. Ryser]].<ref>{{harvtxt|Ryser|1963|loc=p. 27}}</ref> [[Ryser's formula|Ryser's method]] is based on an [[inclusion–exclusion principle|inclusion–exclusion]] formula that can be given<ref>{{harvtxt|van Lint|Wilson|2001}} [https://books.google.com/books?id=5l5ps2JkyT0C&pg=PA108&dq=permanent+ryser&lr=#PPA99,M1 p. 99]</ref> as follows: Let <math>A_k</math> be obtained from ''A'' by deleting ''k'' columns, let <math>P(A_k)</math> be the product of the row-sums of <math>A_k</math>, and let <math>\Sigma_k</math> be the sum of the values of <math>P(A_k)</math> over all possible <math>A_k</math>. Then <math display="block"> \operatorname{perm}(A)=\sum_{k=0}^{n-1} (-1)^{k} \Sigma_k.</math> It may be rewritten in terms of the matrix entries as follows: <math display="block">\operatorname{perm} (A) = (-1)^n \sum_{S\subseteq\{1,\dots,n\}} (-1)^{|S|} \prod_{i=1}^n \sum_{j\in S} a_{ij}.</math> The permanent is believed to be more difficult to compute than the determinant. While the determinant can be computed in [[polynomial time]] by [[Gaussian elimination]], Gaussian elimination cannot be used to compute the permanent. Moreover, computing the permanent of a (0,1)-matrix is [[sharp-P-complete|#P-complete]]. Thus, if the permanent can be computed in polynomial time by any method, then '''[[FP (complexity)|FP]] = [[sharp-P|#P]]''', which is an even stronger statement than [[P = NP problem|P = NP]]. 
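Ryser's inclusion–exclusion formula translates directly into a short program. The following Python sketch (illustrative only; it is not drawn from the cited sources, and the function name <code>perm_ryser</code> is arbitrary) evaluates the entry-wise form of the formula above and reproduces the value 10 for the 4 × 4 matrix used in the Laplace expansions.

<syntaxhighlight lang="python">
from itertools import combinations

def perm_ryser(A):
    """Permanent of a square matrix A (list of lists) via Ryser's
    inclusion-exclusion formula; this straightforward version takes
    O(2^n * n^2) arithmetic operations."""
    n = len(A)
    total = 0
    # Sum over all nonempty subsets S of column indices;
    # the empty subset contributes a product of empty sums, i.e. 0.
    for r in range(1, n + 1):
        for S in combinations(range(n), r):
            prod = 1
            for row in A:
                prod *= sum(row[j] for j in S)
            total += (-1) ** (n - r) * prod
    return total

# The 4x4 example used in the expansions above:
A = [[1, 1, 1, 1],
     [2, 1, 0, 0],
     [3, 0, 1, 0],
     [4, 0, 0, 1]]
print(perm_ryser(A))  # 10
</syntaxhighlight>

Even this improvement over summing across all ''n''! permutations still requires exponentially many terms, consistent with the hardness results above.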
When the entries of ''A'' are nonnegative, however, the permanent can be computed [[approximation algorithm|approximately]] in [[randomized algorithm|probabilistic]] polynomial time, up to an error of <math>\varepsilon M</math>, where <math>M</math> is the value of the permanent and <math>\varepsilon > 0 </math> is arbitrary.<ref>{{Citation|last1= Jerrum | first1= M.|author1-link= Mark Jerrum |last2=Sinclair | first2= A.|author2-link= Alistair Sinclair|last3=Vigoda | first3= E.|title=A polynomial-time approximation algorithm for the permanent of a matrix with nonnegative entries |journal=[[Journal of the ACM]] |year=2004 |volume= 51 | issue= 4|pages= 671–697 | doi=10.1145/1008731.1008738| citeseerx= 10.1.1.18.9466| s2cid= 47361920}}</ref> The permanent of a certain set of [[positive definite matrix|positive semidefinite matrices]] is NP-hard to approximate within any subexponential factor.<ref>{{cite journal|last1=Meiburg|first1=Alexander|date=2023|title=Inapproximability of Positive Semidefinite Permanents and Quantum State Tomography|journal=[[Algorithmica]]|volume=85 |issue=12 |pages=3828–3854 |doi=10.1007/s00453-023-01169-1|doi-access=free|arxiv=2111.03142}}</ref> If further conditions on the [[Spectrum of a matrix|spectrum]] are imposed, the permanent can be approximated in probabilistic polynomial time: the best achievable error of this approximation is <math>\varepsilon\sqrt{M}</math> (<math>M</math> is again the value of the permanent).<ref>{{cite journal| last1=Chakhmakhchyan|first1=Levon|last2=Cerf|first2=Nicolas|last3=Garcia-Patron|first3=Raul|title=A quantum-inspired algorithm for estimating the permanent of positive semidefinite matrices| journal = Phys. Rev. A|volume=96 |issue=2|pages=022329 | doi=10.1103/PhysRevA.96.022329|year=2017|bibcode=2017PhRvA..96b2329C|arxiv=1609.02416|s2cid=54194194}}</ref> The hardness in these instances is closely linked with the difficulty of simulating [[boson sampling]] experiments. ==MacMahon's master theorem== {{main|MacMahon's master theorem}} Another way to view permanents is via multivariate [[generating function]]s. Let <math>A = (a_{ij})</math> be a square matrix of order ''n''. Consider the multivariate generating function: <math display="block">\begin{align} F(x_1,x_2,\dots,x_n) &= \prod_{i=1}^n \left ( \sum_{j=1}^n a_{ij} x_j \right ) \\ &= \left( \sum_{j=1}^n a_{1j} x_j \right ) \left ( \sum_{j=1}^n a_{2j} x_j \right ) \cdots \left ( \sum_{j=1}^n a_{nj} x_j \right). \end{align}</math> The coefficient of <math>x_1 x_2 \dots x_n</math> in <math>F(x_1,x_2,\dots,x_n)</math> is perm(''A'').<ref>{{harvnb|Percus|1971|loc=p. 14}}</ref> As a generalization, for any sequence of ''n'' non-negative integers <math>s_1,s_2,\dots,s_n</math>, define: <math display="block">\operatorname{perm}^{(s_1,s_2,\dots,s_n)}(A)</math> as the coefficient of <math>x_1^{s_1} x_2^{s_2} \cdots x_n^{s_n} </math> in <math>\left ( \sum_{j=1}^n a_{1j} x_j \right )^{s_1} \left ( \sum_{j=1}^n a_{2j} x_j \right )^{s_2} \cdots \left ( \sum_{j=1}^n a_{nj} x_j \right )^{s_n}.</math> '''MacMahon's master theorem''' relating permanents and determinants is:<ref>{{harvnb|Percus|1971|loc=p. 
17}}</ref> <math display="block">\operatorname{perm}^{(s_1,s_2,\dots,s_n)}(A) = \text{ coefficient of }x_1^{s_1} x_2^{s_2} \cdots x_n^{s_n} \text{ in } \frac{1}{\det(I - XA)},</math> where ''I'' is the order ''n'' identity matrix and ''X'' is the diagonal matrix with diagonal <math>[x_1,x_2,\dots,x_n].</math> ==Rectangular matrices== The permanent function can be generalized to apply to non-square matrices. Indeed, several authors make this the definition of a permanent and consider the restriction to square matrices a special case.<ref>In particular, {{harvtxt|Minc|1978}} and {{harvtxt|Ryser|1963}} do this.</ref> Specifically, for an ''m'' × ''n'' matrix <math>A = (a_{ij})</math> with ''m'' ≤ ''n'', define <math display="block">\operatorname{perm} (A) = \sum_{\sigma \in \operatorname{P}(n,m)} a_{1 \sigma(1)} a_{2 \sigma(2)} \ldots a_{m \sigma(m)}</math> where P(''n'',''m'') is the set of all ''m''-permutations of the ''n''-set {1,2,...,n}.<ref>{{harvnb|Ryser|1963|loc=p. 25}}</ref> Ryser's computational result for permanents also generalizes. If ''A'' is an ''m'' × ''n'' matrix with ''m'' ≤ ''n'', let <math>A_k</math> be obtained from ''A'' by deleting ''k'' columns, let <math>P(A_k)</math> be the product of the row-sums of <math>A_k</math>, and let <math>\sigma_k</math> be the sum of the values of <math>P(A_k)</math> over all possible <math>A_k</math>. Then<ref name="Ryser 1963 loc=p. 26"/> <math display="block"> \operatorname{perm}(A)=\sum_{k=0}^{m-1} (-1)^{k}\binom{n-m+k}{k}\sigma_{n-m+k}.</math> ===Systems of distinct representatives=== The generalization of the definition of a permanent to non-square matrices allows the concept to be used in a more natural way in some applications. For instance: Let ''S''<sub>1</sub>, ''S''<sub>2</sub>, ..., ''S''<sub>''m''</sub> be subsets (not necessarily distinct) of an ''n''-set with ''m'' ≤ ''n''. The [[incidence matrix]] of this collection of subsets is an ''m'' × ''n'' (0,1)-matrix ''A''. The number of [[transversal (combinatorics)|systems of distinct representatives]] (SDR's) of this collection is perm(''A'').<ref>{{harvnb|Ryser|1963|loc=p. 54}}</ref> ==See also== *[[Computing the permanent]] *[[Bapat–Beg theorem]], an application of permanents in [[order statistics]] *[[Slater determinant]], an application of permanents in [[quantum mechanics]] *[[Hafnian]] ==Notes== {{Reflist|30em}} ==References== *{{cite book | zbl=1106.05001 | last=Brualdi | first=Richard A. | author-link=Richard A. Brualdi | title=Combinatorial matrix classes | series=Encyclopedia of Mathematics and Its Applications | volume=108 | location=Cambridge | publisher=[[Cambridge University Press]] | year=2006 | isbn=978-0-521-86565-4 | url-access=registration | url=https://archive.org/details/combinatorialmat0000brua }} *{{cite book | last=Minc | first=Henryk | title= Permanents | others=With a foreword by Marvin Marcus | series=Encyclopedia of Mathematics and its Applications | volume=6| publisher=Addison–Wesley | year= 1978 | issn=0953-4806 | oclc=3980645 | zbl=0401.15005 | location=Reading, MA }} *{{cite book| last1=Muir | first1=Thomas | last2=Metzler | first2=William H. | year=1960 | orig-year=1882| title = A Treatise on the Theory of Determinants |location=New York| publisher=Dover | oclc=535903}} *{{citation|first=J.K.|last=Percus|title=Combinatorial Methods|series=Applied Mathematical Sciences #4|publisher=Springer-Verlag|place=New York|year=1971|isbn=978-0-387-90027-8}} *{{citation|first=Herbert John|last=Ryser|author-link=H. J. 
Ryser|title=Combinatorial Mathematics|series=The Carus Mathematical Monographs #14|year=1963|publisher=The Mathematical Association of America}} *{{citation|last1=van Lint|first1=J.H. |last2=Wilson|first2=R.M. |title=A Course in Combinatorics|publisher=Cambridge University Press|year= 2001|isbn=978-0521422604}} ==Further reading== * {{citation|first=Marshall|last=Hall Jr.| author-link=Marshall Hall (mathematician)|title=Combinatorial Theory|edition=2nd|year=1986|publisher=John Wiley & Sons|place=New York|isbn=978-0-471-09138-7|pages=56–72}} Contains a proof of the Van der Waerden conjecture. * {{citation|first1=M.|last1=Marcus|first2=H.|last2=Minc|title=Permanents|journal=The American Mathematical Monthly|volume=72|issue=6|year=1965|pages=577–591|doi=10.2307/2313846|jstor=2313846}} ==External links== *{{PlanetMath |urlname=Permanent |title=Permanent}} *{{PlanetMath |urlname=VanDerWaerdensPermanentConjecture |title=Van der Waerden's permanent conjecture}} [[Category:Algebra]] [[Category:Linear algebra]] [[Category:Matrix theory]] [[Category:Permutations]]