Householder transformation
===Numerical linear algebra===
Householder transformations are widely used in [[numerical linear algebra]], for example, to annihilate the entries below the main diagonal of a matrix,<ref name=taboga>{{cite web | last1 = Taboga | first1 = Marco | url = https://www.statlect.com/matrix-algebra/Householder-matrix | title = Householder matrix, Lectures on matrix algebra.}}</ref> to perform [[QR decomposition]]s and in the first step of the [[QR algorithm]]. They are also widely used for transforming a matrix to [[Hessenberg matrix|Hessenberg]] form. For symmetric or [[Hermitian matrix|Hermitian]] matrices, the symmetry can be preserved, resulting in [[tridiagonal]]ization.<ref>{{Cite journal|last1=Schabauer|first1=Hannes|last2=Pacher|first2=Christoph|last3=Sunderland|first3=Andrew G.|last4=Gansterer|first4=Wilfried N.|date=2010-05-01|title=Toward a parallel solver for generalized complex symmetric eigenvalue problems|journal=Procedia Computer Science|language=en|volume=1|issue=1|pages=437–445|doi=10.1016/j.procs.2010.04.047|doi-access=free}}</ref> Because they involve only a rank-one update and make use of low-level [[Basic_Linear_Algebra_Subprograms#Level_1|BLAS-1]] operations, they can be quite efficient.

====QR decomposition====
{{main|QR decomposition#Using Householder reflections}}
Householder transformations can be used to calculate a [[QR decomposition]]. Consider a matrix already triangularized up to column <math>i-1</math>. Our goal is to construct a Householder matrix that annihilates the entries below the diagonal in column <math>i</math>, acting on the trailing principal submatrix of
<math> \begin{bmatrix} a_{11} & a_{12} & \cdots & & & a_{1n} \\ 0 & a_{22} & \cdots & & & a_{2n} \\ \vdots & & \ddots & & & \vdots \\ 0 & \cdots & 0 & x_{1}=a_{ii} & \cdots & a_{in} \\ 0 & \cdots & 0 & \vdots & & \vdots \\ 0 & \cdots & 0 & x_{n-i+1}=a_{ni} & \cdots & a_{nn} \end{bmatrix} </math>
via the matrix
<math> \begin{bmatrix} I_{i-1}&0\\ 0&P_v \end{bmatrix}. </math>
(Note that we established earlier that Householder transformations are unitary matrices, and since a product of unitary matrices is itself unitary, this gives us the unitary factor of the QR decomposition.) If we can find a <math>\vec v</math> so that <math>P_v \vec x = \alpha \vec e_1</math> for some scalar <math>\alpha</math>, we are done. Thinking geometrically, we are looking for a plane so that the reflection of <math>\vec x</math> about this plane lands directly on a multiple of the first basis vector. In other words,
{{NumBlk|:|<math>\vec x-2\langle\vec x,\vec v\rangle \vec v = \alpha \vec e_1</math>|{{EquationRef|1}}}}
for some constant <math>\alpha</math>. For this to happen, we must have <math>\vec v\propto\vec x-\alpha\vec e_1</math>, and since <math>\vec v</math> is a unit vector, this means that we must have
{{NumBlk|:|<math>\vec v=\pm\frac{\vec x-\alpha\vec e_1}{\|\vec x-\alpha\vec e_1\|_2}</math>|{{EquationRef|2}}}}
Substituting equation ({{EquationNote|2}}) back into equation ({{EquationNote|1}}), we get
<math display="block">\vec x-\alpha\vec e_1 = 2\left\langle \vec x,\frac{\vec x-\alpha\vec e_1}{\|\vec x-\alpha\vec e_1\|_2}\right\rangle\frac{\vec x-\alpha\vec e_1}{\|\vec x-\alpha\vec e_1\|_2}.</math>
Comparing the scalars in front of the vector <math>\vec x - \alpha\vec e_1</math>, we must have <math>\|\vec x-\alpha\vec e_1\|_2^2=2\langle\vec x,\vec x-\alpha\vec e_1\rangle</math>, that is, <math>2\left(\| \vec x\|_2^2-\alpha x_1\right)=\|\vec x\|_2^2-2\alpha x_1+\alpha^2,</math> which means that we can solve for <math>\alpha</math> as <math>\alpha=\pm\|\vec x\|_2.</math> This completes the construction; however, in practice we want to avoid [[catastrophic cancellation]] in equation ({{EquationNote|2}}).
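The construction above translates directly into code. The following is a minimal NumPy sketch (the function names <code>householder_vector</code> and <code>householder_qr</code> are illustrative, not from the cited sources): it builds the unit vector <math>\vec v</math> with the cancellation-avoiding sign choice and applies each reflection as a rank-one update.

```python
import numpy as np

def householder_vector(x):
    """Return a unit vector v and alpha such that (I - 2 v v^T) x = alpha * e_1.

    The sign of alpha is chosen as -sign(x[0]) * ||x|| so that the
    subtraction x[0] - alpha never cancels catastrophically.
    """
    alpha = -np.copysign(np.linalg.norm(x), x[0])
    v = x.astype(float).copy()
    v[0] -= alpha
    v /= np.linalg.norm(v)
    return v, alpha

def householder_qr(A):
    """QR decomposition of a real matrix by successive Householder reflections."""
    m, n = A.shape
    R = A.astype(float)          # astype copies, so A is left untouched
    Q = np.eye(m)
    for i in range(min(m - 1, n)):
        if np.allclose(R[i + 1:, i], 0.0):
            continue             # column already has zeros below the diagonal
        v, _ = householder_vector(R[i:, i])
        # Apply P_v = I - 2 v v^T to the trailing submatrix (rank-one update)
        R[i:, i:] -= 2.0 * np.outer(v, v @ R[i:, i:])
        # Accumulate Q <- Q * diag(I_{i}, P_v)
        Q[:, i:] -= 2.0 * np.outer(Q[:, i:] @ v, v)
    return Q, R
```

Each step touches only the trailing submatrix, which is what makes the method cheap relative to forming full reflection matrices.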
To do so, we choose the sign of <math>\alpha</math> as <math>\alpha=-\operatorname{sign}(\operatorname{Re}(x_1))\,\|\vec x\|_2.</math><ref>{{cite book |last1=Saad |first1=Yousef |author-link1=Yousef Saad |title=Iterative methods for sparse linear systems |publisher=Society for Industrial and Applied Mathematics |date=2003 |pages=11–13 }}</ref>

====Tridiagonalization (Hessenberg)====
{{main|Tridiagonal matrix}}
{{see also|Hessenberg matrix#Householder transformations}}
This procedure is presented in ''Numerical Analysis'' by Burden and Faires, and works when the matrix is symmetric. In the non-symmetric case, a similar procedure is still useful, as it results in a Hessenberg matrix. It uses a slightly altered <math>\operatorname{sgn}</math> function with <math>\operatorname{sgn}(0) = 1</math>.<ref name='burden'>{{cite book |last1=Burden |first1=Richard |last2=Faires |first2=Douglas |last3=Burden |first3=Annette |title=Numerical analysis |date=2016 |publisher=Thomson Brooks/Cole |isbn=9781305253667 |edition=10th}}</ref>

In the first step, to form the Householder matrix we need to determine <math display="inline">\alpha</math> and <math display="inline">r</math>, which are:
:<math>\begin{align} \alpha &= -\operatorname{sgn}\left(a_{21}\right)\sqrt{\sum_{j=2}^n a_{j1}^2}; \\ r &= \sqrt{\frac{1}{2}\left(\alpha^2 - a_{21}\alpha\right)}. \end{align}</math>
From <math display="inline">\alpha</math> and <math display="inline">r</math>, construct the vector <math display="inline">\vec v^{(1)}</math>:
:<math>\vec v^{(1)} = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix},</math>
where <math display="inline">v_1 = 0</math>, <math display="inline">v_2 = \frac{a_{21} - \alpha}{2r}</math>, and
:<math>v_k = \frac{a_{k1}}{2r}</math> for each <math>k = 3, 4, \ldots, n.</math>
Then compute:
:<math>\begin{align} P^1 &= I - 2\vec v^{(1)} \left(\vec v^{(1)}\right)^\textsf{T} \\ A^{(2)} &= P^1 AP^1 \end{align}</math>
Having found <math display="inline">P^1</math> and computed <math display="inline">A^{(2)}</math>,
the process is repeated for <math display="inline">k = 2, 3, \ldots, n - 2</math> as follows:
:<math>\begin{align} \alpha &= -\operatorname{sgn}\left(a^k_{k+1,k}\right)\sqrt{\sum_{j=k+1}^n \left(a^k_{jk}\right)^2} \\[2pt] r &= \sqrt{\frac{1}{2}\left(\alpha^2 - a^k_{k+1,k}\alpha\right)} \\[2pt] v^k_1 &= v^k_2 = \cdots = v^k_k = 0 \\[2pt] v^k_{k+1} &= \frac{a^k_{k+1,k} - \alpha}{2r} \\ v^k_j &= \frac{a^k_{jk}}{2r} \text{ for } j = k + 2,\ k + 3,\ \ldots,\ n \\ P^k &= I - 2\vec v^{(k)} \left(\vec v^{(k)}\right)^\textsf{T} \\ A^{(k+1)} &= P^k A^{(k)}P^k \end{align}</math>
Continuing in this manner, a symmetric tridiagonal matrix is formed.

====Examples====
In this example, also from Burden and Faires,<ref name="burden" /> the given matrix is transformed to the similar tridiagonal matrix A<sub>3</sub> by using the Householder method.
:<math>\mathbf{A} = \begin{bmatrix} 4 & 1 & -2 & 2 \\ 1 & 2 & 0 & 1 \\ -2 & 0 & 3 & -2 \\ 2 & 1 & -2 & -1 \end{bmatrix},</math>
Following the steps of the Householder method above, the first Householder matrix is:
:<math>\begin{align} Q_1 &= \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & -\frac{1}{3} & \frac{2}{3} & -\frac{2}{3} \\ 0 & \frac{2}{3} & \frac{2}{3} & \frac{1}{3} \\ 0 & -\frac{2}{3} & \frac{1}{3} & \frac{2}{3} \end{bmatrix}, \\ A_2 = Q_1 A Q_1 &= \begin{bmatrix} 4 & -3 & 0 & 0 \\ -3 & \frac{10}{3} & 1 & \frac{4}{3} \\ 0 & 1 & \frac{5}{3} & -\frac{4}{3} \\ 0 & \frac{4}{3} & -\frac{4}{3} & -1 \end{bmatrix}, \end{align}</math>
<math display="inline">A_2</math> is then used to form
:<math>\begin{align} Q_2 &= \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & -\frac{3}{5} & -\frac{4}{5} \\ 0 & 0 & -\frac{4}{5} & \frac{3}{5} \end{bmatrix}, \\ A_3 = Q_2 A_2 Q_2 &= \begin{bmatrix} 4 & -3 & 0 & 0 \\ -3 & \frac{10}{3} & -\frac{5}{3} & 0 \\ 0 & -\frac{5}{3} & -\frac{33}{25} & \frac{68}{75} \\ 0 & 0 & \frac{68}{75} & \frac{149}{75} \end{bmatrix}, \end{align}</math>
As we can see, the final result is a symmetric tridiagonal matrix which is
similar to the original one. The process is finished after two steps.

====Quantum computation====
{{main|Grover's algorithm#Geometric proof of correctness}}
[[File:Grovers algorithm geometry.png|thumb|310px|Picture showing the geometric interpretation of the first iteration of Grover's algorithm. The state vector <math>|s\rang</math> is rotated towards the target vector <math>|\omega\rang</math> as shown.]]
[[quantum_logic_gate#Universal_quantum_gates|Unitary matrices are fundamental in quantum computation]], and since Householder transformations are unitary, they are very useful there. One of the central algorithms in which they appear is Grover's algorithm, where the [[quantum oracle|oracle function]] is implemented by what turns out to be a Householder transformation:
<math>\begin{cases} U_\omega |x\rang = -|x\rang & \text{for } x = \omega \text{, that is, } f(x) = 1, \\ U_\omega |x\rang = |x\rang & \text{for } x \ne \omega \text{, that is, } f(x) = 0. \end{cases}</math>
(Here <math>|x\rangle</math> is written in [[bra-ket notation]] and is analogous to the <math>\vec x</math> used previously.) This is done via an algorithm that iterates the oracle function <math>U_\omega</math> and another operator <math>U_s</math>, known as the ''Grover diffusion operator'', defined by
<math>|s\rangle = \frac{1}{\sqrt{N}} \sum_{x=0}^{N-1} |x\rangle</math>
and <math>U_s = 2 \left|s\right\rangle\!\! \left\langle s\right| - I</math>.
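The Burden–Faires tridiagonalization procedure described earlier can be sketched in NumPy as follows. This is an illustrative implementation, not the book's code: the function name is ours, each <math>P^k</math> is formed explicitly for clarity rather than efficiency, and <code>np.copysign</code> with a <code>+0.0</code> first entry reproduces the altered convention <math>\operatorname{sgn}(0)=1</math>.

```python
import numpy as np

def householder_tridiagonalize(A):
    """Reduce a symmetric matrix to tridiagonal form by similarity
    transforms A <- P^k A P^k, following the Burden-Faires formulas.
    Illustrative sketch: each P^k is formed as a full matrix for clarity."""
    A = A.astype(float)          # astype copies, so the input is untouched
    n = A.shape[0]
    for k in range(n - 2):
        if np.allclose(A[k + 2:, k], 0.0):
            continue             # column k is already in tridiagonal form
        x = A[k + 1:, k]
        # alpha = -sgn(a_{k+1,k}) * sqrt(sum_{j>k} a_{jk}^2); copysign gives sgn(0) = 1
        alpha = -np.copysign(np.linalg.norm(x), x[0])
        r = np.sqrt(0.5 * (alpha ** 2 - x[0] * alpha))
        v = np.zeros(n)
        v[k + 1] = (x[0] - alpha) / (2.0 * r)
        v[k + 2:] = A[k + 2:, k] / (2.0 * r)   # v is a unit vector by construction
        P = np.eye(n) - 2.0 * np.outer(v, v)   # Householder matrix P^k
        A = P @ A @ P
    return A
```

Run on the 4×4 matrix of the worked example above, this reproduces <math>A_3</math> up to rounding, and the eigenvalues are preserved since each step is a similarity transform.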
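Both operators in the Grover iteration are reflections of the Householder type: <math>U_\omega = I - 2|\omega\rangle\langle\omega|</math> is exactly a Householder matrix (with <math>\vec v = |\omega\rangle</math>), and <math>U_s = 2|s\rangle\langle s| - I</math> is the negative of one. A minimal NumPy sketch of the iteration follows; the problem size <math>N = 8</math>, marked state <math>\omega = 5</math>, and iteration count <math>\approx \tfrac{\pi}{4}\sqrt{N}</math> are illustrative choices, not from the sources above.

```python
import numpy as np

N, omega = 8, 5                            # illustrative size and marked state
s = np.full(N, 1.0 / np.sqrt(N))           # uniform superposition |s>

e_omega = np.zeros(N)
e_omega[omega] = 1.0
U_omega = np.eye(N) - 2.0 * np.outer(e_omega, e_omega)  # Householder reflection, flips |omega>
U_s = 2.0 * np.outer(s, s) - np.eye(N)                  # Grover diffusion operator

state = s.copy()
for _ in range(int(round(np.pi / 4 * np.sqrt(N)))):     # ~ (pi/4) sqrt(N) iterations
    state = U_s @ (U_omega @ state)

# After the loop, nearly all probability mass sits on the marked state omega.
```

Because both matrices here are real, unitarity reduces to orthogonality, which can be checked directly with <code>U @ U.T</code>.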