Jacobian matrix and determinant
== Definition ==

Let <math display="inline">\mathbf{f}: \mathbb{R}^n \to \mathbb{R}^m</math> be a function such that each of its first-order partial derivatives exists on <math display="inline">\mathbb{R}^n</math>. This function takes a point {{tmath|1=\mathbf x =(x_1,\ldots,x_n)\in \mathbb{R}^n}} as input and produces the vector {{tmath|1=\mathbf f(\mathbf x) = (f_1(\mathbf x), \ldots, f_m(\mathbf x)) \in \mathbb{R}^m}} as output. Then the Jacobian matrix of {{math|'''f'''}}, denoted {{math|'''J<sub>f</sub>'''}}, is the {{tmath|m\times n}} matrix whose {{math|(''i'', ''j'')}} entry is <math display="inline">\frac{\partial f_i}{\partial x_j};</math> explicitly <math display="block">\mathbf{J_f} = \begin{bmatrix} \dfrac{\partial \mathbf{f}}{\partial x_1} & \cdots & \dfrac{\partial \mathbf{f}}{\partial x_n} \end{bmatrix} = \begin{bmatrix} \nabla^{\mathsf{T}} f_1 \\ \vdots \\ \nabla^{\mathsf{T}} f_m \end{bmatrix} = \begin{bmatrix} \dfrac{\partial f_1}{\partial x_1} & \cdots & \dfrac{\partial f_1}{\partial x_n}\\ \vdots & \ddots & \vdots\\ \dfrac{\partial f_m}{\partial x_1} & \cdots & \dfrac{\partial f_m}{\partial x_n} \end{bmatrix}</math> where <math>\nabla^{\mathsf{T}} f_i</math> is the transpose (row vector) of the [[gradient]] of the <math>i</math>-th component.
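The entry-by-entry definition above translates directly into a numerical approximation. The following sketch (not part of the article; the function <code>jacobian</code> and the example map are illustrative) builds the {{tmath|m\times n}} matrix column by column with central finite differences, one column per input variable:

```python
import numpy as np

def jacobian(f, x, eps=1e-6):
    """Approximate the m x n Jacobian of f at x by central differences.

    Column j holds the partial derivatives of all components of f
    with respect to x_j, matching the (i, j) entry d f_i / d x_j.
    """
    x = np.asarray(x, dtype=float)
    fx = np.asarray(f(x))
    m, n = fx.size, x.size
    J = np.empty((m, n))
    for j in range(n):
        step = np.zeros(n)
        step[j] = eps
        J[:, j] = (np.asarray(f(x + step)) - np.asarray(f(x - step))) / (2 * eps)
    return J

# Example map f: R^2 -> R^2, f(x, y) = (x^2 y, 5x + sin y).
f = lambda v: np.array([v[0] ** 2 * v[1], 5 * v[0] + np.sin(v[1])])
J = jacobian(f, [1.0, 0.0])
# Analytic Jacobian [[2xy, x^2], [5, cos y]] at (1, 0) is [[0, 1], [5, 1]].
```

The loop evaluates {{math|'''f'''}} twice per input variable, so the cost is {{math|2''n''}} function evaluations; automatic differentiation avoids this but is beyond the scope of this sketch.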
The Jacobian matrix, whose entries are functions of {{math|'''x'''}}, is denoted in various ways; other common notations include {{math|''D'''''f'''}}, <math>\nabla \mathbf{f}</math>, and <math display="inline">\frac{\partial(f_1,\ldots,f_m)}{\partial(x_1,\ldots,x_n)}</math>.<ref>{{Cite book |last1=Holder |first1=Allen |title=An Introduction to Computational Science |last2=Eichholz |first2=Joseph |date=2019 |publisher=Springer |isbn=978-3-030-15679-4 |series=International Series in Operations Research & Management Science |location=Cham, Switzerland |pages=53}}</ref><ref>{{Cite book |last=Lovett |first=Stephen |url=https://books.google.com/books?id=G1bGDwAAQBAJ |title=Differential Geometry of Manifolds |date=2019-12-16 |publisher=CRC Press |isbn=978-0-429-60782-0 |pages=16 |language=en}}</ref> Some authors define the Jacobian as the [[transpose]] of the form given above. The Jacobian matrix [[Matrix_(mathematics)#Linear_transformations|represents]] the [[total derivative|differential]] of {{math|'''f'''}} at every point where {{math|'''f'''}} is differentiable. In detail, if {{math|'''h'''}} is a [[displacement vector]] represented by a [[column matrix]], the [[matrix product]] {{math|'''J'''('''x''') ⋅ '''h'''}} is another displacement vector, that is the best linear approximation of the change of {{math|'''f'''}} in a [[neighborhood (mathematics)|neighborhood]] of {{math|'''x'''}}, if {{math|'''f'''('''x''')}} is [[Differentiable function|differentiable]] at {{math|'''x'''}}.{{efn|Differentiability at {{math|'''x'''}} implies, but is not implied by, the existence of all first-order partial derivatives at {{math|'''x'''}}, and hence is a stronger condition.}} This means that the function that maps {{math|'''y'''}} to {{math|'''f'''('''x''') + '''J'''('''x''') ⋅ ('''y''' − '''x''')}} is the best [[linear approximation]] of {{math|'''f'''('''y''')}} for all points {{math|'''y'''}} close to {{math|'''x'''}}.
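The claim that {{math|'''f'''('''x''') + '''J'''('''x''') ⋅ ('''y''' − '''x''')}} is the best linear approximation can be checked numerically: for a small displacement {{math|'''h'''}}, the gap between {{math|'''f'''('''x''' + '''h''')}} and the linear prediction shrinks like the square of {{math|{{norm|'''h'''}}}}. A small sketch using an illustrative map and its hand-computed Jacobian (both chosen for this example, not taken from the article):

```python
import numpy as np

# Example map f: R^2 -> R^2, f(x, y) = (x^2 y, 5x + sin y),
# with its analytic Jacobian [[2xy, x^2], [5, cos y]].
f = lambda v: np.array([v[0] ** 2 * v[1], 5 * v[0] + np.sin(v[1])])
def J(v):
    return np.array([[2 * v[0] * v[1], v[0] ** 2],
                     [5.0, np.cos(v[1])]])

x = np.array([1.0, 0.5])
h = np.array([1e-3, -2e-3])            # small displacement vector

exact = f(x + h)                       # true value at the displaced point
linear = f(x) + J(x) @ h               # first-order (linear) prediction
err = np.linalg.norm(exact - linear)   # residual is O(||h||^2), tiny here
```

Halving {{math|'''h'''}} roughly quarters <code>err</code>, which is the second-order behavior expected of a first-order approximation.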
The [[linear map]] {{math|'''h''' ↦ '''J'''('''x''') ⋅ '''h'''}} is known as the ''derivative'' or the [[total derivative|''differential'']] of {{math|'''f'''}} at {{math|'''x'''}}. When <math display="inline">m=n</math>, the Jacobian matrix is square, so its [[determinant]] is a well-defined function of {{math|'''x'''}}, known as the '''Jacobian determinant''' of {{math|'''f'''}}. It carries important information about the local behavior of {{math|'''f'''}}. In particular, the function {{math|'''f'''}} has a differentiable [[inverse function]] in a neighborhood of a point {{math|'''x'''}} if and only if the Jacobian determinant is nonzero at {{math|'''x'''}} (see [[inverse function theorem]] for an explanation of this and [[Jacobian conjecture]] for a related problem of ''global'' invertibility). The Jacobian determinant also appears when changing the variables in [[multiple integral]]s (see [[Integration_by_substitution#Substitution_for_multiple_variables|substitution rule for multiple variables]]). When <math display="inline">m=1</math>, that is when <math display="inline"> f: \mathbb{R}^n \to \mathbb{R}</math> is a [[scalar field|scalar-valued function]], the Jacobian matrix reduces to the [[row vector]] <math>\nabla^{\mathsf{T}} f</math>; this row vector of all first-order partial derivatives of {{tmath|f}} is the transpose of the [[gradient]] of {{tmath|f}}, i.e. <math>\mathbf{J}_{f} = \nabla^{\mathsf{T}} f</math>. Specializing further, when <math display="inline">m=n=1</math>, that is when <math display="inline">f: \mathbb{R} \to \mathbb{R}</math> is a [[scalar field|scalar-valued function]] of a single variable, the Jacobian matrix has a single entry; this entry is the derivative of the function {{tmath|f}}. These concepts are named after the [[mathematician]] [[Carl Gustav Jacob Jacobi]] (1804–1851).
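Both roles of the Jacobian determinant mentioned above can be seen in one standard example, the polar-to-Cartesian map {{math|''T''(''r'', ''θ'') {{=}} (''r'' cos ''θ'', ''r'' sin ''θ'')}}: its determinant is {{math|''r''}}, which gives the familiar area element {{math|''r'' d''r'' d''θ''}} in double integrals and shows the map is locally invertible exactly where {{math|''r'' ≠ 0}}. A short check (the helper name is illustrative):

```python
import numpy as np

def jacobian_polar(r, theta):
    """Jacobian of T(r, theta) = (r cos theta, r sin theta)."""
    return np.array([[np.cos(theta), -r * np.sin(theta)],
                     [np.sin(theta),  r * np.cos(theta)]])

r, theta = 2.0, np.pi / 3
detJ = np.linalg.det(jacobian_polar(r, theta))
# det = r cos^2(theta) + r sin^2(theta) = r for every theta,
# so dx dy = r dr dtheta, and T is locally invertible iff r != 0.
```

At {{math|''r'' {{=}} 0}} the determinant vanishes, matching the geometric fact that the origin has no well-defined angle, so the map cannot be inverted there.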