Implicit function theorem
{{short description|On converting relations to functions of several real variables}} {{Calculus |expanded=multivariable}} In [[multivariable calculus]], the '''implicit function theorem'''{{efn|Also called '''[[Ulisse Dini|Dini]]'s theorem''' by the Pisan school in Italy. In the English-language literature, [[Dini's theorem]] is a different theorem in mathematical analysis.}} is a tool that allows [[relation (mathematics)#Definition|relations]] to be converted to [[functions of several real variables]]. It does so by representing the relation as the [[graph of a function]]. There may not be a single function whose graph can represent the entire relation, but there may be such a function on a restriction of the [[domain of a relation|domain]] of the relation. The implicit function theorem gives a sufficient condition to ensure that there is such a function. More precisely, given a system of {{mvar|m}} equations {{math|1=''f<sub>i</sub>''{{space|hair}}(''x''<sub>1</sub>, ..., ''x<sub>n</sub>'', ''y''<sub>1</sub>, ..., ''y<sub>m</sub>'') = 0, ''i'' = 1, ..., ''m''}} (often abbreviated into {{math|1=''F''('''x''', '''y''') = '''0'''}}), the theorem states that, under a mild condition on the [[partial derivative]]s (with respect to each {{math|''y<sub>i</sub>''}} ) at a point, the {{mvar|m}} variables {{math|''y<sub>i</sub>''}} are differentiable functions of the {{math|''x<sub>j</sub>''}} in some [[neighborhood (mathematics)|neighborhood]] of the point. As these functions generally cannot be expressed in [[closed form expression|closed form]], they are ''implicitly'' defined by the equations, and this motivated the name of the theorem.<ref>{{Cite book |last=Chiang |first=Alpha C. 
|author-link=Alpha Chiang |title=Fundamental Methods of Mathematical Economics |publisher=McGraw-Hill |edition=3rd |year=1984 |pages=[https://archive.org/details/fundamentalmetho0000chia_b4p1/page/204 204–206] |isbn=0-07-010813-7 |url=https://archive.org/details/fundamentalmetho0000chia_b4p1/page/204 }}</ref> In other words, under a mild condition on the partial derivatives, the set of [[zero of a function|zeros]] of a system of equations is [[local property|locally]] the [[graph of a function]]. == History == [[Augustin-Louis Cauchy]] (1789–1857) is credited with the first rigorous form of the implicit function theorem. [[Ulisse Dini]] (1845–1918) generalized the real-variable version of the implicit function theorem to the context of functions of any number of real variables.<ref>{{cite book |first1=Steven |last1=Krantz |first2=Harold |last2=Parks |title=The Implicit Function Theorem |series=Modern Birkhauser Classics |publisher=Birkhauser |year=2003 |isbn=0-8176-4285-4 |url=https://archive.org/details/implicitfunction0000kran |url-access=registration }}</ref> == Two variables case == Let <math>f:\R^2 \to \R</math> be a continuously differentiable function defining the [[implicit equation]] of a [[curve]] <math> f(x,y) = 0 </math>. Let <math>(x_0, y_0)</math> be a point on the curve, that is, a point such that <math>f(x_0, y_0)=0</math>. 
In this simple case, the implicit function theorem can be stated as follows: {{math theorem|math_statement=If {{tmath|f(x,y)}} is a function that is continuously differentiable in a neighbourhood of the point {{tmath|(x_0,y_0)}}, and <math display=inline>\frac{\partial f}{ \partial y} (x_0, y_0) \neq 0,</math> then there exists a unique differentiable function {{tmath|\varphi}} such that {{tmath|1=y_0=\varphi(x_0)}} and {{tmath|1=f(x, \varphi(x))=0}} in a neighbourhood of {{tmath|x_0}}.}} '''Proof.''' By differentiating the equation {{tmath|1=f(x, \varphi(x))=0}}, one gets <math display=block>\frac{\partial f}{ \partial x}(x, \varphi(x))+\varphi'(x)\, \frac{\partial f}{ \partial y}(x, \varphi(x))=0, </math> and thus <math display=block>\varphi'(x)=-\frac{\frac{\partial f}{ \partial x}(x, \varphi(x))}{\frac{\partial f}{ \partial y}(x, \varphi(x))}.</math> This gives an [[ordinary differential equation]] for {{tmath|\varphi}}, with the initial condition {{tmath|1=\varphi(x_0) = y_0}}. Since <math display=inline>\frac{\partial f}{ \partial y} (x_0, y_0) \neq 0,</math> the right-hand side of the differential equation is continuous and bounded on some closed interval around {{tmath|x_0}}. It is therefore [[Lipschitz continuous]],{{dubious|date=April 2025}} and the [[Cauchy-Lipschitz theorem]] gives the existence of a unique solution. == First example == [[Image:Implicit circle.svg|thumb|right|200px|The unit circle of implicit equation {{math|1= ''x''<sup>2</sup> + ''y''<sup>2</sup> – 1 = 0}} cannot be represented as the graph of a function.
Around the point {{math|'''A'''}} where the tangent is not vertical, the bolded [[circular arc]] is the graph of some function of {{mvar|x}}, while around {{math|'''B'''}}, there is no function of {{mvar|x}} with the circle as its graph.<br>This is exactly what the implicit function theorem asserts in this case.]] If we define the function {{math|1=''f''(''x'', ''y'') = ''x''<sup>2</sup> + ''y''<sup>2</sup>}}, then the equation {{math|1=''f''(''x'', ''y'') = 1}} cuts out the [[unit circle]] as the [[level set]] {{math|1={(''x'', ''y'') {{!}} ''f''(''x'', ''y'') = 1}<nowiki/>}}. There is no way to represent the unit circle as the graph of a function of one variable {{math|1=''y'' = ''g''(''x'')}} because for each choice of {{math|''x'' ∈ (−1, 1)}}, there are two choices of ''y'', namely <math>\pm\sqrt{1-x^2}</math>. However, it is possible to represent ''part'' of the circle as the graph of a function of one variable. If we let <math>g_1(x) = \sqrt{1-x^2}</math> for {{math|−1 ≤ ''x'' ≤ 1}}, then the graph of {{math|1=''y'' = ''g''<sub>1</sub>(''x'')}} provides the upper half of the circle. Similarly, if <math>g_2(x) = -\sqrt{1-x^2}</math>, then the graph of {{math|1=''y'' = ''g''<sub>2</sub>(''x'')}} gives the lower half of the circle. The purpose of the implicit function theorem is to tell us that functions like {{math|''g''<sub>1</sub>(''x'')}} and {{math|''g''<sub>2</sub>(''x'')}} [[List of mathematical jargon#almost all|almost always]] exist, even in situations where we cannot write down explicit formulas. It guarantees that {{math|''g''<sub>1</sub>(''x'')}} and {{math|''g''<sub>2</sub>(''x'')}} are differentiable, and it even works in situations where we do not have a formula for {{math|''f''(''x'', ''y'')}}. == Definitions == Let <math>f: \R^{n+m} \to \R^m</math> be a [[continuously differentiable]] function. 
We think of <math>\R^{n+m}</math> as the [[Cartesian product]] <math>\R^n\times\R^m,</math> and we write a point of this product as <math>(\mathbf{x}, \mathbf{y}) = (x_1,\ldots, x_n, y_1, \ldots, y_m).</math> Starting from the given function <math>f</math>, our goal is to construct a function <math>g: \R^n \to \R^m</math> whose graph <math>(\textbf{x}, g(\textbf{x}))</math> is precisely the set of all <math>(\textbf{x}, \textbf{y})</math> such that <math>f(\textbf{x}, \textbf{y}) = \textbf{0}</math>. As noted above, this may not always be possible. We will therefore fix a point <math>(\textbf{a}, \textbf{b}) = (a_1, \dots, a_n, b_1, \dots, b_m)</math> which satisfies <math>f(\textbf{a}, \textbf{b}) = \textbf{0}</math>, and we will ask for a <math>g</math> that works near the point <math>(\textbf{a}, \textbf{b})</math>. In other words, we want an [[open set]] <math>U \subset \R^n</math> containing <math>\textbf{a}</math>, an open set <math>V \subset \R^m</math> containing <math>\textbf{b}</math>, and a function <math>g : U \to V</math> such that the graph of <math>g</math> satisfies the relation <math>f = \textbf{0}</math> on <math>U\times V</math>, and that no other point of <math>U \times V</math> does. In symbols, <math display="block">\{ (\mathbf{x}, g(\mathbf{x})) \mid \mathbf x \in U \} = \{ (\mathbf{x}, \mathbf{y})\in U \times V \mid f(\mathbf{x}, \mathbf{y}) = \mathbf{0} \}.</math> To state the implicit function theorem, we need the [[Jacobian matrix and determinant|Jacobian matrix]] of <math>f</math>, which is the matrix of the [[partial derivative]]s of <math>f</math>.
Abbreviating <math>(a_1, \dots, a_n, b_1, \dots, b_m)</math> to <math>(\textbf{a}, \textbf{b})</math>, the Jacobian matrix is <math display="block">(Df)(\mathbf{a},\mathbf{b}) = \left[\begin{array}{ccc|ccc} \frac{\partial f_1}{\partial x_1}(\mathbf{a},\mathbf{b}) & \cdots & \frac{\partial f_1}{\partial x_n}(\mathbf{a},\mathbf{b}) & \frac{\partial f_1}{\partial y_1}(\mathbf{a},\mathbf{b}) & \cdots & \frac{\partial f_1}{\partial y_m}(\mathbf{a},\mathbf{b}) \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ \frac{\partial f_m}{\partial x_1}(\mathbf{a},\mathbf{b}) & \cdots & \frac{\partial f_m}{\partial x_n}(\mathbf{a},\mathbf{b}) & \frac{\partial f_m}{\partial y_1}(\mathbf{a},\mathbf{b}) & \cdots & \frac{\partial f_m}{\partial y_m}(\mathbf{a},\mathbf{b}) \end{array}\right] = \left[\begin{array}{c|c} X & Y \end{array}\right]</math> where <math>X</math> is the matrix of partial derivatives in the variables <math>x_i</math> and <math>Y</math> is the matrix of partial derivatives in the variables <math>y_j</math>. The implicit function theorem says that if <math>Y</math> is an invertible matrix, then there are <math>U</math>, <math>V</math>, and <math>g</math> as desired. Writing all the hypotheses together gives the following statement. == Statement of the theorem == Let <math>f: \R^{n+m} \to \R^m</math> be a [[continuously differentiable function]], and let <math>\R^{n+m}</math> have coordinates <math>(\textbf{x}, \textbf{y})</math>. Fix a point <math>(\textbf{a}, \textbf{b}) = (a_1,\dots,a_n, b_1,\dots, b_m)</math> with <math>f(\textbf{a}, \textbf{b}) = \mathbf{0}</math>, where <math>\mathbf{0} \in \R^m</math> is the zero vector. 
If the [[Jacobian matrix]] of <math>f</math> with respect to the variables <math>\mathbf{y}</math> (the right-hand panel of the Jacobian matrix shown in the previous section): <math display="block">J_{f, \mathbf{y}} (\mathbf{a}, \mathbf{b}) = \left [ \frac{\partial f_i}{\partial y_j} (\mathbf{a}, \mathbf{b}) \right ]</math> is [[invertible]], then there exists an open set <math>U \subset \R^n</math> containing <math>\textbf{a}</math> and a unique function <math>g: U \to \R^m</math> such that {{nowrap|<math> g(\mathbf{a}) = \mathbf{b}</math>,}} and {{nowrap|<math> f(\mathbf{x}, g(\mathbf{x})) = \mathbf{0} ~ \text{for all} ~ \mathbf{x}\in U</math>.}} Moreover, <math>g</math> is continuously differentiable and, denoting the left-hand panel of the Jacobian matrix shown in the previous section as: <math display="block"> J_{f, \mathbf{x}} (\mathbf{a}, \mathbf{b}) = \left [ \frac{\partial f_i}{\partial x_j} (\mathbf{a}, \mathbf{b}) \right ], </math> the Jacobian matrix of partial derivatives of <math>g</math> in <math>U</math> is given by the [[matrix product]]:<ref>{{Cite journal |first=Oswaldo |last=de Oliveira |title=The Implicit and Inverse Function Theorems: Easy Proofs |journal=Real Anal. Exchange |volume=39 |issue=1 |year=2013 |doi=10.14321/realanalexch.39.1.0207 |pages=214–216 |s2cid=118792515 |arxiv=1212.2066 }}</ref> <math display="block"> \left[\frac{\partial g_i}{\partial x_j} (\mathbf{x})\right]_{m\times n} =- \left [ J_{f, \mathbf{y}}(\mathbf{x}, g(\mathbf{x})) \right ]_{m \times m} ^{-1} \, \left [ J_{f, \mathbf{x}}(\mathbf{x}, g(\mathbf{x})) \right ]_{m \times n} </math> For a proof, see [[Inverse function theorem#Implicit_function_theorem]]; the two-variable case was detailed above. ===Higher derivatives=== If, moreover, <math>f</math> is [[analytic function|analytic]] or continuously differentiable <math>k</math> times in a neighborhood of <math>(\textbf{a}, \textbf{b})</math>, then one may choose <math>U</math> so that the same holds for <math>g</math> inside <math>U</math>.
<ref>{{Cite book |first1=K. | last1=Fritzsche |first2=H. |last2=Grauert |year=2002 |url=https://books.google.com/books?id=jSeRz36zXIMC&pg=PA34 |title=From Holomorphic Functions to Complex Manifolds |publisher=Springer |page=34 |isbn=9780387953953 }}</ref> In the analytic case, this is called the '''analytic implicit function theorem'''. == The circle example == Let us go back to the example of the [[unit circle]]. In this case ''n'' = ''m'' = 1 and <math>f(x,y) = x^2 + y^2 - 1</math>. The matrix of partial derivatives is just a 1 × 2 matrix, given by <math display="block">(Df)(a,b) = \begin{bmatrix} \dfrac{\partial f}{\partial x}(a,b) & \dfrac{\partial f}{\partial y}(a,b) \end{bmatrix} = \begin{bmatrix} 2a & 2b \end{bmatrix}</math> Thus, here, the {{math|''Y''}} in the statement of the theorem is just the number {{math|2''b''}}; the linear map defined by it is invertible [[if and only if]] {{math|''b'' ≠ 0}}. By the implicit function theorem we see that we can locally write the circle in the form {{math|1=''y'' = ''g''(''x'')}} for all points where {{math|''y'' ≠ 0}}. For {{math|(±1, 0)}} we run into trouble, as noted before. The implicit function theorem may still be applied to these two points, by writing {{mvar|x}} as a function of {{mvar|y}}, that is, <math>x = h(y)</math>; now the graph of the function will be <math>\left(h(y), y\right)</math>, since where {{math|1=''b'' = 0}} we have {{math|1=''a'' = ±1}}, and the conditions to locally express the function in this form are satisfied. The implicit derivative of ''y'' with respect to ''x'', and that of ''x'' with respect to ''y'', can be found by [[Differential of a function#Differentials in several variables|totally differentiating]] the implicit function <math>x^2+y^2-1</math> and equating to 0: <math display="block">2x\, dx+2y\, dy = 0,</math> giving <math display="block">\frac{dy}{dx}=-\frac{x}{y}</math> and <math display="block">\frac{dx}{dy} = -\frac{y}{x}.
</math> == Application: change of coordinates == Suppose we have an {{mvar|m}}-dimensional space, parametrised by a set of coordinates <math> (x_1,\ldots,x_m) </math>. We can introduce a new coordinate system <math> (x'_1,\ldots,x'_m) </math> by supplying {{mvar|m}} continuously differentiable functions <math> h_1,\ldots, h_m </math>. These functions allow us to calculate the new coordinates <math> (x'_1,\ldots,x'_m) </math> of a point, given the point's old coordinates <math> (x_1,\ldots,x_m) </math>, using <math> x'_1=h_1(x_1,\ldots,x_m), \ldots, x'_m=h_m(x_1,\ldots,x_m) </math>. One may ask whether the reverse is possible: given coordinates <math> (x'_1,\ldots,x'_m) </math>, can we 'go back' and calculate the same point's original coordinates <math> (x_1,\ldots,x_m) </math>? The implicit function theorem provides an answer to this question. The (new and old) coordinates <math>(x'_1,\ldots,x'_m, x_1,\ldots,x_m)</math> are related by ''f'' = 0, with <math display="block">f(x'_1,\ldots,x'_m,x_1,\ldots, x_m)=(h_1(x_1,\ldots, x_m)-x'_1,\ldots , h_m(x_1,\ldots, x_m)-x'_m).</math> Now the Jacobian matrix of ''f'' at a certain point (''a'', ''b'') (where <math>a=(x'_1,\ldots,x'_m)</math> and <math>b=(x_1,\ldots,x_m)</math>) is given by <math display="block">(Df)(a,b) = \left [\begin{matrix} -1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & -1 \end{matrix}\left| \begin{matrix} \frac{\partial h_1}{\partial x_1}(b) & \cdots & \frac{\partial h_1}{\partial x_m}(b)\\ \vdots & \ddots & \vdots\\ \frac{\partial h_m}{\partial x_1}(b) & \cdots & \frac{\partial h_m}{\partial x_m}(b)\\ \end{matrix} \right.\right] = [-I_m |J ],</math> where I<sub>''m''</sub> denotes the ''m'' × ''m'' [[identity matrix]], and {{mvar|J}} is the {{math|''m'' × ''m''}} matrix of partial derivatives, evaluated at (''a'', ''b''). (In the above, these blocks were denoted by X and Y. As it happens, in this particular application of the theorem, neither matrix depends on ''a''.)
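The block structure <math>[-I_m | J]</math> can be checked numerically with finite differences. The following sketch is illustrative only: the particular coordinate change <code>h</code> and the sample point are assumptions chosen for the demonstration, not part of the theorem.

```python
def h(x):
    """Illustrative coordinate change h : R^2 -> R^2 (an assumed example)."""
    x1, x2 = x
    return (x1 + x2 ** 2, 3.0 * x2)

def f(v):
    """f(x', x) = h(x) - x', with v = (x'_1, x'_2, x_1, x_2)."""
    h1, h2 = h(v[2:])
    return (h1 - v[0], h2 - v[1])

def jacobian(func, point, eps=1e-6):
    """Central-difference Jacobian of func at point."""
    base = func(point)
    J = [[0.0] * len(point) for _ in base]
    for j in range(len(point)):
        up = list(point); up[j] += eps
        dn = list(point); dn[j] -= eps
        fp, fm = func(up), func(dn)
        for i in range(len(base)):
            J[i][j] = (fp[i] - fm[i]) / (2 * eps)
    return J

b = (0.5, 2.0)          # old coordinates x
a = h(b)                # matching new coordinates x', so f(a, b) = 0
Df = jacobian(f, list(a) + list(b))
left = [row[:2] for row in Df]    # block in the x' variables: approximately -I_2
right = [row[2:] for row in Df]   # block J = Dh(b): approximately [[1, 2*x2], [0, 3]]
```

Since det ''J'' = 3 ≠ 0 at this sample point, the theorem applies and the old coordinates can locally be recovered from the new ones.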
The implicit function theorem now states that we can locally express <math> (x_1,\ldots,x_m) </math> as a function of <math> (x'_1,\ldots,x'_m) </math> if ''J'' is invertible. Requiring that ''J'' be invertible is equivalent to det ''J'' ≠ 0; thus we see that we can go back from the primed to the unprimed coordinates if the determinant of the Jacobian ''J'' is non-zero. This statement is also known as the [[inverse function theorem]]. === Example: polar coordinates === As a simple application of the above, consider the plane, parametrised by [[polar coordinates]] {{math|(''R'', ''θ'')}}. We can go to a new coordinate system ([[cartesian coordinates]]) by defining functions {{math|1=''x''(''R'', ''θ'') = ''R'' cos(''θ'')}} and {{math|1=''y''(''R'', ''θ'') = ''R'' sin(''θ'')}}. Given any point {{math|(''R'', ''θ'')}}, this makes it possible to find the corresponding Cartesian coordinates {{math|(''x'', ''y'')}}. When can we go back and convert Cartesian into polar coordinates? By the previous example, it is sufficient to have {{math|1=det ''J'' ≠ 0}}, with <math display="block">J =\begin{bmatrix} \frac{\partial x(R,\theta)}{\partial R} & \frac{\partial x(R,\theta)}{\partial \theta} \\ \frac{\partial y(R,\theta)}{\partial R} & \frac{\partial y(R,\theta)}{\partial \theta} \\ \end{bmatrix}= \begin{bmatrix} \cos \theta & -R \sin \theta \\ \sin \theta & R \cos \theta \end{bmatrix}.</math> Since {{math|1=det ''J'' = ''R''}}, conversion back to polar coordinates is possible if {{math|1=''R'' ≠ 0}}. So it remains to check the case {{math|1=''R'' = 0}}. In this case the coordinate transformation is indeed not invertible: at the origin, the value of θ is not well-defined.
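The identity det ''J'' = ''R'' can also be confirmed numerically; a minimal sketch (the sample points are arbitrary):

```python
import math

def jacobian_polar(R, theta):
    """Jacobian of (x, y) = (R cos θ, R sin θ) with respect to (R, θ)."""
    return [[math.cos(theta), -R * math.sin(theta)],
            [math.sin(theta),  R * math.cos(theta)]]

def det2(M):
    """Determinant of a 2 x 2 matrix."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

# det J = R cos²θ + R sin²θ = R at every point,
# so inversion fails exactly where R = 0.
samples = [(1.0, 0.3), (2.5, 2.0), (0.0, 1.0)]
dets = [det2(jacobian_polar(R, t)) for R, t in samples]
```

When ''R'' ≠ 0 the inverse can be written explicitly as ''R'' = √(''x''² + ''y''²) and θ = atan2(''y'', ''x''), consistent with the determinant condition.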
== Generalizations == === Banach space version === Based on the [[inverse function theorem]] in [[Banach space]]s, it is possible to extend the implicit function theorem to Banach space valued mappings.<ref>{{Cite book |last=Lang |first=Serge |author-link=Serge Lang |title=Fundamentals of Differential Geometry |url=https://archive.org/details/fundamentalsdiff00lang_678 |url-access=limited |year=1999 |publisher=Springer | location=New York |series=Graduate Texts in Mathematics |isbn=0-387-98593-X |pages=[https://archive.org/details/fundamentalsdiff00lang_678/page/n15 15]–21 }}</ref><ref>{{Cite book |last=Edwards |first=Charles Henry |title=Advanced Calculus of Several Variables |publisher=Dover Publications |location=Mineola, New York |year=1994 |orig-year=1973 |isbn=0-486-68336-2 |pages=417–418 }}</ref> Let ''X'', ''Y'', ''Z'' be [[Banach space]]s. Let the mapping {{math|''f'' : ''X'' × ''Y'' → ''Z''}} be continuously [[Fréchet differentiable]]. If <math>(x_0,y_0)\in X\times Y</math>, <math>f(x_0,y_0)=0</math>, and <math>y\mapsto Df(x_0,y_0)(0,y)</math> is a Banach space isomorphism from ''Y'' onto ''Z'', then there exist neighbourhoods ''U'' of ''x''<sub>0</sub> and ''V'' of ''y''<sub>0</sub> and a Fréchet differentiable function ''g'' : ''U'' → ''V'' such that ''f''(''x'', ''g''(''x'')) = 0 and ''f''(''x'', ''y'') = 0 if and only if ''y'' = ''g''(''x''), for all <math>(x,y)\in U\times V</math>. === Implicit functions from non-differentiable functions === Various forms of the implicit function theorem exist for the case when the function ''f'' is not differentiable. It is standard that local strict monotonicity suffices in one dimension.<ref>{{springer |title=Implicit function |id=i/i050310 |last=Kudryavtsev |first=Lev Dmitrievich }}</ref> The following more general form was proven by Kumagai based on an observation by Jittorntrum.<ref>{{Cite journal |first=K. 
|last=Jittorntrum |title=An Implicit Function Theorem |journal=Journal of Optimization Theory and Applications |volume=25 |issue=4 |year=1978 |doi=10.1007/BF00933522 |pages=575–577 |s2cid=121647783 }}</ref><ref>{{Cite journal |first=S. |last=Kumagai |title=An implicit function theorem: Comment |journal=Journal of Optimization Theory and Applications |volume=31 |issue=2 |year=1980 |doi=10.1007/BF00934117 |pages=285–288 |s2cid=119867925 }}</ref> Consider a continuous function <math>f : \R^n \times \R^m \to \R^n</math> such that <math>f(x_0, y_0) = 0</math>. If there exist open neighbourhoods <math>A \subset \R^n</math> and <math>B \subset \R^m</math> of ''x''<sub>0</sub> and ''y''<sub>0</sub>, respectively, such that, for all ''y'' in ''B'', <math>f(\cdot, y) : A \to \R^n</math> is locally one-to-one, then there exist open neighbourhoods <math>A_0 \subset \R^n</math> and <math>B_0 \subset \R^m</math> of ''x''<sub>0</sub> and ''y''<sub>0</sub>, such that, for all <math>y \in B_0</math>, the equation ''f''(''x'', ''y'') = 0 has a unique solution <math display="block">x = g(y) \in A_0,</math> where ''g'' is a continuous function from ''B''<sub>0</sub> into ''A''<sub>0</sub>. === Collapsing manifolds === Perelman’s collapsing theorem for [[3-manifold]]s, the capstone of his proof of Thurston's [[geometrization conjecture]], can be understood as an extension of the implicit function theorem.<ref>{{cite journal |last1=Cao |first1=Jianguo |last2=Ge |first2=Jian |title=A simple proof of Perelman's collapsing theorem for 3-manifolds |journal=J. Geom. Anal. |date=2011 |volume=21 |issue=4 |pages=807–869|doi=10.1007/s12220-010-9169-5 |arxiv=1003.2215 |s2cid=514106 }}</ref> == See also == *[[Inverse function theorem]] *[[Constant rank theorem]]: Both the implicit function theorem and the inverse function theorem can be seen as special cases of the constant rank theorem. == Notes == {{Notelist}} == References == {{reflist|30em}} ==Further reading== * {{cite book |first=Carl B. 
|last=Allendoerfer |author-link=Carl B. Allendoerfer |title=Calculus of Several Variables and Differentiable Manifolds |location=New York |publisher=Macmillan |year=1974 |chapter=Theorems about Differentiable Functions |pages=54–88 |isbn=0-02-301840-2 }} * {{cite book |first=K. G. |last=Binmore |author-link=Kenneth Binmore |chapter=Implicit Functions |title=Calculus |location=New York |publisher=Cambridge University Press |year=1983 |isbn=0-521-28952-1 |pages=198–211 |chapter-url=https://books.google.com/books?id=K8RfQgAACAAJ&pg=PA198 }} * {{cite book |first1=Lynn H. |last1=Loomis |author-link=Lynn Harold Loomis |first2=Shlomo |last2=Sternberg |author-link2=Shlomo Sternberg |title=Advanced Calculus |url=https://archive.org/details/advancedcalculus0000loom |url-access=registration |location=Boston |publisher=Jones and Bartlett |edition=Revised |year=1990 |pages=[https://archive.org/details/advancedcalculus0000loom/page/164 164–171] |isbn=0-86720-122-3 }} * {{cite book |first1=Murray H. |last1=Protter |author-link=Murray H. Protter |first2=Charles B. Jr. |last2=Morrey |author-link2=Charles B. Morrey Jr. |chapter=Implicit Function Theorems. Jacobians |title=Intermediate Calculus |location=New York |publisher=Springer |edition=2nd |year=1985 |isbn=0-387-96058-9 |pages=390–420 |chapter-url=https://books.google.com/books?id=3lTmBwAAQBAJ&pg=PA390 }} {{DEFAULTSORT:Implicit Function Theorem}} [[Category:Articles containing proofs]] [[Category:Mathematical identities]] [[Category:Theorems in calculus]] [[Category:Theorems in real analysis]]