Characterizations of the exponential function
In mathematics, the exponential function can be characterized in many ways. This article presents some common characterizations, discusses why each makes sense, and proves that they are all equivalent.
The exponential function occurs naturally in many branches of mathematics. Walter Rudin called it "the most important function in mathematics". It is therefore useful to have multiple ways to define (or characterize) it. Each of the characterizations below may be more or less useful depending on context. The "product limit" characterization of the exponential function was discovered by Leonhard Euler.
Characterizations
The six most common definitions of the exponential function <math>\exp(x)=e^x</math> for real values <math>x\in \mathbb{R}</math> are as follows.
- Product limit. Define <math>e^x</math> by the limit:<math display="block">e^x = \lim_{n\to\infty} \left(1+\frac x n \right)^n.</math>
- Power series. Define <math>e^x</math> as the value of the infinite series <math display="block">e^x = \sum_{n=0}^\infty {x^n \over n!} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!} + \cdots</math> (Here <math>n!</math> denotes the factorial of <math>n</math>. One proof that <math>e</math> is irrational uses a special case of this formula.)
- Inverse of logarithm integral. Define <math>e^x</math> to be the unique number <math>y>0</math> such that <math display="block">\int_1^y \frac{dt}{t} = x.</math> That is, <math>e^x</math> is the inverse of the natural logarithm function <math>x=\ln(y)</math>, which is defined by this integral.
- Differential equation. Define <math>y(x)=e^x</math> to be the unique solution to the differential equation with initial value:<math display="block">y' = y,\quad y(0) = 1,</math> where <math>y'=\tfrac{dy}{dx}</math> denotes the derivative of <math>y</math>.
- Functional equation. The exponential function <math>e^x</math> is the unique function <math>f</math> with the multiplicative property <math>f(x+y)=f(x)f(y)</math> for all <math>x,y</math> and <math>f'(0)=1</math>. The condition <math>f'(0)=1</math> can be replaced with <math>f(1)=e</math> together with a regularity condition, for example that <math>f</math> is continuous at a single point or that <math>f</math> is Lebesgue-integrable (as in the proof below). Some regularity condition must be imposed for uniqueness, since other functions satisfying <math>f(x+y)=f(x)f(y)</math> can be constructed using a basis for the real numbers over the rationals, as described by Hewitt and Stromberg.
- Elementary definition by powers. Define the exponential function with base <math>a>0</math> to be the continuous function <math>a^x</math> whose value on integers <math>x=n</math> is given by repeated multiplication or division of <math>a</math>, and whose value on rational numbers <math>x=n/m</math> is given by <math>a^{n/m} = \sqrt[m]{\vphantom{A^2}a^n}</math>. Then define <math>e^x</math> to be the exponential function whose base <math>a=e</math> is the unique positive real number satisfying: <math display="block">\lim_{h \to 0} \frac{e^h - 1}{h} = 1.</math> (A numerical sketch comparing several of these definitions follows this list.)
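As an illustration, the following is a minimal numerical sketch (in Python, standard library only; the function names are chosen just for this example) comparing the product-limit and power-series definitions against the built-in `math.exp`:

```python
import math

def exp_product_limit(x, n=10**6):
    """Characterization 1: (1 + x/n)^n for a large n."""
    return (1.0 + x / n) ** n

def exp_power_series(x, terms=30):
    """Characterization 2: partial sum of the series sum_k x^k / k!."""
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        term *= x / (k + 1)   # next term: x^(k+1) / (k+1)!
    return total

x = 1.5
print(exp_product_limit(x))   # ~4.48168  (approaches e^x slowly as n grows)
print(exp_power_series(x))    # ~4.481689 (the series converges quickly)
print(math.exp(x))            # reference value, ~4.481689
```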
Larger domains
One way of defining the exponential function over the complex numbers is to first define it on the real numbers using one of the above characterizations, and then extend it as an analytic function, which is uniquely determined by its values on any set with a limit point, such as the real line.
Also, characterizations (1), (2), and (4) for <math>e^x</math> apply directly when <math>x</math> is a complex number. Definition (3) presents a problem because there are non-equivalent paths along which one could integrate; but the equation of (3) should hold for any such path modulo <math>2\pi i</math>. As for definition (5), the additive property together with the complex derivative <math>f'(0) = 1</math> is sufficient to guarantee <math>f(x)=e^x</math>. However, the initial value condition <math>f(1)=e</math> together with the regularity conditions of (5) is not sufficient. For example, for real <math>x</math> and <math>y</math>, the function <math display="block"> f(x + iy) = e^x(\cos(2y) + i\sin(2y)) = e^{x + 2iy} </math> satisfies those regularity conditions but is not equal to <math>\exp(x+iy)</math>. A sufficient condition is that <math>f(1)=e</math> and that <math>f</math> is a conformal map at some point; alternatively, the two initial values <math>f(1)=e</math> and <math display="inline"> f(i) = \cos(1) + i\sin(1) </math> together with the regularity conditions suffice.
One may also define the exponential on other domains, such as matrices and other algebras. Definitions (1), (2), and (4) all make sense for arbitrary Banach algebras.
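For instance, the power-series characterization (2) can be applied verbatim to a complex argument; a minimal sketch (standard Python, with `cmath.exp` only as a reference for comparison) is:

```python
import cmath

def exp_series(z, terms=40):
    """Power series of characterization (2), evaluated at a complex argument."""
    total, term = 0 + 0j, 1 + 0j
    for k in range(terms):
        total += term
        term *= z / (k + 1)
    return total

z = 1 + 1j
print(exp_series(z))   # ~(1.46869 + 2.28736j)
print(cmath.exp(z))    # built-in complex exponential, same value
```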
Proof that each characterization makes sense
Some of these definitions require justification to demonstrate that they are well-defined. For example, when the value of the function is defined as the result of a limiting process (i.e. an infinite sequence or series), it must be demonstrated that such a limit always exists.
Characterization 1
The error of the product limit expression is described by:<math display="block">\left(1+\frac x n \right)^n=e^x \left(1-\frac{x^2}{2n}+\frac{x^3(8+3x)}{24n^2}+\cdots \right),</math> where the polynomial multiplying the term with denominator <math>n^k</math> has degree <math>2k</math> in <math>x</math>.
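A minimal numerical sketch (Python, standard library only) of the leading error term <math>-x^2/(2n)</math>:

```python
import math

x = 1.0
for n in (10, 100, 1000):
    actual = (1 + x / n) ** n
    leading = math.exp(x) * (1 - x**2 / (2 * n))   # keep only the 1/n correction
    print(n, actual, leading)   # agreement improves like 1/n^2
```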
Characterization 2
Since <math display="block">\lim_{n\to\infty} \left|\frac{x^{n+1}/(n+1)!}{x^n/n!}\right| = \lim_{n\to\infty} \left|\frac{x}{n+1}\right| = 0 < 1,</math> it follows from the ratio test that <math display="inline">\sum_{n=0}^\infty \frac{x^n}{n!}</math> converges for all <math>x</math>.
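Even for large <math>x</math> the terms eventually decay rapidly; a quick sketch (Python) illustrating this for <math>x=10</math>:

```python
import math

x = 10.0
for n in range(0, 41, 5):
    print(n, x**n / math.factorial(n))   # terms grow at first, then shrink toward 0
```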
Characterization 3
Since the integrand is an integrable function of <math>t</math>, the integral expression is well-defined. It must be shown that the function from <math>\mathbb{R}^+</math> to <math>\mathbb{R}</math> defined by <math display="block">x \mapsto \int_1^x \frac{dt}{t}</math> is a bijection. Since <math>1/t</math> is positive for positive <math>t</math>, this function is strictly increasing, hence injective. If the two integrals <math display="block">\begin{align} \int_1^\infty \frac{dt} t & = \infty \\[8pt] \int_1^0 \frac{dt} t & = -\infty \end{align}</math> hold, then it is surjective as well. Indeed, these integrals do hold; they follow from the integral test and the divergence of the harmonic series.
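A minimal numerical sketch of this characterization (Python, standard library; the midpoint rule and the bracketing interval are arbitrary choices made for the example):

```python
import math

def ln_integral(y, steps=100000):
    """ln(y) as the integral of dt/t from 1 to y (midpoint rule; y > 0)."""
    h = (y - 1.0) / steps
    return sum(h / (1.0 + (i + 0.5) * h) for i in range(steps))

def exp_via_inverse(x, lo=1e-3, hi=1e3):
    """Invert the integral by bisection: the y with ln_integral(y) = x."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if ln_integral(mid, steps=20000) < x:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(ln_integral(2.0), math.log(2.0))   # both ~0.693147
print(exp_via_inverse(1.0), math.e)      # both ~2.71828
```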
Characterization 6
The definition depends on the unique positive real number <math>a=e</math> satisfying: <math display="block">\lim_{h \to 0} \frac{a^h - 1}{h} = 1.</math> This limit can be shown to exist for any <math>a>0</math>, and it defines a continuous increasing function <math>f(a)=\ln(a)</math> with <math>f(1)=0</math> and <math>\lim_{a\to\infty}f(a) = \infty</math>, so the intermediate value theorem guarantees the existence of such a value <math>a=e</math>.
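A minimal numerical sketch of this intermediate-value argument (Python; the finite step <math>h</math> and the bracket <math>[2,3]</math> are choices made only for the example):

```python
def slope_at_zero(a, h=1e-6):
    """Finite-h estimate of lim_{h->0} (a^h - 1)/h, i.e. of ln(a)."""
    return (a**h - 1.0) / h

print(slope_at_zero(2.0), slope_at_zero(3.0))   # ~0.693 and ~1.099: the slope crosses 1 in between

lo, hi = 2.0, 3.0
for _ in range(60):                              # bisect for the base where the slope equals 1
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if slope_at_zero(mid) < 1.0 else (lo, mid)
print(0.5 * (lo + hi))   # ~2.71828, the base e singled out by the limit
```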
Equivalence of the characterizations
The following arguments demonstrate the equivalence of the above characterizations for the exponential function.
Characterization 1 ⇔ characterization 2
The following argument is adapted from Rudin, Theorem 3.31, pp. 63–65.
Let <math>x \geq 0</math> be a fixed non-negative real number. Define <math display="block">t_n=\left(1+\frac x n \right)^n,\qquad s_n = \sum_{k=0}^n\frac{x^k}{k!},\qquad e^x = \lim_{n\to\infty} s_n.</math>
By the binomial theorem, <math display="block">\begin{align} t_n & =\sum_{k=0}^n{n \choose k}\frac{x^k}{n^k}=1+x+\sum_{k=2}^n\frac{n(n-1)(n-2)\cdots(n-(k-1))x^k}{k!\,n^k} \\[8pt] & = 1+x+\frac{x^2}{2!}\left(1-\frac{1}{n}\right)+\frac{x^3}{3!}\left(1-\frac{1}{n}\right)\left(1-\frac{2}{n}\right)+\cdots \\[8pt] & {}\qquad \cdots +\frac{x^n}{n!}\left(1-\frac{1}{n}\right)\cdots\left(1-\frac{n-1}{n}\right)\le s_n \end{align}</math> (using <math>x \geq 0</math> to obtain the final inequality) so that: <math display="block">\limsup_{n\to\infty}t_n \le \limsup_{n\to\infty}s_n = e^x.</math> One must use lim sup because it is not known if <math>t_n</math> converges.
For the other inequality, by the above expression for <math>t_n</math>, if <math>2 \le m \le n</math>, we have: <math display="block">1+x+\frac{x^2}{2!}\left(1-\frac{1}{n}\right)+\cdots+\frac{x^m}{m!}\left(1-\frac{1}{n}\right)\left(1-\frac{2}{n}\right)\cdots\left(1-\frac{m-1}{n}\right)\le t_n.</math>
Fix <math>m</math>, and let <math>n</math> approach infinity. Then <math display="block">s_m = 1+x+\frac{x^2}{2!}+\cdots+\frac{x^m}{m!} \le \liminf_{n\to\infty}\ t_n</math> (again, one must use lim inf because it is not known if <math>t_n</math> converges). Now, take the above inequality, let <math>m</math> approach infinity, and put it together with the other inequality to obtain: <math display="block">\limsup_{n\to\infty}t_n \le e^x \le \liminf_{n\to\infty}t_n </math> so that <math display="block">\lim_{n\to\infty}t_n = e^x. </math>
This equivalence can be extended to the negative real numbers by noting <math display="inline">\left(1 - \frac r n \right)^n \left(1+\frac{r}{n}\right)^n = \left(1-\frac{r^2}{n^2}\right)^n </math> and taking the limit as <math>n</math> goes to infinity: the right-hand side tends to 1, so <math display="inline">\lim_{n\to\infty}\left(1 - \frac r n \right)^n = 1/e^{r} = e^{-r}</math>.
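A minimal numerical sketch of this step (Python, standard library only):

```python
import math

r = 2.0
for n in (10, 100, 10000):
    print(n, (1 - r / n) ** n * (1 + r / n) ** n)   # equals (1 - r^2/n^2)^n, tending to 1

print((1 - r / 10**6) ** 10**6, math.exp(-r))       # hence (1 - r/n)^n -> e^{-r}
```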
Characterization 1 ⇔ characterization 3
Here, the natural logarithm function is defined in terms of a definite integral as above. By the first part of the fundamental theorem of calculus, <math display="block">\frac d {dx}\ln x=\frac{d}{dx} \int_1^x \frac1 t \,dt = \frac 1 x.</math>
Moreover, <math display="inline">\ln 1 = \int_1^1 \frac{dt}{t} = 0.</math>
Now, let x be any fixed real number, and let <math display="block">y=\lim_{n\to\infty}\left(1+\frac{x}{n}\right)^n.</math>
We will show that <math>\ln y = x</math>, which implies that <math>y = e^x</math>, where <math>e^x</math> is in the sense of definition 3. We have <math display="block">\ln y=\ln\lim_{n\to\infty}\left(1+\frac{x}{n} \right)^n = \lim_{n\to\infty} \ln\left(1+\frac{x}{n}\right)^n.</math>
Here, the continuity of ln(y) is used, which follows from the continuity of 1/t: <math display="block">\ln y=\lim_{n\to\infty}n\ln \left(1+\frac{x}{n} \right) = \lim_{n\to\infty} \frac{x\ln\left(1+(x/n)\right)}{(x/n)}.</math>
Here, the result <math>\ln a^n = n\ln a</math> has been used. This result can be established for <math>n</math> a natural number by induction, or using integration by substitution. (The extension to real powers must wait until <math>\ln</math> and <math>\exp</math> have been established as inverses of each other, so that <math>a^b</math> can be defined for real <math>b</math> as <math>e^{b\ln a}</math>.) <math display="block">=x\cdot\lim_{h\to 0}\frac{\ln\left(1+h\right)}{h} \quad \text{ where } h = \frac{x}{n}</math> <math display="block">=x\cdot\lim_{h\to 0}\frac{\ln\left(1+h\right)-\ln 1}{h}</math> <math display="block">=x\cdot\frac{d}{dt} \ln t \Bigg|_{t=1}</math> <math display="block">\!\, = x.</math>
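The key limit <math>\lim_{h\to 0}\ln(1+h)/h = 1</math> used above can be checked numerically (a small Python sketch):

```python
import math

for h in (0.1, 0.01, 0.0001):
    print(h, math.log(1 + h) / h)   # 0.9531..., 0.99503..., 0.99999...: tends to 1
```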
Characterization 1 ⇔ characterization 4
Let <math>y(t) </math> denote the solution to the initial value problem <math>y' = y,\ y(0) = 1</math>. Applying the simplest form of Euler's method with increment <math>\Delta t = \frac{x}{n}</math> and sample points <math>t \ =\ 0,\ \Delta t, \ 2 \Delta t, \ldots, \ n \Delta t </math> gives the recursive formula:
<math>y(t+\Delta t) \ \approx \ y(t) + y'(t)\Delta t \ =\ y(t) + y(t)\Delta t \ =\ y(t)\,(1+\Delta t).</math>
This recursion is immediately solved to give the approximate value <math>y(x) = y(n\Delta t) \approx (1+\Delta t)^n</math>, and since Euler's method is known to converge to the exact solution as <math>n \to \infty</math>, we have:
<math>y(x) = \lim_{n\to\infty}\left(1+\frac{x}{n}\right)^n. </math>
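A minimal sketch of this Euler iteration (Python; the step count is an arbitrary choice for the example):

```python
import math

def euler_exp(x, n=100000):
    """Euler's method for y' = y, y(0) = 1, with n steps of size x/n."""
    dt, y = x / n, 1.0
    for _ in range(n):
        y *= 1.0 + dt        # y(t + dt) ~ y(t)(1 + dt)
    return y

x = 2.0
print(euler_exp(x))                  # ~7.38891
print((1 + x / 100000) ** 100000)    # the same product, written as a power
print(math.exp(x))                   # exact solution, ~7.38906
```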
Characterization 2 ⇔ characterization 4
Let n be a non-negative integer. In the sense of definition 4 and by induction, <math>\frac{d^ny}{dx^n}=y</math>.
Therefore <math>\frac{d^ny}{dx^n}\Bigg|_{x=0}=y(0)=1.</math>
Using the Taylor series of <math>y</math>, <math display="block">y= \sum_{n=0}^\infty \frac {y^{(n)}(0)}{n!} \, x^n = \sum_{n=0}^\infty \frac {1}{n!} \, x^n = \sum_{n=0}^\infty \frac {x^n}{n!}.</math> This shows that definition 4 implies definition 2.
In the sense of definition 2, <math display="block">\begin{align} \frac{d}{dx}e^x & = \frac{d}{dx} \left(1+\sum_{n=1}^\infty \frac {x^n}{n!} \right) = \sum_{n=1}^\infty \frac {nx^{n-1}}{n!} =\sum_{n=1}^\infty \frac {x^{n-1}}{(n-1)!} \\[6pt] & =\sum_{k=0}^\infty \frac {x^k}{k!}, \text{ where } k=n-1 \\[6pt] & =e^x \end{align}</math>
Moreover, <math display="inline">e^0 = 1 + 0 + \frac{0^2}{2!} + \frac{0^3}{3!} + \cdots = 1.</math> This shows that definition 2 implies definition 4.
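The term-by-term differentiation used here amounts to the coefficient identity <math>(k+1)\cdot\tfrac{1}{(k+1)!} = \tfrac{1}{k!}</math>; a minimal exact check (Python, using Fraction to avoid rounding):

```python
from fractions import Fraction
from math import factorial

N = 12
coeffs = [Fraction(1, factorial(n)) for n in range(N)]      # series coefficients 1/n!
deriv = [(k + 1) * coeffs[k + 1] for k in range(N - 1)]     # coefficients after d/dx
print(deriv == coeffs[:N - 1])   # True: the series is its own derivative
```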
Characterization 2 ⇒ characterization 5
In the sense of definition 2, the equation <math>\exp(x+y)= \exp(x)\exp(y)</math> follows from the term-by-term manipulation of power series justified by uniform convergence, and the resulting equality of coefficients is just the binomial theorem. Furthermore: <math display="block">\begin{align} \exp'(0) & = \lim_{h\to 0} \frac{e^h-1}{h} \\
& =\lim_{h\to 0} \frac{1}{h} \left (\left (1+h+ \frac{h^2}{2!}+\frac{h^3}{3!}+\frac{h^4}{4!}+\cdots \right) -1 \right) \\ & =\lim_{h\to 0} \left(1+ \frac{h}{2!}+\frac{h^2}{3!}+\frac{h^3}{4!}+\cdots \right) \ =\ 1.\\
\end{align}</math>
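The coefficient identity behind <math>\exp(x+y)=\exp(x)\exp(y)</math> is <math display="inline">\sum_{j=0}^n \frac{x^j}{j!}\frac{y^{n-j}}{(n-j)!} = \frac{(x+y)^n}{n!}</math>; a minimal numerical check (Python):

```python
from math import factorial

x, y, n = 2.0, 3.0, 6
lhs = sum(x**j / factorial(j) * y**(n - j) / factorial(n - j) for j in range(n + 1))
rhs = (x + y) ** n / factorial(n)
print(lhs, rhs)   # both ~21.7014: the binomial theorem in coefficient form
```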
Characterization 3 ⇔ characterization 4
Characterization 3 first defines the natural logarithm:<math display="block">\log x \ \ \stackrel{\text{def}}{=}\ \int_{1}^{x}\! \frac{dt}{t},</math>then <math>\exp</math> as the inverse function with <math display="inline">x=\log(\exp x) </math>. Then by the chain rule:
<math>1=\frac{d}{dx}[ \log(\exp(x)) ] = \log'(\exp(x))\cdot \exp'(x) = \frac{\exp'(x)}{\exp(x)}, </math>
i.e. <math>\exp'(x)=\exp(x) </math>. Finally, <math>\log(1) = 0 </math>, so <math>\exp'(0) = \exp(0) = 1 </math>. That is, <math>y=\exp(x)</math> is the unique solution of the initial value problem <math>\frac{dy}{dx} = y </math>, <math>y(0)=1 </math> of characterization 4. Conversely, assume <math>y=\exp(x) </math> has <math>\exp'(x)=\exp(x) </math> and <math>\exp(0)=1 </math>, and define <math>\log(x) </math> as its inverse function with <math>x = \exp(\log x) </math> and <math>\log(1) = 0 </math>. Then:
<math>1=\frac{d}{dx}[ \exp(\log(x)) ] = \exp'(\log(x))\cdot \log'(x) = \exp(\log(x))\cdot \log'(x) = x\cdot \log'(x), </math>
i.e. <math>\log'(x)=\frac{1}{x} </math>. By the fundamental theorem of calculus, <math display="block">\int_{1}^{x}\frac{1}{t}\, dt = \log(x) - \log(1) = \log(x). </math>
Characterization 5 ⇒ characterization 4
The conditions <math>f'(0)=1</math> and <math>f(x+y)=f(x)f(y)</math> imply both conditions in characterization 4. Indeed, one gets the initial condition <math>f(0)=1</math> by dividing both sides of the equation <math display="block">f(0) = f(0 + 0) = f(0) f(0)</math> by <math>f(0)</math> (which is nonzero: if <math>f(0)=0</math>, then <math>f(x)=f(x)f(0)=0</math> for all <math>x</math>, contradicting <math>f'(0)=1</math>), and the condition that <math>f'(x)=f(x)</math> follows from the condition that <math>f'(0)=1</math> and the definition of the derivative as follows: <math display="block"> \begin{array}{rcccccc} f'(x) & = & \lim\limits_{h\to 0}\frac{f(x+h)-f(x)} h
& = & \lim\limits_{h\to 0}\frac{f(x)f(h)-f(x)} h & = & \lim\limits_{h\to 0}f(x)\frac{f(h)-1} h
\\[1em]
& = & f(x)\lim\limits_{h\to 0}\frac{f(h)-1} h & = & f(x)\lim\limits_{h\to 0}\frac{f(0+h)-f(0)} h & = & f(x)f'(0) = f(x).
\end{array} </math>
Characterization 5 ⇔ characterization 6
By inductively applying the multiplication rule, we get: <math display="block">f\left(\frac{n}{m}\right)^m=f\left(\frac{n}{m}+\cdots+\frac{n}{m} \right)=f(n)=f(1)^n,</math> and thus <math display="block">f\left(\frac{n}{m}\right)=\sqrt[m]{f(1)^n}\ \stackrel{\text{def}}=\ a^{n/m}</math> for <math>a=f(1)</math>. Then the condition <math>f'(0)=1</math> means that <math>\lim_{h\to 0}\tfrac{a^h-1}{h}=1</math>, so <math>a=e</math> by definition.
Also, any of the regularity conditions of definition 5 imply that <math>f(x)</math> is continuous at all real <math>x</math> (see below). The converse is similar.
Characterization 5 ⇒ characterization 6
Let <math>f(x)</math> be a Lebesgue-integrable non-zero function satisfying the multiplicative property <math>f(x+y)=f(x)f(y)</math> with <math>f(1) = e</math>. Following Hewitt and Stromberg, exercise 18.46, we will prove that Lebesgue-integrability implies continuity. This is sufficient to imply <math>f(x) = e^x</math> according to characterization 6, arguing as above.
First, a few elementary properties:
- If <math>f(x)</math> is nonzero anywhere (say at <math>x=y </math>), then it is non-zero everywhere. Proof: <math>f(y) = f(x) f(y - x) \neq 0</math> implies <math>f(x) \neq 0</math>.
- <math>f(0)=1</math>. Proof: <math>f(x)= f(x+0) = f(x) f(0)</math> and <math>f(x)</math> is non-zero.
- <math>f(-x)=1/f(x)</math>. Proof: <math>1 = f(0)= f(x-x) = f(x) f(-x)</math>.
- If <math>f(x)</math> is continuous anywhere (say at <math>x=y </math>), then it is continuous everywhere. Proof: <math>f(x+\delta) - f(x) = f(x-y) [ f(y+\delta) - f(y)] \to 0</math> as <math>\delta \to 0</math> by continuity at <math>y </math>.
The second and third properties mean that it is sufficient to prove <math>f(x)=e^x</math> for positive x.
Since <math>f(x)</math> is a Lebesgue-integrable function, we may define <math display="inline">g(x) = \int_0^x f(t)\, dt </math>. It then follows that <math display="block">g(x+y)-g(x) = \int_x^{x+y} f(t)\, dt = \int_0^y f(x+t)\, dt = f(x) g(y). </math>
Since <math>f(x)</math> is nonzero, some <math>y</math> can be chosen such that <math>g(y) \neq 0</math>, and we may solve for <math>f(x)</math> in the above expression. Therefore: <math display="block">\begin{align} f(x+\delta)-f(x) & = \frac{[g(x+\delta+y)-g(x+\delta)]-[g(x+y)-g(x)]}{g(y)} \\ & =\frac{[g(x+y+\delta)-g(x+y)]-[g(x+\delta)-g(x)]}{g(y)} \\ & =\frac{f(x+y)g(\delta)-f(x)g(\delta)}{g(y)}=g(\delta)\frac{f(x+y)-f(x)}{g(y)}. \end{align}</math>
The final expression must go to zero as <math>\delta \to 0</math> since <math>g(0)=0</math> and <math>g(x)</math> is continuous. It follows that <math>f(x)</math> is continuous.
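As a concrete instance of the key identity <math>g(x+y)-g(x)=f(x)\,g(y)</math>, here is a small sketch (Python) taking <math>f(t)=e^t</math>, so that <math>g(x)=e^x-1</math>:

```python
import math

f = math.exp                         # a multiplicative f: f(x + y) = f(x) f(y)
g = lambda t: math.exp(t) - 1.0      # g(t) = integral_0^t f(s) ds

x, y = 0.7, 1.3
print(g(x + y) - g(x))   # ~5.3753
print(f(x) * g(y))       # same value, as the identity predicts
```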
References
- Walter Rudin, Principles of Mathematical Analysis, 3rd edition (McGraw–Hill, 1976), chapter 8.
- Edwin Hewitt and Karl Stromberg, Real and Abstract Analysis (Springer, 1965).