[[Image:ConvexFunction.svg|thumb|300px|right|Convex function on an [[interval (mathematics)|interval]].]]{{Use American English|date = March 2019}} {{Short description|Real function with secant line between points above the graph itself}} [[Image:Epigraph convex.svg|right|thumb|300px|A function (in black) is convex if and only if the region above its [[Graph of a function|graph]] (in green) is a [[convex set]].]] [[Image:Grafico 3d x2+xy+y2.png|right|300px|thumb|A graph of the [[polynomial#Number of variables|bivariate]] convex function {{nowrap| ''x''<sup>2</sup> + ''xy'' + ''y''<sup>2</sup>}}.]] [[File:Convex vs. Not-convex.jpg|thumb|right|300px|Convex vs. Not convex]] In [[mathematics]], a [[real-valued function]] is called '''convex''' if the [[line segment]] between any two distinct points on the [[graph of a function|graph of the function]] lies above or on the graph between the two points. Equivalently, a function is convex if its [[epigraph (mathematics)|''epigraph'']] (the set of points on or above the graph of the function) is a [[convex set]]. In simple terms, a convex function graph is shaped like a cup <math>\cup</math> (or a straight line like a linear function), while a [[concave function]]'s graph is shaped like a cap <math>\cap</math>. A twice-[[differentiable function|differentiable]] function of a single variable is convex [[if and only if]] its [[second derivative]] is nonnegative on its entire [[domain of a function|domain]].<ref>{{Cite web|url=https://www.stat.cmu.edu/~larry/=stat705/Lecture2.pdf |title=Lecture Notes 2|website=www.stat.cmu.edu|access-date=3 March 2017}}</ref> Well-known examples of convex functions of a single variable include a [[linear function]] <math>f(x) = cx</math> (where <math>c</math> is a [[real number]]), a [[quadratic function]] <math>cx^2</math> (<math>c</math> as a nonnegative real number) and an [[exponential function]] <math>ce^x</math> (<math>c</math> as a nonnegative real number). 
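The chord condition in this definition lends itself to a numeric illustration. The following Python sketch (the helper `chord_above_graph` is illustrative, not a standard library function) samples the inequality along one chord for a convex and a non-convex example:

```python
def chord_above_graph(f, x1, x2, steps=100, tol=1e-12):
    """Sample the convexity inequality f(t*x1 + (1-t)*x2) <= t*f(x1) + (1-t)*f(x2)."""
    for i in range(steps + 1):
        t = i / steps
        if f(t * x1 + (1 - t) * x2) > t * f(x1) + (1 - t) * f(x2) + tol:
            return False
    return True

convex = lambda x: x ** 2    # cup-shaped: chords lie on or above the graph
concave = lambda x: -x ** 2  # cap-shaped: chords lie below the graph

print(chord_above_graph(convex, -1.5, 2.0))   # True
print(chord_above_graph(concave, -1.5, 2.0))  # False
```

Note that a sampled check like this can refute convexity on the tested chord but cannot prove convexity of the function.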
Convex functions play an important role in many areas of mathematics. They are especially important in the study of [[optimization]] problems where they are distinguished by a number of convenient properties. For instance, a strictly convex function on an [[open set]] has no more than one [[maximum and minimum|minimum]]. Even in infinite-dimensional spaces, under suitable additional hypotheses, convex functions continue to satisfy such properties and as a result, they are the most well-understood functionals in the [[calculus of variations]]. In [[probability theory]], a convex function applied to the [[expected value]] of a [[random variable]] is always bounded above by the expected value of the convex function of the random variable. This result, known as [[Jensen's inequality]], can be used to deduce [[inequality (mathematics)|inequalities]] such as the [[inequality of arithmetic and geometric means|arithmetic–geometric mean inequality]] and [[Hölder's inequality]]. ==Definition== [[File:Convex 01.ogg|thumb|right|Visualizing a convex function and Jensen's Inequality]] Let <math>X</math> be a [[Convex set|convex subset]] of a real [[vector space]] and let <math>f: X \to \R</math> be a function. Then <math>f</math> is called '''{{em|convex}}''' if and only if any of the following equivalent conditions hold: <ol start=1> <li>For all <math>0 \leq t \leq 1</math> and all <math>x_1, x_2 \in X</math>: <math display=block>f\left(t x_1 + (1-t) x_2\right) \leq t f\left(x_1\right) + (1-t) f\left(x_2\right)</math> The right hand side represents the straight line between <math>\left(x_1, f\left(x_1\right)\right)</math> and <math>\left(x_2, f\left(x_2\right)\right)</math> in the graph of <math>f</math> as a function of <math>t;</math> increasing <math>t</math> from <math>0</math> to <math>1</math> or decreasing <math>t</math> from <math>1</math> to <math>0</math> sweeps this line. 
Similarly, the argument of the function <math>f</math> in the left hand side represents the straight line between <math>x_1</math> and <math>x_2</math> in <math>X</math> or the <math>x</math>-axis of the graph of <math>f.</math> So, this condition requires that the straight line between any pair of points on the curve of <math>f</math> lie above or on the graph.<ref>{{Cite web|last=|first=|date=|title=Concave Upward and Downward|url=https://www.mathsisfun.com/calculus/concave-up-down-convex.html|url-status=live|archive-url=https://web.archive.org/web/20131218034748/http://www.mathsisfun.com:80/calculus/concave-up-down-convex.html |archive-date=2013-12-18 |access-date=|website=}}</ref> </li> <li>For all <math>0 < t < 1</math> and all <math>x_1, x_2 \in X</math> such that <math>x_1\neq x_2</math>: <math display=block>f\left(t x_1 + (1-t) x_2\right) \leq t f\left(x_1\right) + (1-t) f\left(x_2\right)</math> This second condition differs from the first only in that it excludes the cases where the chord and the graph necessarily meet, namely the endpoints <math>\left(x_1, f\left(x_1\right)\right)</math> and <math>\left(x_2, f\left(x_2\right)\right)</math>: at <math>t = 0,</math> <math>t = 1,</math> or <math>x_1 = x_2,</math> the first condition reduces to <math>f\left(x_1\right) \leq f\left(x_1\right)</math> or <math>f\left(x_2\right) \leq f\left(x_2\right),</math> which always holds and so carries no information. The two conditions are therefore equivalent. </li> </ol> The second statement characterizing convex functions that are valued in the real line <math>\R</math> is also the statement used to define '''{{em|convex functions}}''' that are valued in the [[extended real number line]] <math>[-\infty, \infty] = \R \cup \{\pm\infty\},</math> where such a function <math>f</math> is allowed to take <math>\pm\infty</math> as a value. The first statement is not used because it permits <math>t</math> to take <math>0</math> or <math>1</math> as a value, in which case, if <math>f\left(x_1\right) = \pm\infty</math> or <math>f\left(x_2\right) = \pm\infty,</math> respectively, then <math>t f\left(x_1\right) + (1 - t) f\left(x_2\right)</math> would be undefined (because the multiplications <math>0 \cdot \infty</math> and <math>0 \cdot (-\infty)</math> are undefined). The sum <math>-\infty + \infty</math> is also undefined, so a convex extended real-valued function is typically only allowed to take exactly one of <math>-\infty</math> and <math>+\infty</math> as a value. The second statement can also be modified to get the definition of {{em|strict convexity}}, which is obtained by replacing <math>\,\leq\,</math> with the strict inequality <math>\,<.</math> Explicitly, the map <math>f</math> is called '''{{em|strictly convex}}''' if and only if for all real <math>0 < t < 1</math> and all <math>x_1, x_2 \in X</math> such that <math>x_1 \neq x_2</math>: <math display=block>f\left(t x_1 + (1-t) x_2\right) < t f\left(x_1\right) + (1-t) f\left(x_2\right)</math> A strictly convex function <math>f</math> is thus one for which the straight line between any pair of points on the curve of <math>f</math> lies above the curve everywhere except at the endpoints, where the line and the curve meet. An example of a function which is convex but not strictly convex is <math>f(x,y) = x^2 + y</math>. 
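This example can be checked numerically. In the Python sketch below (the helper name `chord_gap` is illustrative), the gap between the chord and the graph vanishes along directions where the function is linear and is strictly positive otherwise:

```python
def f(x, y):
    return x ** 2 + y

def chord_gap(p1, p2, t):
    """Chord height minus function value at the convex combination t*p1 + (1-t)*p2."""
    x = t * p1[0] + (1 - t) * p2[0]
    y = t * p1[1] + (1 - t) * p2[1]
    return t * f(*p1) + (1 - t) * f(*p2) - f(x, y)

# Same x coordinate: f is linear in y, so the chord lies on the graph (gap = 0).
print(chord_gap((1.0, 0.0), (1.0, 5.0), 0.5))  # 0.0

# Distinct x coordinates: the chord lies strictly above the graph (gap > 0).
print(chord_gap((0.0, 0.0), (2.0, 0.0), 0.5))  # 1.0
```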
This function is not strictly convex because it is linear in <math>y</math>: the chord between any two points sharing an <math>x</math> coordinate lies on the graph rather than strictly above it, whereas the chord between any two points with distinct <math>x</math> coordinates does lie strictly above the graph between them. The function <math>f</math> is said to be '''{{em|[[Concave function|concave]]}}''' (resp. '''{{em|strictly concave}}''') if <math>-f</math> (<math>f</math> multiplied by −1) is convex (resp. strictly convex). ==Alternative naming== The term ''convex'' is often referred to as ''convex down'' or ''concave upward'', and the term [[Concave function |concave]] is often referred to as ''concave down'' or ''convex upward''.<ref>{{Cite book|last=Stewart|first=James|title=Calculus|publisher=Cengage Learning|year=2015|isbn=978-1305266643|edition=8th|pages=223–224}}</ref><ref>{{cite book|last1=W. Hamming|first1=Richard|url=https://books.google.com/books?id=WLIbeA1aWvUC|title=Methods of Mathematics Applied to Calculus, Probability, and Statistics|publisher=Courier Corporation|year=2012|isbn=978-0-486-13887-9|edition=illustrated|page=227}} [https://books.google.com/books?id=WLIbeA1aWvUC&pg=PA227 Extract of page 227]</ref><ref>{{cite book|last1=Uvarov|first1=Vasiliĭ Borisovich|url=https://books.google.com/books?id=GzQnAQAAIAAJ|title=Mathematical Analysis|publisher=Mir Publishers|year=1988|isbn=978-5-03-000500-3|edition=|page=126-127}}</ref> If the term "convex" is used without an "up" or "down" keyword, then it refers strictly to a cup shaped graph <math>\cup</math>. 
As an example, [[Jensen's inequality]] refers to an inequality involving a convex (i.e., convex-down) function.<ref>{{cite book |title=The Probability Companion for Engineering and Computer Science |edition=illustrated |first1=Adam |last1=Prügel-Bennett |publisher=Cambridge University Press |year=2020 |isbn=978-1-108-48053-6 |page=160 |url=https://books.google.com/books?id=coHCDwAAQBAJ}} [https://books.google.com/books?id=coHCDwAAQBAJ&pg=PA160 Extract of page 160]</ref> == Properties == Many properties of convex functions have the same simple formulation for functions of many variables as for functions of one variable. The properties for the case of many variables are listed below, as some of them are not repeated for functions of one variable. === Functions of one variable === * Suppose <math>f</math> is a function of one [[real number|real]] variable defined on an interval, and let <math display=block>R(x_1, x_2) = \frac{f(x_2) - f(x_1)}{x_2 - x_1}</math> (note that <math>R(x_1, x_2)</math> is the slope of the purple line in the first drawing; the function <math>R</math> is [[Symmetric function|symmetric]] in <math>(x_1, x_2),</math> meaning that <math>R</math> does not change when <math>x_1</math> and <math>x_2</math> are exchanged). <math>f</math> is convex if and only if <math>R(x_1, x_2)</math> is [[monotonically non-decreasing]] in <math>x_1,</math> for every fixed <math>x_2</math> (or vice versa). This characterization of convexity is quite useful for proving the following results. * A convex function <math>f</math> of one real variable defined on some [[open interval]] <math>C</math> is [[Continuous function|continuous]] on <math>C</math>. Moreover, <math>f</math> admits [[Semi-differentiability|left and right derivatives]], and these are [[monotonically non-decreasing]]. In addition, the left derivative is left-continuous and the right derivative is right-continuous. 
As a consequence, <math>f</math> is [[differentiable function|differentiable]] at all but at most [[countable|countably many]] points; the set on which <math>f</math> is not differentiable can, however, still be dense. If <math>C</math> is closed, then <math>f</math> may fail to be continuous at the endpoints of <math>C</math> (an example is shown in the [[#Examples|examples section]]). * A [[differentiable function|differentiable]] function of one variable is convex on an interval if and only if its [[derivative]] is [[monotonically non-decreasing]] on that interval. If a function is differentiable and convex then it is also [[continuously differentiable]]. * A differentiable function of one variable is convex on an interval if and only if its graph lies above all of its [[tangent]]s:<ref name="boyd">{{cite book| title=Convex Optimization| first1=Stephen P.|last1=Boyd |first2=Lieven| last2=Vandenberghe | year = 2004 |publisher=Cambridge University Press| isbn=978-0-521-83378-3| url= https://web.stanford.edu/~boyd/cvxbook/bv_cvxbook.pdf#page=83 |format=pdf | access-date=October 15, 2011}}</ref>{{rp|69}} <math display=block>f(x) \geq f(y) + f'(y) (x-y)</math> for all <math>x</math> and <math>y</math> in the interval. * A twice differentiable function of one variable is convex on an interval if and only if its [[second derivative]] is non-negative there; this gives a practical test for convexity. Visually, a twice differentiable convex function "curves up", without any bends the other way ([[inflection point]]s). If its second derivative is positive at all points then the function is strictly convex, but the [[converse (logic)|converse]] does not hold. For example, the second derivative of <math>f(x) = x^4</math> is <math>f''(x) = 12x^{2}</math>, which is zero for <math>x = 0,</math> but <math>x^4</math> is strictly convex. **This property and the above property in terms of "...its derivative is monotonically non-decreasing..." 
are not equivalent: if <math>f''</math> is non-negative on an interval <math>X</math> then <math>f'</math> is monotonically non-decreasing on <math>X</math>, but the converse is not true; for example, <math>f'</math> may be monotonically non-decreasing on <math>X</math> while <math>f''</math> is not defined at some points of <math>X</math>. * If <math>f</math> is a convex function of one real variable, and <math>f(0)\le 0</math>, then <math>f</math> is [[Superadditivity|superadditive]] on the [[positive reals]], that is, <math>f(a + b) \geq f(a) + f(b)</math> for positive real numbers <math>a</math> and <math>b</math>. {{math proof|proof= Since <math>f</math> is convex, by using one of the convex function definitions above and letting <math>x_2 = 0,</math> it follows that for all real <math>0 \leq t \leq 1,</math> <math display=block> \begin{align} f(tx_1) & = f(t x_1 + (1-t) \cdot 0) \\ & \leq t f(x_1) + (1-t) f(0) \\ & \leq t f(x_1). \\ \end{align} </math> From <math>f(tx_1)\leq t f(x_1)</math>, it follows that <math display=block> \begin{align} f(a) + f(b) & = f \left((a+b) \frac{a}{a+b} \right) + f \left((a+b) \frac{b}{a+b} \right) \\ & \leq \frac{a}{a+b} f(a+b) + \frac{b}{a+b} f(a+b) \\ & = f(a+b).\\ \end{align}</math> Namely, <math>f(a) + f(b) \leq f(a+b)</math>. }} * A function <math>f</math> is midpoint convex on an interval <math>C</math> if for all <math>x_1, x_2 \in C</math> <math display=block>f\!\left(\frac{x_1 + x_2}{2}\right) \leq \frac{f(x_1) + f(x_2)}{2}.</math> This condition is only slightly weaker than convexity. 
For example, a real-valued [[Lebesgue measurable function]] that is midpoint-convex is convex: this is a theorem of [[Wacław Sierpiński|Sierpiński]].<ref>{{cite book|last=Donoghue|first=William F.| title= Distributions and Fourier Transforms|year=1969|publisher=Academic Press | isbn=9780122206504 |url= https://books.google.com/books?id=P30Y7daiGvQC&pg=PA12|access-date=August 29, 2012|page=12}}</ref> In particular, a continuous function that is midpoint convex will be convex. === Functions of several variables === * A function that is marginally convex in each individual variable is not necessarily (jointly) convex. For example, the function <math>f(x, y) = x y</math> is [[bilinear map|marginally linear]], and thus marginally convex, in each variable, but not (jointly) convex. * A function <math>f : X \to [-\infty, \infty]</math> valued in the [[extended real numbers]] <math>[-\infty, \infty] = \R \cup \{\pm\infty\}</math> is convex if and only if its [[Epigraph (mathematics)|epigraph]] <math display=block>\{(x, r) \in X \times \R ~:~ r \geq f(x)\}</math> is a convex set. * A differentiable function <math>f</math> defined on a convex domain is convex if and only if <math>f(x) \geq f(y) + \nabla f(y)^T \cdot (x-y)</math> holds for all <math>x, y</math> in the domain. * A twice differentiable function of several variables is convex on a convex set if and only if its [[Hessian matrix]] of second [[partial derivative]]s is [[Positive-definite matrix|positive semidefinite]] on the interior of the convex set. * For a convex function <math>f,</math> the [[sublevel set]]s <math>\{x : f(x) < a\}</math> and <math>\{x : f(x) \leq a\}</math> with <math>a \in \R</math> are convex sets. A function that satisfies this property is called a '''{{em|[[quasiconvex function]]}}''' and may fail to be a convex function. * Consequently, the set of [[Arg min|global minimisers]] of a convex function <math>f</math> is a convex set: <math>{\operatorname{argmin}}\,f</math> is convex. 
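The Hessian criterion above can be sampled numerically. A minimal Python/NumPy sketch follows (the helper `hessian_psd_on_samples` is illustrative, and a sampled check can refute but not prove convexity):

```python
import numpy as np

def hessian_psd_on_samples(hess, samples, tol=1e-10):
    """Check that the smallest Hessian eigenvalue is >= 0 at each sample point."""
    return all(np.linalg.eigvalsh(hess(p)).min() >= -tol for p in samples)

# f(x, y) = x**2 + x*y + y**2 has constant Hessian [[2, 1], [1, 2]] (convex);
# g(x, y) = x*y has constant Hessian [[0, 1], [1, 0]] (not convex).
pts = [np.array([x, y]) for x in (-1.0, 0.0, 2.0) for y in (-1.0, 1.0)]
print(hessian_psd_on_samples(lambda p: np.array([[2.0, 1.0], [1.0, 2.0]]), pts))  # True
print(hessian_psd_on_samples(lambda p: np.array([[0.0, 1.0], [1.0, 0.0]]), pts))  # False
```

The second example is the marginally linear but not jointly convex function <math>f(x, y) = x y</math> mentioned above: its Hessian has eigenvalues −1 and 1.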
* Any [[local minimum]] of a convex function is also a [[global minimum]]. A {{em|strictly}} convex function will have at most one global minimum.<ref>{{cite web | url=https://math.stackexchange.com/q/337090 | title=If f is strictly convex in a convex set, show it has no more than 1 minimum | publisher=Math StackExchange | date=21 Mar 2013 | access-date=14 May 2016}}</ref> * [[Jensen's inequality]] applies to every convex function <math>f</math>. If <math>X</math> is a random variable taking values in the domain of <math>f,</math> then <math>\operatorname{E}(f(X)) \geq f(\operatorname{E}(X)),</math> where <math>\operatorname{E}</math> denotes the [[Expected value|mathematical expectation]]. Indeed, convex functions are exactly those that satisfy the hypothesis of [[Jensen's inequality]]. * A first-order [[homogeneous function]] of two positive variables <math>x</math> and <math>y</math> (that is, a function satisfying <math>f(a x, a y) = a f(x, y)</math> for all positive real <math>a, x, y > 0</math>) that is convex in one variable must be convex in the other variable.<ref>Altenberg, L., 2012. Resolvent positive linear operators exhibit the reduction phenomenon. Proceedings of the National Academy of Sciences, 109(10), pp.3705-3710.</ref> ==Operations that preserve convexity== * <math>-f</math> is concave if and only if <math>f</math> is convex. * If <math>r</math> is any real number then <math>r + f</math> is convex if and only if <math>f</math> is convex. * Nonnegative weighted sums: **if <math>w_1, \ldots, w_n \geq 0</math> and <math>f_1, \ldots, f_n</math> are all convex, then so is <math>w_1 f_1 + \cdots + w_n f_n.</math> In particular, the sum of two convex functions is convex. **this property extends to infinite sums, integrals and expected values as well (provided that they exist). * Elementwise maximum: let <math>\{f_i\}_{i \in I}</math> be a collection of convex functions. Then <math>g(x) = \sup\nolimits_{i \in I} f_i(x)</math> is convex. 
The domain of <math>g(x)</math> is the collection of points where the expression is finite. Important special cases: **If <math>f_1, \ldots, f_n</math> are convex functions then so is <math>g(x) = \max \left\{f_1(x), \ldots, f_n(x)\right\}.</math> **[[Danskin's theorem]]: If <math>f(x,y)</math> is convex in <math>x</math> then <math>g(x) = \sup\nolimits_{y\in C} f(x,y)</math> is convex in <math>x</math> even if <math>C</math> is not a convex set. * Composition: **If <math>f</math> and <math>g</math> are convex functions and <math>g</math> is non-decreasing over a univariate domain, then <math>h(x) = g(f(x))</math> is convex. For example, if <math>f</math> is convex, then so is <math>e^{f(x)}</math> because <math>e^x</math> is convex and monotonically increasing. **If <math>f</math> is concave and <math>g</math> is convex and non-increasing over a univariate domain, then <math>h(x) = g(f(x))</math> is convex. **Convexity is invariant under affine maps: that is, if <math>f</math> is convex with domain <math>D_f \subseteq \mathbf{R}^m</math>, then so is <math>g(x) = f(Ax+b)</math>, where <math>A \in \mathbf{R}^{m \times n}, b \in \mathbf{R}^m</math> with domain <math>D_g \subseteq \mathbf{R}^n.</math> * Minimization: If <math>f(x,y)</math> is convex in <math>(x,y)</math> then <math>g(x) = \inf\nolimits_{y\in C} f(x,y)</math> is convex in <math>x,</math> provided that <math>C</math> is a convex set and that <math>g(x) \neq -\infty.</math> * If <math>f</math> is convex, then its perspective <math>g(x, t) = t f \left(\tfrac{x}{t} \right)</math> with domain <math>\left\{(x,t) : \tfrac{x}{t} \in \operatorname{Dom}(f), t > 0 \right\}</math> is convex. * Let <math>X</math> be a vector space. 
<math>f : X \to \mathbf{R}</math> is convex and satisfies <math>f(0) \leq 0</math> if and only if <math>f(ax+by) \leq a f(x) + bf(y)</math> for any <math>x, y \in X</math> and any non-negative real numbers <math>a, b</math> that satisfy <math>a + b \leq 1.</math> ==Strongly convex functions== The concept of strong convexity extends and parametrizes the notion of strict convexity. Intuitively, a strongly-convex function is a function that grows as fast as a quadratic function.<ref>{{Cite web |title=Strong convexity · Xingyu Zhou's blog |url=https://xingyuzhou.org/blog/notes/strong-convexity |access-date=2023-09-27 |website=xingyuzhou.org}}</ref> A strongly convex function is also strictly convex, but not vice versa. If a one-dimensional function <math>f</math> is twice continuously differentiable and the domain is the real line, then we can characterize it as follows: *<math>f</math> convex if and only if <math>f''(x) \ge 0</math> for all <math>x.</math> *<math>f</math> strictly convex if <math>f''(x) > 0</math> for all <math>x</math> (note: this is sufficient, but not necessary). *<math>f</math> strongly convex if and only if <math>f''(x) \ge m > 0</math> for all <math>x.</math> For example, let <math>f</math> be strictly convex, and suppose there is a sequence of points <math>(x_n)</math> such that <math>f''(x_n) = \tfrac{1}{n}</math>. Even though <math>f''(x_n) > 0</math>, the function is not strongly convex because <math>f''(x)</math> will become arbitrarily small. More generally, a differentiable function <math>f</math> is called strongly convex with parameter <math>m > 0</math> if the following inequality holds for all points <math>x, y</math> in its domain:<ref name="bertsekas">{{cite book|page=[https://archive.org/details/convexanalysisop00bert_476/page/n87 72]|title=Convex Analysis and Optimization|url=https://archive.org/details/convexanalysisop00bert_476|url-access=limited|author=Dimitri Bertsekas| others= Contributors: Angelia Nedic and Asuman E. 
Ozdaglar|publisher=Athena Scientific|year=2003|isbn=9781886529458}}</ref><math display="block">(\nabla f(x) - \nabla f(y) )^T (x-y) \ge m \|x-y\|_2^2 </math> or, more generally, <math display=block>\langle \nabla f(x) - \nabla f(y), x-y \rangle \ge m \|x-y\|^2 </math> where <math>\langle \cdot, \cdot\rangle</math> is any [[inner product]], and <math>\|\cdot\|</math> is the corresponding [[Norm (mathematics)|norm]]. Some authors, such as <ref name="ciarlet">{{cite book| title=Introduction to numerical linear algebra and optimisation|author=Philippe G. Ciarlet|publisher=Cambridge University Press |year=1989 |isbn=9780521339841}}</ref> refer to functions satisfying this inequality as [[Elliptic operator|elliptic]] functions. An equivalent condition is the following:<ref name="nesterov">{{cite book|pages=[https://archive.org/details/introductorylect00nest/page/n79 63]–64|title=Introductory Lectures on Convex Optimization: A Basic Course|url=https://archive.org/details/introductorylect00nest|url-access=limited|author=Yurii Nesterov|publisher=Kluwer Academic Publishers|year=2004|isbn=9781402075537}}</ref> <math display=block>f(y) \ge f(x) + \nabla f(x)^T (y-x) + \frac{m}{2} \|y-x\|_2^2 </math> It is not necessary for a function to be differentiable in order to be strongly convex. A third definition<ref name="nesterov"/> for a strongly convex function, with parameter <math>m,</math> is that, for all <math>x, y</math> in the domain and <math>t \in [0,1],</math> <math display=block>f(tx+(1-t)y) \le t f(x)+(1-t)f(y) - \frac{1}{2} m t(1-t) \|x-y\|_2^2</math> Notice that this definition approaches the definition for strict convexity as <math>m \to 0,</math> and is identical to the definition of a convex function when <math>m = 0.</math> Despite this, functions exist that are strictly convex but are not strongly convex for any <math>m > 0</math> (see example below). 
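The last inequality can be sanity-checked numerically. In this Python sketch (the helper name `strong_convexity_gap` is illustrative), <math>f(x) = x^2</math> is strongly convex with parameter <math>m = 2</math>, for which the inequality holds with equality, while an overlarge parameter makes it fail:

```python
def strong_convexity_gap(f, m, x, y, t):
    """Right-hand side minus left-hand side of the strong-convexity inequality."""
    lhs = f(t * x + (1 - t) * y)
    rhs = t * f(x) + (1 - t) * f(y) - 0.5 * m * t * (1 - t) * (x - y) ** 2
    return rhs - lhs

f = lambda x: x * x  # strongly convex with parameter m = 2

# With the correct parameter m = 2 the inequality holds with equality (gap = 0);
# with m = 3 the bound is too strong and the inequality fails (gap < 0).
print(strong_convexity_gap(f, 2.0, -1.0, 3.0, 0.25))  # 0.0
print(strong_convexity_gap(f, 3.0, -1.0, 3.0, 0.25))  # -1.5
```

That the gap is exactly zero for <math>f(x) = x^2</math> with <math>m = 2</math> reflects the algebraic identity <math>t x^2 + (1-t) y^2 - (t x + (1-t) y)^2 = t(1-t)(x-y)^2</math>.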
If the function <math>f</math> is twice continuously differentiable, then it is strongly convex with parameter <math>m</math> if and only if <math>\nabla^2 f(x) \succeq mI</math> for all <math>x</math> in the domain, where <math>I</math> is the identity and <math>\nabla^2f</math> is the [[Hessian matrix]], and the inequality <math>\succeq</math> means that <math>\nabla^2 f(x) - mI</math> is [[Positive-definite matrix|positive semi-definite]]. This is equivalent to requiring that the minimum [[eigenvalue]] of <math>\nabla^2 f(x)</math> be at least <math>m</math> for all <math>x.</math> If the domain is just the real line, then <math>\nabla^2 f(x)</math> is just the second derivative <math>f''(x),</math> so the condition becomes <math>f''(x) \ge m</math>. If <math>m = 0</math> then this means the Hessian is positive semidefinite (or if the domain is the real line, it means that <math>f''(x) \ge 0</math>), which implies the function is convex, and perhaps strictly convex, but not strongly convex. Assuming still that the function is twice continuously differentiable, one can show that the lower bound of <math>\nabla^2 f(x)</math> implies that it is strongly convex. Using [[Taylor's theorem|Taylor's Theorem]] there exists <math display=block>z \in \{ t x + (1-t) y : t \in [0,1] \}</math> such that <math display=block>f(y) = f(x) + \nabla f(x)^T (y-x) + \frac{1}{2} (y-x)^T \nabla^2f(z) (y-x)</math> Then <math display=block>(y-x)^T \nabla^2f(z) (y-x) \ge m (y-x)^T(y-x) </math> by the assumption about the eigenvalues, and hence we recover the second strong convexity equation above. A function <math>f</math> is strongly convex with parameter ''m'' if and only if the function <math display=block>x\mapsto f(x) -\frac m 2 \|x\|^2</math> is convex. A twice continuously differentiable function <math>f</math> on a compact domain <math>X</math> that satisfies <math>f''(x)>0</math> for all <math>x\in X</math> is strongly convex. 
The proof of this statement follows from the [[extreme value theorem]], which states that a continuous function on a compact set has a maximum and minimum: since <math>f''</math> is continuous and positive on the compact set <math>X</math>, it attains a minimum <math>m > 0</math> there, so <math>f''(x) \geq m > 0</math> for all <math>x \in X</math>. Strongly convex functions are in general easier to work with than convex or strictly convex functions, since they are a smaller class. Like strictly convex functions, strongly convex functions have unique minima on compact sets. === Properties of strongly-convex functions === If ''f'' is a strongly-convex function with parameter ''m'', then:<ref name=":0">{{Cite web |last=Nemirovsky and Ben-Tal |date=2023 |title=Optimization III: Convex Optimization |url=http://www2.isye.gatech.edu/~nemirovs/OPTIIILN2023Spring.pdf}}</ref>{{Rp|location=Prop.6.1.4}} * For every real number ''r'', the [[level set]] {''x'' | ''f''(''x'') ≤ ''r''} is [[Compact space|compact]]. * The function ''f'' has a unique [[global minimum]] on ''R<sup>n</sup>''. == Uniformly convex functions == A uniformly convex function,<ref name="Zalinescu">{{cite book|title=Convex Analysis in General Vector Spaces|author=C. Zalinescu|publisher=World Scientific|year=2002|isbn=9812380671}}</ref><ref name="Bauschke">{{cite book|page=[https://archive.org/details/convexanalysismo00hhba/page/n161 144]|title=Convex Analysis and Monotone Operator Theory in Hilbert Spaces |url=https://archive.org/details/convexanalysismo00hhba|url-access=limited|author=H. Bauschke and P. L. Combettes |publisher=Springer |year=2011 |isbn=978-1-4419-9467-7}}</ref> with modulus <math>\phi</math>, is a function <math>f</math> that, for all <math>x, y</math> in the domain and <math>t \in [0, 1],</math> satisfies <math display="block">f(tx+(1-t)y) \le t f(x)+(1-t)f(y) - t(1-t) \phi(\|x-y\|)</math> where <math>\phi</math> is a function that is non-negative and vanishes only at 0. This is a generalization of the concept of strongly convex function; by taking <math>\phi(\alpha) = \tfrac{m}{2} \alpha^2</math> we recover the definition of strong convexity. 
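The reduction to strong convexity can be illustrated in a short Python sketch (the helper names are illustrative): with the modulus <math>\phi(\alpha) = \tfrac{m}{2}\alpha^2</math>, the uniform-convexity inequality coincides with the strong-convexity inequality for parameter <math>m</math>:

```python
def uniform_convexity_gap(f, phi, x, y, t):
    """Right-hand side minus left-hand side of the uniform-convexity inequality."""
    lhs = f(t * x + (1 - t) * y)
    rhs = t * f(x) + (1 - t) * f(y) - t * (1 - t) * phi(abs(x - y))
    return rhs - lhs

m = 2.0
phi = lambda a: (m / 2.0) * a * a  # this modulus recovers strong convexity with parameter m

# f(x) = x**2 is strongly convex with parameter 2, so the inequality holds
# (here with equality) for this choice of modulus:
print(uniform_convexity_gap(lambda x: x * x, phi, -1.0, 3.0, 0.25))  # 0.0
```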
Some authors require the modulus <math>\phi</math> to be an increasing function,<ref name="Bauschke">{{cite book|page=[https://archive.org/details/convexanalysismo00hhba/page/n161 144]|title=Convex Analysis and Monotone Operator Theory in Hilbert Spaces |url=https://archive.org/details/convexanalysismo00hhba|url-access=limited|author=H. Bauschke and P. L. Combettes |publisher=Springer |year=2011 |isbn=978-1-4419-9467-7}}</ref> but others do not.<ref name="Zalinescu">{{cite book|title=Convex Analysis in General Vector Spaces|author=C. Zalinescu|publisher=World Scientific|year=2002|isbn=9812380671}}</ref> ==Examples== ===Functions of one variable=== * The function <math>f(x)=x^2</math> has <math>f''(x)=2>0</math>, so {{mvar|f}} is a convex function. It is also strongly convex (and hence strictly convex too), with strong convexity constant 2. * The function <math>f(x)=x^4</math> has <math>f''(x)=12x^2\ge 0</math>, so {{mvar|f}} is a convex function. It is strictly convex, even though the second derivative is not strictly positive at all points. It is not strongly convex. * The [[absolute value]] function <math>f(x)=|x|</math> is convex (as reflected in the [[triangle inequality]]), even though it does not have a derivative at the point <math>x = 0.</math> It is not strictly convex. * The function <math>f(x)=|x|^p</math> for <math>p \ge 1</math> is convex. * The [[exponential function]] <math>f(x)=e^x</math> is convex. It is also strictly convex, since <math>f''(x)=e^x >0 </math>, but it is not strongly convex since the second derivative can be arbitrarily close to zero. More generally, the function <math>g(x) = e^{f(x)}</math> is [[Logarithmically convex function|logarithmically convex]] if <math>f</math> is a convex function. The term "superconvex" is sometimes used instead.<ref>{{Cite journal | last1 = Kingman | first1 = J. F. C. 
| doi = 10.1093/qmath/12.1.283 | title = A Convexity Property of Positive Matrices | journal = The Quarterly Journal of Mathematics | volume = 12 | pages = 283–284 | year = 1961 | bibcode = 1961QJMat..12..283K }}</ref> * The function <math>f</math> with domain [0,1] defined by <math>f(0) = f(1) = 1, f(x) = 0</math> for <math>0 < x < 1</math> is convex; it is continuous on the open interval <math>(0, 1),</math> but not continuous at 0 and 1. * The function <math>x^3</math> has second derivative <math>6 x</math>; thus it is convex on the set where <math>x \geq 0</math> and [[concave function|concave]] on the set where <math>x \leq 0.</math> * Examples of functions that are [[Monotonic function|monotonically increasing]] but not convex include <math>f(x)=\sqrt{x}</math> and <math>g(x)=\log x</math>. * Examples of functions that are convex but not [[Monotonic function|monotonically increasing]] include <math>h(x)= x^2</math> and <math>k(x)=-x</math>. * The function <math>f(x) = \tfrac{1}{x}</math> has <math>f''(x)=\tfrac{2}{x^3}</math> which is greater than 0 if <math>x > 0</math> so <math>f(x)</math> is convex on the interval <math>(0, \infty)</math>. It is concave on the interval <math>(-\infty, 0)</math>. * The function <math>f(x)=\tfrac{1}{x^2}</math> with <math>f(0)=\infty</math>, is convex on the interval <math>(0, \infty)</math> and convex on the interval <math>(-\infty, 0)</math>, but not convex on the interval <math>(-\infty, \infty)</math>, because of the singularity at <math>x = 0.</math> ===Functions of ''n'' variables=== * [[LogSumExp]] function, also called softmax function, is a convex function. *The function <math>-\log\det(X)</math> on the domain of [[Positive-definite matrix|positive-definite matrices]] is convex.<ref name="boyd" />{{rp|74}} * Every real-valued [[linear transformation]] is convex but not strictly convex, since if <math>f</math> is linear, then <math>f(a + b) = f(a) + f(b)</math>. 
This statement also holds if we replace "convex" by "concave". * Every real-valued [[affine function]], that is, each function of the form <math>f(x) = a^T x + b,</math> is simultaneously convex and concave. * Every [[norm (mathematics)|norm]] is a convex function, by the [[triangle inequality]] and [[Homogeneous function#Positive homogeneity|positive homogeneity]]. * The [[spectral radius]] of a [[nonnegative matrix]] is a convex function of its diagonal elements.<ref>Cohen, J.E., 1981. [https://www.ams.org/journals/proc/1981-081-04/S0002-9939-1981-0601750-2/S0002-9939-1981-0601750-2.pdf Convexity of the dominant eigenvalue of an essentially nonnegative matrix]. Proceedings of the American Mathematical Society, 81(4), pp.657-658.</ref> ==See also== {{Div col|colwidth=30em}} * [[Concave function]] * [[Convex analysis]] * [[Convex conjugate]] * [[Convex curve]] * [[Convex optimization]] * [[Geodesic convexity]] * [[Hahn–Banach theorem]] * [[Hermite–Hadamard inequality]] * [[Invex function]] * [[Jensen's inequality]] * [[K-convex function]] * [[Kachurovskii's theorem]], which relates convexity to [[monotone operator|monotonicity]] of the derivative * [[Karamata's inequality]] * [[Logarithmically convex function]] * [[Pseudoconvex function]] * [[Quasiconvex function]] * [[Subderivative]] of a convex function {{Div col end}} ==Notes== {{Reflist}}<!--added under references heading by script-assisted edit--> ==References== * {{cite book | last = Bertsekas | first = Dimitri | author-link= Dimitri Bertsekas | title = Convex Analysis and Optimization | publisher = Athena Scientific | year = 2003 }} * [[Jonathan M. Borwein|Borwein, Jonathan]], and Lewis, Adrian. (2000). Convex Analysis and Nonlinear Optimization. Springer. * {{cite book | last = Donoghue | first = William F. | title = Distributions and Fourier Transforms | publisher = Academic Press | year = 1969 }} * Hiriart-Urruty, Jean-Baptiste, and [[Claude Lemaréchal|Lemaréchal, Claude]]. (2004). 
Fundamentals of Convex Analysis. Berlin: Springer. *{{cite book | author =[[Mark Krasnosel'skii|Krasnosel'skii M.A.]], Rutickii Ya.B. | title=Convex Functions and Orlicz Spaces | publisher= P.Noordhoff Ltd | location=Groningen | year=1961}} * {{cite book | last = Lauritzen | first = Niels | title = Undergraduate Convexity | publisher = World Scientific Publishing | year = 2013 }} * {{cite book | last = Luenberger | first = David | author-link = David Luenberger | title = Linear and Nonlinear Programming | publisher = Addison-Wesley | year = 1984 }} * {{cite book | last = Luenberger | first = David | author-link = David Luenberger | title = Optimization by Vector Space Methods | publisher = Wiley & Sons | year = 1969 }} <!-- * {{cite web | last = Moon | first = Todd | title = Tutorial: Convexity and Jensen's inequality | url=http://www.neng.usu.edu/classes/ece/7680/lecture2/node5.html | accessdate = 2008-09-04 }} --> * {{cite book | last = Rockafellar | first = R. T. | author-link= R. Tyrrell Rockafellar | title = Convex analysis | publisher = Princeton University Press | year = 1970 | location = Princeton }} * {{cite book | last = Thomson | first = Brian | title = Symmetric Properties of Real Functions | publisher = CRC Press | year = 1994 }} * {{cite book|last=Zălinescu|first=C.|title=Convex analysis in general vector spaces|publisher=World Scientific Publishing Co., Inc|location=River Edge, NJ|year=2002|pages=xx+367|isbn=981-238-067-1|mr=1921556}} ==External links== * {{springer|title=Convex function (of a real variable)|id=p/c026240}} * {{springer|title=Convex function (of a complex variable)|id=p/c026230}} {{Convex analysis and variational analysis}} {{Authority control}} [[Category:Convex analysis]] [[Category:Generalized convexity]] [[Category:Types of functions]]
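The one-variable examples above can be checked numerically via the midpoint inequality <math>f\left(\tfrac{x+y}{2}\right) \le \tfrac{f(x)+f(y)}{2}</math>, which continuous convex functions satisfy. A minimal Python sketch (not part of the article's sources; the helper name is illustrative):

```python
import math
import random

def is_midpoint_convex(f, a, b, trials=1000, tol=1e-9):
    """Randomized test of midpoint convexity of f on [a, b]:
    checks f((x+y)/2) <= (f(x)+f(y))/2 for sampled pairs x, y."""
    for _ in range(trials):
        x, y = random.uniform(a, b), random.uniform(a, b)
        if f((x + y) / 2) > (f(x) + f(y)) / 2 + tol:
            return False  # found a pair violating convexity
    return True

# Convex on the whole real line:
assert is_midpoint_convex(lambda x: x * x, -10, 10)
assert is_midpoint_convex(abs, -10, 10)
assert is_midpoint_convex(math.exp, -10, 10)

# x**3 is convex for x >= 0 but concave for x <= 0:
assert is_midpoint_convex(lambda x: x ** 3, 0, 10)
assert not is_midpoint_convex(lambda x: x ** 3, -10, -1)
```

This is only a heuristic check on sampled points, not a proof; the second-derivative criterion in the lead gives the rigorous argument for twice-differentiable functions.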