Convex conjugate


In mathematics and mathematical optimization, the convex conjugate of a function is a generalization of the Legendre transformation which applies to non-convex functions. It is also known as the Legendre–Fenchel transformation, Fenchel transformation, or Fenchel conjugate (after Adrien-Marie Legendre and Werner Fenchel). The convex conjugate is widely used for constructing the dual problem in optimization theory, thus generalizing Lagrangian duality.

Definition

Let <math>X</math> be a real topological vector space and let <math>X^{*}</math> be the dual space to <math>X</math>. Denote by

<math>\langle \cdot , \cdot \rangle : X^{*} \times X \to \mathbb{R}</math>

the canonical dual pairing, which is defined by <math>\left\langle x^*, x \right\rangle \mapsto x^* (x).</math>

For a function <math>f : X \to \mathbb{R} \cup \{ - \infty, + \infty \}</math> taking values on the extended real number line, its convex conjugate is the function

<math>f^{*} : X^{*} \to \mathbb{R} \cup \{ - \infty, + \infty \}</math>

whose value at <math>x^* \in X^{*}</math> is defined to be the supremum:

<math>f^{*} \left( x^{*} \right) := \sup \left\{ \left\langle x^{*}, x \right\rangle - f (x) ~\colon~ x \in X \right\},</math>

or, equivalently, in terms of the infimum:

<math>f^{*} \left( x^{*} \right) := - \inf \left\{ f (x) - \left\langle x^{*}, x \right\rangle ~\colon~ x \in X \right\}.</math>

This definition can be interpreted as an encoding of the convex hull of the function's epigraph in terms of its supporting hyperplanes.
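The supremum in the definition can be approximated numerically by brute force over a grid. The sketch below is illustrative only: the grid bounds and the test function <math>f(x) = x^2/2</math> (which is its own conjugate) are arbitrary choices, and the result is only accurate when the grid covers the region where the supremum is attained.

```python
import numpy as np

def conjugate(f, xs):
    # Approximate f*(p) = sup_x { p*x - f(x) } by maximizing over the grid xs.
    fx = f(xs)
    return lambda p: float(np.max(p * xs - fx))

xs = np.linspace(-10.0, 10.0, 200001)
f = lambda x: 0.5 * x**2        # f(x) = x^2/2 is its own conjugate
f_star = conjugate(f, xs)
print(f_star(3.0))              # close to 3^2/2 = 4.5
```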

Examples


  • The convex conjugate of an affine function <math> f(x) = \left\langle a, x \right\rangle - b</math> is <math display="block"> f^{*}\left(x^{*} \right) = \begin{cases} b, & x^{*} = a \\ +\infty, & x^{*} \ne a. \end{cases}</math>

  • The convex conjugate of a power function <math> f(x) = \tfrac{1}{p}|x|^p,\ 1 < p < \infty,</math> is <math display="block"> f^{*}\left(x^{*} \right) = \tfrac{1}{q}|x^{*}|^q,\ 1 < q < \infty, \text{ where } \tfrac{1}{p} + \tfrac{1}{q} = 1.</math>

  • The convex conjugate of the absolute value function <math>f(x) = \left| x \right|</math> is <math display="block"> f^{*}\left(x^{*} \right) = \begin{cases} 0, & \left|x^{*} \right| \le 1 \\ \infty, & \left|x^{*} \right| > 1. \end{cases}</math>

  • The convex conjugate of the exponential function <math>f(x) = e^{x}</math> is <math display="block"> f^{*}\left(x^{*} \right) = \begin{cases} x^{*} \ln x^{*} - x^{*}, & x^{*} > 0 \\ 0, & x^{*} = 0 \\ \infty, & x^{*} < 0. \end{cases}</math>

The convex conjugate and Legendre transform of the exponential function agree except that the domain of the convex conjugate is strictly larger as the Legendre transform is only defined for positive real numbers.
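The exponential example can be checked numerically. In this sketch the sample point <math>x^{*} = 2</math> and the grid bounds are arbitrary choices made for illustration:

```python
import numpy as np

# Brute-force check that the conjugate of exp(x) matches p*log(p) - p at p = 2.
xs = np.linspace(-10.0, 10.0, 200001)
p = 2.0
numeric = float(np.max(p * xs - np.exp(xs)))  # sup_x { p*x - e^x }
exact = p * np.log(p) - p
print(numeric, exact)   # both close to 2*ln(2) - 2 ≈ -0.6137
```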

Connection with expected shortfall (average value at risk)

See the article on expected shortfall for worked examples.

Let <math>F</math> denote the cumulative distribution function of a random variable <math>X</math>. Then (integrating by parts), <math display="block">f(x):= \int_{-\infty}^x F(u) \, du = \operatorname{E}\left[\max(0,x-X)\right] = x-\operatorname{E} \left[\min(x,X)\right]</math> has the convex conjugate <math display="block">f^{*}(p)= \int_0^p F^{-1}(q) \, dq = (p-1)F^{-1}(p)+\operatorname{E}\left[\min(F^{-1}(p),X)\right] = p F^{-1}(p)-\operatorname{E}\left[\max(0,F^{-1}(p)-X)\right].</math>

Ordering

A particular interpretation has the transform <math display="block">f^\text{inc}(x):= \arg \sup_t t\cdot x-\int_0^1 \max\{t-f(u),0\} \, du,</math> as this is a nondecreasing rearrangement of the initial function f; in particular, <math>f^\text{inc}= f</math> for f nondecreasing.

Properties

The convex conjugate of a closed convex function is again a closed convex function. The convex conjugate of a polyhedral convex function (a convex function with polyhedral epigraph) is again a polyhedral convex function.

Order reversing

Declare that <math>f \le g</math> if and only if <math>f(x) \le g(x)</math> for all <math>x.</math> Then convex-conjugation is order-reversing, which by definition means that if <math>f \le g</math> then <math>f^* \ge g^*.</math>

For a family of functions <math>\left(f_\alpha\right)_\alpha</math> it follows from the fact that suprema may be interchanged that

<math>\left(\inf_\alpha f_\alpha\right)^*(x^*) = \sup_\alpha f_\alpha^*(x^*),</math>

and from the max–min inequality that

<math>\left(\sup_\alpha f_\alpha\right)^*(x^*) \le \inf_\alpha f_\alpha^*(x^*).</math>

Biconjugate

The convex conjugate of a function is always lower semi-continuous. The biconjugate <math>f^{**}</math> (the convex conjugate of the convex conjugate) is also the closed convex hull, i.e. the largest lower semi-continuous convex function with <math>f^{**} \le f.</math> For proper functions <math>f,</math>

<math>f = f^{**}</math> if and only if <math>f</math> is convex and lower semi-continuous, by the Fenchel–Moreau theorem.
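The convex-hull interpretation can be illustrated numerically. The non-convex double well <math>f(x) = \min\{(x-1)^2, (x+1)^2\}</math> below is an illustrative choice: its closed convex hull vanishes on <math>[-1,1]</math>, so <math>f^{**}(0) = 0</math> even though <math>f(0) = 1</math>. A grid-based sketch:

```python
import numpy as np

# Biconjugate of a non-convex double well, computed by two brute-force conjugations.
xs = np.linspace(-10.0, 10.0, 4001)
ps = np.linspace(-10.0, 10.0, 4001)
f = np.minimum((xs - 1.0)**2, (xs + 1.0)**2)
f_star = np.array([np.max(p * xs - f) for p in ps])   # f*(p) on the grid
biconj_at_0 = float(np.max(0.0 * ps - f_star))        # f**(0)
print(biconj_at_0)   # close to 0.0, while f(0) = 1
```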

Fenchel's inequality

For any function <math>f</math> and its convex conjugate <math>f^*</math>, Fenchel's inequality (also known as the Fenchel–Young inequality) holds for every <math>x \in X</math> and <math>p \in X^{*}:</math>

<math>\left\langle p,x \right\rangle \le f(x) + f^*(p).</math>

Furthermore, equality holds if and only if <math>p \in \partial f(x)</math>, the subdifferential of <math>f</math> at <math>x</math>. The proof follows from the definition of convex conjugate: <math>f^*(p) = \sup_{\tilde x} \left\{ \langle p,\tilde x \rangle - f(\tilde x) \right\} \ge \langle p,x \rangle - f(x).</math>
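The inequality can be checked in closed form using <math>f(x) = |x|</math>, whose conjugate is the indicator of <math>[-1,1]</math> (see the examples above). The sample points <math>x = 2</math>, <math>p \in \{0.5, 1\}</math> below are illustrative choices; <math>p = 1</math> lies in the subdifferential of <math>|x|</math> at <math>x = 2</math>:

```python
# Fenchel's inequality <p, x> <= f(x) + f*(p) for f(x) = |x|.
def f(x):
    return abs(x)

def f_star(p):
    # Conjugate of the absolute value: indicator of [-1, 1].
    return 0.0 if abs(p) <= 1.0 else float("inf")

x = 2.0
for p in (0.5, 1.0):
    print(p * x <= f(x) + f_star(p), p * x == f(x) + f_star(p))
# the inequality holds for both; equality only for p = 1
```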

Convexity

For two functions <math>f_0</math> and <math>f_1</math> and a number <math>0 \le \lambda \le 1</math> the convexity relation

<math>\left((1-\lambda) f_0 + \lambda f_1\right)^{*} \le (1-\lambda) f_0^{*} + \lambda f_1^{*}</math>

holds. The <math>{*}</math> operation is a convex mapping itself.

Infimal convolution

The infimal convolution (or epi-sum) of two functions <math>f</math> and <math>g</math> is defined as

<math>\left( f \operatorname{\Box} g \right)(x) = \inf \left\{ f(x-y) + g(y) \mid y \in \mathbb{R}^n \right\}.</math>

Let <math>f_1, \ldots, f_{m}</math> be proper, convex and lower semicontinuous functions on <math>\mathbb{R}^{n}.</math> Then the infimal convolution is convex and lower semicontinuous (but not necessarily proper), and satisfies

<math>\left( f_1 \operatorname{\Box} \cdots \operatorname{\Box} f_m \right)^{*} = f_1^{*} + \cdots + f_m^{*}.</math>

The infimal convolution of two functions has a geometric interpretation: The (strict) epigraph of the infimal convolution of two functions is the Minkowski sum of the (strict) epigraphs of those functions.
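The conjugation rule for epi-sums can be checked on a grid. The quadratics <math>f(x) = x^2</math> and <math>g(x) = 2x^2</math> below, with conjugates <math>p^2/4</math> and <math>p^2/8</math>, and the sample point <math>p = 2</math> are illustrative choices:

```python
import numpy as np

# Check (f □ g)* = f* + g* for two quadratics, by brute force on a grid.
grid = np.linspace(-10.0, 10.0, 4001)
f = lambda x: x**2
g = lambda x: 2.0 * x**2

def inf_conv(x):
    # (f □ g)(x) = inf_y { f(x - y) + g(y) }, approximated over the grid
    return float(np.min(f(x - grid) + g(grid)))

p = 2.0
lhs = max(p * x - inf_conv(x) for x in grid)   # (f □ g)*(p)
rhs = p**2 / 4 + p**2 / 8                      # f*(p) + g*(p)
print(lhs, rhs)                                # both close to 1.5
```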

Maximizing argument

If the function <math>f</math> is differentiable, then its derivative is the maximizing argument in the computation of the convex conjugate:

<math>f^\prime(x) = x^*(x):= \arg\sup_{x^{*}} {\langle x, x^{*}\rangle} -f^{*}\left( x^{*} \right)</math> and
<math>f^{{*}\prime}\left( x^{*} \right) = x\left( x^{*} \right):= \arg\sup_x {\langle x, x^{*}\rangle} - f(x);</math>

hence

<math>x = \nabla f^{*}\left( \nabla f(x) \right),</math>
<math>x^{*} = \nabla f\left( \nabla f^{*}\left( x^{*} \right)\right),</math>

and moreover

<math>f^{\prime\prime}(x) \cdot f^{{*}\prime\prime}\left( x^{*}(x) \right) = 1,</math>
<math>f^{{*}\prime\prime}\left( x^{*} \right) \cdot f^{\prime\prime}\left( x(x^{*}) \right) = 1.</math>
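These relations can be verified in closed form for <math>f(x) = e^x</math>, whose conjugate <math>f^*(p) = p \ln p - p</math> appears in the examples above: then <math>f' = \exp</math> and <math>f^{*\prime} = \ln</math> are mutually inverse, and the second derivatives <math>e^x</math> and <math>1/p</math> multiply to one. The sample point <math>x = 1.3</math> is an illustrative choice:

```python
import math

# Gradient relations for f(x) = e^x with conjugate f*(p) = p*log(p) - p.
x = 1.3
p = math.exp(x)                        # p = f'(x)
x_back = math.log(p)                   # f*'(p) recovers x, since f*' = (f')^(-1)
curvature = math.exp(x) * (1.0 / p)    # f''(x) * f*''(p) = 1
print(x_back, curvature)               # 1.3 and 1.0
```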

Scaling properties

If for some <math>\gamma>0,</math> <math>g(x) = \alpha + \beta x + \gamma \cdot f\left( \lambda x + \delta \right)</math>, then

<math>g^{*}\left( x^{*} \right)= - \alpha - \delta\frac{x^{*}-\beta} \lambda + \gamma \cdot f^{*}\left(\frac {x^{*}-\beta}{\lambda \gamma}\right).</math>
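The scaling identity can be checked against a brute-force conjugate. In this sketch, <math>f(x) = x^2/2</math> (its own conjugate) and the constants <math>\alpha = 1, \beta = 2, \gamma = 3, \lambda = 1/2, \delta = 1</math> and sample point <math>x^* = 4</math> are all arbitrary illustrative choices:

```python
import numpy as np

# Compare the scaling formula for g* against a grid-based conjugate of g.
alpha, beta, gamma, lam, delta = 1.0, 2.0, 3.0, 0.5, 1.0
f = lambda x: 0.5 * x**2
f_star = f                       # (x^2/2)* = (x*)^2/2

xs = np.linspace(-10.0, 10.0, 200001)
g = alpha + beta * xs + gamma * f(lam * xs + delta)
p = 4.0
numeric = float(np.max(p * xs - g))   # g*(p) by brute force
formula = -alpha - delta * (p - beta) / lam + gamma * f_star((p - beta) / (lam * gamma))
print(numeric, formula)               # both close to -7/3
```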

Behavior under linear transformations

Let <math>A : X \to Y</math> be a bounded linear operator. For any convex function <math>f</math> on <math>X,</math>

<math>\left(A f\right)^{*} = f^{*} A^{*}</math>

where

<math>(A f)(y) = \inf\{ f(x) : x \in X , A x = y \}</math>

is the preimage of <math>f</math> with respect to <math>A</math> and <math>A^{*}</math> is the adjoint operator of <math>A.</math><ref>Ioffe, A.D. and Tichomirov, V.M. (1979), Theorie der Extremalaufgaben. Deutscher Verlag der Wissenschaften. Satz 3.4.3</ref>

A closed convex function <math>f</math> is symmetric with respect to a given set <math>G</math> of orthogonal linear transformations,

<math>f(A x) = f(x)</math> for all <math>x</math> and all <math>A \in G</math>

if and only if its convex conjugate <math>f^{*}</math> is symmetric with respect to <math>G.</math>

Table of selected convex conjugates

The following table provides Legendre transforms for many common functions as well as a few useful properties.

<math>g(x)</math> <math>\operatorname{dom}(g)</math> <math>g^*(x^*)</math> <math>\operatorname{dom}(g^*)</math>
<math>f(ax)</math> (where <math>a \neq 0</math>) <math>X</math> <math>f^*\left(\frac{x^*}{a}\right)</math> <math>X^*</math>
<math>f(x + b)</math> <math>X</math> <math>f^*(x^*) - \langle b,x^* \rangle</math> <math>X^*</math>
<math>a f(x)</math> (where <math>a > 0</math>) <math>X</math> <math>a f^*\left(\frac{x^*}{a}\right)</math> <math>X^*</math>
<math>\alpha+ \beta x+ \gamma \cdot f(\lambda x+\delta)</math> <math>X</math> <math>-\alpha- \delta\frac{x^*-\beta}\lambda+ \gamma \cdot f^* \left(\frac {x^*-\beta}{\gamma \lambda}\right)\quad (\gamma>0)</math> <math>X^*</math>
<math>\frac{|x|^p}{p}</math> (where <math>p > 1</math>) <math>\mathbb{R}</math> <math>\frac{|x^*|^q}{q}</math> (where <math>\frac{1}{p} + \frac{1}{q} = 1</math>) <math>\mathbb{R}</math>
<math>\frac{-x^p}{p}</math> (where <math>0 < p < 1</math>) <math>\mathbb{R}_+</math> <math>\frac{-(-x^*)^q}q</math> (where <math>\frac 1 p + \frac 1 q = 1</math>) <math>\mathbb{R}_{--}</math>
<math>\sqrt{1 + x^2}</math> <math>\mathbb{R}</math> <math>-\sqrt{1 - (x^*)^2}</math> <math>[-1,1]</math>
<math>-\log(x)</math> <math>\mathbb{R}_{++}</math> <math>-(1 + \log(-x^*))</math> <math>\mathbb{R}_{--}</math>
<math>e^x</math> <math>\mathbb{R}</math> <math>\begin{cases}x^* \log(x^*) - x^* & \text{if }x^* > 0\\ 0 & \text{if }x^* = 0\end{cases}</math> <math>\mathbb{R}_{+}</math>
<math>\log\left(1 + e^x\right)</math> <math>\mathbb{R}</math> <math>\begin{cases}x^* \log(x^*) + (1 - x^*) \log(1 - x^*) & \text{if }0 < x^* < 1\\ 0 & \text{if }x^* = 0,1\end{cases}</math> <math>[0,1]</math>
<math>-\log\left(1 - e^x\right)</math> <math>\mathbb{R}_{--}</math> <math>\begin{cases}x^* \log(x^*) - (1 + x^*) \log(1 + x^*) & \text{if }x^* > 0\\ 0 & \text{if }x^* = 0\end{cases}</math> <math>\mathbb{R}_+</math>
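Individual table rows can be checked numerically. The sketch below verifies the row <math>g(x) = -\log(x)</math> with conjugate <math>-(1 + \log(-x^*))</math>; the grid bounds and sample point <math>x^* = -1/2</math> are illustrative choices:

```python
import numpy as np

# Verify the -log(x) row of the table at x* = -0.5.
xs = np.linspace(1e-6, 50.0, 200001)          # grid over the domain (0, inf)
p = -0.5
numeric = float(np.max(p * xs - (-np.log(xs))))   # sup_x { p*x + log(x) }
exact = -(1.0 + np.log(-p))
print(numeric, exact)    # both close to -1 + log(2) ≈ -0.3069
```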


References

<references/>

