=== Gumbel reparameterization tricks ===
In [[machine learning]], the Gumbel distribution is sometimes employed to generate samples from the [[categorical distribution]]. This technique is called the "Gumbel-max trick" and is a special case of "[[Reparameterization trick|reparameterization tricks]]".<ref>{{Cite conference |first1=Eric |last1=Jang |first2=Shixiang |last2=Gu |first3=Ben |last3=Poole |date=April 2017 |title=Categorical Reparameterization with Gumbel-Softmax |url=https://pure.mpg.de/pubman/faces/ViewItemOverviewPage.jsp?itemId=item_2564872 |conference=International Conference on Learning Representations (ICLR) 2017}}</ref>

In detail, let <math>(\pi_1, \ldots, \pi_n)</math> be nonnegative, not all zero, and let <math>g_1,\ldots , g_n</math> be independent samples of Gumbel(0, 1). Then by routine integration,
<math display="block">\Pr(j = \arg\max_i (g_i + \log\pi_i)) = \frac{\pi_j}{\sum_i \pi_i}.</math>
That is, <math>\arg\max_i (g_i + \log\pi_i) \sim \text{Categorical}\left(\frac{\pi_j}{\sum_i \pi_i}\right)_j.</math>

Equivalently, given any <math>x_1, \ldots, x_n\in \R</math>, we can sample from its [[Boltzmann distribution]] by
<math display="block">\Pr(j = \arg\max_i (g_i + x_i)) = \frac{e^{x_j}}{\sum_i e^{x_i}}.</math>

Related equations include:<ref>{{Cite journal |last1=Balog |first1=Matej |last2=Tripuraneni |first2=Nilesh |last3=Ghahramani |first3=Zoubin |last4=Weller |first4=Adrian |date=2017-07-17 |title=Lost Relatives of the Gumbel Trick |url=https://proceedings.mlr.press/v70/balog17a.html |journal=International Conference on Machine Learning |language=en |publisher=PMLR |pages=371–379 |arxiv=1706.04161}}</ref>
* If <math>x\sim \operatorname{Exp}(\lambda)</math>, then <math>(-\ln x - \gamma)\sim \text{Gumbel}(-\gamma + \ln\lambda, 1)</math>.
* <math>\arg\max_i (g_i + \log\pi_i) \sim \text{Categorical}\left(\frac{\pi_j}{\sum_i \pi_i}\right)_j</math>.
* <math>\max_i (g_i + \log\pi_i) \sim \text{Gumbel}\left(\log\left(\sum_i \pi_i \right), 1\right)</math>. That is, the Gumbel distribution family is max-stable.
* <math>\mathbb E[\max_i (g_i + \beta x_i)] = \log \left(\sum_i e^{\beta x_i}\right) + \gamma.</math>
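As an illustration, the Gumbel-max trick can be sketched in a few lines of [[Python (programming language)|Python]] with [[NumPy]]. This is a minimal sketch, not an implementation from the cited sources; the function name <code>gumbel_max_sample</code>, the example weights, and the sample count are chosen only for demonstration.

<syntaxhighlight lang="python">
import numpy as np

def gumbel_max_sample(log_weights, rng):
    """Draw one categorical sample via the Gumbel-max trick.

    log_weights: array of log(pi_i), the unnormalized log-probabilities.
    """
    # Gumbel(0, 1) samples via inverse CDF: g = -log(-log(U)), U ~ Uniform(0, 1)
    g = -np.log(-np.log(rng.uniform(size=log_weights.shape)))
    # The index of the largest perturbed log-weight is the categorical sample
    return np.argmax(g + log_weights)

# Illustrative weights pi = (1, 2, 3); sample frequencies should approach
# pi / sum(pi) = (1/6, 2/6, 3/6).
rng = np.random.default_rng(0)
pi = np.array([1.0, 2.0, 3.0])
samples = [gumbel_max_sample(np.log(pi), rng) for _ in range(100_000)]
print(np.bincount(samples) / len(samples))  # approximately [0.167, 0.333, 0.5]
</syntaxhighlight>

The empirical frequencies converge to <math>\pi_j / \sum_i \pi_i</math>, in agreement with the argmax formula above.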