==Kernel random forest==
In machine learning, kernel random forests (KeRF) establish the connection between random forests and [[kernel method]]s. By slightly modifying their definition, random forests can be rewritten as [[kernel method]]s, which are more interpretable and easier to analyze.<ref name="scornet2015random">{{cite arXiv | first = Erwan | last = Scornet | title = Random forests and kernel methods | year = 2015 | eprint = 1502.03836 | class = math.ST }}</ref>

=== History ===
[[Leo Breiman]]<ref name="breiman2000some">{{cite journal | first = Leo | last = Breiman | author-link = Leo Breiman | title = Some infinity theory for predictor ensembles | institution = Technical Report 579, Statistics Dept. UCB | year = 2000 | url = https://statistics.berkeley.edu/tech-reports/579 }}</ref> was the first person to notice the link between random forests and [[kernel methods]]. He pointed out that random forests trained using [[i.i.d.]] random vectors in the tree construction are equivalent to a kernel acting on the true margin. Lin and Jeon<ref name="lin2006random">{{cite journal | first1 = Yi | last1 = Lin | first2 = Yongho | last2 = Jeon | title = Random forests and adaptive nearest neighbors | journal = Journal of the American Statistical Association | volume = 101 | number = 474 | pages = 578–590 | year = 2006 | doi = 10.1198/016214505000001230 | citeseerx = 10.1.1.153.9168 | s2cid = 2469856 }}</ref> established the connection between random forests and adaptive nearest neighbors, implying that random forests can be seen as adaptive kernel estimates. Davies and Ghahramani<ref name="davies2014random">{{cite arXiv |first1=Alex |last1=Davies |first2=Zoubin|last2=Ghahramani |title=The Random Forest Kernel and other kernels for big data from random partitions |eprint=1402.4293 |year= 2014 |class=stat.ML }}</ref> proposed Kernel Random Forest (KeRF) and showed that it can empirically outperform state-of-the-art kernel methods. Scornet<ref name="scornet2015random"/> first defined KeRF estimates and gave the explicit link between KeRF estimates and random forests. He also gave explicit expressions for kernels based on the centered random forest<ref name="breiman2004consistency">{{cite journal | first1 = Leo | last1 = Breiman | first2 = Zoubin | last2 = Ghahramani | name-list-style = vanc | title = Consistency for a simple model of random forests | journal = Statistical Department, University of California at Berkeley. Technical Report | number = 670 | year = 2004 | citeseerx = 10.1.1.618.90 }}</ref> and the uniform random forest,<ref name="arlot2014analysis">{{cite arXiv |first1=Sylvain |last1=Arlot | first2 = Robin | last2 = Genuer | name-list-style = vanc |title=Analysis of purely random forests bias |eprint=1407.3939 |year= 2014 |class=math.ST }}</ref> two simplified models of random forest. He named these two KeRFs Centered KeRF and Uniform KeRF, and proved upper bounds on their rates of consistency.

=== Notations and definitions ===

==== Preliminaries: Centered forests ====
The centered forest<ref name="breiman2004consistency"/> is a simplified model of Breiman's original random forest: at each node, an attribute is selected uniformly at random among all attributes and the split is performed at the center of the cell along the chosen attribute. The algorithm stops when a full binary tree of level <math>k</math> is built, where <math>k \in\mathbb{N} </math> is a parameter of the algorithm.
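For illustration, the following sketch (not taken from the cited papers; the helper name <code>centered_cell</code> is hypothetical) traces the root-to-leaf path of a query point in a single centered tree of level <math>k</math>: at each level a split coordinate is drawn uniformly at random and the current cell is cut at its midpoint along that coordinate.

<syntaxhighlight lang="python">
import random

def centered_cell(x, k, d, rng=random):
    """Cell of a level-k centered tree containing x in [0,1]^d.

    Follows the root-to-leaf path of x: at each of the k levels a split
    coordinate is drawn uniformly at random and the current cell is cut
    at its midpoint along that coordinate.  Returns the cell as a list
    of (low, high) intervals, one per coordinate.
    """
    cell = [(0.0, 1.0)] * d
    for _ in range(k):
        j = rng.randrange(d)              # attribute chosen uniformly at random
        low, high = cell[j]
        mid = (low + high) / 2.0          # split at the center of the cell
        cell[j] = (low, mid) if x[j] < mid else (mid, high)
    return cell

# Example: the cell of a level-3 centered tree on [0,1]^2 containing (0.3, 0.7)
print(centered_cell([0.3, 0.7], k=3, d=2))
</syntaxhighlight>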
==== Uniform forest ====
The uniform forest<ref name="arlot2014analysis"/> is another simplified model of Breiman's original random forest: at each node, a feature is selected uniformly at random among all features and the split is performed at a point drawn uniformly on the side of the cell, along the chosen feature.

==== From random forest to KeRF ====
Given a training sample <math>\mathcal{D}_n =\{(\mathbf{X}_i, Y_i)\}_{i=1}^n</math> of <math>[0,1]^d\times\mathbb{R}</math>-valued independent random variables distributed as the independent prototype pair <math>(\mathbf{X}, Y)</math>, where <math>\operatorname{E}[Y^2]<\infty</math>, we aim at predicting the response <math>Y</math>, associated with the random variable <math>\mathbf{X}</math>, by estimating the regression function <math>m(\mathbf{x})=\operatorname{E}[Y \mid \mathbf{X} = \mathbf{x}]</math>. A random regression forest is an ensemble of <math>M</math> randomized regression trees. Denote by <math>m_n(\mathbf{x},\mathbf{\Theta}_j)</math> the predicted value at point <math>\mathbf{x}</math> of the <math>j</math>-th tree, where <math>\mathbf{\Theta}_1,\ldots,\mathbf{\Theta}_M </math> are independent random variables, distributed as a generic random variable <math>\mathbf{\Theta}</math>, independent of the sample <math>\mathcal{D}_n</math>. This random variable can be used to describe the randomness induced by node splitting and the sampling procedure for tree construction. The trees are combined to form the finite forest estimate <math>m_{M, n}(\mathbf{x},\Theta_1,\ldots,\Theta_M) = \frac{1}{M}\sum_{j=1}^M m_n(\mathbf{x},\Theta_j)</math>.

For regression trees, we have <math>m_n(\mathbf{x},\Theta_j) = \sum_{i=1}^n\frac{Y_i\mathbf{1}_{\mathbf{X}_i\in A_n(\mathbf{x},\Theta_j)}}{N_n(\mathbf{x}, \Theta_j)}</math>, where <math>A_n(\mathbf{x},\Theta_j)</math> is the cell containing <math>\mathbf{x}</math>, designed with randomness <math>\Theta_j</math> and dataset <math>\mathcal{D}_n</math>, and <math> N_n(\mathbf{x}, \Theta_j) = \sum_{i=1}^n \mathbf{1}_{\mathbf{X}_i\in A_n(\mathbf{x}, \Theta_j)}</math>. Thus random forest estimates satisfy, for all <math>\mathbf{x}\in[0,1]^d</math>,
<math display="block"> m_{M,n}(\mathbf{x}, \Theta_1,\ldots,\Theta_M) =\frac{1}{M}\sum_{j=1}^M \left(\sum_{i=1}^n\frac{Y_i\mathbf{1}_{\mathbf{X}_i\in A_n(\mathbf{x},\Theta_j)}}{N_n(\mathbf{x}, \Theta_j)}\right).</math>
A random regression forest thus involves two levels of averaging: first over the samples in the target cell of a tree, then over all trees. Consequently, the contributions of observations that lie in cells with a high density of data points are smaller than those of observations belonging to less populated cells. In order to improve the random forest methods and compensate for this misestimation, Scornet<ref name="scornet2015random"/> defined KeRF by
<math display="block"> \tilde{m}_{M,n}(\mathbf{x}, \Theta_1,\ldots,\Theta_M) = \frac{1}{\sum_{j=1}^M N_n(\mathbf{x}, \Theta_j)}\sum_{j=1}^M\sum_{i=1}^n Y_i\mathbf{1}_{\mathbf{X}_i\in A_n(\mathbf{x}, \Theta_j)},</math>
which is equal to the mean of the <math>Y_i</math>'s falling in the cells containing <math>\mathbf{x}</math> in the forest. If we define the connection function of the finite forest of <math>M</math> trees as <math>K_{M,n}(\mathbf{x}, \mathbf{z}) = \frac{1}{M} \sum_{j=1}^M \mathbf{1}_{\mathbf{z} \in A_n (\mathbf{x}, \Theta_j)}</math>, i.e. the proportion of trees in which <math>\mathbf{x}</math> and <math>\mathbf{z}</math> fall in the same cell, then almost surely <math>\tilde{m}_{M,n}(\mathbf{x}, \Theta_1,\ldots,\Theta_M) = \frac{\sum_{i=1}^n Y_i K_{M,n}(\mathbf{x}, \mathbf{x}_i)}{\sum_{\ell=1}^n K_{M,n}(\mathbf{x}, \mathbf{x}_{\ell})}</math>, which defines the KeRF.
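The KeRF estimate can be computed directly from this definition. The sketch below (illustrative only; it assumes each of the <math>M</math> trees is summarized by a deterministic function mapping a point to an identifier of its cell, and all names are hypothetical) pools the observations that share a cell with <math>\mathbf{x}</math> across the forest, and also evaluates the connection function <math>K_{M,n}</math>.

<syntaxhighlight lang="python">
def kerf_predict(x, X_train, Y_train, cell_of):
    """KeRF prediction at x.

    cell_of is a list of M functions; cell_of[j](z) returns an identifier
    (anything supporting ==) of the cell of tree j that contains z.  The
    trees are assumed to be already built, each with its own randomness
    Theta_j.
    """
    num, den = 0.0, 0.0
    for cell in cell_of:                      # loop over the M trees
        x_cell = cell(x)
        for x_i, y_i in zip(X_train, Y_train):
            if cell(x_i) == x_cell:           # X_i falls in the same cell as x
                num += y_i                    # contributes Y_i to the numerator
                den += 1.0                    # contributes 1 to sum_j N_n(x, Theta_j)
    return num / den if den > 0 else 0.0


def connection_function(x, z, cell_of):
    """K_{M,n}(x, z): fraction of trees in which x and z share a cell."""
    return sum(cell(x) == cell(z) for cell in cell_of) / len(cell_of)
</syntaxhighlight>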
==== Centered KeRF ====
The construction of Centered KeRF of level <math>k</math> is the same as for the centered forest, except that predictions are made by <math>\tilde{m}_{M,n}(\mathbf{x}, \Theta_1,\ldots,\Theta_M) </math>; the corresponding kernel function, or connection function, is
<math display="block"> K_k^{cc}(\mathbf{x},\mathbf{z}) = \sum_{k_1,\ldots,k_d, \sum_{j=1}^d k_j=k} \frac{k!}{k_1!\cdots k_d!} \left(\frac 1 d \right)^k \prod_{j=1}^d\mathbf{1}_{\lceil2^{k_j}x_j\rceil=\lceil2^{k_j}z_j\rceil}, \qquad \text{ for all } \mathbf{x},\mathbf{z}\in[0,1]^d. </math>

==== Uniform KeRF ====
Uniform KeRF is built in the same way as the uniform forest, except that predictions are made by <math>\tilde{m}_{M,n}(\mathbf{x}, \Theta_1,\ldots,\Theta_M) </math>; the corresponding kernel function, or connection function, is
<math display="block">K_k^{uf}(\mathbf{0},\mathbf{x}) = \sum_{k_1,\ldots,k_d, \sum_{j=1}^d k_j=k} \frac{k!}{k_1!\cdots k_d!}\left(\frac{1}{d}\right)^k \prod_{m=1}^d\left(1-|x_m|\sum_{j=0}^{k_m-1}\frac{\left(-\ln|x_m|\right)^j}{j!}\right), \qquad \text{ for all } \mathbf{x}\in[0,1]^d.</math>
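As a concrete check of the centered-KeRF formula, the connection function <math>K_k^{cc}</math> can be evaluated by brute force, summing over all tuples <math>(k_1,\ldots,k_d)</math> with <math>k_1+\cdots+k_d=k</math>. The sketch below is practical only for small <math>k</math> and <math>d</math>, and the function name is illustrative.

<syntaxhighlight lang="python">
import math
from itertools import product

def centered_kerf_kernel(x, z, k):
    """Evaluate K_k^cc(x, z) for x, z in [0,1]^d by direct summation."""
    d = len(x)
    total = 0.0
    for ks in product(range(k + 1), repeat=d):    # all tuples (k_1, ..., k_d)
        if sum(ks) != k:
            continue
        coef = math.factorial(k)                  # multinomial coefficient k!/(k_1!...k_d!)
        for kj in ks:
            coef //= math.factorial(kj)
        # indicator that x_j and z_j share the same dyadic interval of length 2^{-k_j}
        same = all(math.ceil(2 ** kj * xj) == math.ceil(2 ** kj * zj)
                   for kj, xj, zj in zip(ks, x, z))
        if same:
            total += coef * (1.0 / d) ** k
    return total

# Example: two nearby points in [0,1]^2, level k = 3
print(centered_kerf_kernel([0.3, 0.7], [0.32, 0.68], 3))
</syntaxhighlight>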
=== Properties ===

==== Relation between KeRF and random forest ====
Predictions given by KeRF and random forests are close if the number of points in each cell is controlled:

<blockquote>Assume that there exist sequences <math> (a_n),(b_n) </math> such that, almost surely,
<math display="block"> a_n\leq N_n(\mathbf{x},\Theta)\leq b_n \text{ and } a_n\leq \frac 1 M \sum_{m=1}^M N_n (\mathbf{x},\Theta_m)\leq b_n. </math>
Then, almost surely,
<math display="block">|m_{M,n}(\mathbf{x}) - \tilde{m}_{M,n}(\mathbf{x})| \le\frac{b_n-a_n}{a_n} \tilde{m}_{M,n}(\mathbf{x}). </math></blockquote>

==== Relation between infinite KeRF and infinite random forest ====
When the number of trees <math>M</math> goes to infinity, we obtain the infinite random forest and the infinite KeRF. Their estimates are close if the number of observations in each cell is bounded:

<blockquote>Assume that there exist sequences <math>(\varepsilon_n), (a_n),(b_n)</math> such that, almost surely,
* <math>\operatorname{E}[N_n(\mathbf{x},\Theta)] \ge 1,</math>
* <math>\operatorname{P}[a_n\le N_n(\mathbf{x},\Theta) \le b_n\mid \mathcal{D}_n] \ge 1-\varepsilon_n/2,</math>
* <math>\operatorname{P}[a_n\le \operatorname{E}_\Theta [N_n(\mathbf{x},\Theta)] \le b_n\mid \mathcal{D}_n] \ge 1-\varepsilon_n/2.</math>
Then, almost surely,
<math display="block"> |m_{\infty,n}(\mathbf{x})-\tilde{m}_{\infty,n}(\mathbf{x})| \le \frac{b_n-a_n}{a_n}\tilde{m}_{\infty,n}(\mathbf{x}) + n \varepsilon_n \left( \max_{1\le i\le n} Y_i \right).</math></blockquote>

=== Consistency results ===
Assume that <math>Y = m(\mathbf{X}) + \varepsilon</math>, where <math>\varepsilon</math> is a centered Gaussian noise, independent of <math>\mathbf{X}</math>, with finite variance <math>\sigma^2<\infty</math>. Moreover, <math>\mathbf{X}</math> is uniformly distributed on <math>[0,1]^d</math> and <math>m</math> is [[Lipschitz_continuity|Lipschitz]]. Scornet<ref name="scornet2015random"/> proved upper bounds on the rates of consistency for centered KeRF and uniform KeRF.

==== Consistency of centered KeRF ====
Provided <math>k\rightarrow\infty</math> and <math>n/2^k\rightarrow\infty</math>, there exists a constant <math>C_1>0</math> such that, for all <math>n</math>,
<math display="block"> \operatorname{E}[\tilde{m}_n^{cc}(\mathbf{X}) - m(\mathbf{X})]^2 \le C_1 n^{-1/(3+d\log 2)}(\log n)^2.</math>

==== Consistency of uniform KeRF ====
Provided <math>k\rightarrow\infty</math> and <math>n/2^k\rightarrow\infty</math>, there exists a constant <math>C>0</math> such that, for all <math>n</math>,
<math display="block">\operatorname{E}[\tilde{m}_n^{uf}(\mathbf{X})-m(\mathbf{X})]^2\le C n^{-2/(6+3d\log2)}(\log n)^2.</math>
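As a purely arithmetic illustration of these bounds (not part of the source results), the exponents of <math>n</math> in the two rates can be compared for a few dimensions; both deteriorate as <math>d</math> grows.

<syntaxhighlight lang="python">
import math

def centered_rate_exponent(d):
    """Exponent a in the centered-KeRF bound C_1 n^{-a} (log n)^2, a = 1/(3 + d log 2)."""
    return 1.0 / (3.0 + d * math.log(2.0))

def uniform_rate_exponent(d):
    """Exponent a in the uniform-KeRF bound C n^{-a} (log n)^2, a = 2/(6 + 3 d log 2)."""
    return 2.0 / (6.0 + 3.0 * d * math.log(2.0))

for d in (1, 5, 10, 20):
    print(d, round(centered_rate_exponent(d), 4), round(uniform_rate_exponent(d), 4))
</syntaxhighlight>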