==== Model-free randomization inference ====
Model-free techniques provide a complement to model-based methods, which employ reductionist strategies of reality-simplification. The former combine, evolve, ensemble and train algorithms that dynamically adapt to the contextual affinities of a process and learn the intrinsic characteristics of the observations.<ref name="Dinov Palanimalai Khare Christou 2018">{{cite journal |last1=Dinov |first1=Ivo |last2=Palanimalai |first2=Selvam |last3=Khare |first3=Ashwini |last4=Christou |first4=Nicolas |date=2018 |title=Randomization-based statistical inference: A resampling and simulation infrastructure |journal=Teaching Statistics |volume=40 |issue=2 |pages=64–73 |doi=10.1111/test.12156 |pmid=30270947 |pmc=6155997}}</ref><ref name="Tang model-based Model-Free 2019">{{cite journal |last1=Tang |first1=Ming |last2=Gao |first2=Chao |last3=Goutman |first3=Stephen |last4=Kalinin |first4=Alexandr |last5=Mukherjee |first5=Bhramar |last6=Guan |first6=Yuanfang |last7=Dinov |first7=Ivo |date=2019 |title=Model-Based and Model-Free Techniques for Amyotrophic Lateral Sclerosis Diagnostic Prediction and Patient Clustering |journal=Neuroinformatics |volume=17 |issue=3 |pages=407–421 |doi=10.1007/s12021-018-9406-9 |pmid=30460455 |pmc=6527505}}</ref>

For example, model-free simple linear regression is based either on:
* a ''random design'', where the pairs of observations <math>(X_1,Y_1), (X_2,Y_2), \cdots , (X_n,Y_n)</math> are independent and identically distributed (iid),
* or a ''deterministic design'', where the variables <math>X_1, X_2, \cdots, X_n</math> are deterministic, but the corresponding response variables <math>Y_1,Y_2, \cdots, Y_n</math> are random and independent with a common conditional distribution, i.e., <math>P\left (Y_j \leq y \mid X_j =x\right ) = D_x(y)</math>, which is independent of the index <math>j</math>.

In either case, model-free randomization inference for features of the common conditional distribution <math>D_x(\cdot)</math> relies on some regularity conditions, e.g. functional smoothness. For instance, the population feature ''conditional mean'', <math>\mu(x)=E(Y \mid X = x)</math>, can be consistently estimated via local averaging or local polynomial fitting, under the assumption that <math>\mu(x)</math> is smooth. Also, relying on asymptotic normality or resampling, we can construct confidence intervals for the population feature, in this case the ''conditional mean'', <math>\mu(x)</math>.<ref name="Politis Model-Free Inference 2019">{{cite journal |last1=Politis |first1=D.N. |date=2019 |title=Model-free inference in statistics: how and why |journal=IMS Bulletin |volume=48 |url=http://bulletin.imstat.org/2015/11/model-free-inference-in-statistics-how-and-why/}}</ref>
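The following is a minimal illustrative sketch, not taken from the cited sources: it estimates the conditional mean <math>\mu(x_0)</math> by local averaging (a Gaussian-kernel weighted mean) on simulated iid pairs from a random design, and forms an approximate confidence interval by bootstrap resampling of the pairs. The kernel, the fixed bandwidth <code>h</code>, the simulated data and the sample sizes are illustrative assumptions.

<syntaxhighlight lang="python">
# Sketch: local-averaging estimate of mu(x0) = E(Y | X = x0) with a
# bootstrap percentile confidence interval (illustrative choices throughout).
import numpy as np

rng = np.random.default_rng(0)

# Simulated iid pairs (X_i, Y_i) from a random design: Y = mu(X) + noise.
n = 300
X = rng.uniform(0, 1, n)
Y = np.sin(2 * np.pi * X) + rng.normal(scale=0.3, size=n)

def local_average(x0, X, Y, h=0.1):
    """Estimate mu(x0) by a Gaussian-kernel weighted average of the Y's."""
    w = np.exp(-0.5 * ((X - x0) / h) ** 2)
    return np.sum(w * Y) / np.sum(w)

x0 = 0.25
mu_hat = local_average(x0, X, Y)

# Resampling the pairs with replacement gives an approximate 95% percentile
# interval for the conditional mean at x0.
boot = np.array([
    local_average(x0, X[idx], Y[idx])
    for idx in (rng.integers(0, n, n) for _ in range(1000))
])
lo, hi = np.quantile(boot, [0.025, 0.975])
print(f"mu_hat({x0}) = {mu_hat:.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
</syntaxhighlight>

The bandwidth <code>h</code> controls the degree of smoothing; in practice it would be chosen by cross-validation or a plug-in rule rather than fixed in advance.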