==Linear discriminant in high dimensions==
Geometric anomalies in higher dimensions lead to the well-known [[curse of dimensionality]]. Nevertheless, proper utilization of [[concentration of measure]] phenomena can make computation easier.<ref>Kainen P.C. (1997) [https://web.archive.org/web/20190226172352/http://pdfs.semanticscholar.org/708f/8e0a95ba5977072651c0681f3c7b8f09eca3.pdf Utilizing geometric anomalies of high dimension: When complexity makes computation easier]. In: Kárný M., Warwick K. (eds) Computer Intensive Methods in Control and Signal Processing: The Curse of Dimensionality, Springer, 1997, pp. 282–294.</ref> An important case of these [[Curse of dimensionality#Blessing of dimensionality|''blessing of dimensionality'']] phenomena was highlighted by Donoho and Tanner: if a sample is essentially high-dimensional, then each point can be separated from the rest of the sample by a linear inequality, with high probability, even for samples whose size grows exponentially with the dimension.<ref>Donoho, D., Tanner, J. (2009) [https://arxiv.org/abs/0906.2530 Observed universality of phase transitions in high-dimensional geometry, with implications for modern data analysis and signal processing], Phil. Trans. R. Soc. A 367, 4273–4293.</ref> These linear inequalities can be selected in the standard (Fisher's) form of the linear discriminant for a rich family of probability distributions.<ref>{{cite journal |last1= Gorban| first1= Alexander N.|last2= Golubkov |first2 = Alexander |last3= Grechuck|first3 = Bogdan |last4= Mirkes|first4 = Evgeny M.|last5= Tyukin |first5 = Ivan Y. | year= 2018 |title= Correction of AI systems by linear discriminants: Probabilistic foundations|journal= Information Sciences |volume=466|pages= 303–322|doi= 10.1016/j.ins.2018.07.040| arxiv= 1811.05321| s2cid= 52876539}}</ref> In particular, such theorems have been proven for [[Logarithmically concave measure|log-concave]] distributions, including the [[Multivariate normal distribution|multidimensional normal distribution]] (the proof relies on concentration inequalities for log-concave measures<ref>Guédon, O., Milman, E. (2011) [https://arxiv.org/abs/1011.0943 Interpolating thin-shell and sharp large-deviation estimates for isotropic log-concave measures], Geom. Funct. Anal. 21 (5), 1043–1068.</ref>), and for product measures on a multidimensional cube (this is proven using [[Talagrand's concentration inequality]] for product probability spaces). Data separability by classical linear discriminants simplifies the problem of error correction for [[artificial intelligence]] systems in high dimensions.<ref name=GMT2019>{{cite journal |last1= Gorban|first1= Alexander N.|last2= Makarov|first2= Valeri A.|last3= Tyukin |first3= Ivan Y.|date= July 2019|title= The unreasonable effectiveness of small neural ensembles in high-dimensional brain|journal= Physics of Life Reviews|volume= 29 |pages= 55–88|doi= 10.1016/j.plrev.2018.09.005|doi-access=free|arxiv= 1809.07656| pmid= 30366739|bibcode= 2019PhLRv..29...55G}}</ref>
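
The separability phenomenon can be illustrated numerically. The sketch below (illustrative only, not taken from the cited works) draws ''n'' points from a standard multivariate normal distribution in dimension ''d'' and reports the fraction of points <math>x</math> satisfying the simplified Fisher-type separability condition <math>\langle x, y\rangle < \langle x, x\rangle</math> for every other sample point <math>y</math>; the sample sizes and dimensions are arbitrary choices, and the threshold used here is a simplified variant of the α-threshold condition studied in the cited work.

<syntaxhighlight lang="python">
# Illustrative sketch (not from the cited references): empirical check of
# Fisher-type separability of each point from the rest of a high-dimensional
# i.i.d. standard normal sample. Sample sizes and dimensions are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

def fraction_fisher_separable(n, d):
    """Fraction of sample points x with <x, y> < <x, x> for every other point y."""
    X = rng.standard_normal((n, d))   # n i.i.d. standard normal points in R^d
    G = X @ X.T                       # Gram matrix of inner products <x_i, x_j>
    sq_norms = np.diag(G).copy()      # squared norms <x_i, x_i>
    np.fill_diagonal(G, -np.inf)      # exclude the comparison of a point with itself
    return float(np.mean(G.max(axis=1) < sq_norms))

for d in (2, 10, 100, 1000):
    print(f"d = {d:4d}: separable fraction = {fraction_fisher_separable(1000, d):.3f}")
</syntaxhighlight>

For small ''d'' only a minority of points are separable from the rest of the sample in this way, while as ''d'' grows the fraction approaches 1, in line with the qualitative statement above.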