{{short description|Set of methods for supervised statistical learning}}
{{Machine learning|Supervised learning}}
In [[machine learning]], '''support vector machines''' ('''SVMs''', also '''support vector networks'''<ref name="CorinnaCortes" />) are [[supervised learning|supervised]] [[Maximum-margin hyperplane|max-margin]] models with associated learning [[algorithm]]s that analyze data for [[Statistical classification|classification]] and [[regression analysis]]. Developed at [[AT&T Bell Laboratories]],<ref name="CorinnaCortes" /><ref>{{Cite book |last=Vapnik |first=Vladimir N. |date=1997 |editor-last=Gerstner |editor-first=Wulfram |editor2-last=Germond |editor2-first=Alain |editor3-last=Hasler |editor3-first=Martin |editor4-last=Nicoud |editor4-first=Jean-Daniel |chapter=The Support Vector method |chapter-url=https://link.springer.com/chapter/10.1007/BFb0020166 |title=Artificial Neural Networks – ICANN'97 |series=Lecture Notes in Computer Science |volume=1327 |language=en |location=Berlin, Heidelberg |publisher=Springer |pages=261–271 |doi=10.1007/BFb0020166 |isbn=978-3-540-69620-9}}</ref> SVMs are among the most studied models, grounded in the statistical learning framework of [[VC theory]] proposed by [[Vladimir Vapnik|Vapnik]] (1982, 1995) and [[Alexey Chervonenkis|Chervonenkis]] (1974).

In addition to performing [[linear classifier|linear classification]], SVMs can efficiently perform non-linear classification using the [[Kernel method#Mathematics: the kernel trick|''kernel trick'']]: the data are represented only through pairwise similarity comparisons between the original data points, computed by a kernel function that corresponds to an inner product in a higher-dimensional [[feature space]]. SVMs thus implicitly map their inputs into high-dimensional feature spaces, where linear classification can be performed, without ever computing the coordinates in that space explicitly.<ref>{{cite book |last1=Awad |first1=Mariette |last2=Khanna |first2=Rahul |title=Efficient Learning Machines |chapter=Support Vector Machines for Classification |pages=39–66 |doi=10.1007/978-1-4302-5990-9_3 |doi-access=free |publisher=Apress |isbn=978-1-4302-5990-9 |year=2015}}</ref> Being max-margin models, SVMs are resilient to noisy data (e.g., misclassified examples). SVMs can also be used for [[Regression analysis|regression]] tasks, where the objective becomes <math>\epsilon</math>-insensitive: training errors smaller than <math>\epsilon</math> incur no penalty.

The support vector clustering<ref name="HavaSiegelmann">{{cite journal |last1=Ben-Hur |first1=Asa |last2=Horn |first2=David |last3=Siegelmann |first3=Hava |last4=Vapnik |first4=Vladimir N. |year=2001 |title=Support vector clustering |journal=Journal of Machine Learning Research |volume=2 |pages=125–137}}</ref> algorithm, created by [[Hava Siegelmann]] and [[Vladimir Vapnik]], applies the statistics of support vectors, developed in the support vector machines algorithm, to categorize unlabeled data.{{Citation needed|date=March 2018}} Such data sets require [[unsupervised learning]] approaches, which attempt to find a natural [[Cluster analysis|clustering of the data]] into groups and then map new data according to these clusters.

The popularity of SVMs is likely due to their amenability to theoretical analysis and their flexibility in being applied to a wide variety of tasks, including [[structured prediction]] problems. It is not clear, however, that SVMs have better predictive performance than other linear models, such as [[logistic regression]] and [[linear regression]].<ref>{{cite journal |last1=Huang |first1=H. H. |last2=Xu |first2=T. |last3=Yang |first3=J. |title=Comparing logistic regression, support vector machines, and permanental classification methods in predicting hypertension |journal=BMC Proceedings |volume=8 |issue=Suppl 1 |pages=S96 |year=2014 |doi=10.1186/1753-6561-8-S1-S96 |doi-access=free |pmid=25519351 |pmc=4143639}}</ref>
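The kernel trick described above can be illustrated in a few lines of code. The following is a minimal sketch using [[scikit-learn]] (an illustrative library choice, not one drawn from the sources cited here): on data that is not linearly separable in its original space, a linear-kernel SVM performs poorly, while an RBF-kernel SVM separates the classes via an implicit non-linear mapping.

<syntaxhighlight lang="python">
# Minimal sketch: kernel trick with scikit-learn (illustrative choice).
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Toy data: two concentric rings, not linearly separable in 2-D.
X, y = make_circles(n_samples=500, noise=0.1, factor=0.3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The RBF kernel implicitly maps the points into a higher-dimensional
# feature space where a separating hyperplane exists; the linear kernel
# works in the original space and cannot separate the rings.
for kernel in ("linear", "rbf"):
    clf = SVC(kernel=kernel, C=1.0).fit(X_train, y_train)
    print(kernel, clf.score(X_test, y_test))
</syntaxhighlight>

Only the kernel argument changes between the two fits; the optimization problem itself is the same, which is the point of the kernel trick.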
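The <math>\epsilon</math>-insensitive regression objective mentioned above can likewise be sketched with scikit-learn's <code>SVR</code> estimator (again an illustrative choice; <code>epsilon</code> is that library's name for <math>\epsilon</math>):

<syntaxhighlight lang="python">
# Minimal sketch: epsilon-insensitive support vector regression.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 5, size=(200, 1)), axis=0)
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=200)

# epsilon defines a tube around the regression function; training
# points whose residual is smaller than epsilon contribute no loss.
reg = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X, y)
print(reg.predict([[2.5]]))
</syntaxhighlight>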