=== Classifiers and statistical learning methods ===

The simplest AI applications can be divided into two types: classifiers (e.g., "if shiny then diamond") and controllers (e.g., "if diamond then pick up"). [[Classifier (mathematics)|Classifiers]]<ref>Statistical learning methods and [[Classifier (mathematics)|classifiers]]: {{Harvtxt|Russell|Norvig|2021|loc=chpt. 20}}</ref> are functions that use [[pattern matching]] to determine the closest match. They can be fine-tuned based on chosen examples using [[supervised learning]]. Each pattern (also called an "[[random variate|observation]]") is labeled with a certain predefined class. All the observations combined with their class labels are known as a [[data set]]. When a new observation is received, it is classified based on previous experience.<ref name="Supervised learning"/> There are many kinds of classifiers in use.<ref>{{Cite book |last1=Ciaramella |first1=Alberto |author-link=Alberto Ciaramella |title=Introduction to Artificial Intelligence: from data analysis to generative AI |last2=Ciaramella |first2=Marco |date=2024 |publisher=Intellisemantic Editions |isbn=978-8-8947-8760-3}}</ref> The [[decision tree]] is the simplest and most widely used symbolic machine learning algorithm.<ref>[[Alternating decision tree|Decision tree]]s: {{Harvtxt|Russell|Norvig|2021|loc=sect. 19.3}}, {{Harvtxt|Domingos|2015|p=88}}</ref> The [[K-nearest neighbor]] algorithm was the most widely used analogical AI until the mid-1990s, when [[Kernel methods]] such as the [[support vector machine]] (SVM) displaced it.<ref>[[Nonparametric statistics|Non-parametric]] learning models such as [[K-nearest neighbor]] and [[support vector machines]]: {{Harvtxt|Russell|Norvig|2021|loc=sect. 19.7}}, {{Harvtxt|Domingos|2015|p=187}} (k-nearest neighbor), {{Harvtxt|Domingos|2015|p=88}} (kernel methods)</ref> The [[naive Bayes classifier]] is reportedly the "most widely used learner"{{Sfnp|Domingos|2015|p=152}} at Google, due in part to its scalability.<ref>[[Naive Bayes classifier]]: {{Harvtxt|Russell|Norvig|2021|loc=sect. 12.6}}, {{Harvtxt|Domingos|2015|p=152}}</ref> [[Artificial neural network|Neural networks]] are also used as classifiers.<ref name="Neural networks"/>
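The classification process described above can be illustrated with a minimal sketch of the k-nearest-neighbor algorithm with k = 1: each observation in the data set is a feature vector paired with a predefined class label, and a new observation receives the label of its closest match. The data set and labels here are hypothetical toy values, not drawn from the cited sources:

```python
import math

def nearest_neighbor(dataset, query):
    """Classify `query` as the label of the closest labeled observation (1-NN)."""
    # Find the (features, label) pair whose features minimize Euclidean distance.
    features, label = min(dataset, key=lambda obs: math.dist(obs[0], query))
    return label

# Toy data set: observations (feature vectors) combined with their class labels.
data = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"), ((5.0, 5.0), "B")]

print(nearest_neighbor(data, (0.9, 1.1)))  # → A
print(nearest_neighbor(data, (4.8, 5.2)))  # → B
```

A real k-NN implementation would vote among the k closest neighbors rather than copy a single one; kernel methods such as the SVM instead learn a decision boundary from the labeled examples.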