{{short description|Machine learning algorithm}} {{about|decision trees in machine learning|the use of the term in decision analysis|Decision tree}} {{Machine learning|Supervised learning}} '''Decision tree learning''' is a [[supervised learning]] approach used in [[statistics]], [[data mining]] and [[machine learning]]. In this formalism, a classification or regression [[decision tree]] is used as a [[predictive model]] to draw conclusions about a set of observations. Tree models where the target variable can take a discrete set of values are called '''[[Statistical classification|classification]] [[decision tree|trees]]'''; in these tree structures, [[leaf node|leaves]] represent class labels and branches represent [[Logical conjunction|conjunction]]s of features that lead to those class labels. Decision trees where the target variable can take continuous values (typically [[real numbers]]) are called '''[[regression analysis|regression]] [[decision tree|trees]]'''. More generally, the concept of regression tree can be extended to any kind of object equipped with pairwise dissimilarities such as categorical sequences.<ref name=":1">{{Cite journal |last1=Studer |first1=Matthias |last2=Ritschard |first2=Gilbert |last3=Gabadinho |first3=Alexis |last4=Müller |first4=Nicolas S.
|date=2011 |title=Discrepancy Analysis of State Sequences |url=http://journals.sagepub.com/doi/10.1177/0049124111415372 |journal=Sociological Methods & Research |language=en |volume=40 |issue=3 |pages=471–510 |doi=10.1177/0049124111415372 |s2cid=13307797 |issn=0049-1241}}</ref> Decision trees are among the most popular machine learning algorithms because of their intelligibility and simplicity: they produce models that are easy to interpret and visualize, even for users without a statistical background.<ref>{{Cite journal|last1=Wu|first1=Xindong|last2=Kumar|first2=Vipin|last3=Ross Quinlan|first3=J.|last4=Ghosh|first4=Joydeep|last5=Yang|first5=Qiang|last6=Motoda|first6=Hiroshi|last7=McLachlan|first7=Geoffrey J.|last8=Ng|first8=Angus|last9=Liu|first9=Bing|last10=Yu|first10=Philip S.|last11=Zhou|first11=Zhi-Hua|date=2008-01-01|title=Top 10 algorithms in data mining|journal=Knowledge and Information Systems|language=en|volume=14|issue=1|pages=1–37|doi=10.1007/s10115-007-0114-2|s2cid=2367747|issn=0219-3116|hdl=10983/15329|hdl-access=free}}</ref> In decision analysis, a decision tree can be used to visually and explicitly represent decisions and [[decision making]]. In [[data mining]], a decision tree describes data (but the resulting classification tree can be an input for decision making).
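The structure described above — leaves holding class labels, with each root-to-leaf path forming a conjunction of feature tests — can be sketched in plain Python. This is a minimal illustration, not an algorithm from the article: the tree below is hand-built (the features and labels are invented for the example), whereas real decision tree learners induce such a tree from data.

```python
# A hand-built classification tree as nested dicts. Internal nodes name a
# feature and map each feature value to a subtree; leaves are class labels.
# The "outlook"/"humidity"/"windy" features are illustrative assumptions.
tree = {
    "feature": "outlook",
    "branches": {
        "sunny": {"feature": "humidity",
                  "branches": {"high": "no", "normal": "yes"}},
        "overcast": "yes",
        "rainy": {"feature": "windy",
                  "branches": {True: "no", False: "yes"}},
    },
}

def classify(node, sample):
    """Follow branches matching the sample's feature values until a leaf
    (a plain class-label string) is reached; the path taken is a
    conjunction of feature tests."""
    while isinstance(node, dict):
        node = node["branches"][sample[node["feature"]]]
    return node

print(classify(tree, {"outlook": "sunny", "humidity": "normal"}))  # yes
print(classify(tree, {"outlook": "rainy", "windy": True}))         # no
```

Note how the prediction for the first sample is reached via the conjunction "outlook = sunny AND humidity = normal" — exactly the branch-as-conjunction reading of a classification tree.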