Symbolic artificial intelligence
==== Machine learning ====
Symbolic machine learning approaches were investigated to address the [[knowledge acquisition]] bottleneck. One of the earliest is [[Dendral#Meta-Dendral|Meta-DENDRAL]]. Meta-DENDRAL used a generate-and-test technique to generate plausible rule hypotheses to test against spectra. Domain and task knowledge reduced the number of candidates tested to a manageable size. [[Ed Feigenbaum|Feigenbaum]] described Meta-DENDRAL as

{{Blockquote |text=...the culmination of my dream of the early to mid-1960s having to do with theory formation. The conception was that you had a problem solver like DENDRAL that took some inputs and produced an output. In doing so, it used layers of knowledge to steer and prune the search. That knowledge got in there because we interviewed people. But how did the people get the knowledge? By looking at thousands of spectra. So we wanted a program that would look at thousands of spectra and infer the knowledge of mass spectrometry that DENDRAL could use to solve individual hypothesis formation problems. We did it. We were even able to publish new knowledge of mass spectrometry in the ''[[Journal of the American Chemical Society]]'', giving credit only in a footnote that a program, Meta-DENDRAL, actually did it. We were able to do something that had been a dream: to have a computer program come up with a new and publishable piece of science.<ref name="Feignebaum Interview"/>}}

In contrast to the knowledge-intensive approach of Meta-DENDRAL, [[Ross Quinlan]] invented a domain-independent approach to statistical classification, [[decision tree learning]], starting first with [[ID3 algorithm|ID3]]<ref>{{harvc|in1=Michalski|in2=Carbonell|in3=Mitchell|year=1983|c=Chapter 15: Learning Efficient Classification Procedures and their Application to Chess End Games |first=J. Ross |last=Quinlan}}</ref> and then later extending its capabilities to [[C4.5]].<ref>{{Cite book| edition = 1st | publisher = Morgan Kaufmann| isbn = 978-1-55860-238-0| last = Quinlan| first = J. Ross| title = C4.5: Programs for Machine Learning| location = San Mateo, Calif| date = 1992-10-15}}</ref> The resulting decision trees are [[glass box]] classifiers, with human-interpretable classification rules.

Advances were made in understanding machine learning theory, too. [[Tom M. Mitchell|Tom Mitchell]] introduced [[version space learning]], which describes learning as a search through a space of hypotheses, with upper, more general, and lower, more specific, boundaries encompassing all viable hypotheses consistent with the examples seen so far.<ref>{{harvc|in1=Michalski|in2=Carbonell|in3=Mitchell|year=1983 |c=Chapter 6: Learning by Experimentation: Acquiring and Refining Problem-Solving Heuristics |first1=Tom M. |last1=Mitchell |first2=Paul E. |last2=Utgoff |first3=Ranan |last3=Banerji}}</ref> More formally, [[Leslie Valiant|Valiant]] introduced [[Probably approximately correct learning|Probably Approximately Correct Learning]] (PAC Learning), a framework for the mathematical analysis of machine learning.<ref>{{Cite journal| doi = 10.1145/1968.1972| issn = 0001-0782| volume = 27| issue = 11| pages = 1134–1142| last = Valiant| first = L. G.| title = A theory of the learnable| journal = Communications of the ACM| date = 1984-11-05| s2cid = 12837541| doi-access = free}}</ref>

Symbolic machine learning encompassed more than learning by example. For example, [[John Robert Anderson (psychologist)|John Anderson]] provided a [[cognitive model]] of human learning where skill practice results in a compilation of rules from a declarative format to a procedural format with his [[ACT-R]] [[cognitive architecture]]. For example, a student might learn to apply "Supplementary angles are two angles whose measures sum to 180 degrees" as several different procedural rules.
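Anderson's idea of compiling declarative knowledge into procedural rules can be sketched schematically (a hypothetical Python illustration, not ACT-R's actual production-rule machinery):

```python
# Hypothetical sketch of "knowledge compilation": a single declarative
# fact about supplementary angles is compiled into direction-specific
# procedural rules that can be applied without re-interpreting the fact.

def supplementary(x, y):
    """Declarative form: merely states the constraint X + Y == 180."""
    return x + y == 180

def other_angle(x):
    """Compiled procedural rule: given one angle, produce the other."""
    return 180 - x

def check_pair(x, y):
    """Compiled procedural rule: verify that a pair is supplementary."""
    return x + y == 180

print(other_angle(60))      # 120
print(check_pair(60, 120))  # True
```

The point of the sketch is that the same declarative statement yields several specialized rules, each usable in one direction of inference.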
For example, one rule might say that if X and Y are supplementary and you know X, then Y will be 180 - X. He called his approach "knowledge compilation". [[ACT-R]] has been used successfully to model aspects of human cognition, such as learning and retention. ACT-R is also used in [[intelligent tutoring systems]], called [[cognitive tutors]], to successfully teach geometry, computer programming, and algebra to school children.<ref name="pump">{{Cite journal| volume = 8| pages = 30–43| last1 = Koedinger| first1 = K. R.| last2 = Anderson| first2 = J. R.| last3 = Hadley| first3 = W. H.| last4 = Mark| first4 = M. A.| last5 = others| title = Intelligent tutoring goes to school in the big city| journal = International Journal of Artificial Intelligence in Education | accessdate = 2012-08-18| date = 1997| url = http://telearn.archives-ouvertes.fr/hal-00197383/}}</ref>

Inductive logic programming was another approach to learning that allowed logic programs to be synthesized from input-output examples. For example, [[Ehud Shapiro]]'s MIS (Model Inference System) could synthesize Prolog programs from examples.<ref>{{Cite conference| conference = IJCAI| volume = 2| pages = 1064| last = Shapiro| first = Ehud Y| title = The Model Inference System| book-title = Proceedings of the 7th international joint conference on Artificial intelligence| date = 1981}}</ref> [[John R. Koza]] applied [[genetic algorithms]] to [[program synthesis]] to create [[genetic programming]], which he used to synthesize LISP programs. Finally, [[Zohar Manna]] and [[Richard Waldinger]] provided a more general approach to [[program synthesis]] that synthesizes a [[functional programming|functional program]] in the course of proving its specifications to be correct.<ref>{{Cite journal| doi = 10.1145/357084.357090| volume = 2| pages = 90–121| last1 = Manna| first1 = Zohar| last2 = Waldinger| first2 = Richard| title = A Deductive Approach to Program Synthesis| journal = ACM Trans. Program. Lang. Syst.| date = 1980-01-01| issue = 1| s2cid = 14770735}}</ref>

As an alternative to logic, [[Roger Schank]] introduced case-based reasoning (CBR). The CBR approach outlined in his book, ''Dynamic Memory'',<ref name="Schank">{{Cite book| publisher = Cambridge University Press| isbn = 978-0-521-27029-8| last = Schank| first = Roger C.| title = Dynamic Memory: A Theory of Reminding and Learning in Computers and People| location = Cambridge; New York| date = 1983-01-28}}</ref> focuses first on remembering key problem-solving cases for future use and generalizing them where appropriate. When faced with a new problem, CBR retrieves the most similar previous case and adapts it to the specifics of the current problem.<ref>{{Cite book| publisher = Academic Press| isbn = 978-0-12-322060-8| last = Hammond| first = Kristian J.| title = Case-Based Planning: Viewing Planning as a Memory Task| location = Boston| date = 1989-04-11}}</ref> As another alternative to logic, [[genetic algorithms]] and [[genetic programming]] are based on an evolutionary model of learning, where sets of rules are encoded into populations, the rules govern the behavior of individuals, and selection of the fittest prunes out sets of unsuitable rules over many generations.<ref>{{Cite book| edition = 1st | publisher = A Bradford Book| isbn = 978-0-262-11170-6| last = Koza| first = John R.| title = Genetic Programming: On the Programming of Computers by Means of Natural Selection| location = Cambridge, Mass| date = 1992-12-11}}</ref>

Symbolic machine learning was applied to learning concepts, rules, heuristics, and problem-solving. Approaches other than those above include:
# Learning from instruction or advice—i.e., taking human instruction, posed as advice, and determining how to operationalize it in specific situations. For example, in a game of Hearts, learning ''exactly how'' to play a hand to "avoid taking points."<ref>{{harvc|in1=Michalski|in2=Carbonell|in3=Mitchell|year=1983|c=Chapter 12: Machine Transformation of Advice into a Heuristic Search Procedure |first=David Jack |last=Mostow}}</ref>
# Learning from exemplars—improving performance by accepting subject-matter expert (SME) feedback during training. When problem-solving fails, querying the expert to either learn a new exemplar for problem-solving or to learn a new explanation as to exactly why one exemplar is more relevant than another. For example, the program Protos learned to diagnose tinnitus cases by interacting with an audiologist.<ref>{{harvc |in1=Michalski |in2=Carbonell |in3=Mitchell |year=1986 |pp=112-139|c=Chapter 4: Protos: An Exemplar-Based Learning Apprentice |first=Ray |last=Bareiss|first2=Bruce|last2=Porter|first3=Craig|last3=Wier}}</ref>
# Learning by analogy—constructing problem solutions based on similar problems seen in the past, and then modifying their solutions to fit a new situation or domain.<ref>{{harvc |in1=Michalski |in2=Carbonell |in3=Mitchell |year=1983 |pp=137-162|c=Chapter 5: Learning by Analogy: Formulating and Generalizing Plans from Past Experience |first=Jaime |last=Carbonell}}</ref><ref>{{harvc |in1=Michalski |in2=Carbonell |in3=Mitchell |year=1986 |pp=371-392|c=Chapter 14: Derivational Analogy: A Theory of Reconstructive Problem Solving and Expertise Acquisition |first=Jaime |last=Carbonell}}</ref>
# Apprentice learning systems—learning novel solutions to problems by observing human problem-solving. Domain knowledge explains why novel solutions are correct and how the solution can be generalized. LEAP learned how to design VLSI circuits by observing human designers.<ref>{{harvc|in1=Kodratoff|in2=Michalski|year=1990|pp=271-289|c=Chapter 10: LEAP: A Learning Apprentice for VLSI Design |first=Tom |last=Mitchell|first2=Sridhar |last2=Mahadevan|first3=Louis|last3=Steinberg}}</ref>
# Learning by discovery—i.e., creating tasks to carry out experiments and then learning from the results. [[Douglas Lenat|Doug Lenat]]'s [[Eurisko]], for example, learned heuristics to beat human players at the [[Traveller (role-playing game)|Traveller]] role-playing game for two years in a row.<ref>{{harvc|in1=Michalski|in2=Carbonell|in3=Mitchell|year=1983|pp=243-306|c=Chapter 9: The Role of Heuristics in Learning by Discovery: Three Case Studies|first=Douglas |last=Lenat}}</ref>
# Learning macro-operators—i.e., searching for useful macro-operators to be learned from sequences of basic problem-solving actions. Good macro-operators simplify problem-solving by allowing problems to be solved at a more abstract level.<ref>{{Cite book| publisher = Pitman Publishing| isbn = 0-273-08690-1| last = Korf| first = Richard E.| title = Learning to Solve Problems by Searching for Macro-Operators| series = Research Notes in Artificial Intelligence| date = 1985}}</ref>
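Of the techniques surveyed in this section, decision-tree induction is especially compact to illustrate. The following is a minimal sketch of the information-gain criterion at the heart of ID3, using an invented toy data set (the attribute names and examples are illustrative only, not from Quinlan's work):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Reduction in label entropy obtained by splitting on one attribute."""
    n = len(labels)
    remainder = 0.0
    for value in set(row[attr] for row in rows):
        subset = [lab for row, lab in zip(rows, labels) if row[attr] == value]
        remainder += len(subset) / n * entropy(subset)
    return entropy(labels) - remainder

# Invented toy data: decide whether to play outside.
rows = [
    {"outlook": "sunny", "wind": "weak"},
    {"outlook": "sunny", "wind": "strong"},
    {"outlook": "rain",  "wind": "weak"},
    {"outlook": "rain",  "wind": "strong"},
]
labels = ["yes", "yes", "no", "no"]

# ID3 greedily picks the attribute with the highest information gain;
# recursing on each subset yields a glass-box tree readable as rules.
best = max(rows[0], key=lambda a: information_gain(rows, labels, a))
print(best)  # outlook
```

Here "outlook" perfectly separates the labels (gain 1.0 bit) while "wind" carries no information (gain 0.0), so ID3 would split on "outlook" first; the resulting tree can be read directly as human-interpretable classification rules.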