== Probabilistic inductive logic programming ==
Probabilistic inductive logic programming adapts the setting of inductive logic programming to learning [[Probabilistic logic programming|probabilistic logic programs]]. It can be considered as a form of [[statistical relational learning]] within the formalism of probabilistic logic programming.<ref>{{Citation |last1=De Raedt |first1=Luc |last2=Kersting |first2=Kristian |title=Probabilistic Inductive Logic Programming |date=2008 |url=http://dx.doi.org/10.1007/978-3-540-78652-8_1 |pages=1–27 |access-date=2023-12-09 |place=Berlin, Heidelberg |publisher=Springer Berlin Heidelberg |isbn=978-3-540-78651-1 |doi=10.1007/978-3-540-78652-8_1 }}</ref><ref name="pilp">{{Cite journal |last1=Riguzzi |first1=Fabrizio |last2=Bellodi |first2=Elena |last3=Zese |first3=Riccardo |date=2014-09-18 |title=A History of Probabilistic Inductive Logic Programming |journal=Frontiers in Robotics and AI |volume=1 |doi=10.3389/frobt.2014.00006 |issn=2296-9144 |doi-access=free }}</ref>

Given
# background knowledge as a probabilistic logic program {{Mvar|B}}, and
# a set of positive and negative examples <math display="inline">E^{+}</math> and <math display="inline">E^{-}</math>,
the goal of probabilistic inductive logic programming is to find a probabilistic logic program <math display="inline">H</math> such that the probability of the positive examples according to <math display="inline">{H \cup B}</math> is maximised and the probability of the negative examples is minimised.<ref name="pilp" />

This problem has two variants: parameter learning and structure learning. In the former, one is given the structure (the clauses) of {{Mvar|H}} and the goal is to infer the probability annotations of the given clauses, while in the latter the goal is to infer both the structure and the probability parameters of {{Mvar|H}}. Just as in classical inductive logic programming, the examples can be given either as ground facts or as (partial) interpretations.<ref name="pilp" />
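One common way to make this objective concrete, for independent examples in the learning-from-entailment setting, is as a likelihood maximisation:
<math display="block">H^{*} = \arg\max_{H} \prod_{e \in E^{+}} P(e \mid H \cup B) \prod_{e \in E^{-}} \bigl(1 - P(e \mid H \cup B)\bigr),</math>
where <math display="inline">P(e \mid H \cup B)</math> denotes the probability that the program <math display="inline">H \cup B</math> assigns to example <math display="inline">e</math>.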
=== Parameter Learning ===
Parameter learning for languages following the distribution semantics has been performed by using an [[expectation-maximisation algorithm]] or by [[gradient descent]]. An expectation-maximisation algorithm consists of a cycle in which the steps of expectation and maximisation are repeatedly performed. In the expectation step, the distribution of the hidden variables is computed according to the current values of the probability parameters, while in the maximisation step, the new values of the parameters are computed. Gradient descent methods compute the gradient of the target function and iteratively modify the parameters by moving in the direction of the gradient.<ref name="pilp" />
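As a minimal sketch of this cycle, consider the toy ProbLog-style program <code>p1::b1. p2::b2. win :- b1. win :- b2.</code>, where only the truth value of <code>win</code> is observed and the probabilistic facts <code>b1</code> and <code>b2</code> are the hidden variables. The program and all names below are illustrative assumptions, not the implementation of any system cited in this section.
<syntaxhighlight lang="python">
def em(observations, p1=0.5, p2=0.5, iterations=100):
    """Re-estimate p1, p2 from observed truth values of `win`."""
    for _ in range(iterations):
        # E-step: expected truth value of each hidden fact per example.
        e1, e2 = [], []
        for win in observations:
            if win:
                p_win = 1 - (1 - p1) * (1 - p2)   # noisy-OR of the two clauses
                e1.append(p1 / p_win)              # P(b1 | win = true)
                e2.append(p2 / p_win)              # P(b2 | win = true)
            else:
                # win = false entails that both facts are false.
                e1.append(0.0)
                e2.append(0.0)
        # M-step: new parameters are the average expected counts.
        p1 = sum(e1) / len(e1)
        p2 = sum(e2) / len(e2)
    return p1, p2

# Example: 7 of 10 observations have win = true; the estimates converge
# so that 1 - (1 - p1)(1 - p2) matches the observed frequency 0.7.
print(em([True] * 7 + [False] * 3))
</syntaxhighlight>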
=== Structure Learning ===
Structure learning was pioneered by [[Daphne Koller]] and Avi Pfeffer in 1997,<ref>{{Cite conference |last1=Koller |first1=Daphne |last2=Pfeffer |first2=Avi |date=August 1997 |title=Learning probabilities for noisy first-order rules |url=http://www.robotics.stanford.edu/~koller/Papers/Koller+Pfeffer:IJCAI97.pdf |conference=[[IJCAI]]}}</ref> where the authors learn the structure of [[First-order logic|first-order]] rules with associated probabilistic uncertainty parameters. Their approach involves generating the underlying [[graphical model]] in a preliminary step and then applying expectation-maximisation.<ref name="pilp" />

In 2008, [[Luc De Raedt|De Raedt]] et al. presented an algorithm for performing [[theory compression]] on [[ProbLog]] programs, where theory compression refers to removing as many clauses as possible from the theory in order to maximise the probability of a given set of positive and negative examples; no new clause can be added to the theory.<ref name="pilp" /><ref>{{Cite journal |last1=De Raedt |first1=L. |last2=Kersting |first2=K. |last3=Kimmig |first3=A. |last4=Revoredo |first4=K. |last5=Toivonen |first5=H. |date=March 2008 |title=Compressing probabilistic Prolog programs |url=http://link.springer.com/10.1007/s10994-007-5030-x |journal=Machine Learning |language=en |volume=70 |issue=2–3 |pages=151–168 |doi=10.1007/s10994-007-5030-x |issn=0885-6125}}</ref> In the same year, Wannes Meert et al. introduced a method for learning the parameters and structure of [[Ground term|ground]] probabilistic logic programs by considering the [[Bayesian network]]s equivalent to them and applying techniques for learning Bayesian networks.<ref>{{Citation |last1=Blockeel |first1=Hendrik |last2=Meert |first2=Wannes |title=Towards Learning Non-recursive LPADs by Transforming Them into Bayesian Networks |date=2007 |url=http://dx.doi.org/10.1007/978-3-540-73847-3_16 |work=Inductive Logic Programming |series=Lecture Notes in Computer Science |volume=4455 |pages=94–108 |access-date=2023-12-09 |place=Berlin, Heidelberg |publisher=Springer Berlin Heidelberg |isbn=978-3-540-73846-6 |doi=10.1007/978-3-540-73847-3_16 }}</ref><ref name="pilp" />

ProbFOIL, introduced by De Raedt and Ingo Thon in 2010, combined the inductive logic programming system [[First-order inductive learner|FOIL]] with [[ProbLog]]. Logical rules are learned from probabilistic data in the sense that both the examples themselves and their classifications can be probabilistic. The learned set of rules has to allow one to predict the probability of each example from its description. In this setting, the parameters (the probability values) are fixed and the structure has to be learned.<ref>{{Citation |last1=De Raedt |first1=Luc |last2=Thon |first2=Ingo |title=Probabilistic Rule Learning |date=2011 |url=http://link.springer.com/10.1007/978-3-642-21295-6_9 |work=Inductive Logic Programming |volume=6489 |pages=47–58 |editor-last=Frasconi |editor-first=Paolo |editor2-last=Lisi |editor2-first=Francesca A. |access-date=2023-12-09 |place=Berlin, Heidelberg |publisher=Springer Berlin Heidelberg |doi=10.1007/978-3-642-21295-6_9 |isbn=978-3-642-21294-9 |s2cid=11727522 }}</ref><ref name="pilp" />

In 2011, Elena Bellodi and Fabrizio Riguzzi introduced SLIPCASE, which performs a beam search among probabilistic logic programs by iteratively refining probabilistic theories and optimising the parameters of each theory using expectation-maximisation.<ref>{{Citation |last1=Bellodi |first1=Elena |last2=Riguzzi |first2=Fabrizio |title=Learning the Structure of Probabilistic Logic Programs |date=2012 |url=http://dx.doi.org/10.1007/978-3-642-31951-8_10 |work=Inductive Logic Programming |pages=61–75 |access-date=2023-12-09 |place=Berlin, Heidelberg |publisher=Springer Berlin Heidelberg |isbn=978-3-642-31950-1 |doi=10.1007/978-3-642-31951-8_10 }}</ref> Its extension SLIPCOVER, proposed in 2014, uses bottom clauses generated as in [[Progol]] to guide the refinement process, thus reducing the number of revisions and exploring the search space more effectively. Moreover, SLIPCOVER separates the search for promising clauses from the search for the theory: the space of clauses is explored with a [[beam search]], while the space of theories is searched [[Greedy search|greedily]].<ref>{{Cite journal |last1=Bellodi |first1=Elena |last2=Riguzzi |first2=Fabrizio |date=2014-01-15 |title=Structure learning of probabilistic logic programs by searching the clause space |url=http://dx.doi.org/10.1017/s1471068413000689 |journal=Theory and Practice of Logic Programming |volume=15 |issue=2 |pages=169–212 |doi=10.1017/s1471068413000689 |arxiv=1309.2080 |s2cid=17669522 |issn=1471-0684}}</ref><ref name="pilp" />
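This separation of the two search spaces can be pictured with the following schematic sketch, in the style of SLIPCOVER's two-level search; the helper functions (<code>refinements</code>, <code>score</code>, <code>theory_score</code>) are hypothetical placeholders, not SLIPCOVER's actual code.
<syntaxhighlight lang="python">
def beam_search_clauses(seed_clauses, refinements, score, beam_width=5, steps=10):
    """Explore the clause space, keeping the `beam_width` best refinements."""
    beam = list(seed_clauses)
    promising = []
    for _ in range(steps):
        candidates = [r for c in beam for r in refinements(c)]
        candidates.sort(key=score, reverse=True)
        beam = candidates[:beam_width]
        promising.extend(beam)
    return promising

def greedy_theory_search(clauses, theory_score):
    """Greedily add promising clauses to the theory while the score improves."""
    theory, best = [], theory_score([])
    for clause in sorted(clauses, key=lambda c: -theory_score([c])):
        s = theory_score(theory + [clause])  # e.g. log-likelihood after
                                             # re-estimating parameters with EM
        if s > best:
            theory, best = theory + [clause], s
    return theory
</syntaxhighlight>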