==Inference complexity and approximation algorithms==
In 1990, while working at Stanford University on large bioinformatic applications, Cooper proved that exact inference in Bayesian networks is [[NP-hard]].<ref> {{cite journal | first = Gregory F. | last = Cooper | name-list-style = vanc | title = The Computational Complexity of Probabilistic Inference Using Bayesian Belief Networks | url = https://stat.duke.edu/~sayan/npcomplete.pdf | journal = Artificial Intelligence | volume = 42 | issue = 2–3 | date = 1990 | pages = 393–405 | doi = 10.1016/0004-3702(90)90060-d | s2cid = 43363498 }} </ref> This result prompted research on approximation algorithms with the aim of developing a tractable approximation to probabilistic inference. In 1993, Paul Dagum and [[Michael Luby]] proved two surprising results on the complexity of approximation of probabilistic inference in Bayesian networks.<ref> {{cite journal | vauthors = Dagum P, Luby M | author-link1 = Paul Dagum | author-link2 = Michael Luby | title = Approximating probabilistic inference in Bayesian belief networks is NP-hard | journal = Artificial Intelligence | volume = 60 | issue = 1 | date = 1993 | pages = 141–153 | doi = 10.1016/0004-3702(93)90036-b | citeseerx = 10.1.1.333.1586 }} </ref> First, they proved that no tractable [[deterministic algorithm]] can approximate probabilistic inference to within an [[absolute error]] ''ε'' < 1/2. Second, they proved that no tractable [[randomized algorithm]] can approximate probabilistic inference to within an absolute error ''ε'' < 1/2 with confidence probability greater than 1/2.

At about the same time, [[Dan Roth|Roth]] proved that exact inference in Bayesian networks is in fact [[Sharp-P-complete|#P-complete]] (and thus as hard as counting the number of satisfying assignments of a [[conjunctive normal form]] formula (CNF)) and that approximate inference within a factor 2<sup>''n''<sup>1−''ε''</sup></sup> for every ''ε'' > 0, even for Bayesian networks with restricted architecture, is NP-hard.<ref>D. Roth, [http://cogcomp.cs.illinois.edu/page/publication_view/5 On the hardness of approximate reasoning], IJCAI (1993)</ref><ref>D. Roth, [http://cogcomp.cs.illinois.edu/papers/hardJ.pdf On the hardness of approximate reasoning], Artificial Intelligence (1996)</ref>

In practical terms, these complexity results suggested that while Bayesian networks were rich representations for AI and machine learning applications, their use in large real-world applications would need to be tempered by either topological structural constraints, such as naïve Bayes networks, or by restrictions on the conditional probabilities. The bounded variance algorithm<ref>{{cite journal | vauthors = Dagum P, Luby M | author-link1 = Paul Dagum | author-link2 = Michael Luby | title = An optimal approximation algorithm for Bayesian inference | url = http://icsi.berkeley.edu/~luby/PAPERS/bayesian.ps | journal = Artificial Intelligence | volume = 93 | issue = 1–2 | date = 1997 | pages = 1–27 | doi = 10.1016/s0004-3702(97)00013-1 | citeseerx = 10.1.1.36.7946 | access-date = 2015-12-19 | archive-url = https://web.archive.org/web/20170706064354/http://www1.icsi.berkeley.edu/~luby/PAPERS/bayesian.ps | archive-date = 2017-07-06 }}</ref> developed by Dagum and Luby was the first provably fast approximation algorithm for probabilistic inference in Bayesian networks with guarantees on the approximation error.
This powerful algorithm required only the minor restriction that the conditional probabilities of the Bayesian network be bounded away from zero and one by <math>1/p(n)</math>, where <math>p(n)</math> is any polynomial in the number of nodes in the network, <math>n</math>.
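The general flavor of sampling-based approximate inference can be illustrated with a minimal sketch of likelihood weighting (a standard Monte Carlo scheme, not the bounded variance algorithm itself); the network structure and all probability values below are assumed purely for illustration:

<syntaxhighlight lang="python">
import random

# Illustrative "sprinkler" network:
#   Cloudy -> Sprinkler, Cloudy -> Rain, (Sprinkler, Rain) -> WetGrass.
# All conditional probability tables below are assumed example values.
P_CLOUDY = 0.5
P_SPRINKLER = {True: 0.1, False: 0.5}            # P(Sprinkler=T | Cloudy)
P_RAIN = {True: 0.8, False: 0.2}                 # P(Rain=T | Cloudy)
P_WET = {(True, True): 0.99, (True, False): 0.90,
         (False, True): 0.90, (False, False): 0.01}  # P(WetGrass=T | Sprinkler, Rain)

def estimate_rain_given_wet(num_samples=100_000):
    """Estimate P(Rain=T | WetGrass=T) by likelihood weighting."""
    weighted_rain = 0.0
    total_weight = 0.0
    for _ in range(num_samples):
        # Sample the non-evidence variables in topological order.
        cloudy = random.random() < P_CLOUDY
        sprinkler = random.random() < P_SPRINKLER[cloudy]
        rain = random.random() < P_RAIN[cloudy]
        # The evidence WetGrass=T is not sampled; it contributes a weight.
        weight = P_WET[(sprinkler, rain)]
        total_weight += weight
        if rain:
            weighted_rain += weight
    return weighted_rain / total_weight

print(estimate_rain_given_wet())  # close to 0.70 for these assumed tables
</syntaxhighlight>

Schemes of this kind converge on the true posterior as the number of samples grows, but without further assumptions they carry no polynomial-time error guarantee; the bounded variance algorithm obtains its guarantees under the restriction on conditional probabilities described above.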