== Inference ==

As in a [[Bayesian network]], one may calculate the [[conditional distribution]] of a set of nodes <math> V' = \{ v_1 ,\ldots, v_i \} </math> given values for another set of nodes <math> W' = \{ w_1 ,\ldots, w_j \} </math> in the Markov random field by summing over all possible assignments to the remaining nodes <math>u \notin V',W'</math>; this is called [[exact inference]]. However, exact inference is a [[Sharp-P-complete|#P-complete]] problem, and thus computationally intractable in the general case. Approximation techniques such as [[Markov chain Monte Carlo]] and loopy [[belief propagation]] are often more feasible in practice.

Some particular subclasses of MRFs, such as trees (see [[Chow–Liu tree]]), have polynomial-time inference algorithms; discovering such subclasses is an active research topic. There are also subclasses of MRFs that permit efficient [[Maximum a posteriori|MAP]] (most likely assignment) inference; examples of these include associative networks.<ref>{{citation | last1 = Taskar | first1 = Benjamin | last2 = Chatalbashev | first2 = Vassil | last3 = Koller | first3 = Daphne | author3-link = Daphne Koller | editor-last = Brodley | editor-first = Carla E. | editor-link = Carla Brodley | contribution = Learning associative Markov networks | doi = 10.1145/1015330.1015444 | publisher = [[Association for Computing Machinery]] | series = ACM International Conference Proceeding Series | title = Proceedings of the Twenty-First International Conference on Machine Learning (ICML 2004), Banff, Alberta, Canada, July 4-8, 2004 | volume = 69 | pages = 102 | year = 2004| title-link = International Conference on Machine Learning | isbn = 978-1581138283 | citeseerx = 10.1.1.157.329 | s2cid = 11312524 }}.</ref><ref>{{citation | last1 = Duchi | first1 = John C. | last2 = Tarlow | first2 = Daniel | last3 = Elidan | first3 = Gal | last4 = Koller | first4 = Daphne | author4-link = Daphne Koller | editor1-last = Schölkopf | editor1-first = Bernhard | editor2-last = Platt | editor2-first = John C. | editor3-last = Hoffman | editor3-first = Thomas | contribution = Using Combinatorial Optimization within Max-Product Belief Propagation | contribution-url = http://papers.nips.cc/paper/3117-using-combinatorial-optimization-within-max-product-belief-propagation | pages = 369–376 | publisher = [[MIT Press]] | series = [[Conference on Neural Information Processing Systems|Advances in Neural Information Processing Systems]] | volume = 19 | title = Proceedings of the Twentieth Annual Conference on Neural Information Processing Systems, Vancouver, British Columbia, Canada, December 4-7, 2006 | year = 2006}}.</ref> Another notable subclass is that of decomposable models (models whose graph is [[Chordal graph|chordal]]): because the [[Maximum likelihood estimate|MLE]] has a closed form, it is possible to discover a consistent structure for hundreds of variables.<ref name="Petitjean">{{cite conference|last1=Petitjean|first1=F.|last2=Webb|first2=G.I.|last3=Nicholson|first3=A.E.|author-link3=Ann Nicholson|year=2013|title=Scaling log-linear analysis to high-dimensional data|url=http://www.tiny-clues.eu/Research/Petitjean2013-ICDM.pdf|conference=International Conference on Data Mining|location=Dallas, TX, USA|publisher=IEEE}}</ref>
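The brute-force summation behind exact inference can be sketched as follows (a minimal illustration, not from the article; the three-node chain, its pairwise potentials, and all function names are hypothetical). It computes P(query | evidence) as a ratio of two sums over all assignments to the free nodes, which is why the cost grows exponentially with the number of unobserved variables:

```python
import itertools

# Hypothetical three-node chain MRF: v1 - u - w1, binary variables.
nodes = ["v1", "u", "w1"]
edges = [("v1", "u"), ("u", "w1")]

def potential(x, y):
    # Simple attractive pairwise potential: agreeing neighbours score higher.
    return 2.0 if x == y else 1.0

def joint_unnormalized(assignment):
    # Product of pairwise potentials over all edges (unnormalized joint).
    p = 1.0
    for a, b in edges:
        p *= potential(assignment[a], assignment[b])
    return p

def marginal(fixed):
    # Sum the unnormalized joint over every assignment to the free nodes --
    # the exponential-cost step that makes exact inference #P-complete.
    free = [n for n in nodes if n not in fixed]
    total = 0.0
    for values in itertools.product([0, 1], repeat=len(free)):
        assignment = {**fixed, **dict(zip(free, values))}
        total += joint_unnormalized(assignment)
    return total

def conditional(query, evidence):
    # P(query | evidence) as a ratio of two brute-force sums.
    return marginal({**query, **evidence}) / marginal(evidence)

p = conditional({"v1": 1}, {"w1": 1})  # P(v1 = 1 | w1 = 1)
```

Here the sums range over at most 2^2 = 4 assignments, but for an MRF with n unobserved binary nodes the same loop would visit 2^n assignments, which is why approximations such as MCMC or loopy belief propagation are preferred in the general case.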