=== Other extensions ===

Yet another variant is the ''factorial hidden Markov model'', which allows a single observation to be conditioned on the corresponding hidden variables of a set of <math>K</math> independent Markov chains, rather than on a single Markov chain. It is equivalent to a single HMM with <math>N^K</math> states (assuming there are <math>N</math> states for each chain), and therefore learning in such a model is difficult: for a sequence of length <math>T</math>, a straightforward Viterbi algorithm has complexity <math>O(N^{2K} \, T)</math>. An exact solution can be found with a junction tree algorithm, but it results in <math>O(N^{K+1} \, K \, T)</math> complexity. In practice, approximate techniques, such as variational approaches, are used.<ref>{{cite journal |last1=Ghahramani |first1=Zoubin |author-link1=Zoubin Ghahramani |last2=Jordan |first2=Michael I. |author-link2=Michael I. Jordan |title=Factorial Hidden Markov Models |journal=[[Machine Learning (journal)|Machine Learning]] |year=1997 |volume=29 |issue=2/3 |pages=245–273 |doi=10.1023/A:1007425814087 |doi-access=free}}</ref>
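The source of the difficulty is visible in the size of the equivalent single-chain representation: for <math>K</math> independent chains, the joint transition matrix is the [[Kronecker product]] of the individual transition matrices, an <math>N^K \times N^K</math> object. The following schematic illustration (arbitrary toy matrices and dimensions, not code from the cited reference) constructs this matrix explicitly:

<syntaxhighlight lang="python">
import numpy as np

# Illustrative sketch: K independent Markov chains, each with N states and its
# own transition matrix, are jointly equivalent to one chain over the N**K
# product states, whose transition matrix is the Kronecker product of the
# individual matrices.  Dimensions and matrices here are arbitrary toy values.

N, K = 3, 4                                    # 3 states per chain, 4 chains
rng = np.random.default_rng(0)

def random_stochastic(n):
    """Random row-stochastic n-by-n transition matrix."""
    A = rng.random((n, n))
    return A / A.sum(axis=1, keepdims=True)

chains = [random_stochastic(N) for _ in range(K)]

joint = chains[0]
for A in chains[1:]:
    joint = np.kron(joint, A)                  # transition matrix over product states

print(joint.shape)                             # (81, 81), i.e. (N**K, N**K)
print(np.allclose(joint.sum(axis=1), 1.0))     # rows still sum to 1
</syntaxhighlight>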
All of the above models can be extended to allow for more distant dependencies among hidden states, e.g. allowing a given state to depend on the previous two or three states rather than on a single previous state; i.e., the transition probabilities are extended to encompass sets of three or four adjacent states (or in general <math>K</math> adjacent states). The disadvantage of such models is that dynamic-programming algorithms for training them have an <math>O(N^K \, T)</math> running time, for <math>K</math> adjacent states and <math>T</math> total observations (i.e. a length-<math>T</math> Markov chain). This extension has been widely used in [[bioinformatics]], in the modeling of [[Nucleic acid sequence|DNA sequences]].
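The <math>O(N^K \, T)</math> cost can be understood by rewriting such a model as an ordinary first-order HMM over an enlarged state space: the new state is the tuple of the most recent original states, and each tuple has only <math>N</math> possible successors. The following schematic illustration (arbitrary toy dimensions, not code from any of the cited works) performs this reduction for a model whose transitions depend on the previous two states:

<syntaxhighlight lang="python">
import itertools
import numpy as np

# Illustrative reduction of a second-order HMM to a first-order one by state
# augmentation: the augmented "state" is the pair (previous state, current state).

N = 3                                             # states of the original model
rng = np.random.default_rng(1)

# Second-order transitions: A2[i, j, k] = P(x_t = k | x_{t-2} = i, x_{t-1} = j)
A2 = rng.random((N, N, N))
A2 /= A2.sum(axis=2, keepdims=True)

pairs = list(itertools.product(range(N), repeat=2))   # N**2 augmented states
index = {p: s for s, p in enumerate(pairs)}

# First-order transition matrix over pairs: (i, j) -> (j, k) with probability
# A2[i, j, k]; any move that does not shift the pair consistently gets zero.
A1 = np.zeros((N * N, N * N))
for (i, j) in pairs:
    for k in range(N):
        A1[index[(i, j)], index[(j, k)]] = A2[i, j, k]

print(A1.shape)                         # (9, 9)
print(np.allclose(A1.sum(axis=1), 1))   # still row-stochastic
</syntaxhighlight>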
Another recent extension is the ''triplet Markov model'',<ref name="TMM">{{cite journal |doi=10.1016/S1631-073X(02)02462-7 |volume=335 |issue=3 |title=Chaînes de Markov Triplet |year=2002 |journal=Comptes Rendus Mathématique |pages=275–278 |last1=Pieczynski |first1=Wojciech |url=http://www.numdam.org/item/10.1016/S1631-073X(02)02462-7.pdf}}</ref> in which an auxiliary underlying process is added to model some data specificities. Many variants of this model have been proposed. One should also mention the link that has been established between the ''theory of evidence'' and triplet Markov models,<ref name="TMMEV">{{cite journal |doi=10.1016/j.ijar.2006.05.001 |volume=45 |title=Multisensor triplet Markov chains and theory of evidence |year=2007 |journal=International Journal of Approximate Reasoning |pages=1–16 |last1=Pieczynski |first1=Wojciech |doi-access=free}}</ref> which allows data to be fused in a Markovian context<ref name="JASP">[http://asp.eurasipjournals.com/content/pdf/1687-6180-2012-134.pdf Boudaren et al.] {{Webarchive|url=https://web.archive.org/web/20140311164443/http://asp.eurasipjournals.com/content/pdf/1687-6180-2012-134.pdf |date=2014-03-11}}, M. Y. Boudaren, E. Monfrini, W. Pieczynski, and A. Aissani, Dempster-Shafer fusion of multisensor signals in nonstationary Markovian context, EURASIP Journal on Advances in Signal Processing, No. 134, 2012.</ref> and nonstationary data to be modeled.<ref name="TSP">[https://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=1468502&contentType=Journals+%26+Magazines&searchField%3DSearch_All%26queryText%3Dlanchantin+pieczynski Lanchantin et al.], P. Lanchantin and W. Pieczynski, Unsupervised restoration of hidden nonstationary Markov chain using evidential priors, IEEE Transactions on Signal Processing, Vol. 53, No. 8, pp. 3091–3098, 2005.</ref><ref name="SPL">[https://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=6244854&contentType=Journals+%26+Magazines&searchField%3DSearch_All%26queryText%3Dboudaren Boudaren et al.], M. Y. Boudaren, E. Monfrini, and W. Pieczynski, Unsupervised segmentation of random discrete data hidden with switching noise distributions, IEEE Signal Processing Letters, Vol. 19, No. 10, pp. 619–622, October 2012.</ref> Alternative multi-stream data fusion strategies have also been proposed in the recent literature.<ref>Sotirios P. Chatzis, Dimitrios Kosmopoulos, [https://ieeexplore.ieee.org/document/6164251/ "Visual Workflow Recognition Using a Variational Bayesian Treatment of Multistream Fused Hidden Markov Models,"] IEEE Transactions on Circuits and Systems for Video Technology, vol. 22, no. 7, pp. 1076–1086, July 2012.</ref>

Finally, a different rationale for addressing the problem of modeling nonstationary data by means of hidden Markov models was suggested in 2012.<ref name="Reservoir-HMM">{{cite journal |last1=Chatzis |first1=Sotirios P. |last2=Demiris |first2=Yiannis |year=2012 |title=A Reservoir-Driven Non-Stationary Hidden Markov Model |journal=Pattern Recognition |volume=45 |issue=11 |pages=3985–3996 |doi=10.1016/j.patcog.2012.04.018 |bibcode=2012PatRe..45.3985C |hdl=10044/1/12611 |hdl-access=free}}</ref> It consists of employing a small recurrent neural network (RNN), specifically a reservoir network,<ref>M. Lukosevicius, H. Jaeger (2009) Reservoir computing approaches to recurrent neural network training, Computer Science Review '''3''': 127–149.</ref> to capture the evolution of the temporal dynamics in the observed data. This information, encoded in the form of a high-dimensional vector, is used as a conditioning variable of the HMM state transition probabilities. Under such a setup, one eventually obtains a nonstationary HMM whose transition probabilities evolve over time in a manner that is inferred from the data, rather than from an unrealistic ad-hoc model of temporal evolution.

In 2023, two algorithms were introduced that compute the posterior distribution of an HMM without explicitly modeling the joint distribution, using only the conditional distributions.<ref>Azeraf, E., Monfrini, E., & Pieczynski, W. (2023). Equivalence between LC-CRF and HMM, and Discriminative Computing of HMM-Based MPM and MAP. Algorithms, 16(3), 173.</ref><ref>Azeraf, E., Monfrini, E., Vignon, E., & Pieczynski, W. (2020). Hidden Markov chains, entropic forward-backward, and part-of-speech tagging. arXiv preprint arXiv:2005.10629.</ref> Unlike the traditional Forward-Backward and Viterbi algorithms, which require knowledge of the joint law of the HMM and can be computationally intensive to learn, the Discriminative Forward-Backward and Discriminative Viterbi algorithms circumvent the need for the law of the observations.<ref>Azeraf, E., Monfrini, E., & Pieczynski, W. (2022). Deriving discriminative classifiers from generative models. arXiv preprint arXiv:2201.00844.</ref><ref>Ng, A., & Jordan, M. (2001). On discriminative vs. generative classifiers: A comparison of logistic regression and naive Bayes. Advances in Neural Information Processing Systems, 14.</ref> This allows the HMM to be used as a discriminative model, which can be more efficient and versatile in various applications (a schematic sketch of the idea is given at the end of this section).

In the context of longitudinal data, the corresponding model is called the ''latent Markov model''.<ref>{{Cite book |title=Panel Analysis: Latent Probability Models for Attitude and Behaviour Processes |last=Wiggins |first=L. M. |publisher=Elsevier |year=1973 |location=Amsterdam}}</ref> The basic version of this model has been extended to include individual covariates and random effects, and to model more complex data structures such as multilevel data. A complete overview of latent Markov models, with special attention to the model assumptions and to their practical use, is provided in Bartolucci, Farcomeni and Pennoni (2013).<ref>{{Cite book |url=https://sites.google.com/site/latentmarkovbook/home |title=Latent Markov models for longitudinal data |last1=Bartolucci |first1=F. |last2=Farcomeni |first2=A. |last3=Pennoni |first3=F. |publisher=Chapman and Hall/CRC |year=2013 |isbn=978-14-3981-708-7 |location=Boca Raton}}</ref>
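As a schematic illustration of the discriminative computation mentioned above (a simplified sketch with toy numbers, not the published Discriminative Forward-Backward algorithm itself): since <math>p(y_t \mid x_t) = p(x_t \mid y_t) \, p(y_t) / p(x_t)</math> and the factor <math>p(y_t)</math> does not depend on <math>x_t</math>, the forward recursion can be run with the ratio <math>p(x_t \mid y_t) / p(x_t)</math> in place of the emission density, and the unknown factor cancels when each step is normalized:

<syntaxhighlight lang="python">
import numpy as np

# Illustrative sketch: the filtering distribution p(x_t | y_{1:t}) is computed
# from the transition matrix, the state marginals and the conditionals
# p(x_t | y_t) alone, using p(y_t | x_t) ∝ p(x_t | y_t) / p(x_t).
# Assumption for this toy example: the chain is stationary, so p(x_t) = pi
# for every t, and the conditionals p(x_t | y_t) are based on that marginal.

A = np.array([[0.9, 0.1],            # A[i, j] = P(x_t = j | x_{t-1} = i)
              [0.2, 0.8]])
pi = np.array([2/3, 1/3])            # stationary marginal P(x_t) of this chain

# p(x_t | y_t) for three observations, e.g. the output of any probabilistic
# classifier: one row per time step, one column per hidden state (toy values).
p_x_given_y = np.array([[0.8, 0.2],
                        [0.6, 0.4],
                        [0.1, 0.9]])

alpha = pi * p_x_given_y[0] / pi     # ∝ P(x_1) P(y_1 | x_1); pi cancels at t = 1
alpha /= alpha.sum()                 # = p(x_1 | y_1)
for t in range(1, len(p_x_given_y)):
    alpha = (alpha @ A) * p_x_given_y[t] / pi
    alpha /= alpha.sum()             # = p(x_t | y_{1:t})

print(alpha)
</syntaxhighlight>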