==Models, methods, and algorithms==
Both [[acoustic model]]ing and [[language model]]ing are important parts of modern statistically based speech recognition algorithms. Hidden Markov models (HMMs) are widely used in many systems. Language modeling is also used in many other natural language processing applications such as [[document classification]] or [[statistical machine translation]].

===Hidden Markov models===
{{Main|Hidden Markov model}}
Modern general-purpose speech recognition systems are based on hidden Markov models. These are statistical models that output a sequence of symbols or quantities. HMMs are used in speech recognition because a speech signal can be viewed as a piecewise stationary signal or a short-time stationary signal. In a short time scale (e.g., 10 milliseconds), speech can be approximated as a [[stationary process]]. Speech can be thought of as a [[Markov model]] for many stochastic purposes. Another reason why HMMs are popular is that they can be trained automatically and are simple and computationally feasible to use. In speech recognition, the hidden Markov model would output a sequence of ''n''-dimensional real-valued vectors (with ''n'' being a small integer, such as 10), outputting one of these every 10 milliseconds. The vectors would consist of [[cepstrum|cepstral]] coefficients, which are obtained by taking a [[Fourier transform]] of a short time window of speech and decorrelating the spectrum using a [[cosine transform]], then taking the first (most significant) coefficients. The hidden Markov model will tend to have in each state a statistical distribution that is a mixture of diagonal covariance Gaussians, which will give a likelihood for each observed vector.
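The front-end computation described above (a Fourier transform of a short window, then a cosine transform of the log-magnitude spectrum, keeping only the leading coefficients) can be sketched as follows. This is a simplified illustration with a naive DFT and no windowing function or mel filterbank, not a production feature extractor:

```python
import cmath
import math

def cepstral_coefficients(frame, n_keep=13):
    """Toy cepstral analysis of one short-time frame of samples:
    DFT -> log-magnitude spectrum -> DCT-II -> keep leading coefficients."""
    n = len(frame)
    # Discrete Fourier transform (naive O(n^2) version, for clarity).
    spectrum = [sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)) for k in range(n)]
    # Log-magnitude spectrum (small floor avoids log(0)).
    log_mag = [math.log(abs(x) + 1e-10) for x in spectrum]
    # DCT-II decorrelates the log spectrum; keep the first coefficients.
    cepstrum = [sum(log_mag[i] * math.cos(math.pi * k * (i + 0.5) / n)
                    for i in range(n)) for k in range(n_keep)]
    return cepstrum

# One frame of a toy sinusoid: 80 samples (~10 ms at 8 kHz).
frame = [math.sin(2 * math.pi * 5 * t / 80) for t in range(80)]
coeffs = cepstral_coefficients(frame)
```

One such vector of coefficients would be produced every 10 milliseconds and scored against each state's Gaussian mixture.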
Each word, or (for more general speech recognition systems) each [[phoneme]], will have a different output distribution; a hidden Markov model for a sequence of words or phonemes is made by concatenating the individual trained hidden Markov models for the separate words and phonemes. Described above are the core elements of the most common, HMM-based approach to speech recognition. Modern speech recognition systems combine a number of standard techniques to improve results over the basic approach described above. A typical large-vocabulary system would need [[context dependency]] for the [[phoneme]]s (so that phonemes with different left and right context have different realizations as HMM states); it would use [[cepstral normalization]] to normalize for different speakers and recording conditions; for further speaker normalization, it might use vocal tract length normalization (VTLN) for male-female normalization and [[maximum likelihood linear regression]] (MLLR) for more general speaker adaptation. The features would have so-called [[delta coefficient|delta]] and [[delta-delta coefficient]]s to capture speech dynamics and, in addition, might use [[heteroscedastic linear discriminant analysis]] (HLDA); or might skip the delta and delta-delta coefficients and use [[splicing (speech recognition)|splicing]] and an [[Linear Discriminant Analysis|LDA]]-based projection followed perhaps by [[heteroscedastic]] linear discriminant analysis or a [[global semi-tied covariance]] transform (also known as [[maximum likelihood linear transform]], or MLLT). Many systems use so-called discriminative training techniques that dispense with a purely statistical approach to HMM parameter estimation and instead optimize some classification-related measure of the training data. Examples are maximum [[mutual information]] (MMI), minimum classification error (MCE), and minimum phone error (MPE).
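The delta and delta-delta coefficients mentioned above approximate the first and second time derivatives of the cepstral features. A minimal sketch using the standard regression formula (a window of two frames on each side, with edge frames padded by repetition):

```python
def delta(features, window=2):
    """Delta (dynamic) coefficients via the standard regression formula:
    d_t = sum_n n*(c_{t+n} - c_{t-n}) / (2 * sum_n n^2).
    `features` is a list of frames, each a list of cepstral coefficients."""
    num = len(features)
    denom = 2 * sum(n * n for n in range(1, window + 1))
    deltas = []
    for t in range(num):
        d = [0.0] * len(features[t])
        for n in range(1, window + 1):
            prev = features[max(t - n, 0)]       # pad at the edges
            nxt = features[min(t + n, num - 1)]
            for i in range(len(d)):
                d[i] += n * (nxt[i] - prev[i]) / denom
        deltas.append(d)
    return deltas

feats = [[float(t)] for t in range(10)]  # one coefficient rising linearly
d1 = delta(feats)    # delta coefficients
d2 = delta(d1)       # delta-delta: apply the same operator again
```

For this linearly rising toy feature, the interior delta values come out to the slope (1.0 per frame) and the delta-deltas to zero, as expected for a derivative estimate.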
Decoding of the speech (the term for what happens when the system is presented with a new utterance and must compute the most likely source sentence) would probably use the [[Viterbi algorithm]] to find the best path, and here there is a choice between dynamically creating a combination hidden Markov model, which includes both the acoustic and language model information, and combining it statically beforehand (the [[finite state transducer]], or FST, approach). A possible improvement to decoding is to keep a set of good candidates instead of just the single best candidate, and to use a better scoring function ([[re scoring (ASR)|rescoring]]) to rate these candidates so that the best one can be picked according to this refined score. The set of candidates can be kept either as a list (the [[N-best list]] approach) or as a subset of the models (a [[lattice (order)|lattice]]). Rescoring is usually done by trying to minimize the [[Bayes risk]]<ref>{{Cite journal |last1=Goel |first1=Vaibhava |last2=Byrne |first2=William J. |year=2000 |title=Minimum Bayes-risk automatic speech recognition |url=http://www.clsp.jhu.edu/people/vgoel/publications/CSAL.ps |url-status=live |journal=Computer Speech & Language |volume=14 |issue=2 |pages=115–135 |doi=10.1006/csla.2000.0138 |s2cid=206561058 |archive-url=https://web.archive.org/web/20110725225846/http://www.clsp.jhu.edu/people/vgoel/publications/CSAL.ps |archive-date=25 July 2011 |access-date=28 March 2011 |doi-access=free |df=dmy-all}}</ref> (or an approximation thereof): instead of taking the source sentence with maximal probability, we take the sentence that minimizes the expectation of a given loss function with regard to all possible transcriptions (i.e., the sentence that minimizes the average distance to other possible sentences weighted by their estimated probability).
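Viterbi decoding as used above can be sketched for a toy HMM in the log domain. The two-state model and its tables here are hypothetical; a real decoder would search the composed acoustic and language models over many thousands of states:

```python
import math

def viterbi(obs, states, log_start, log_trans, log_emit):
    """Most likely state sequence for `obs` under an HMM, by dynamic
    programming. log_start[s], log_trans[s][s'], and log_emit[s][o]
    are log probabilities."""
    # best[s] = log score of the best path ending in state s.
    best = {s: log_start[s] + log_emit[s][obs[0]] for s in states}
    back = []
    for o in obs[1:]:
        prev_best = best
        best, choices = {}, {}
        for s in states:
            p, arg = max((prev_best[r] + log_trans[r][s], r) for r in states)
            best[s] = p + log_emit[s][o]
            choices[s] = arg
        back.append(choices)
    # Trace back from the best final state.
    path = [max(states, key=lambda s: best[s])]
    for choices in reversed(back):
        path.append(choices[path[-1]])
    return list(reversed(path))

states = ["A", "B"]
log_start = {"A": math.log(0.6), "B": math.log(0.4)}
log_trans = {"A": {"A": math.log(0.7), "B": math.log(0.3)},
             "B": {"A": math.log(0.4), "B": math.log(0.6)}}
log_emit = {"A": {"x": math.log(0.9), "y": math.log(0.1)},
            "B": {"x": math.log(0.2), "y": math.log(0.8)}}
path = viterbi(["x", "x", "y"], states, log_start, log_trans, log_emit)
```

Working in log probabilities avoids the numerical underflow that multiplying many small probabilities would cause on long utterances.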
The loss function is usually the [[Levenshtein distance]], though it can be a different distance for specific tasks; the set of possible transcriptions is, of course, pruned to maintain tractability. Efficient algorithms have been devised to rescore [[lattice (order)|lattices]] represented as weighted [[finite state transducers]] with [[edit distance]]s themselves represented as a [[finite state transducer]] verifying certain assumptions.<ref>{{Cite journal |last=Mohri |first=M. |year=2002 |title=Edit-Distance of Weighted Automata: General Definitions and Algorithms |url=http://www.cs.nyu.edu/~mohri/pub/edit.pdf |url-status=live |journal=International Journal of Foundations of Computer Science |volume=14 |issue=6 |pages=957–982 |doi=10.1142/S0129054103002114 |archive-url=https://web.archive.org/web/20120318032640/http://www.cs.nyu.edu/~mohri/pub/edit.pdf |archive-date=18 March 2012 |access-date=28 March 2011 |df=dmy-all}}</ref>

===Dynamic time warping (DTW)-based speech recognition===
{{Main|Dynamic time warping}}
Dynamic time warping is an approach that was historically used for speech recognition but has now largely been displaced by the more successful HMM-based approach. Dynamic time warping is an algorithm for measuring similarity between two sequences that may vary in time or speed. For instance, similarities in walking patterns would be detected, even if in one video the person was walking slowly and in another more quickly, or even if there were accelerations and decelerations during the course of one observation. DTW has been applied to video, audio, and graphics – indeed, any data that can be turned into a linear representation can be analyzed with DTW. A well-known application has been automatic speech recognition, to cope with different speaking speeds. In general, it is a method that allows a computer to find an optimal match between two given sequences (e.g., time series) with certain restrictions.
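The optimal match is found by dynamic programming. A minimal sketch of the DTW recurrence, using absolute difference as the local distance and no warping-window restriction:

```python
def dtw_distance(a, b, dist=lambda x, y: abs(x - y)):
    """Dynamic time warping distance between sequences a and b.
    cost[i][j] = cost of the best alignment of a[:i] with b[:j]."""
    inf = float("inf")
    cost = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
    cost[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            # Each step may advance in a, in b, or in both (the "warp").
            cost[i][j] = dist(a[i - 1], b[j - 1]) + min(
                cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
    return cost[len(a)][len(b)]

# The same shape spoken at two speeds aligns with zero cost.
slow = [0, 0, 1, 1, 2, 2, 3, 3]
fast = [0, 1, 2, 3]
```

Here `dtw_distance(slow, fast)` is zero even though the sequences have different lengths, which is exactly the robustness to speaking rate that made DTW attractive for early speech recognizers.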
That is, the sequences are "warped" non-linearly to match each other. This sequence alignment method is often used in the context of hidden Markov models.

===Neural networks===
{{Main|Artificial neural network}}
Neural networks emerged as an attractive acoustic modeling approach in ASR in the late 1980s. Since then, neural networks have been used in many aspects of speech recognition such as phoneme classification,<ref>{{Cite journal |last1=Waibel |first1=A. |last2=Hanazawa |first2=T. |last3=Hinton |first3=G. |last4=Shikano |first4=K. |last5=Lang |first5=K. J. |year=1989 |title=Phoneme recognition using time-delay neural networks |journal=IEEE Transactions on Acoustics, Speech, and Signal Processing |volume=37 |issue=3 |pages=328–339 |doi=10.1109/29.21701 |s2cid=9563026 |hdl-access=free |hdl=10338.dmlcz/135496}}</ref> phoneme classification through multi-objective evolutionary algorithms,<ref name="Bird Wanner Ekárt Faria 2020 p=113402">{{Cite journal |last1=Bird |first1=Jordan J. |last2=Wanner |first2=Elizabeth |last3=Ekárt |first3=Anikó |last4=Faria |first4=Diego R. |year=2020 |title=Optimisation of phonetic aware speech recognition through multi-objective evolutionary algorithms |url=https://publications.aston.ac.uk/id/eprint/41416/1/Speech_Recog_ESWA_2_.pdf |journal=Expert Systems with Applications |publisher=Elsevier BV |volume=153 |page=113402 |doi=10.1016/j.eswa.2020.113402 |issn=0957-4174 |s2cid=216472225 |access-date=9 September 2024 |archive-date=9 September 2024 |archive-url=https://web.archive.org/web/20240909053419/https://publications.aston.ac.uk/id/eprint/41416/1/Speech_Recog_ESWA_2_.pdf |url-status=live }}</ref> isolated word recognition,<ref>{{Cite journal |last1=Wu |first1=J. |last2=Chan |first2=C.
|year=1993 |title=Isolated Word Recognition by Neural Network Models with Cross-Correlation Coefficients for Speech Dynamics |journal=IEEE Transactions on Pattern Analysis and Machine Intelligence |volume=15 |issue=11 |pages=1174–1185 |doi=10.1109/34.244678}}</ref> [[audiovisual speech recognition]], audiovisual speaker recognition and speaker adaptation. [[Artificial neural network|Neural networks]] make fewer explicit assumptions about feature statistical properties than HMMs and have several qualities that make them attractive recognition models for speech recognition. When used to estimate the probabilities of a speech feature segment, neural networks allow discriminative training in a natural and efficient manner. However, in spite of their effectiveness in classifying short-time units such as individual phonemes and isolated words,<ref>S. A. Zahorian, A. M. Zimmer, and F. Meng, (2002) "[https://www.researchgate.net/profile/Stephen_Zahorian/publication/221480228_Vowel_classification_for_computer-based_visual_feedback_for_speech_training_for_the_hearing_impaired/links/00b7d525d25f51c585000000.pdf Vowel Classification for Computer based Visual Feedback for Speech Training for the Hearing Impaired]," in ICSLP 2002</ref> early neural networks were rarely successful for continuous recognition tasks because of their limited ability to model temporal dependencies. One approach to this limitation was to use neural networks for a pre-processing step such as feature transformation or dimensionality reduction<ref>{{Cite book |last1=Hu |first1=Hongbing |title=ICASSP 2010 |last2=Zahorian |first2=Stephen A. |year=2010 |chapter=Dimensionality Reduction Methods for HMM Phonetic Recognition |chapter-url=http://bingweb.binghamton.edu/~hhu1/paper/Hu2010Dimensionality.pdf |archive-url=http://archive.wikiwix.com/cache/20120706063756/http://bingweb.binghamton.edu/~hhu1/paper/Hu2010Dimensionality.pdf |archive-date=6 July 2012 |url-status=live |df=dmy-all}}</ref> prior to HMM-based recognition. However, more recently, LSTM and related recurrent neural networks (RNNs),<ref name="lstm" /><ref name="sak2015" /><ref name="fernandez2007">{{Cite book |last1=Fernandez |first1=Santiago |title=Proceedings of IJCAI |last2=Graves |first2=Alex |last3=Schmidhuber |first3=Jürgen |author-link3=Jürgen Schmidhuber |year=2007 |chapter=Sequence labelling in structured domains with hierarchical recurrent neural networks |chapter-url=http://www.aaai.org/Papers/IJCAI/2007/IJCAI07-124.pdf |archive-url=https://web.archive.org/web/20170815003130/http://www.aaai.org/Papers/IJCAI/2007/IJCAI07-124.pdf |archive-date=15 August 2017 |url-status=live |df=dmy-all}}</ref><ref>{{Cite arXiv |eprint=1303.5778 |class=cs.NE |first1=Alex |last1=Graves |first2=Abdel-rahman |last2=Mohamed |title=Speech recognition with deep recurrent neural networks |first3=Geoffrey |last3=Hinton |year=2013}} ICASSP 2013.</ref> time delay neural networks (TDNNs),<ref>{{Cite journal |last=Waibel |first=Alex |year=1989 |title=Modular Construction of Time-Delay Neural Networks for Speech Recognition |url=http://isl.anthropomatik.kit.edu/cmu-kit/Modular_Construction_of_Time-Delay_Neural_Networks_for_Speech_Recognition.pdf |url-status=live |journal=Neural Computation |volume=1 |issue=1 |pages=39–46 |doi=10.1162/neco.1989.1.1.39 |s2cid=236321 |archive-url=https://web.archive.org/web/20160629180846/http://isl.anthropomatik.kit.edu/cmu-kit/Modular_Construction_of_Time-Delay_Neural_Networks_for_Speech_Recognition.pdf |archive-date=29 June 2016 |df=dmy-all}}</ref> and transformers<ref name=":1" /><ref name=":3" /><ref name=":4" /> have
demonstrated improved performance in this area.

====Deep feedforward and recurrent neural networks====
{{Main|Deep learning}}
Deep neural networks and denoising [[autoencoder]]s<ref>{{Cite book |last1=Maas |first1=Andrew L. |title=Proceedings of Interspeech 2012 |last2=Le |first2=Quoc V. |last3=O'Neil |first3=Tyler M. |last4=Vinyals |first4=Oriol |last5=Nguyen |first5=Patrick |last6=Ng |first6=Andrew Y. |author-link6=Andrew Ng |year=2012 |chapter=Recurrent Neural Networks for Noise Reduction in Robust ASR}}</ref> are also under investigation. A deep feedforward neural network (DNN) is an [[artificial neural network]] with multiple hidden layers of units between the input and output layers.<ref name=HintonDengYu2012/> Similar to shallow neural networks, DNNs can model complex non-linear relationships. DNN architectures generate compositional models, where extra layers enable composition of features from lower layers, giving a huge learning capacity and thus the potential of modeling complex patterns of speech data.<ref name=BOOK2014/> A success of DNNs in large vocabulary speech recognition occurred in 2010 by industrial researchers, in collaboration with academic researchers, where large output layers of the DNN based on context-dependent HMM states constructed by decision trees were adopted.<ref name="Roles2010">{{Cite journal |last1=Yu |first1=D. |last2=Deng |first2=L. |last3=Dahl |first3=G. |date=2010 |title=Roles of Pre-Training and Fine-Tuning in Context-Dependent DBN-HMMs for Real-World Speech Recognition |url=https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/dbn4asr-nips2010.pdf |journal=NIPS Workshop on Deep Learning and Unsupervised Feature Learning}}</ref><ref name="ref27">{{Cite journal |last1=Dahl |first1=George E.
|last2=Yu |first2=Dong |last3=Deng |first3=Li |last4=Acero |first4=Alex |date=2012 |title=Context-Dependent Pre-Trained Deep Neural Networks for Large-Vocabulary Speech Recognition |journal=IEEE Transactions on Audio, Speech, and Language Processing |volume=20 |issue=1 |pages=30–42 |doi=10.1109/TASL.2011.2134090 |s2cid=14862572}}</ref> <ref name="ICASSP2013">Deng L., Li, J., Huang, J., Yao, K., Yu, D., Seide, F. et al. [https://pdfs.semanticscholar.org/6bdc/cfe195bc49d218acc5be750aa49e41f408e4.pdf Recent Advances in Deep Learning for Speech Research at Microsoft] {{Webarchive|url=https://web.archive.org/web/20240909052236/https://pdfs.semanticscholar.org/6bdc/cfe195bc49d218acc5be750aa49e41f408e4.pdf |date=9 September 2024 }}. ICASSP, 2013.</ref> See comprehensive reviews of this development and of the state of the art as of October 2014 in the recent Springer book from Microsoft Research.<ref name="ReferenceA" /> See also the related background of automatic speech recognition and the impact of various machine learning paradigms, notably including [[deep learning]], in recent overview articles.<ref>{{Cite journal |last1=Deng |first1=L. 
|last2=Li |first2=Xiao |date=2013 |title=Machine Learning Paradigms for Speech Recognition: An Overview |url=http://cvsp.cs.ntua.gr/courses/patrec/slides_material2018/slides-2018/DengLi_MLParadigms-SpeechRecogn-AnOverview_TALSP13.pdf |journal=IEEE Transactions on Audio, Speech, and Language Processing |volume=21 |issue=5 |pages=1060–1089 |doi=10.1109/TASL.2013.2244083 |s2cid=16585863 |access-date=9 September 2024 |archive-date=9 September 2024 |archive-url=https://web.archive.org/web/20240909052239/http://cvsp.cs.ntua.gr/courses/patrec/slides_material2018/slides-2018/DengLi_MLParadigms-SpeechRecogn-AnOverview_TALSP13.pdf |url-status=live }}</ref><ref name="scholarpedia2015">{{Cite journal |last=Schmidhuber |first=Jürgen |author-link=Jürgen Schmidhuber |year=2015 |title=Deep Learning |journal=Scholarpedia |volume=10 |issue=11 |page=32832 |bibcode=2015SchpJ..1032832S |doi=10.4249/scholarpedia.32832 |doi-access=free}}</ref> One fundamental principle of [[deep learning]] is to do away with hand-crafted [[feature engineering]] and to use raw features. This principle was first explored successfully in a deep autoencoder architecture operating on "raw" spectrogram or linear filter-bank features,<ref name="interspeech2010">L. Deng, M. Seltzer, D. Yu, A. Acero, A. Mohamed, and G. Hinton (2010) [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.185.1908&rep=rep1&type=pdf Binary Coding of Speech Spectrograms Using a Deep Auto-encoder]. Interspeech.</ref> showing its superiority over mel-cepstral features, which involve a few stages of fixed transformation from spectrograms.
The true "raw" features of speech, waveforms, have more recently been shown to produce excellent larger-scale speech recognition results.<ref name="interspeech2014">{{Cite book |last1=Tüske |first1=Zoltán |title=Interspeech 2014 |last2=Golik |first2=Pavel |last3=Schlüter |first3=Ralf |last4=Ney |first4=Hermann |year=2014 |chapter=Acoustic Modeling with Deep Neural Networks Using Raw Time Signal for LVCSR |chapter-url=https://www-i6.informatik.rwth-aachen.de/publications/download/937/T%7Bu%7DskeZolt%7Ba%7DnGolikPavelSchl%7Bu%7DterRalfNeyHermann--AcousticModelingwithDeepNeuralNetworksUsingRawTimeSignalfor%7BLVCSR%7D--2014.pdf |archive-url=https://web.archive.org/web/20161221174753/https://www-i6.informatik.rwth-aachen.de/publications/download/937/T%7Bu%7DskeZolt%7Ba%7DnGolikPavelSchl%7Bu%7DterRalfNeyHermann--AcousticModelingwithDeepNeuralNetworksUsingRawTimeSignalfor%7BLVCSR%7D--2014.pdf |archive-date=21 December 2016 |url-status=live |df=dmy-all}}</ref>

=== End-to-end automatic speech recognition ===
Since 2014, there has been much research interest in "end-to-end" ASR. Traditional phonetic-based (i.e., all [[Hidden Markov model|HMM]]-based) approaches required separate components and training for the pronunciation, acoustic, and [[language model]]. End-to-end models jointly learn all the components of the speech recognizer. This is valuable because it simplifies both the training and the deployment process. For example, an [[N-gram|n-gram language model]] is required for all HMM-based systems, and a typical n-gram language model often takes several gigabytes of memory, making it impractical to deploy on mobile devices.<ref>{{Cite book |last=Jurafsky |first=Daniel |title=Speech and Language Processing |year=2016}}</ref> Consequently, modern commercial ASR systems from [[Google]] and [[Apple Inc.|Apple]] ({{as of|2017|lc=y}}) are deployed in the cloud and require a network connection, as opposed to running locally on the device.
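The n-gram language model referred to above is, at its core, a table of conditional probabilities estimated from counts. A minimal maximum-likelihood bigram sketch over two toy sentences; real systems add smoothing and store counts for billions of n-grams, which is why the tables run to gigabytes:

```python
from collections import Counter

def train_bigram_lm(sentences):
    """Maximum-likelihood bigram model: P(w2 | w1) = count(w1 w2) / count(w1).
    Sentence boundaries are marked with <s> and </s>."""
    unigrams, bigrams = Counter(), Counter()
    for sent in sentences:
        words = ["<s>"] + sent.split() + ["</s>"]
        unigrams.update(words[:-1])          # count each history word
        bigrams.update(zip(words[:-1], words[1:]))
    return {(w1, w2): c / unigrams[w1] for (w1, w2), c in bigrams.items()}

lm = train_bigram_lm(["recognize speech", "recognize the speech"])
```

Here `lm[("recognize", "speech")]` is 0.5 because "recognize" is followed by "speech" in one of its two occurrences; an end-to-end model absorbs this kind of statistic into its network weights instead of an explicit table.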
The first attempt at end-to-end ASR was with [[Connectionist temporal classification|Connectionist Temporal Classification]] (CTC)-based systems introduced by [[Alex Graves (computer scientist)|Alex Graves]] of [[DeepMind|Google DeepMind]] and Navdeep Jaitly of the [[University of Toronto]] in 2014.<ref>{{Cite journal |last=Graves |first=Alex |year=2014 |title=Towards End-to-End Speech Recognition with Recurrent Neural Networks |url=http://www.jmlr.org/proceedings/papers/v32/graves14.pdf |url-status=dead |journal=ICML |archive-url=https://web.archive.org/web/20170110184531/http://jmlr.org/proceedings/papers/v32/graves14.pdf |archive-date=10 January 2017 |access-date=22 July 2019}}</ref> The model consisted of [[recurrent neural network]]s and a CTC layer. Jointly, the RNN-CTC model learns the pronunciation and acoustic model together; however, it is incapable of learning the language model due to [[conditional independence]] assumptions, similar to an HMM. Consequently, CTC models can directly learn to map speech acoustics to English characters, but the models make many common spelling mistakes and must rely on a separate language model to clean up the transcripts. Later, [[Baidu]] expanded on the work with extremely large datasets and demonstrated some commercial success in Mandarin Chinese and English.<ref>{{Cite arXiv |eprint=1512.02595 |class=cs.CL |first=Dario |last=Amodei |title=Deep Speech 2: End-to-End Speech Recognition in English and Mandarin |year=2016}}</ref> In 2016, the [[University of Oxford]] presented [[LipNet]],<ref>{{Cite web |date=4 November 2016 |title=LipNet: How easy do you think lipreading is?
|url=https://www.youtube.com/watch?v=fa5QGremQf8 |url-status=live |archive-url=https://web.archive.org/web/20170427104009/https://www.youtube.com/watch?v=fa5QGremQf8 |archive-date=27 April 2017 |access-date=5 May 2017 |website=YouTube |df=dmy-all}}</ref> the first end-to-end sentence-level lipreading model, using spatiotemporal convolutions coupled with an RNN-CTC architecture, surpassing human-level performance on a restricted-grammar dataset.<ref>{{Cite arXiv |eprint=1611.01599 |class=cs.CV |first1=Yannis |last1=Assael |first2=Brendan |last2=Shillingford |title=LipNet: End-to-End Sentence-level Lipreading |date=5 November 2016 |last3=Whiteson |first3=Shimon |last4=de Freitas |first4=Nando}}</ref> A large-scale CNN-RNN-CTC architecture was presented in 2018 by [[DeepMind|Google DeepMind]], achieving 6 times better performance than human experts.<ref name=":0">{{Cite arXiv |eprint=1807.05162 |class=cs.CV |first1=Brendan |last1=Shillingford |first2=Yannis |last2=Assael |title=Large-Scale Visual Speech Recognition |date=2018-07-13 |last3=Hoffman |first3=Matthew W. |last4=Paine |first4=Thomas |last5=Hughes |first5=Cían |last6=Prabhu |first6=Utsav |last7=Liao |first7=Hank |last8=Sak |first8=Hasim |last9=Rao |first9=Kanishka}}</ref> In 2019, [[Nvidia]] launched two CNN-CTC ASR models, Jasper and QuartzNet, with an overall word error rate (WER) of 3%.<ref>{{Cite book |last1=Li |first1=Jason |last2=Lavrukhin |first2=Vitaly |last3=Ginsburg |first3=Boris |last4=Leary |first4=Ryan |last5=Kuchaiev |first5=Oleksii |last6=Cohen |first6=Jonathan M.
|last7=Nguyen |first7=Huyen |last8=Gadde |first8=Ravi Teja |title=Interspeech 2019 |date=2019 |chapter=Jasper: An End-to-End Convolutional Neural Acoustic Model |chapter-url=https://www.isca-archive.org/interspeech_2019/li19_interspeech.html |pages=71–75 |doi=10.21437/Interspeech.2019-1819|arxiv=1904.03288 }}</ref><ref>{{Citation |last1=Kriman |first1=Samuel |title=QuartzNet: Deep Automatic Speech Recognition with 1D Time-Channel Separable Convolutions |date=2019-10-22 |arxiv=1910.10261 |last2=Beliaev |first2=Stanislav |last3=Ginsburg |first3=Boris |last4=Huang |first4=Jocelyn |last5=Kuchaiev |first5=Oleksii |last6=Lavrukhin |first6=Vitaly |last7=Leary |first7=Ryan |last8=Li |first8=Jason |last9=Zhang |first9=Yang}}</ref> Similar to other deep learning applications, [[transfer learning]] and [[domain adaptation]] are important strategies for reusing and extending the capabilities of deep learning models, particularly because of the high cost of training models from scratch and the small size of the available corpora in many languages and specific domains.<ref>{{Cite journal |last1=Medeiros |first1=Eduardo |last2=Corado |first2=Leonel |last3=Rato |first3=Luís |last4=Quaresma |first4=Paulo |last5=Salgueiro |first5=Pedro |date=May 2023 |title=Domain Adaptation Speech-to-Text for Low-Resource European Portuguese Using Deep Learning |journal=Future Internet |language=en |volume=15 |issue=5 |pages=159 |doi=10.3390/fi15050159 |doi-access=free |issn=1999-5903}}</ref><ref>{{Cite journal |last1=Joshi |first1=Raviraj |last2=Singh |first2=Anupam |date=May 2022 |editor-last=Malmasi |editor-first=Shervin |editor2-last=Rokhlenko |editor2-first=Oleg |editor3-last=Ueffing |editor3-first=Nicola |editor4-last=Guy |editor4-first=Ido |editor5-last=Agichtein |editor5-first=Eugene |editor6-last=Kallumadi |editor6-first=Surya |title=A Simple Baseline for Domain Adaptation in End to End ASR Systems Using Synthetic Data |url=https://aclanthology.org/2022.ecnlp-1.28/ |journal=Proceedings of the
Fifth Workshop on E-Commerce and NLP (ECNLP 5) |location=Dublin, Ireland |publisher=Association for Computational Linguistics |pages=244–249 |doi=10.18653/v1/2022.ecnlp-1.28|arxiv=2206.13240 }}</ref><ref>{{Cite book |last1=Sukhadia |first1=Vrunda N. |last2=Umesh |first2=S. |chapter=Domain Adaptation of Low-Resource Target-Domain Models Using Well-Trained ASR Conformer Models |date=2023-01-09 |title=2022 IEEE Spoken Language Technology Workshop (SLT) |chapter-url=https://ieeexplore.ieee.org/document/10023233 |publisher=IEEE |pages=295–301 |doi=10.1109/SLT54892.2023.10023233 |arxiv=2202.09167 |isbn=979-8-3503-9690-4}}</ref> An alternative approach to CTC-based models is attention-based models. Attention-based ASR models were introduced simultaneously by Chan et al. of [[Carnegie Mellon University]] and [[Google Brain]] and by Bahdanau et al. of the [[Université de Montréal|University of Montreal]] in 2016.<ref>{{Cite journal |last1=Chan |first1=William |last2=Jaitly |first2=Navdeep |last3=Le |first3=Quoc |last4=Vinyals |first4=Oriol |year=2016 |title=Listen, Attend and Spell: A Neural Network for Large Vocabulary Conversational Speech Recognition |url=https://storage.googleapis.com/pub-tools-public-publication-data/pdf/44926.pdf |journal=ICASSP |access-date=9 September 2024 |archive-date=9 September 2024 |archive-url=https://web.archive.org/web/20240909053931/https://storage.googleapis.com/pub-tools-public-publication-data/pdf/44926.pdf |url-status=live }}</ref><ref>{{Cite arXiv |eprint=1508.04395 |class=cs.CL |first=Dzmitry |last=Bahdanau |title=End-to-End Attention-based Large Vocabulary Speech Recognition |year=2016}}</ref> The model, named "Listen, Attend and Spell" (LAS), literally "listens" to the acoustic signal, pays "attention" to different parts of the signal, and "spells" out the transcript one character at a time.
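A single attention step of the kind such models use can be sketched as follows. This toy version scores hypothetical two-dimensional encoder states against a query with a plain dot product; a real model uses learned projections of much higher-dimensional states:

```python
import math

def attend(query, encoder_states):
    """One attention step: score each encoder state against the query
    (dot product), normalize the scores with a softmax, and return the
    weighted sum (the "context" used when emitting the next character)."""
    scores = [sum(q * h for q, h in zip(query, state))
              for state in encoder_states]
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    context = [sum(w * state[i] for w, state in zip(weights, encoder_states))
               for i in range(len(query))]
    return weights, context

# Three encoder states; the query is most similar to the second one,
# so the second state receives the largest attention weight.
states = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
weights, context = attend([0.0, 2.0], states)
```

The weights form a probability distribution over input positions, which is what lets the decoder "pay attention" to different parts of the signal for each output character.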
Unlike CTC-based models, attention-based models do not have conditional-independence assumptions and can learn all the components of a speech recognizer, including the pronunciation, acoustic, and language model, directly. This means that, during deployment, there is no need to carry a separate language model, making attention-based models very practical for applications with limited memory. By the end of 2016, attention-based models had seen considerable success, including outperforming the CTC models (with or without an external language model).<ref>{{Cite arXiv |eprint=1612.02695 |class=cs.NE |first1=Jan |last1=Chorowski |first2=Navdeep |last2=Jaitly |title=Towards better decoding and language model integration in sequence to sequence models |date=8 December 2016}}</ref> Various extensions have been proposed since the original LAS model. Latent Sequence Decompositions (LSD) was proposed by [[Carnegie Mellon University]], [[Massachusetts Institute of Technology|MIT]] and [[Google Brain]] to directly emit sub-word units, which are more natural than English characters;<ref>{{Cite arXiv |eprint=1610.03035 |class=stat.ML |first1=William |last1=Chan |first2=Yu |last2=Zhang |title=Latent Sequence Decompositions |date=10 October 2016 |last3=Le |first3=Quoc |last4=Jaitly |first4=Navdeep}}</ref> [[University of Oxford]] and [[DeepMind|Google DeepMind]] extended LAS to "Watch, Listen, Attend and Spell" (WLAS) to handle lip reading, surpassing human-level performance.<ref>{{Cite book |last1=Chung |first1=Joon Son |title=2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) |last2=Senior |first2=Andrew |last3=Vinyals |first3=Oriol |last4=Zisserman |first4=Andrew |date=16 November 2016 |isbn=978-1-5386-0457-1 |pages=3444–3453 |chapter=Lip Reading Sentences in the Wild |doi=10.1109/CVPR.2017.367 |arxiv=1611.05358 |s2cid=1662180}}</ref>