==Further information==

===Conferences and journals===
Popular speech recognition conferences held each year or two include SpeechTEK and SpeechTEK Europe, [[International Conference on Acoustics, Speech, and Signal Processing|ICASSP]], [[Interspeech]]/Eurospeech, and the IEEE ASRU. Conferences in the field of [[natural language processing]], such as [[Association for Computational Linguistics|ACL]], [[North American Chapter of the Association for Computational Linguistics|NAACL]], EMNLP, and HLT, are beginning to include papers on [[speech processing]]. Important journals include the [[IEEE]] Transactions on Speech and Audio Processing (later renamed [[IEEE]] Transactions on Audio, Speech and Language Processing and, since September 2014, the [[IEEE]]/ACM Transactions on Audio, Speech and Language Processing after merging with an ACM publication), Computer Speech and Language, and Speech Communication.

===Books===
Books such as ''Fundamentals of Speech Recognition'' by [[Lawrence Rabiner]] (1993) can be useful for acquiring basic knowledge, though they may not be fully up to date. Other good sources include ''Statistical Methods for Speech Recognition'' by [[Frederick Jelinek]], ''Spoken Language Processing'' (2001) by [[Xuedong Huang]] et al., ''Computer Speech'' by [[Manfred R. Schroeder]] (second edition, 2004), and ''Speech Processing: A Dynamic and Optimization-Oriented Approach'' (2003) by Li Deng and Doug O'Shaughnessy. The updated textbook ''Speech and Language Processing'' (2008) by [[Daniel Jurafsky|Jurafsky]] and Martin presents the basics and the state of the art for ASR. [[Speaker recognition]] uses many of the same features, much of the same front-end processing, and the same classification techniques as speech recognition; the textbook ''Fundamentals of Speaker Recognition'' is an in-depth, up-to-date source on its theory and practice.<ref name="auto">{{Cite book |last=Beigi |first=Homayoon |url=http://www.fundamentalsofspeakerrecognition.org |title=Fundamentals of Speaker Recognition |publisher=Springer |year=2011 |isbn=978-0-387-77591-3 |location=New York |archive-url=https://web.archive.org/web/20180131140911/http://www.fundamentalsofspeakerrecognition.org/ |archive-date=31 January 2018 |url-status=live |df=dmy-all}}</ref>

A good insight into the techniques used in the best modern systems can be gained from government-sponsored evaluations such as those organised by [[DARPA]] (the largest ongoing speech recognition-related project as of 2007 was the GALE project, which involved both speech recognition and translation components). A good and accessible introduction to speech recognition technology and its history is the general-audience book ''The Voice in the Machine: Building Computers That Understand Speech'' by [[Roberto Pieraccini]] (2012).

A more recent book, ''Automatic Speech Recognition: A Deep Learning Approach'' (Springer, 2014), by Microsoft researchers D. Yu and L. Deng, gives highly mathematical technical detail on how deep learning methods are derived and implemented in modern speech recognition systems based on DNNs and related deep learning methods.<ref name="ReferenceA">{{Cite journal |last1=Yu |first1=D. |last2=Deng |first2=L. |date=2014 |title=Automatic Speech Recognition: A Deep Learning Approach (Publisher: Springer)}}</ref> A related book published earlier in 2014, ''Deep Learning: Methods and Applications'' by L. Deng and D. Yu, provides a less technical but more methodology-focused overview of DNN-based speech recognition during 2009–2014, placed within the more general context of deep learning applications including not only speech recognition but also image recognition, natural language processing, information retrieval, multimodal processing, and multitask learning.<ref name="BOOK2014">{{Cite journal |last1=Deng |first1=Li |last2=Yu |first2=Dong |year=2014 |title=Deep Learning: Methods and Applications |url=http://research.microsoft.com/pubs/209355/DeepLearning-NowPublishing-Vol7-SIG-039.pdf |url-status=live |journal=Foundations and Trends in Signal Processing |volume=7 |issue=3–4 |pages=197–387 |citeseerx=10.1.1.691.3679 |doi=10.1561/2000000039 |archive-url=https://web.archive.org/web/20141022161017/http://research.microsoft.com/pubs/209355/DeepLearning-NowPublishing-Vol7-SIG-039.pdf |archive-date=22 October 2014 |df=dmy-all}}</ref>

===Software===
In terms of freely available resources, [[Carnegie Mellon University]]'s [[CMU Sphinx|Sphinx]] toolkit is one place to start both learning about speech recognition and experimenting with it. Another resource (free but copyrighted) is the [[HTK (software)|HTK]] book (and the accompanying HTK toolkit). For more recent, state-of-the-art techniques, the [[Kaldi (software)|Kaldi]] toolkit can be used.<ref>Povey, D., Ghoshal, A., Boulianne, G., Burget, L., Glembek, O., Goel, N., ... & Vesely, K. (2011). The Kaldi speech recognition toolkit. In IEEE 2011 workshop on automatic speech recognition and understanding. IEEE Signal Processing Society.</ref>

In 2017 [[Mozilla]] launched the open-source project [[Common Voice]]<ref>{{Cite web |title=Common Voice by Mozilla |url=https://voice.mozilla.org/ |url-status=dead |archive-url=https://web.archive.org/web/20200227020208/https://voice.mozilla.org/ |archive-date=27 February 2020 |access-date=9 November 2019 |website=voice.mozilla.org}}</ref> to gather a large database of voices to help build the free speech recognition project DeepSpeech (available free on [[GitHub]]),<ref>{{Cite web |date=9 November 2019 |title=A TensorFlow implementation of Baidu's DeepSpeech architecture: mozilla/DeepSpeech |url=https://github.com/mozilla/DeepSpeech |via=GitHub |access-date=9 September 2024 |archive-date=9 September 2024 |archive-url=https://web.archive.org/web/20240909053949/https://github.com/mozilla/DeepSpeech |url-status=live }}</ref> using Google's open-source platform [[TensorFlow]].<ref>{{Cite web |date=9 November 2019 |title=GitHub - tensorflow/docs: TensorFlow documentation |url=https://github.com/tensorflow/docs |via=GitHub |access-date=9 September 2024 |archive-date=9 September 2024 |archive-url=https://web.archive.org/web/20240909053830/https://github.com/tensorflow/docs |url-status=live }}</ref> When Mozilla redirected funding away from the project in 2020, it was forked by its original developers as Coqui STT,<ref>{{Cite web |title=Coqui, a startup providing open speech tech for everyone |url=https://github.com/coqui-ai |access-date=2022-03-07 |website=GitHub |archive-date=9 September 2024 |archive-url=https://web.archive.org/web/20240909054614/https://github.com/coqui-ai |url-status=live }}</ref> which uses the same open-source license.<ref>{{Cite magazine |last=Coffey |first=Donavyn |date=2021-04-28 |title=Māori are trying to save their language from Big Tech |url=https://www.wired.co.uk/article/maori-language-tech |access-date=2021-10-16 |magazine=Wired UK |language=en-GB |issn=1357-0978 |archive-date=9 September 2024 |archive-url=https://web.archive.org/web/20240909053950/https://www.wired.com/story/maori-language-tech/ |url-status=live }}</ref><ref>{{Cite web |date=2021-07-07 |title=Why you should move from DeepSpeech to coqui.ai |url=https://discourse.mozilla.org/t/why-you-should-move-from-deepspeech-to-coqui-ai/82798 |access-date=2021-10-16 |website=Mozilla Discourse |language=en-US}}</ref>

Google [[Gboard]] supports speech recognition in all [[Android (operating system)|Android]] applications; it can be activated through the [[microphone]] [[Icon (computing)|icon]].<ref>{{Cite web |title=Type with your voice |url=https://support.google.com/gboard/answer/2781851?hl=en&co=GENIE.Platform%3DAndroid |access-date=9 September 2024 |archive-date=9 September 2024 |archive-url=https://web.archive.org/web/20240909054332/https://support.google.com/gboard/answer/2781851?hl=en&co=GENIE.Platform%3DAndroid |url-status=live }}</ref> Speech recognition can be activated in [[Microsoft Windows]] operating systems by pressing the Windows logo key + Ctrl + S.<ref>{{cite web|url=https://support.microsoft.com/en-us/windows/use-voice-recognition-in-windows-83ff75bd-63eb-0b6c-18d4-6fae94050571|title=Use voice recognition in Windows|archive-url=https://web.archive.org/web/20250409223456/https://support.microsoft.com/en-us/windows/use-voice-recognition-in-windows-83ff75bd-63eb-0b6c-18d4-6fae94050571|archive-date=April 9, 2025|url-status=live}}</ref> Commercial cloud-based speech recognition APIs are also broadly available. For more software resources, see [[List of speech recognition software]].
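As a rough illustration of how such open-source engines are typically driven from code, the following is a minimal sketch of offline transcription with the Python bindings of Mozilla's DeepSpeech; the model and audio file names, and the assumption of a 16 kHz mono 16-bit WAV input, are illustrative rather than details taken from the sources above.

<syntaxhighlight lang="python">
# Minimal sketch: offline transcription with DeepSpeech's Python bindings.
# File names and the 16 kHz mono 16-bit PCM WAV input are assumptions for illustration.
import wave

import numpy as np
from deepspeech import Model  # pip install deepspeech

# Pre-trained acoustic model and optional external language-model scorer,
# downloaded separately from the project's release page.
model = Model("deepspeech-0.9.3-models.pbmm")
model.enableExternalScorer("deepspeech-0.9.3-models.scorer")

# Read the WAV file into a NumPy int16 buffer of raw PCM samples.
with wave.open("audio.wav", "rb") as wav:
    frames = wav.readframes(wav.getnframes())
audio = np.frombuffer(frames, dtype=np.int16)

# Run speech-to-text on the audio buffer and print the transcript.
print(model.stt(audio))
</syntaxhighlight>

Coqui STT, the fork mentioned above, exposes an essentially similar Python interface, so the same pattern of loading a pre-trained model and passing it a 16-bit audio buffer applies there as well.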