=== Text and speech processing ===

; [[Optical character recognition]] (OCR)
: Given an image representing printed text, determine the corresponding text.

; [[Speech recognition]]
: Given a sound clip of a person or people speaking, determine the textual representation of the speech. This is the opposite of [[text to speech]] and is one of the extremely difficult problems colloquially termed "[[AI-complete]]" (see above). In [[natural speech]] there are hardly any pauses between successive words, so [[speech segmentation]] is a necessary subtask of speech recognition (see below). In most spoken languages, the sounds representing successive letters blend into each other in a process termed [[coarticulation]], so converting the [[analog signal]] to discrete characters can be very difficult. In addition, because words in the same language are spoken by people with different accents, speech recognition software must be able to recognize widely varying inputs as having the same textual equivalent.

; [[Speech segmentation]]
: Given a sound clip of a person or people speaking, separate it into words. A subtask of [[speech recognition]], typically grouped with it.

; [[Text-to-speech]]
: Given a text, transform its units into a spoken representation. Text-to-speech can be used to aid the visually impaired.<ref>{{Citation |last1=Yi |first1=Chucai |last2=Tian |first2=Yingli |author2-link=Yingli Tian |title=Assistive Text Reading from Complex Background for Blind Persons |date=2012 |work=Camera-Based Document Analysis and Recognition |series=Lecture Notes in Computer Science |volume=7139 |pages=15–28 |publisher=Springer Berlin Heidelberg |language=en |citeseerx=10.1.1.668.869 |doi=10.1007/978-3-642-29364-1_2 |isbn=9783642293634}}</ref>

; [[Word segmentation]] ([[Tokenization (lexical analysis)|Tokenization]])
: Tokenization is a process used in text analysis that divides text into individual words or word fragments. It produces two key components: a word index, which maps each unique word to a numerical identifier, and the tokenized text, in which each word is replaced by its corresponding numerical token. These numerical tokens are then used in various deep learning methods<ref name=":0" /> (a minimal example is sketched after this list).
: For a language like [[English language|English]], this is fairly trivial, since words are usually separated by spaces. However, some written languages such as [[Chinese language|Chinese]], [[Japanese language|Japanese]] and [[Thai language|Thai]] do not mark word boundaries in this fashion, and in those languages text segmentation is a significant task requiring knowledge of the [[vocabulary]] and [[Morphology (linguistics)|morphology]] of words in the language. Tokenization is also used in processes such as [[bag of words|bag-of-words]] (BOW) creation in data mining.{{Citation needed|date=May 2024}}
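The following is a minimal sketch of word-level tokenization as described above, assuming simple whitespace splitting and lower-casing. The function names, the example corpus, and the choice of reserving the identifier 0 for unknown words are illustrative assumptions, not the behaviour of any particular library.

<syntaxhighlight lang="python">
# Illustrative word-level tokenization (assumption: whitespace-delimited words).
# Builds a word index (unique word -> numerical id) and replaces each word in a
# text with its corresponding numerical token.

def build_word_index(texts):
    """Map each unique word to a numerical identifier (ids start at 1; 0 is reserved for unknown words)."""
    index = {}
    for text in texts:
        for word in text.lower().split():
            if word not in index:
                index[word] = len(index) + 1
    return index

def tokenize(text, index):
    """Replace each word with its numerical token (0 for out-of-vocabulary words)."""
    return [index.get(word, 0) for word in text.lower().split()]

corpus = ["the cat sat on the mat", "the dog sat"]
word_index = build_word_index(corpus)
print(word_index)                           # {'the': 1, 'cat': 2, 'sat': 3, 'on': 4, 'mat': 5, 'dog': 6}
print(tokenize("the cat sat", word_index))  # [1, 2, 3]
</syntaxhighlight>

Because this sketch relies on whitespace to locate word boundaries, it would not work for languages such as Chinese, Japanese or Thai, where, as noted above, word segmentation itself is the difficult part of the task.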