=== Prosodics and emotional content ===
{{See also|Emotional speech recognition|Prosody (linguistics)}}
A study in the journal ''Speech Communication'' by Amy Drahota and colleagues at the [[University of Portsmouth]], [[UK]], reported that listeners to voice recordings could determine, at better than chance levels, whether or not the speaker was smiling.<ref>{{Cite news|title=Smile -and the world can hear you |date=January 9, 2008 |url=http://www.port.ac.uk/aboutus/newsandevents/news/title,74220,en.html |archive-date=May 17, 2008 |archive-url=https://web.archive.org/web/20080517102201/http://www.port.ac.uk/aboutus/newsandevents/news/title%2C74220%2Cen.html |publisher=University of Portsmouth |url-status=dead }}</ref><ref>{{Cite news |title=Smile – And The World Can Hear You, Even If You Hide |work=Science Daily |date=January 2008 |url=https://www.sciencedaily.com/releases/2008/01/080111224745.htm}}</ref><ref>{{Cite journal |last1 = Drahota |first1 = A. |title = The vocal communication of different kinds of smile |doi = 10.1016/j.specom.2007.10.001 |journal = Speech Communication |volume = 50 |issue = 4 |pages = 278–287 |year = 2008 |s2cid = 46693018 |url = http://peer.ccsd.cnrs.fr/docs/00/49/91/97/PDF/PEER_stage2_10.1016%252Fj.specom.2007.10.001.pdf |url-status = dead |archive-url = https://web.archive.org/web/20130703062330/https://peer.ccsd.cnrs.fr/docs/00/49/91/97/PDF/PEER_stage2_10.1016/j.specom.2007.10.001.pdf |archive-date = 2013-07-03 }}</ref> It was suggested that identifying the vocal features that signal emotional content could help make synthesized speech sound more natural. A related issue is modifying the [[pitch contour]] of a sentence according to whether it is affirmative, interrogative, or exclamatory.
One technique for pitch modification<ref name="Muralishankar2004" /> applies the [[discrete cosine transform]] in the source domain (the [[linear prediction]] residual). Such pitch-synchronous pitch modification techniques require a priori pitch marking of the synthesized speech database, using techniques such as epoch extraction based on a dynamic [[Plosive|plosion]] index applied to the integrated linear prediction residual of the [[Voice (phonetics)|voiced]] regions of speech.<ref>{{cite journal|last1=Prathosh|first1=A. P.|last2=Ramakrishnan|first2=A. G.|last3=Ananthapadmanabha|first3=T. V.|title=Epoch extraction based on integrated linear prediction residual using plosion index|journal=IEEE Trans. Audio Speech Language Processing|date=December 2013|volume=21|issue=12|pages=2471–2480|doi=10.1109/TASL.2013.2273717|s2cid=10491251}}</ref> In general, prosody remains a challenge for speech synthesizers and is an active research topic.
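The general idea behind source-domain pitch modification can be sketched as follows. This is a hypothetical, minimal illustration in Python (NumPy/SciPy), not the exact algorithm of the cited papers: it computes a linear-prediction residual, then changes the length of a pitch-synchronous residual frame by truncating or zero-padding its DCT coefficients and inverting the DCT at the new length. The function names and the LPC order are illustrative choices.

```python
import numpy as np
from scipy.fft import dct, idct
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def lp_residual(x, order=10):
    """Linear prediction residual via the autocorrelation method.

    Returns the residual signal and the prediction-error filter
    coefficients [1, -a1, ..., -ap].
    """
    r = np.correlate(x, x, mode="full")[len(x) - 1 : len(x) + order]
    a = solve_toeplitz(r[:-1], r[1:])          # Toeplitz normal equations
    coeffs = np.concatenate(([1.0], -a))       # A(z) = 1 - sum a_k z^-k
    residual = lfilter(coeffs, [1.0], x)       # inverse-filter the speech
    return residual, coeffs

def modify_pitch_frame(residual_frame, factor):
    """Rescale one pitch-synchronous residual frame by `factor`.

    factor > 1 shortens the pitch period (raises pitch); the DCT
    coefficients are truncated or zero-padded, then inverted at the
    new frame length.
    """
    c = dct(residual_frame, norm="ortho")
    new_len = int(round(len(residual_frame) / factor))
    c_new = np.zeros(new_len)
    n = min(len(c), new_len)
    c_new[:n] = c[:n]
    return idct(c_new, norm="ortho")
```

A full system would locate the pitch marks (epochs) first, apply this frame by frame, and re-synthesize by filtering the modified residual through the LP synthesis filter; the sketch above only shows the per-frame DCT step.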