===Security concerns===
Speech recognition can become a means of attack, theft, or accidental operation. For example, activation words like "Alexa" spoken in an audio or video broadcast can cause devices in homes and offices to start listening for input inappropriately, or possibly take an unwanted action.<ref>{{Cite news |date=6 March 2016 |title=Listen Up: Your AI Assistant Goes Crazy For NPR Too |url=https://www.npr.org/2016/03/06/469383361/listen-up-your-ai-assistant-goes-crazy-for-npr-too |url-status=live |archive-url=https://web.archive.org/web/20170723210358/http://www.npr.org/2016/03/06/469383361/listen-up-your-ai-assistant-goes-crazy-for-npr-too |archive-date=23 July 2017 |work=[[NPR]] |df=dmy-all}}</ref> Voice-controlled devices are also accessible to visitors to the building, or even to those outside the building if they can be heard inside. Attackers may be able to gain access to personal information, such as calendar entries, address book contents, private messages, and documents. They may also be able to impersonate the user to send messages or make online purchases.

Two attacks have been demonstrated that use artificial sounds. One transmits ultrasound and attempts to send commands without nearby people noticing.<ref>{{Cite news |last=Claburn |first=Thomas |date=25 August 2017 |title=Is it possible to control Amazon Alexa, Google Now using inaudible commands? Absolutely |url=https://www.theregister.co.uk/2017/08/25/amazon_alexa_answers_inaudible_commands/?mt=1504024969000 |url-status=live |archive-url=https://web.archive.org/web/20170902051123/https://www.theregister.co.uk/2017/08/25/amazon_alexa_answers_inaudible_commands/?mt=1504024969000 |archive-date=2 September 2017 |work=[[The Register]] |df=dmy-all}}</ref> The other adds small, inaudible distortions to speech or music that are specially crafted to confuse a specific speech recognition system into recognizing music as speech, or to make what sounds like one command to a human sound like a different command to the system.<ref>{{Cite web |date=31 January 2018 |title=Attack Targets Automatic Speech Recognition Systems |url=https://www.vice.com/en/article/attack-targets-automatic-speech-recognition-systems/ |url-status=live |archive-url=https://web.archive.org/web/20180303050744/https://motherboard.vice.com/en_us/article/d34nnz/attack-targets-automatic-speech-recognition-systems |archive-date=3 March 2018 |access-date=1 May 2018 |website=vice.com |df=dmy-all}}</ref>
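The second attack is an instance of an adversarial example: a perturbation optimized against the recognizer's internals rather than against human hearing. The following is a minimal sketch of the idea, assuming a randomly initialized toy linear classifier as a stand-in for a real speech recognition model; the command vocabulary, model, step size, and iteration count are illustrative assumptions, not details from the cited demonstration.

<syntaxhighlight lang="python">
# Sketch of a targeted adversarial-audio perturbation: small changes to a
# waveform that flip a recognizer's output command. A random linear layer
# stands in for a real ASR model; all names and sizes are hypothetical.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

N_SAMPLES = 16000    # one second of audio at 16 kHz
N_COMMANDS = 4       # toy command vocabulary, e.g. 3 = "unlock the door"

model = torch.nn.Linear(N_SAMPLES, N_COMMANDS)   # stand-in recognizer
audio = torch.randn(1, N_SAMPLES)                # benign input clip
target = torch.tensor([3])                       # attacker's chosen command
epsilon = 1e-3                                   # per-step distortion bound

adv = audio.clone().detach()
for _ in range(200):
    adv.requires_grad_(True)
    logits = model(adv)
    if logits.argmax(dim=1).item() == target.item():
        break                        # the recognizer now "hears" the target
    loss = F.cross_entropy(logits, target)
    loss.backward()
    with torch.no_grad():
        # Move each sample slightly in the direction that raises the target
        # command's score (a fast-gradient-sign step).
        adv = adv - epsilon * adv.grad.sign()

print("original prediction:   ", model(audio).argmax(dim=1).item())
print("adversarial prediction:", model(adv).argmax(dim=1).item())
print("max per-sample change: ", (adv - audio).abs().max().item())
</syntaxhighlight>

Against a deployed system, such a perturbation would be optimized against the actual model and overlaid on ordinary speech or music, which is what can make the distortion imperceptible to listeners while still redirecting the recognizer.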