===Health care===

====Medical documentation====
In the [[health care]] sector, speech recognition can be implemented in the front end or back end of the medical documentation process. Front-end speech recognition is where the provider dictates into a speech-recognition engine, the recognized words are displayed as they are spoken, and the dictator is responsible for editing and signing off on the document. Back-end or deferred speech recognition is where the provider dictates into a [[digital dictation]] system, the voice is routed through a speech-recognition machine, and the recognized draft document is routed along with the original voice file to the editor, where the draft is edited and the report finalized. Deferred speech recognition is currently in widespread use in the industry.

One of the major issues relating to the use of speech recognition in healthcare is that the [[American Recovery and Reinvestment Act of 2009]] ([[American Recovery and Reinvestment Act of 2009|ARRA]]) provides for substantial financial benefits to physicians who utilize an EMR according to "Meaningful Use" standards. These standards require that a substantial amount of data be maintained by the EMR (now more commonly referred to as an [[Electronic Health Record]] or EHR). The use of speech recognition is more naturally suited to the generation of narrative text, as part of a radiology/pathology interpretation, progress note, or discharge summary: the ergonomic gains of using speech recognition to enter structured discrete data (e.g., numeric values or codes from a list or a [[controlled vocabulary]]) are relatively minimal for people who are sighted and who can operate a keyboard and mouse. A more significant issue is that most EHRs have not been expressly tailored to take advantage of voice-recognition capabilities.
A large part of the clinician's interaction with the EHR involves navigation through the user interface using menus and tab/button clicks, and is heavily dependent on keyboard and mouse: voice-based navigation provides only modest ergonomic benefits. By contrast, many highly customized systems for radiology or pathology dictation implement voice "macros", where the use of certain phrases – e.g., "normal report" – will automatically fill in a large number of default values and/or generate boilerplate, which will vary with the type of the exam – e.g., a chest X-ray vs. a gastrointestinal contrast series for a radiology system.

====Therapeutic use====
Prolonged use of speech recognition software in conjunction with [[word processor]]s has shown benefits to short-term-memory restrengthening in [[brain AVM]] patients who have been treated with [[Resection (surgery)|resection]]. Further research needs to be conducted to determine cognitive benefits for individuals whose AVMs have been treated using radiologic techniques.{{citation needed|date=November 2016}}