{{short description|Process of displaying interpretive texts to screens}} {{More citations needed|date=December 2022}} [[File:Closed captioning symbol.svg|thumb|The ''CC in a television'' symbol was created at [[WGBH-TV|WGBH]].|alt=The logo "CC" in a rounded white rectangle, framed black]] [[Image:New Zealand deaf symbol.svg|thumb|The "Slashed ear" symbol is the [[International Symbol for Deafness]] used by [[TVNZ]] and other [[Television in New Zealand|New Zealand broadcasters]], as well as on [[VHS]] tapes released by [[Alliance Atlantis]]. The symbol was used{{when|date=April 2020}} on road signs to identify [[Telecommunications device for the deaf|TTY]] access. A similar symbol depicting an ear (slashed or not) is used on television in several other countries, including [[France]] and [[Spain]].|alt=A symbol of a slashed ear]] '''Closed captioning''' ('''CC''') is the process of displaying text on a television, video screen, or other visual display to provide additional or interpretive information, where the viewer is given the choice of whether the text is displayed. Closed captions are typically used as a [[transcription (linguistics)|transcription]] of the audio portion of a program as it occurs (either [[wikt:verbatim|verbatim]] or in edited form), sometimes including descriptions of non-speech elements. Other uses have included providing a textual alternative language translation of a presentation's primary audio language that is usually burned-in (or "open") to the video and unselectable. [[HTML5]] defines '''subtitles''' as a "transcription or translation of the dialogue when sound is available but not understood" by the viewer (for example, dialogue in a foreign language) and '''captions''' as a "transcription or translation of the dialogue, sound effects, relevant musical cues, and other relevant audio information when sound is unavailable or not clearly audible" (for example, when audio is muted or the viewer is deaf or hard of hearing).<ref> {{cite web |url-status=live |url=http://www.w3.org/TR/html5/embedded-content-0.html#the-track-element |archive-url= https://web.archive.org/web/20130606104953/http://www.w3.org/TR/html5/embedded-content-0.html |archive-date=2013-06-06 |title=4.8.10 The track element |work=HTML Standard }}</ref> == Terminology == The term ''closed'' indicates that the captions are not visible until activated by the viewer, usually via the [[remote control]] or menu option. On the other hand, the terms ''open'', ''burned-in'', ''baked on'', ''hard-coded'', or simply ''hard'' indicate that the captions are visible to all viewers as they are embedded in the video. In the United States and Canada, the terms ''[[subtitles]]'' and ''captions'' have different meanings. ''Subtitles'' assume the viewer can hear but cannot understand the language or accent, or the speech is not entirely clear, so they transcribe only dialogue and some on-screen text. ''Captions'' aim to describe to the deaf and hard of hearing all significant audio content—spoken dialogue and non-speech information such as the identity of speakers and, occasionally, their manner of speaking—along with any significant [[music]] or [[sound effect]]s using words or symbols. Also, the term ''closed caption'' has come to be used to also refer to the North American [[EIA-608]] encoding that is used with NTSC-compatible video. 
The [[United Kingdom]], [[Republic of Ireland|Ireland]], and a number of other countries do not distinguish between subtitles and captions and use ''subtitles'' as the general term.{{citation needed|date=May 2023}} The equivalent of ''captioning'' is usually referred to as ''subtitles for the hard of hearing''. Their presence is referenced on screen by notation which says "Subtitles", or previously "Subtitles 888" or just "888" (the latter two are in reference to the conventional [[Teletext|videotext]] channel for captions), which is why the term ''subtitle'' is also used to refer to the [[Ceefax]]-based videotext encoding that is used with PAL-compatible video. The term ''subtitle'' has been replaced with ''caption'' in a number of markets—such as Australia and New Zealand—that purchase large amounts of imported US material, with much of that video having had the US CC logo already superimposed over the start of it. In New Zealand, broadcasters superimpose an ear logo with a line through it that represents subtitles for the hard of hearing, even though they are currently referred to as captions. In the UK, modern digital television services have subtitles for the majority of programs, so it is no longer necessary to highlight which have subtitling/captioning and which do not.{{citation needed|date=February 2020}} [[Remote control]] handsets for TVs, DVD players, and similar devices in most European markets often use "SUB" or "SUBTITLE" on the button used to control the display of subtitles and captions. == History{{anchor|Open captioning}} == === Open captioning === Regular open-captioned broadcasts began on [[Public Broadcasting Service|PBS]]'s ''[[The French Chef]]'' in 1972.<ref name="caphist">{{cite web|url=http://www.ncicap.org/caphist.asp |title=A Brief History of Captioned Television |website=National Captioning Institute |url-status=dead |archive-url=https://web.archive.org/web/20110719060406/http://www.ncicap.org/caphist.asp |archive-date=2011-07-19 }}</ref> [[WGBH-TV|WGBH]] began open captioning of the programs ''[[Zoom (1972 TV series)|Zoom]]'', ''[[ABC World News Tonight]]'', and ''[[Once Upon a Classic]]'' shortly thereafter. === Technical development of closed captioning === Closed captioning was first demonstrated in the United States at the First National Conference on Television for the Hearing Impaired at the University of Tennessee in Knoxville, Tennessee, in December 1971.<ref name="caphist" /> A second demonstration of closed captioning was held at Gallaudet College (now [[Gallaudet University]]) on February 15, 1972, where [[American Broadcasting Company|ABC]] and the [[National Bureau of Standards]] demonstrated closed captions embedded within a normal broadcast of ''[[The Mod Squad]]''. At the same time in the UK the BBC was demonstrating its Ceefax text based broadcast service which they were already using as a foundation to the development of a closed caption production system. They were working with professor [[Alan Newell (English computer scientist)|Alan Newell]] from the University of Southampton who had been developing prototypes in the late 1960s. The closed captioning system was successfully encoded and broadcast in 1973 with the cooperation of PBS station [[WETA-TV|WETA]].<ref name="caphist" /> As a result of these tests, the FCC in 1976 set aside Line 21 for the transmission of closed captions. PBS engineers then developed the caption editing consoles that would be used to caption prerecorded programs. 
The [[BBC]] in the UK was the first broadcaster to include closed captions (called subtitles in the UK) in 1979 based on the [[Teletext]] framework for pre-recorded programming.

==== Real-time captioning ====
Real-time captioning, a process for captioning live broadcasts, was developed by the [[National Captioning Institute]] in 1982.<ref name="caphist" /> As developed, real-time captioning used [[stenotype]] operators, who are able to type at speeds of up to 375 words per minute, to provide captions for live television programs, allowing the viewer to see the captions within two to three seconds of the words being spoken.

Improvements in [[speech recognition]] technology mean that live captioning may be fully or partially automated. [[BBC Sport]] broadcasts use a "respeaker": a trained human who repeats the running commentary (with careful enunciation and some simplification and [[markup language|markup]]) for input to the automated text generation system. This is generally reliable, though errors are not unknown.<ref>{{cite web|url=https://www.bbc.com/news/uk-england-tyne-41473443|title=Match of the Day 2: Newcastle subtitle error leaves BBC red-faced|date=2 October 2017|work=[[BBC Online]]|access-date=2 October 2017}}</ref>

In the 1980s, [[DARPA]] sponsored a number of projects aimed at developing automatic speech recognition software. Much of this work was done by researchers at Carnegie Mellon University. In the 1990s, this program included a novel focus on using this technology for news transcription purposes.<ref>{{cite journal|pages=191–192|title=Speech Recognition by Machine: A Review|volume=6|number=3|year=2009|journal=International Journal of Computer Science and Information Security|arxiv=1001.2267 |last1=Anusuya |first1=M. A. |last2=Katti |first2=S. K. }}</ref> Later developments have yielded live, real-time AI-based caption-generating systems.<ref>{{cite news|url=https://www.theverge.com/2025/1/9/24339817/vlc-player-automatic-ai-subtitling-translation|title=VLC player demos real-time AI subtitling for videos}}</ref>

=== Full-scale closed captioning ===
The National Captioning Institute was created in 1979 in order to gain the cooperation of the commercial television networks.<ref name="caphist"/> The first use of regularly scheduled closed captioning on American television occurred on March 16, 1980.<ref>Gannon, Jack. 1981. ''Deaf Heritage-A Narrative History of Deaf America''. Silver Spring, MD: National Association of the Deaf, pp. 384-387</ref> [[Public Broadcasting Service|PBS]] developed the line-21 decoder, a decoding unit that could be connected to a standard television set.
This was sold commercially by [[Sears]] under the name Telecaption.<ref>{{cite book|title=Developing Technologies for Television Captioning|page=166|url=https://www.google.com/books/edition/Developing_Technologies_for_Television_C/h_kEbqTWTjgC?hl=en&gbpv=1&dq=telecaption+adapter+PBS+developed&pg=PA166&printsec=frontcover}}</ref> The first programs seen with captioning were a ''[[Walt Disney anthology series|Disney's Wonderful World]]'' presentation of the film ''[[Son of Flubber]]'' on [[NBC]], an ''[[The ABC Sunday Night Movie|ABC Sunday Night Movie]]'' airing of ''[[Semi-Tough]]'', and ''[[Masterpiece Theatre]]'' on PBS.<ref>"Today on TV", ''Chicago Daily Herald'', March 11, 1980, Section 2-5</ref>

Since 2010, the BBC has provided captioning for all programming across all seven of its main broadcast channels: [[BBC One]], [[BBC Two]], [[BBC Three]], [[BBC Four]], [[CBBC (TV channel)|CBBC]], [[CBeebies]] and [[BBC News (British TV channel)|BBC News]]. [[BBC iPlayer]] launched in 2008 as the first captioned [[video on demand|video-on-demand]] service from a major broadcaster, with levels of captioning comparable to those provided on its broadcast channels.

=== Legislative development in the U.S. ===
Until the passage of the Television Decoder Circuitry Act of 1990, television captioning was performed by a set-top box manufactured by Sanyo Electric and marketed by the National Captioning Institute (NCI). (At that time a set-top decoder cost about as much as a TV set itself, approximately $200.) Through discussions with the manufacturer, it was established that the appropriate circuitry integrated into the television set would be less expensive than the stand-alone box, and Ronald May, then a Sanyo employee, provided the expert witness testimony on behalf of Sanyo and Gallaudet University in support of the passage of the bill. On January 23, 1991, the [[Television Decoder Circuitry Act of 1990]] was passed by Congress.<ref name="caphist"/> This Act gave the [[Federal Communications Commission]] (FCC) power to enact rules on the implementation of closed captioning. It also required all analog television receivers with screens of 13 inches or greater, whether sold or manufactured, to have the ability to display closed captioning by July 1, 1993.<ref>{{Cite web|url=https://www.access-board.gov/sec508/guide/1194.24-decoderact.htm|title=Crossing at Roundabouts - United States Access Board|website=www.access-board.gov|access-date=2019-07-20|archive-date=2020-11-05|archive-url=https://web.archive.org/web/20201105232853/https://www.access-board.gov/sec508/guide/1194.24-decoderact.htm|url-status=dead}}</ref>

Also in 1990, the [[Americans with Disabilities Act]] (ADA) was passed to ensure equal opportunity for persons with disabilities.<ref name="caphist" /> The ADA prohibits discrimination against persons with disabilities in public accommodations or commercial facilities. Title III of the ADA requires that public facilities—such as hospitals, bars, shopping centers and museums (but not movie theaters)—provide access to verbal information on televisions, films and slide shows.

The Federal Communications Commission requires all providers of programs to caption material which has audio in English or Spanish, with certain exceptions specified in Section 79.1(d) of the commission's rules.
These exceptions apply to new networks; programs in languages other than English or Spanish; networks having to spend over 2% of income on captioning; networks having less than US$3,000,000 in revenue; and certain local programs; among other exceptions.<ref>{{Cite web|url=https://www.fcc.gov/general/self-implementing-exemptions-closed-captioning-rules|title=Self Implementing Exemptions From Closed Captioning Rules|date=July 8, 2011|website=Federal Communications Commission}}</ref> Those who are not covered by the exceptions may apply for a hardship waiver.<ref>{{Cite web|url=https://www.fcc.gov/economically-burdensome-exemption-closed-captioning-requirements|title=Economically Burdensome Exemption from Closed Captioning Requirements|date=May 30, 2017|website=Federal Communications Commission}}</ref> The [[Telecommunications Act of 1996]] expanded on the Decoder Circuitry Act to place the same requirements on [[digital television]] receivers by July 1, 2002.<ref>{{Cite web|url=https://www.fcc.gov/consumers/guides/closed-captioning-television|title=Closed Captioning on Television|date=May 6, 2011|website=Federal Communications Commission}}</ref> All TV programming distributors in the U.S. are required to provide closed captions for Spanish-language video programming as of January 1, 2010.<ref>{{Cite web|url=https://www.ecfr.gov/current/title-47/chapter-I/subchapter-C/part-79/subpart-A/section-79.1|title=§ 79.1 Closed captioning of televised video programming.|work=[[Code of Federal Regulations]]}}</ref> A bill, H.R. 3101, the Twenty-First Century Communications and Video Accessibility Act of 2010, was passed by the United States House of Representatives in July 2010.<ref>{{cite web|title=Twenty-First Century Communications and Video Accessibility Act of 2010|year=2010|url=http://www.govtrack.us/congress/bills/111/hr3101|access-date=2013-03-28|archive-date=2023-03-26|archive-url=https://web.archive.org/web/20230326024400/https://www.govtrack.us/congress/bills/111/hr3101|url-status=dead}}</ref> A similar bill, S. 3304, with the same name, was passed by the United States Senate on August 5, 2010 and by the House of Representatives on September 28, 2010, and was signed by President [[Barack Obama]] on October 8, 2010. The Act requires, in part, any [[Advanced Television Systems Committee standards|ATSC]]-decoding set-top box remote to have a button to turn the closed captioning in the output signal on or off. It also requires broadcasters to provide captioning for television programs redistributed on the Internet.<ref>{{cite web|title=Twenty-First Century Communications and Video Accessibility Act of 2010|year=2010|url=http://www.govtrack.us/congress/bills/111/s3304|access-date=2013-03-28|archive-date=2023-03-26|archive-url=https://web.archive.org/web/20230326024406/https://www.govtrack.us/congress/bills/111/s3304|url-status=dead}}</ref> On February 20, 2014, the FCC unanimously approved the implementation of quality standards for closed captioning,<ref>{{cite web|title=FCC Moves to Upgrade TV Closed Captioning Quality|year=2014|url=https://www.fcc.gov/document/fcc-moves-upgrade-tv-closed-captioning-quality}}</ref> addressing accuracy, timing, completeness, and placement. This is the first time the FCC has addressed quality issues in captions. In 2015, a law was passed in Hawaii requiring two screenings a week of each movie with captions on the screen. 
In 2022, a law took effect in New York City requiring movie theaters to offer captions on the screen for up to four showtimes per movie each week, including weekends and Friday nights.<ref>{{Cite web|url=https://apnews.com/article/technology-health-95d57b9ad5d17246d50d84c84c3127d5|title=Why captions are suddenly everywhere and how they got there|date=June 27, 2022|website=AP NEWS}}</ref>

Some state and local governments (including [[Boston, Massachusetts]]; [[Portland, Oregon]]; [[Rochester, New York]]; and [[Washington (state)|the State of Washington]]) require closed captioning to be activated on TVs in public places at all times, even if no one has requested it.<ref>{{cite news |url=https://www.washingtonpost.com/wellness/2022/12/22/boston-closed-captioning-tvs/ |title=More cities are requiring captions on public TVs. Here's why that matters. |newspaper=[[The Washington Post]]}}</ref>

=== Philippines ===
As amended by RA 10905, all TV networks in the Philippines are required to provide closed captions.<ref>{{Cite web|url=https://www.kbp.org.ph/philippine-tv-to-provide-closed-captioning|title=Philippine TV to Provide Closed Captioning – Kapisanan ng mga Brodkaster ng Pilipinas|website=www.kbp.org.ph|archive-url=https://web.archive.org/web/20231117173613/https://www.kbp.org.ph/philippine-tv-to-provide-closed-captioning|archive-date=2023-11-17|url-status=dead}}</ref> As of 2018, the three major TV networks in the country were testing the closed captioning system on their transmissions. [[ABS-CBN]] added closed captions to its daily afternoon airing of ''[[Chaplet of the Divine Mercy|3 O'Clock Habit]]''. [[TV5 (Philippine TV network)|TV5]] started implementing closed captions on its live noon and nightly news programs. [[GMA Network|GMA]] once broadcast news programs with closed captions but has since stopped. Only select [[Korean drama]]s and local or foreign movies, along with ''{{Lang|tl|Biyahe ni}} Drew'' (English title: ''Drew's Travel Adventure'') and ''{{lang|tl|Idol sa Kusina}}'' (English title: ''Kitchen Idol''), are broadcast with proper closed captioning.<ref>{{cite web|url=https://www.yugatech.com/news/gma-tv5-now-airing-shows-with-closed-captioning/#SlhAodUTy5JM4oD4.99|title=GMA, TV5 now airing shows with closed captioning |author=Carl Lamiel|date=October 14, 2017|publisher=YugaTech|access-date=February 2, 2019}}</ref>

Since 2016, all Filipino-language films, as well as some streaming services such as iWant, have included English subtitles in some showings. The law regarding this was proposed by Gerald Anthony Gullas Jr., a lawmaker from Cebu City, as part of his push to standardize the use of both official languages of the Philippines, on the grounds that many Filipinos had not mastered English vocabulary.<ref>{{cite web|url=https://www.rappler.com/entertainment/40688-lawmaker-wants-english-subtitles-ph-tv-movies|title=Lawmaker wants English subtitles for PH TV, movies|date=October 6, 2013|publisher=[[Rappler]]|access-date=September 6, 2019}}</ref>

=== Legislative development in Australia ===
The government of Australia provided [[seed funding]] in 1981 for the establishment of the Australian Caption Centre (ACC) and the purchase of equipment. Captioning by the ACC commenced in 1982, and a further grant from the Australian government enabled the ACC to achieve and maintain financial self-sufficiency. The ACC, now known as [[Media Access Australia]], sold its commercial captioning division to [[Red Bee Media]] in December 2005.
Red Bee Media continues to provide captioning services in Australia today.<ref>{{cite web |url = http://www.dbcde.gov.au/__data/assets/pdf_file/0020/84710/Media_Access_Australia_-_Response_to_Media_Access_Review_2008.pdf |title = Submission to DBCDE's investigation into Access to Electronic Media for the Hearing and Vision Impaired |access-date = 2009-02-07 |author1 = Alex Varley |date = June 2008 |publisher = Media Access Australia |location = Australia |pages = 12, 18, 43 |url-status = dead |archive-url = https://web.archive.org/web/20090326214643/http://www.dbcde.gov.au/__data/assets/pdf_file/0020/84710/Media_Access_Australia_-_Response_to_Media_Access_Review_2008.pdf |archive-date = 2009-03-26 }}</ref><ref>{{cite web|url=http://www.mediaaccess.org.au/index.php?option=com_content&view=article&id=359&Itemid=100|title=About Media Access Australia|publisher=Media Access Australia|location=Australia|url-status=live|archive-url=https://web.archive.org/web/20090101180233/http://www.mediaaccess.org.au/index.php?option=com_content&view=article&id=359&Itemid=100|archive-date=1 January 2009|access-date=2009-02-07}}</ref><ref>{{cite web|url=http://www.redbeemedia.com.au/aboutus-australia.html |title=About Red Bee Media Australia |access-date=2009-02-07 |publisher=Red Bee Media Australia Pty Limited |location=Australia |url-status=dead |archive-url=https://web.archive.org/web/20090613004348/http://www.redbeemedia.com.au/aboutus-australia.html |archive-date=June 13, 2009 }}</ref>

=== Funding development in New Zealand ===
In 1981, [[TVNZ]] held a [[telethon]] to raise funds for Teletext-encoding equipment used for the creation and editing of text-based broadcast services for the deaf. The service came into use in 1984, with caption creation and importing paid for as part of the public broadcasting fee until the creation of the [[NZ On Air]] taxpayer fund. That fund is used to provide captioning for NZ On Air content and TVNZ news shows, and to convert [[EIA-608]] US captions to the preferred [[EBU]] STL format, for [[TVNZ 1]], [[TV 2 (New Zealand)|TV 2]] and [[TV3 (New Zealand)|TV 3]] only, with archived captions available to [[Four (New Zealand)|FOUR]] and select [[SKY Network Television|Sky]] programming. During the second half of 2012, [[TV3 (New Zealand)|TV3]] and [[Four (New Zealand)|FOUR]] began providing non-Teletext DVB image-based captions on their HD service and used the same format on the satellite service, which has since caused major timing issues related to server load, as well as the loss of captions from most SD DVB-S receivers, such as those Sky Television provides to its customers. As of April 2, 2013, only the Teletext page 801 caption service remains in use; the informational non-caption Teletext content has been discontinued.

== Application ==
Closed captions were created for [[deaf]] and [[Hearing impairment|hard of hearing]] individuals to assist in comprehension. They can also be used as a tool by those learning to read, by those learning to speak a non-native language, or in environments where the audio is difficult to hear or is intentionally muted. Captions can also be used by viewers who simply wish to read a transcript along with the program audio.{{cn|date=May 2025}}

In the United States, the [[National Captioning Institute]] noted that [[English as a foreign or second language]] (ESL) learners were the largest group buying decoders in the late 1980s and early 1990s before built-in decoders became a standard feature of US television sets.
This suggested that the largest audience of closed captioning was people whose native language was not English. In the United Kingdom, of 7.5 million people using TV subtitles (closed captioning), 6 million have no hearing impairment.<ref>{{cite web |url=http://www.ofcom.org.uk/consult/condocs/accessservs/summary/ |website=[[Ofcom]] |title=Television access services |archive-url=https://web.archive.org/web/20100601080942/http://www.ofcom.org.uk/consult/condocs/accessservs/summary/ |archive-date=June 1, 2010 |url-status=dead}}</ref> Closed captions are also used in public environments, such as bars and restaurants, where patrons may not be able to hear over the background noise, or where multiple televisions are displaying different programs. In addition, online videos may be treated through digital processing of their audio content by various robotic algorithms (robots). Multiple chains of errors are the result. When a video is truly and accurately transcribed, then the closed-captioning publication serves a useful purpose, and the content is available for search engines to index and make available to users on the Internet.<ref>{{cite web |url = http://www.dbcde.gov.au/__data/assets/pdf_file/0020/84710/Media_Access_Australia_-_Response_to_Media_Access_Review_2008.pdf |title = Submission to DBCDE's investigation into Access to Electronic Media for the Hearing and Vision Impaired |access-date = 2009-01-29 |author1 = Alex Varley |date = June 2008 |publisher = Media Access Australia |location = Australia |page = 16 |quote = The use of captions and audio description is not limited to deaf and blind people. Captions can be used in situations of "temporary" deafness, such as watching televisions in public areas where the sound has been turned down (commonplace in America and starting to appear more in Australia). |url-status = dead |archive-url = https://web.archive.org/web/20081203071118/http://www.dbcde.gov.au/__data/assets/pdf_file/0020/84710/Media_Access_Australia_-_Response_to_Media_Access_Review_2008.pdf |archive-date = 2008-12-03 }}</ref><ref>{{cite web |url = http://www.sfgov.org/site/sfmdc_page.asp?id=86619 |title = Resolution in Support of Board of Supervisors' Ordinance Requiring Activation of Closed Captioning on Televisions in Public Areas |access-date = 2009-01-29 |author = Mayor's Disability Council |date = May 16, 2008 |publisher = City and County of San Francisco |quote = that television receivers located in any part of a facility open to the general public have closed captioning activated at all times when the facility is open and the television receiver is in use. |url-status = dead |archive-url = https://web.archive.org/web/20090128130124/http://www.sfgov.org/site/sfmdc_page.asp?id=86619 |archive-date = January 28, 2009 }}</ref><ref>{{cite web|url=http://www.ada.gov/norwegian.htm|title=Settlement Agreement Between The United States And Norwegian American Hospital Under The Americans With Disabilities Act|author1=Alex Varley|date=April 18, 2005|publisher=U.S. 
Department of Justice|access-date=2009-01-29|quote=will have closed captioning operating in all public areas where there are televisions with closed captioning; televisions in public areas without built-in closed captioning capability will be replaced with televisions that have such capability}}</ref> Some television sets can be set to automatically turn captioning on when the volume is muted.{{cn|date=May 2025}} == Television and video == For live programs, spoken words comprising the television program's [[soundtrack]] are transcribed by a human operator (a [[speech-to-text reporter]]) using [[stenotype]]- or [[stenomask]]-type machines, whose phonetic output is instantly translated into text by a computer and displayed on the screen. This technique was developed in the 1970s as an initiative of the [[BBC]]'s [[Ceefax]] [[teletext]] service.<ref>{{cite web|url=http://teletext.mb21.co.uk/timeline/early-ceefax-subtitling.shtml|title=mb21 - ether.net - The Teletext Museum - Timeline|work=mb21.co.uk}}</ref> In collaboration with the BBC, a university student took on the research project of writing the first phonetics-to-text conversion program for this purpose. Sometimes, the captions of live broadcasts, like news bulletins, sports events, live entertainment shows, and other live shows, fall behind by a few seconds. This delay is because the machine does not know what the person is going to say next, so after the person on the show says the sentence, the captions appear.<ref>{{cite web|url=http://www.bbc.co.uk/rd/pubs/whp/whp-pdf-files/WHP065.pdf|title=Publications|work=bbc.co.uk|url-status=dead|archive-url=https://web.archive.org/web/20061012151554/http://www.bbc.co.uk/rd/pubs/whp/whp-pdf-files/WHP065.pdf|archive-date=12 October 2006}}</ref> Automatic computer speech recognition works well when trained to recognize a single voice, and so since 2003, the BBC does live subtitling by having someone re-speak what is being broadcast. Live captioning is also a form of [[real-time text]]. Meanwhile, sport events on ESPN are using [[court reporter]]s, using a special (steno) keyboard and individually constructed "dictionaries." In some cases, the transcript is available beforehand, and captions are simply displayed during the program after being edited. For programs that have a mix of prepared and live content, such as [[news bulletin]]s, a combination of techniques is used. For prerecorded programs, commercials, and home videos, audio is transcribed and captions are prepared, positioned, and timed in advance. For all types of [[NTSC]] programming, captions are encoded into [[EIA-608|Line 21]] of the [[vertical blanking interval]]{{dash}}a part of the TV picture that sits just above the visible portion and is usually unseen. 
For [[ATSC Standards|ATSC]] ([[digital television]]) programming, three streams are encoded in the video: two are backward-compatible ''Line 21'' captions, and the third is a set of up to 63 additional caption streams encoded in [[EIA-708]] format.<ref name="atsc.org">{{cite web|url=http://www.atsc.org/faq/faq_closed.html |title=Closed Captioning FAQ |access-date=2008-05-31 |url-status=dead |archive-url=https://web.archive.org/web/20080901221032/http://www.atsc.org/faq/faq_closed.html |archive-date=2008-09-01 }} - ATSC Closed Captioning FAQ ([http://www.evertz.com/resources/cc-imp-paper.pdf cached copy] {{webarchive |url=https://web.archive.org/web/20060322091039/http://www.evertz.com/resources/cc-imp-paper.pdf |date=2006-03-22 }})</ref>

Captioning is modulated and stored differently in [[PAL]] and [[SECAM]] countries (625 lines, 50 fields per second), where [[teletext]] is used rather than [[EIA-608]], but the methods of preparation and the Line 21 field used are similar. For home [[Betamax|Beta]] and [[VHS]] videotapes, this Line 21 field must be shifted down because of the greater number of VBI lines used in 625-line PAL countries, though only a small minority of European PAL VHS machines support this (or any) format for closed caption recording. Like all teletext fields, teletext captions cannot be stored by a standard 625-line VHS recorder (due to the lack of field-shifting support); they are available on all professional [[S-VHS]] recordings because all fields are recorded. Recorded Teletext caption fields also suffer from a higher number of caption errors due to the increased number of bits and a low [[signal-to-noise ratio]], especially on low-bandwidth VHS. This is why Teletext captions were stored on floppy disk, separate from the analogue master tape. DVDs have their own system for subtitles and captions, which are digitally inserted in the data stream and decoded on playback into video.

For older televisions, a set-top box or other decoder is usually required. In the US, since the passage of the Television Decoder Circuitry Act, manufacturers of most television receivers sold have been required to include closed captioning display capability. High-definition TV sets, receivers, and [[TV tuner card|tuner cards]] are also covered, though the technical specifications are different (high-definition display screens, as opposed to high-definition TVs, may lack captioning). Canada has no similar law but receives the same sets as the US in most cases.

During transmission, single-byte errors can be replaced by a white space, which can appear at the beginning of the program. Multiple byte errors during EIA-608 transmission can affect the screen momentarily, causing it to default to a real-time mode such as the "roll up" style, display random letters on screen, and then revert to normal. Uncorrectable byte errors within the teletext page header will cause whole captions to be dropped. Because EIA-608 uses only two characters per video frame, it sends captions ahead of time, storing them in a second buffer awaiting a command to display them; Teletext sends captions in real time. The use of capitalization varies among caption providers. Most caption providers capitalize all words, while others such as WGBH and non-US providers prefer mixed-case letters.
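The EIA-608 byte-pair structure described above can be illustrated with a short sketch. The following is a minimal, simplified example rather than a real decoder: it checks the odd parity carried by each byte, blanks a failed byte to a space as noted above, and maps the basic printable range to text, while ignoring control codes, the extended character sets, and channel selection.

<syntaxhighlight lang="python">
def odd_parity_ok(byte):
    """Each EIA-608 byte carries 7 data bits plus one odd-parity bit."""
    return bin(byte).count("1") % 2 == 1

def decode_pair(b1, b2):
    """Decode a single Line 21 byte pair (two characters per frame) into text."""
    chars = []
    for b in (b1, b2):
        if not odd_parity_ok(b):
            chars.append(" ")    # a byte that fails parity is typically blanked to a space
            continue
        data = b & 0x7F          # strip the parity bit
        if data == 0x00:
            continue             # null padding between captions
        if data < 0x20:
            return ""            # start of a two-byte control code; not handled in this sketch
        chars.append(chr(data))  # the basic character set is close to ASCII, with a few substitutions
    return "".join(chars)

# Example: the pair 0xC8 0x49 has valid odd parity on both bytes and decodes to "HI".
print(decode_pair(0xC8, 0x49))
</syntaxhighlight>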
There are two main styles of Line 21 closed captioning: * '''Roll-up''' or '''scroll-up''' or '''paint-on''' or '''scrolling''': Real-time words sent in paint-on or scrolling mode appear from left to right, up to one line at a time; when a line is filled in roll-up mode, the whole line scrolls up to make way for a new line, and the line on top is erased. The lines usually appear at the bottom of the screen, but can actually be placed on any of the 14 screen rows to avoid covering graphics or action. This method is used when captioning video in real-time such as for live events, where a sequential word-by-word captioning process is needed or a pre-made intermediary file isn't available. This method is signaled on [[EIA-608]] by a two-byte caption command or in Teletext by replacing rows for a roll-up effect and duplicating rows for a paint-on effect. This allows for real-time caption line editing. [[File:Closed Caption Demonstration Still-Felix.png|thumb|225px|A still frame showing simulated closed captioning in the pop-on style]] * '''Pop-on''' or '''pop-up''' or '''block''': A caption appears on any of the 14 screen rows as a complete sentence, which can be followed by additional captions. This method is used when captions come from an intermediary file (such as the Scenarist or EBU STL file formats) for pre-taped television and film programming, commonly produced at captioning facilities. This method of captioning can be aided by digital scripts or voice recognition software, and if used for live events, would require a video delay to avoid a large delay in the captions' appearance on-screen, which occurs with Teletext-encoded live subtitles. === Caption formatting === [[TVNZ]] Access Services and Red Bee Media for [[BBC]] and Australia example: <pre style="color:#004000;"> I got the machine ready.</pre> <pre style="color:red;">ENGINE STARTING (speeding away) </pre> UK IMS for [[ITV (TV network)|ITV]] and Sky example: <pre style="color:red;"> (man) I got the machine ready. (engine starting) </pre> US WGBH Access Services example: <pre> MAN: I got the machine ready. (engine starting) </pre> US [[National Captioning Institute]] example: <pre> Man: I GOT THE MACHINE READY. [ENGINE STARTING] </pre> US [[CaptionMax]] example: <pre> - I got the machine ready. [engine starting] </pre> US in-house real-time roll-up example: <pre> >> Man: I GOT THE MACHINE READY. [engine starting] </pre> Non-US in-house real-time roll-up example: <pre style="color:#808000;"> MAN: I got the machine ready. (ENGINE STARTING) </pre> US [[VITAC]] example: <pre> Man: I got the machine ready. [ Engine starting ] </pre> ==== Syntax ==== For real-time captioning done outside of captioning facilities, the following syntax is used: * '>>' (two prefixed [[greater-than sign]]s) indicates a change in single speaker. ** Sometimes appended with the speaker's name in alternate case, followed by a [[colon (punctuation)|colon]]. * '>>>' (three prefixed greater-than signs) indicates a change in news story. Styles of syntax that are used by various captioning producers: * Capitals indicate main on-screen dialogue and the name of the speaker. ** Legacy [[EIA-608]] home caption decoder fonts had no [[descender]]s on lowercase letters. ** Outside North America, capitals with background coloration indicate a song title or sound effect description. ** Outside North America, capitals with black or no background coloration indicates when a word is stressed or emphasized. 
* Descenders indicate background sound description and [[Offscreen|off-screen]] dialogue. ** Most modern caption producers, such as [[WGBH-TV]], use [[mixed case]] for both on-screen and [[Offscreen|off-screen]] dialogue. * '-' (a prefixed dash) indicates a change in single speaker (used by [[National Captioning Institute]] or [[CaptionMax]]). * Words in [[italic type|italics]] indicate when a word is stressed or emphasized and when real world names are quoted. ** Italics and [[bold type]] are only supported by [[EIA-608]]. ** Some North American providers use this for [[Narration|narrated]] dialogue. ** Some providers use this for [[offscreen|off-screen]] dialogue. ** Italics are also applied when a word is spoken in a foreign language. * Text coloration indicates captioning credits and sponsorship. ** Used by [[music video]]s in the past, but generally has declined due to system incompatibilities. ** In Ceefax/Teletext countries, it indicates a change in single speaker in place of '>>'. ** Some Teletext countries use coloration to indicate when a word is stressed or emphasized. ** Coloration is limited to white, green, blue, cyan, red, yellow and magenta. ** UK order of use for text is [[white]], [[green]], [[cyan]], [[yellow]]; and backgrounds is [[black]], [[red]], [[blue]], [[magenta]], [[white]]. ** US order of use for text is [[white]], [[yellow]], [[cyan]], [[green]]; and backgrounds is [[black]], [[blue]], [[red]], [[magenta]], white. * [[Square brackets]] or [[parentheses]] indicate a song title or sound effect description. * [[Parentheses]] indicate speaker's vocal pitch e.g., (man), (woman), (boy) or (girl). ** Outside North America, [[parentheses]] indicate a silent on-screen action. * A pair of [[eighth note]]s is used to bracket a line of [[lyrics]] to indicate singing. ** A pair of eighth notes on a line of no text are used during a section of instrumental music or even voice tones playing with the music. ** Outside North America, a single [[number sign]] is used on a line of [[lyrics]] to indicate singing or may just instead use the eighth notes without the lyrics playing. ** An additional musical notation character is appended to the end of the last line of lyrics to indicate the song's end. ** As the symbol is unsupported by [[Ceefax]]/[[Teletext]], a [[number sign]] - which resembles a musical [[sharp (music)|sharp]] - is substituted. ==== Technical aspects ==== There were many shortcomings in the original Line 21 specification from a [[typography|typographic]] standpoint, since, for example, it lacked many of the characters required for captioning in languages other than English. Since that time, the core Line 21 character set has been expanded to include quite a few more characters, handling most requirements for languages common in North and South America such as [[French language|French]], [[spanish language|Spanish]], and [[portuguese language|Portuguese]], though those extended characters are not required in all decoders and are thus unreliable in everyday use. The problem has been almost eliminated with a market specific full set of Western European characters and a private adopted [[Norpak]] extension for [[South Korea]]n and [[Japan]]ese markets. The full [[EIA-708]] standard for digital television has worldwide character set support, but there has been little use of it due to [[EBU]] Teletext dominating [[Digital Video Broadcasting|DVB]] countries, which has its own extended character sets. 
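The roll-up behavior described above (a filled bottom row pushes the existing rows up, and the top row is erased) can be sketched in a few lines of code. This is only an illustrative simulation, not any broadcaster's or decoder's actual implementation; the three-row window and 32-column row width are assumptions that follow common EIA-608 practice.

<syntaxhighlight lang="python">
ROWS = 3   # roll-up captions commonly use two to four rows
COLS = 32  # EIA-608 caption rows hold up to 32 characters

def roll_up(words, rows=ROWS, cols=COLS):
    """Yield successive caption-window states as words arrive one at a time."""
    window = [""]                      # the last entry is the row currently being typed
    for word in words:
        if window[-1] and len(window[-1]) + 1 + len(word) > cols:
            window.append("")          # start a new bottom row...
            window = window[-rows:]    # ...and scroll: keep only the newest rows
        window[-1] = (window[-1] + " " + word).strip()
        yield list(window)

for state in roll_up("I GOT THE MACHINE READY SO WE CAN LEAVE AS SOON AS THE ENGINE STARTS".split()):
    print(state)
</syntaxhighlight>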
Captions are often edited to make them easier to read and to reduce the amount of text displayed onscreen. This editing can range from very minor, with only a few occasional unimportant lines missed, to severe, where virtually every line spoken by the actors is condensed. The measure used to guide this editing is words per minute, commonly varying from 180 to 300, depending on the type of program. Offensive words are also captioned, but if the program is censored for TV broadcast, the broadcaster might not have arranged for the captioning to be edited or censored also. The "TV Guardian", a television [[set-top box]], is available to parents who wish to censor the offensive language of programs—the video signal is fed into the box, and if it detects an offensive word in the captioning, the audio signal is bleeped or muted for that period of time.

=== Caption channels ===
[[File:Cc3tout.jpg|thumb|150px|right|A [[bug (television)|bug]] touting CC1 and CC3 captions (on [[Telemundo]])]]
The Line 21 data stream can consist of data from several data channels [[multiplexed]] together. Odd field 1 can have four data channels: two separate synchronized caption channels (CC1, CC2) and two channels of caption-related text, such as website [[URL]]s (T1, T2). Even field 2 can have five additional data channels: two separate synchronized caption channels (CC3, CC4), two channels of caption-related text (T3, T4), and [[Extended Data Services]] (XDS) for Now/Next [[EPG]] details. The XDS data structure is defined in CEA-608.

As CC1 and CC2 share bandwidth, if there is a lot of data in CC1, there will be little room for CC2 data, and CC1 is generally the only one used, carrying the primary audio's captions. Similarly, CC3 and CC4 share the second, even field of Line 21. Since some early caption decoders supported only single-field decoding of CC1 and CC2, captions for [[Second audio program|SAP]] in a second language were often placed in CC2. This led to bandwidth problems, and the U.S. [[Federal Communications Commission]] (FCC) recommendation is that bilingual programming should have the second caption language in CC3. Many Spanish-language television networks, such as [[Univision]] and [[Telemundo]], provide [[Telemundo#English subtitles|English subtitles]] for many of their Spanish-language programs in CC3. [[Canada|Canadian]] broadcasters use CC3 for French-translated SAPs; a similar practice is followed in South Korea and Japan.

Ceefax and Teletext can have a larger number of captions for other languages due to the use of multiple VBI lines. However, only [[Europe|European countries]] used a second subtitle page for second-language audio tracks where either [[NICAM]] dual mono or [[Zweikanalton]] was used.

== Digital television interoperability issues ==
The US [[ATSC Standards|ATSC]] [[digital television]] system originally specified two different kinds of ''closed captioning'' datastream standards, both delivered within the video stream: the original analog-compatible format (carried on [[EIA-608|Line 21]]) and the more modern digital-only [[CEA-708]] format.<ref name="atsc.org" /> The [[Federal Communications Commission|US FCC]] mandates that broadcasters deliver (and generate, if necessary) both datastream formats, with the [[CTA-708|CEA-708]] format being merely a conversion of the Line 21 format.<ref name="atsc.org" /> The [[Canada|Canadian]] [[CRTC]] has not mandated that broadcasters broadcast both datastream formats, or exclusively in one format.
To avoid large conversion cost outlays, most broadcasters and networks simply provide [[EIA-608]] captions along with a transcoded [[CTA-708|CEA-708]] version encapsulated within [[CTA-708|CEA-708]] packets.

=== Incompatibility issues with digital TV ===
Many viewers find that when they acquire a [[digital television]] or [[set-top box]] they are unable to view closed caption (CC) information, even though the broadcaster is sending it and the [[Television|TV]] is able to display it.

Originally, CC information was included in the picture ("line 21") via a [[Composite video|composite video input]], but there is no equivalent capability in digital video interconnects (such as [[Digital Visual Interface|DVI]] and [[HDMI]]) between the display and a "source". A "source", in this case, can be a [[DVD player]] or a [[Terrestrial television|terrestrial]] or cable digital television receiver. When CC information is encoded in the [[MPEG-2]] data stream, only the device that decodes the [[MPEG-2|MPEG-2 data]] (a source) has access to the ''closed caption'' information; there is no standard for transmitting the CC information to a display monitor separately. Thus, if there is CC information, the source device needs to overlay the CC information on the picture prior to transmitting it to the display over the interconnect's video output.

The responsibility for decoding the CC information and overlaying it onto the visible video image has thus been taken away from the TV display and put into the "source" side of DVI and HDMI digital video interconnects. Because the TV handles "mute" while, when using [[Digital Visual Interface|DVI]] and [[HDMI]], a different device handles turning CC on and off, the "captions come on automatically when the [[Television|TV]] is muted" feature no longer works.
That source device—such as a [[DVD player]] or [[set-top box]]—must "burn" the image of the CC text into the picture data carried by the [[HDMI]] or [[Digital Visual Interface|DVI]] cable; there's no other way for the CC text to be carried over the [[HDMI]] or [[Digital Visual Interface|DVI]] cable.<ref> {{cite web |url=https://denon.custhelp.com/app/answers/detail/a_id/299/~/hdmi-support-for-closed-captioning |title=HDMI Support for 'Closed Captioning' |archive-url=https://web.archive.org/web/20210213061121/https://denon.custhelp.com/app/answers/detail/a_id/299/~/hdmi-support-for-closed-captioning |archive-date=2021-02-13 |url-status=dead}} </ref><ref> {{cite web |url=https://video.stackexchange.com/questions/14977/what-types-of-cables-support-closed-captioning |title=What types of cables support closed captioning?}} </ref><ref> {{cite web |author=Steve Barber |url=https://www.nchearingloss.org/article_digcap.htm |title=Understanding Digital Captions |archive-url=https://web.archive.org/web/20240226160247/https://www.nchearingloss.org/article_digcap.htm |archive-date=2024-02-26 |url-status=dead}} </ref><ref> {{cite web |author=Neil Bauman |url=https://hearinglosshelp.com/blog/getting-captions-on-your-new-tv-the-good-the-bad-and-the-downright-frustrating/ |title=Getting Captions On Your New TV—The Good, the Bad and the Downright Frustrating}} </ref><ref> {{cite web |author=Stuart Sweet |url=https://blog.solidsignal.com/tutorials/can-get-closed-captioning-hdmi-directv/ |title=Can you get closed captioning over HDMI with DIRECTV?|date=31 May 2022 }} </ref><ref> {{cite web |url=https://creativecow.net/forums/thread/closed-captions-support-in-hdlink/ |title=closed captions support in HDLink}} </ref> Many source devices do not have the ability to overlay CC information, for controlling the CC overlay can be complicated. For example, the [[Motorola]] DCT-5xxx and -6xxx cable set-top receivers have the ability to decode CC information located on the [[MPEG-2]] stream and overlay it on the picture, but turning CC on and off requires turning off the unit and going into a special setup menu (it is not on the standard configuration menu and it cannot be controlled using the remote). Historically, [[DVD player]]s, [[Videocassette recorder|VCRs]] and set-top tuners did not need to do this overlaying, since they simply passed this information on to the TV, and they are not mandated to perform this overlaying. Many modern digital television receivers can be directly connected to cables, but often cannot receive scrambled channels that the user is paying for. Thus, the lack of a standard way of sending CC information between components, along with the lack of a mandate to add this information to a picture, results in CC being unavailable to many hard-of-hearing and deaf users. 
The [[EBU]] [[Ceefax]]-based teletext systems are the source of closed captioning signals, so when teletext is embedded into [[DVB-T]] or [[DVB-S]] broadcasts, the closed captioning signal is included.<ref>{{Cite web|url=https://www.etsi.org/deliver/etsi_en/300700_300799/300743/01.03.01_60/en_300743v010301p.pdf|title=ETSI EN 300 743: Digital Video Broadcasting (DVB); Subtitling systems}}</ref> However, for DVB-T and DVB-S, it is not necessary for a teletext page signal to also be present ([[ITV1]], for example, does not carry analogue teletext signals on [[Sky UK|Sky Digital]], but does carry the embedded version, accessible from the "Services" menu of the receiver, or more recently by turning them off/on from a mini menu accessible from the "help" button).

The [[BBC|BBC's]] Subtitle (Captioning) Editorial Guidelines were born out of the capabilities of [[Teletext]], but are now used by multiple European broadcasters as an editorial and design best-practice guide.<ref>{{Cite web|url=https://bbc.github.io/subtitle-guidelines/|title=BBC Subtitle Guidelines|website=bbc.github.io|access-date=2019-07-19|archive-date=2019-10-20|archive-url=https://web.archive.org/web/20191020222240/http://bbc.github.io/subtitle-guidelines/|url-status=dead}}</ref>

=== New Zealand ===
In New Zealand, captions use an [[EBU]] [[Ceefax]]-based teletext system on [[Digital Video Broadcasting|DVB]] broadcasts via [[Satellite television|satellite]] and [[cable television]], with the exception of [[MediaWorks New Zealand]] channels, which completely switched to [[DVB-T|DVB]] [[Run-length encoding|RLE]] subtitles in 2012 on both [[Freeview (UK)|Freeview]] satellite and [[UHF]] broadcasts. This decision was based on the [[TVNZ]] practice of using this format on only [[Digital Video Broadcasting|DVB]] [[UHF]] broadcasts (also known as [[Freeview (UK)|Freeview HD]]). This made TVs connected via [[composite video]] incapable of decoding the captions on their own. Also, these [[Pre-rendering|pre-rendered]] subtitles use classic caption-style opaque backgrounds with an overly large [[Point (typography)|font size]], and obscure the picture more than the more modern, partially transparent backgrounds.

=== Digital television standard captioning improvements ===
The [[CTA-708|CEA-708]] specification provides for dramatically improved ''captioning'':
* An enhanced character set with more [[Diacritic|accented letters]] and non-Latin letters, and more special symbols
* Viewer-adjustable text size (called the "caption volume control" in the specification), allowing individuals to adjust their TVs to display small, normal, or large captions
* More text and background colors, including both transparent and translucent backgrounds to optionally replace the big [[black]] block
* More text styles, including edged or [[drop shadow]]ed text rather than the letters on a solid background
* More text fonts, including [[monospaced]] and proportional spaced, [[serif]] and [[sans-serif]], and some playful cursive fonts
* Higher [[Bandwidth (computing)|bandwidth]], to allow more [[data]] per minute of [[video]]
* More language channels, to allow the encoding of more independent caption streams

As of 2009, most closed captioning for digital television environments is done using tools designed for analog captioning (working to the [[EIA-608|CEA-608]] [[NTSC]] specification rather than the [[CTA-708|CEA-708]] [[ATSC standards|ATSC]] specification).
The captions are then run through transcoders made by companies like EEG Enterprises or [[Evertz Microsystems|Evertz]], which convert the analog [[EIA-608|Line 21 caption format]] to the digital format. This means that none of the [[CTA-708|CEA-708]] features are used unless they were also contained in [[EIA-608|CEA-608]].

== Uses in other media ==

=== DVDs and Blu-ray Discs ===
NTSC DVDs may carry closed captions in data packets of the MPEG-2 video streams inside the VIDEO_TS folder. Once played out of the analog outputs of a set-top DVD player, the caption data is converted to the Line 21 format.<ref>{{cite web|url=https://www.dvddemystified.com/dvdfaq.html#3.4|title=DVD FAQ|at=[3.4] What are the video details?|author=Jim Taylor|work=dvddemystified.com}}</ref> They are output by the player to the [[composite video]] (or an available [[RF connector]]) for a connected TV's built-in decoder or a set-top decoder as usual. They cannot be output on [[S-Video]] or [[component video]] outputs due to the lack of a [[colorburst]] signal on Line 21. (Actually, regardless of this, if the DVD player is in interlaced rather than progressive mode, closed captioning ''will'' be displayed on the TV over component video input if the TV captioning is turned on and set to CC1.)

When viewed on a personal computer, caption data can be viewed by software that can read and decode the caption data packets in the MPEG-2 streams of the DVD-Video disc. [[Windows Media Player]] in Windows Vista (before [[Windows 7]]) supported only closed caption channels 1 and 2 (not 3 or 4). [[Apple Computer|Apple's]] [[DVD Player (Mac OS)|DVD Player]] does not have the ability to read and decode Line 21 caption data which are recorded on a DVD made from an over-the-air broadcast, though it can display some movie DVD captions.

In addition to Line 21 closed captions, video DVDs may also carry subtitles, which are generally rendered from the [[EIA-608]] captions as a bitmap overlay that can be turned on and off via a set-top DVD player or DVD player software, just like the textual captions. This type of captioning is usually carried in a subtitle track labeled either "English for the hearing impaired" or, more recently, "SDH" (subtitles for the deaf and hard of hearing).<ref>{{cite web|url=https://www.dvddemystified.com/dvdfaq.html#1.45|title=DVD FAQ|at=[1.45] What's the difference between Closed Captions and subtitles?|author=Jim Taylor|work=dvddemystified.com}}</ref> Many popular Hollywood DVD-Videos can carry both subtitles and closed captions (e.g. the ''[[Stepmom (1998 film)|Stepmom]]'' DVD by Columbia Pictures). On some DVDs, the Line 21 captions may contain the same text as the subtitles; on others, only the Line 21 captions include the additional non-speech information (sometimes even song lyrics) needed for deaf and hard-of-hearing viewers. European Region 2 DVDs do not carry Line 21 captions, and instead list the subtitle languages available; English is often listed twice, once as a representation of the dialogue alone, and again as a subtitle set which carries additional information for the deaf and hard-of-hearing audience. (Many deaf/{{abbr|HOH|hard-of-hearing}} subtitle files on DVDs are reworkings of original teletext subtitle files.)
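As a rough illustration of how software can read the caption data packets in a DVD's MPEG-2 streams (as mentioned above), the sketch below scans a demuxed MPEG-2 video stream for caption user data. It assumes the MPEG-2 user_data start code (0x00 0x00 0x01 0xB2) and the 0x43 0x43 0x01 0xF8 identifier commonly used to mark DVD Line 21 caption user data; the file name is hypothetical, and a real extractor would also handle field order, packet counts, and parity.

<syntaxhighlight lang="python">
USER_DATA_START = b"\x00\x00\x01\xB2"  # MPEG-2 user_data start code
DVD_CC_ID = b"\x43\x43\x01\xF8"        # assumed DVD closed-caption identifier ("CC", 0x01, 0xF8)

def find_cc_user_data(stream):
    """Return byte offsets of caption user-data blocks in an MPEG-2 elementary stream."""
    offsets = []
    pos = stream.find(USER_DATA_START)
    while pos != -1:
        if stream[pos + 4:pos + 8] == DVD_CC_ID:
            offsets.append(pos)
        pos = stream.find(USER_DATA_START, pos + 4)
    return offsets

with open("video.m2v", "rb") as f:     # hypothetical demuxed MPEG-2 video stream
    print(find_cc_user_data(f.read()))
</syntaxhighlight>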
[[Blu-ray]] media typically cannot carry any [[Vertical blanking interval|VBI]] data such as Line 21 closed captioning due to the design of [[DVI]]-based [[High-Definition Multimedia Interface]] (HDMI) specifications that was only extended for synchronized digital audio replacing older analog standards, such as [[VGA]], S-Video, component video, and [[SCART]]. However, a few early titles from 20th Century Fox Home Entertainment carried Line 21 closed captions that are output when using the analog outputs (typically composite video) of a few Blu-ray players. Both Blu-ray and DVD can use either PNG bitmap subtitles or 'advanced subtitles' to carry SDH type subtitling, the latter being an XML-based textual format which includes font, styling and positioning information as well as a unicode representation of the text. Advanced subtitling can also include additional media accessibility features such as "descriptive audio". === Movies === There are several competing technologies used to provide captioning for movies in theaters. Cinema captioning falls into the categories of open and closed. The definition of "closed" captioning in this context is different from television, as it refers to any technology that allows as few as one member of the audience to view the captions. Open captioning in a film theater can be accomplished through burned-in captions, projected text or [[bitmap]]s, or (rarely) a display located above or below the movie screen. Typically, this display is a large LED sign. In a digital theater, open caption display capability is built into the digital projector. Closed caption capability is also available, with the ability for 3rd-party closed caption devices to plug into the digital cinema server. Probably the best known closed captioning option for film theaters is the [[Rear Window Captioning System]] from the [[WGBH-TV|National Center for Accessible Media]]. Upon entering the theater, viewers requiring captions are given a panel of flat translucent glass or plastic on a gooseneck stalk, which can be mounted in front of the viewer's seat. In the back of the theater is an [[LED]] display that shows the captions in mirror image. The panel reflects captions for the viewer but is nearly invisible to surrounding patrons. The panel can be positioned so that the viewer watches the movie through the panel, and captions appear either on or near the movie image. A company called Cinematic Captioning Systems has a similar reflective system called Bounce Back. A major problem for distributors has been that these systems are each proprietary, and require separate distributions to the theater to enable them to work. Proprietary systems also incur license fees. For film projection systems, [[Digital Theater Systems]], the company behind the DTS [[surround sound]] standard, has created a digital captioning device called the DTS-CSS (Cinema Subtitling System). It is a combination of a laser projector which places the captioning (words, sounds) anywhere on the screen and a thin playback device with a [[CD]] that holds many languages. If the Rear Window Captioning System is used, the DTS-CSS player is also required for sending caption text to the Rear Window sign located in the rear of the theater. Special effort has been made to build accessibility features into digital projection systems (see [[digital cinema]]). 
Through [[SMPTE]], standards now exist that dictate how open and closed captions, as well as hearing-impaired and visually impaired narrative audio, are packaged with the rest of the digital movie. This eliminates the proprietary caption distributions required for film, and the associated royalties. SMPTE has also standardized the communication of closed caption content between the digital cinema server and third-party closed caption systems (the CSP/RPL protocol). As a result, new, competitive closed caption systems for digital cinema are now emerging that will work with any standards-compliant digital cinema server. These newer closed caption devices include cupholder-mounted electronic displays and wireless glasses which display caption text in front of the wearer's eyes.<ref>{{cite web|url=http://mkpe.com/publications/d-cinema/misc/enabling_the_disabled.php|title=Enabling the Disabled in Digital Cinema|author=MKPE Consulting LLC|work=mkpe.com}}</ref> Bridge devices are also available to enable the use of Rear Window systems. As of mid-2010, the remaining challenge to the wide introduction of accessibility in digital cinema was the industry-wide transition to SMPTE DCP, the standardized packaging method for very high quality, secure distribution of digital movies.

=== Sports venues ===
Captioning systems have also been adopted by most major league and high-profile college [[stadium]]s and [[arena]]s, typically through dedicated portions of their main [[scoreboard]]s or as part of balcony [[fascia (architecture)|fascia]] LED boards. These screens display captions of the [[public address]] announcer and other audio content, such as in-game segments, public service announcements, and the lyrics of songs played in-stadium. In some facilities, these systems were added as a result of discrimination lawsuits.
Following a lawsuit under the [[Americans with Disabilities Act]], [[FedExField]] added caption screens in 2006.<ref name=wp-captionsredskins>{{cite news|title=Redskins Ordered To Continue Captions|url=https://www.washingtonpost.com/wp-dyn/content/article/2008/10/02/AR2008100201989.html|access-date=20 July 2015|newspaper=Washington Post|date=October 3, 2008}}</ref><ref>{{Cite web|url=https://www.proskauer.com/alert/fourth-circuit-holds-ada-requires-expanded-access|title=Fourth Circuit Holds ADA Requires Expanded Access to Aural Content in Stadiums|date=April 4, 2011}}</ref> Some stadiums utilize on-site captioners, while others outsource captioning to external providers who caption remotely.<ref name="espn-stadiumcaption">{{cite web|title=Lifeline for hearing-impaired at ballparks|url=https://www.espn.com/espn/page2/story?page=lukas/110607_stadium_closed_captioning|website=ESPN.com|access-date=20 July 2015}}</ref><ref name="arepublic-captioning">{{cite news|title=Cards provide captioning for deaf at stadium|url=http://archive.azcentral.com/community/glendale/articles/20131014cards-provide-captioning-deaf-stadium.html|access-date=20 July 2015|work=The Arizona Republic}}</ref>

=== Video games ===
The infrequent appearance of closed captioning in [[video game]]s became a problem in the 1990s as games began to commonly feature voice tracks, which in some cases contained information that the player needed in order to progress in the game.<ref>{{cite magazine |title=Letters |magazine=[[Next Generation (magazine)|Next Generation]] |issue=30 |publisher=[[Imagine Media]] |date=June 1997|page=133|url=https://archive.org/stream/NextGeneration30Jun1997/Next_Generation_30_Jun_1997#page/n134}}</ref> Closed captioning of video games is becoming more common. One of the first video game companies to feature closed captioning was [[Bethesda Softworks]], in its 1990 release of ''Hockey League Simulator'' and ''[[The Terminator 2029]]''.{{citation needed|date=October 2018}} Infocom also offered ''[[Zork Grand Inquisitor]]'' in 1997.<ref>{{cite web |url=http://garydrobson.com/2014/09/10/captioning-computer-games/|title=Captioning Computer Games |last=Robson |first=Gary |year=1998}}</ref> Many games since then have at least offered subtitles for spoken dialogue during [[cutscene]]s, and many include significant in-game dialogue and sound effects in the captions as well. For example, with subtitles turned on in the ''[[Metal Gear Solid]]'' series of stealth games, not only are subtitles available during cut scenes, but any dialogue spoken during real-time gameplay is captioned as well, allowing players who cannot hear the dialogue to know what enemy guards are saying and when the main character has been detected. Also, in many of developer [[Valve Corporation|Valve]]'s video games (such as ''[[Half-Life 2]]'' or ''[[Left 4 Dead]]''), when closed captions are activated, dialogue and nearly all sound effects either made by the player or from other sources (e.g. gunfire, explosions) are captioned.

Video games do not offer Line 21 captioning decoded and displayed by the television itself, but rather a built-in subtitle display more akin to that of a DVD. The game systems themselves have no role in the captioning either; each game must have its subtitle display programmed individually (see the sketch below). Reid Kimball, a game designer who is hearing impaired, is attempting to educate game developers about closed captioning for games.
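Because consoles provide no system-level Line 21 decoding, in-game captioning of the sort described above is usually driven by the game's own sound-event handling: when a captioned sound plays, the matching text is queued and drawn by the game until it expires. The sketch below illustrates the idea in Python; the event names, caption table, and three-second display time are invented for illustration and do not correspond to any particular engine's API.

<syntaxhighlight lang="python">
import time
from dataclasses import dataclass, field


@dataclass
class Caption:
    text: str
    expires_at: float            # absolute time at which the caption disappears


@dataclass
class CaptionSystem:
    duration: float = 3.0        # how long each caption stays on screen
    active: list = field(default_factory=list)

    def on_sound_event(self, event_name: str, captions: dict) -> None:
        """Queue a caption when a captioned sound event is triggered."""
        text = captions.get(event_name)
        if text:
            self.active.append(Caption(text, time.monotonic() + self.duration))

    def visible_lines(self) -> list:
        """Drop expired captions and return the lines to draw this frame."""
        now = time.monotonic()
        self.active = [c for c in self.active if c.expires_at > now]
        return [c.text for c in self.active]


# Hypothetical caption table mapping sound events to on-screen text.
CAPTIONS = {
    "guard_alert": "[Guard] Who's there?",
    "gunfire": "[gunfire]",
}

system = CaptionSystem()
system.on_sound_event("gunfire", CAPTIONS)
print(system.visible_lines())    # -> ['[gunfire]']
</syntaxhighlight>

A production system would also handle speaker identification, styling, and on-screen positioning, which the sketch omits.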
Reid started the Games<nowiki>[CC</nowiki>] group to closed caption games and serve as a research and development team to aid the industry. Kimball designed the Dynamic Closed Captioning system,<ref>{{Cite web |last=Ashley |first=Robert |date=2008-11-11 |title=The Silent Majority |url=https://www.escapistmagazine.com/the-silent-majority/ |access-date=2025-03-27 |website=The Escapist |language=en-US}}</ref> writes articles, and speaks at developer conferences. Games[CC]'s first closed captioning project, Doom3[CC], was nominated as Best Doom3 Mod of the Year at IGDA's Choice Awards 2006 show.

=== Online video streaming ===
Internet video streaming service [[YouTube]] offers captioning services in videos. The author of the video can upload a SubViewer (*.SUB), [[SubRip]] (*.SRT), or *.SBV file.<ref>{{cite web|url=https://support.google.com/drive/answer/1372218|title=Add caption tracks to your video files|website=Google Support}}</ref> As a beta feature, the site also added the ability to automatically transcribe and generate captioning on videos, with varying degrees of success based upon the content of the video.<ref>{{cite web|url=http://youtube-global.blogspot.com/2010/03/future-will-be-captioned-improving.html|title=Official YouTube Blog: The Future Will Be Captioned: Improving Accessibility on YouTube|work=Official YouTube Blog}}</ref> In July 2020, however, the company announced that community captions would end on September 28 of that year.<ref>{{Cite web|last=Lyons|first=Kim|title=YouTube is ending its community captions feature and deaf creators aren't happy about it|url=https://www.theverge.com/2020/7/31/21349401/youtube-community-captions-deaf-creators-accessibility-google|website=The Verge|date=31 July 2020}}</ref> The automatic captioning is often inaccurate on videos with background music or exaggerated emotion in speaking. Variations in volume can also result in nonsensical machine-generated captions. Additional problems arise with strong accents, [[sarcasm]], differing contexts, or [[homonym]]s.<ref>{{cite magazine |last=Nam |first=Tammy H. |url=https://www.theatlantic.com/entertainment/archive/2014/06/why-tv-captions-are-so-terrible/373283/ |title=The Sorry State of Closed Captioning |magazine=[[The Atlantic]] |date=June 24, 2014 |access-date=December 23, 2015}}</ref>

On June 30, 2010, YouTube announced a new "YouTube Ready" designation for professional caption vendors in the United States.<ref>{{cite web|url=http://youtube-global.blogspot.com/2010/06/professional-caption-services-get.html|title=Official YouTube Blog: Professional caption services get "YouTube Ready"|work=Official YouTube Blog}}</ref> The initial list included twelve companies that had passed a caption quality evaluation administered by the Described and Captioned Media Project, had a website and a YouTube channel where customers could learn more about their services, and had agreed to post rates for the range of services that they offer for YouTube content.

[[Flash video]] also supports captions using the Distribution Exchange profile (DFXP) of the W3C [[timed text]] format. The latest Flash authoring software adds free player skins and caption components that enable viewers to turn captions on or off during playback from a web page. Previous versions of Flash relied on the third-party Captionate component and skin to caption Flash video. Custom Flash players designed in Flex can be tailored to support the timed-text exchange profile, Captionate [[.XML]], or [[SAMI]] files (e.g. [[Hulu]] captioning).
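The upload formats YouTube accepts, mentioned above, are simple plain-text cue lists. As an illustration, the following sketch converts a SubViewer-style *.SBV cue list (a timing line such as <code>0:00:01.000,0:00:04.000</code> followed by caption text, with cues separated by blank lines) into SubRip *.SRT (numbered cues with <code>00:00:01,000 --> 00:00:04,000</code> timing lines); it assumes well-formed input and carries no styling.

<syntaxhighlight lang="python">
def sbv_to_srt(sbv_text: str) -> str:
    """Convert SubViewer (.sbv) caption cues to SubRip (.srt) format.

    Assumes well-formed input: each cue is a timing line
    "H:MM:SS.mmm,H:MM:SS.mmm" followed by one or more text lines,
    with cues separated by blank lines.
    """
    def to_srt_time(t: str) -> str:
        h, m, s = t.split(":")
        sec, ms = s.split(".")
        return f"{int(h):02}:{int(m):02}:{int(sec):02},{int(ms):03}"

    srt_cues = []
    cues = [c for c in sbv_text.strip().split("\n\n") if c.strip()]
    for i, cue in enumerate(cues, start=1):
        lines = cue.strip().splitlines()
        start, end = lines[0].split(",")
        timing = f"{to_srt_time(start)} --> {to_srt_time(end)}"
        srt_cues.append("\n".join([str(i), timing] + lines[1:]))
    return "\n\n".join(srt_cues) + "\n"


sample = "0:00:01.000,0:00:04.000\nHello, world.\n\n0:00:05.500,0:00:07.250\n[door slams]\n"
print(sbv_to_srt(sample))
</syntaxhighlight>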
This timed-text (DFXP) approach is the preferred method for most [[United States|US]] broadcast and cable networks that are mandated by the U.S. [[Federal Communications Commission]] to provide captioned on-demand content. The media encoding firms generally use software such as [[Telestream|MacCaption]] to convert [[EIA-608]] captions to this format. The [[Silverlight]] Media Framework<ref>{{cite web|url=http://smf.codeplex.com/|title=Microsoft Media Platform: Player Framework|work=CodePlex|access-date=2011-05-15|archive-date=2011-04-23|archive-url=https://web.archive.org/web/20110423185530/http://smf.codeplex.com/|url-status=dead}}</ref> also includes support for the timed-text exchange profile for both download and adaptive streaming media.

[[Windows Media Video]] can support closed captions for both video-on-demand and live streaming scenarios. Typically, Windows Media captions support the [[SAMI]] file format but can also carry embedded closed caption data. The EBU-TT-D distribution format supports multiple players across multiple platforms.

QuickTime video supports raw [[EIA-608]] caption data via a proprietary closed caption track, which consists of [[EIA-608]] byte pairs wrapped in a [[QuickTime]] packet container with different IDs for the two Line 21 fields. These captions can be turned on and off and appear in the same style as TV closed captions, with all the standard formatting (pop-on, roll-up, paint-on), and can be positioned and split anywhere on the video screen. QuickTime closed caption tracks can be viewed in the [[macOS]] or [[Microsoft Windows|Windows]] versions of [[QuickTime]] Player, [[iTunes]] (via QuickTime), [[iPod Nano]], [[iPod Classic]], [[iPod Touch]], [[iPhone]], and [[iPad]]. Modern browsers, such as [[Microsoft Edge|Edge]] and [[Google Chrome|Chrome]], and modern operating systems, such as [[iOS 12]] and later, [[Android 10]] and later, and [[Windows 10]] and later, allow the viewer to manage the style of closed caption subtitles.<ref>{{Cite web |title=Change caption settings - Microsoft Support |url=https://support.microsoft.com/en-us/windows/change-caption-settings-135c465b-8cfd-3bac-9baf-4af74bc0069a |access-date=2024-10-08 |website=support.microsoft.com}}</ref><ref>{{Cite web |title=Watch videos with subtitles and closed captions on iPhone |url=https://support.apple.com/en-gb/guide/iphone/iph3e2e23d1/12.0/ios/12.0 |access-date=2024-10-08 |website=Apple Support |language=en}}</ref>

=== Theatre ===
Live plays can be open captioned by a captioner who displays lines from the [[Script (recorded media)|script]], including non-speech elements, on a large [[Projection screen|display screen]] near the stage.<ref>{{Cite web|url=http://www.stagetext.org/content.asp?content.id=1035|title=STAGETEXT Certificate in Theatre Captioning for Deaf People|archive-url=https://web.archive.org/web/20070814080843/http://www.stagetext.org/content.asp?content.id=1035|url-status=dead|website=Stagetext.org|archive-date=August 14, 2007}}</ref> Software is also now available that automatically generates the captioning and streams it to individuals sitting in the theater, who view it using heads-up glasses or on a smartphone or computer tablet.

=== Telephones ===
{{Main|Telecommunications Relay Service#Captioned telephone}}
A captioned telephone is a [[telephone]] that displays real-time captions of the current conversation. The captions are typically displayed on a screen embedded into the telephone base.
=== Video conferencing ===
Some online video conferencing services, such as [[Google Meet]], offer the ability to display real-time captions of the current conversation.

=== Media monitoring services ===
In the United States especially, most [[media monitoring service]]s capture and index closed captioning text from news and public affairs programs, allowing them to search the text for client references. The use of closed captioning for television news monitoring was pioneered by Universal Press Clipping Bureau (Universal Information Services) in 1992,{{citation needed|date=January 2012}} and later in 1993 by Tulsa-based NewsTrak of Oklahoma (later known as Broadcast News of Mid-America, acquired by [[video news release]] pioneer Medialink Worldwide Incorporated in 1997).{{citation needed|date=January 2012}} US patent 7,009,657 describes a "method and system for the automatic collection and conditioning of closed caption text originating from multiple geographic locations" as used by news monitoring services.

=== Conversations ===
Software programs are available that automatically generate closed captioning of conversations. Examples of such conversations include discussions in conference rooms, classroom lectures, and religious services.

=== Non-linear video editing systems and closed captioning ===
In 2010, [[Sony Vegas Pro|Vegas Pro]], the professional non-linear editor, was updated to support importing, editing, and delivering [[CEA-608]] closed captions.<ref>[[Sony Creative Software]] (April 2010): the Vegas Pro 9.0d update.</ref> Vegas Pro 10, released on October 11, 2010, added several enhancements to the closed captioning support. TV-like CEA-608 closed captioning can now be displayed as an overlay when played back in the Preview and Trimmer windows, making it easy to check placement, edits, and timing of CC information. CEA-708-style closed captioning is automatically created when the CEA-608 data is created. Line 21 closed captioning is now supported, as well as HD-SDI closed captioning capture and print from AJA and [[Blackmagic Design]] cards. Line 21 support provides a workflow for existing legacy media. Other improvements include increased support for multiple closed captioning file types, as well as the ability to export closed caption data for DVD Architect, YouTube, RealPlayer, QuickTime, and Windows Media Player.

In mid-2009, [[Apple Inc.|Apple]] released [[Final Cut Pro]] version 7 and began supporting the insertion of closed caption data into SD and HD tape masters via [[FireWire]] and compatible video capture cards.<ref>{{Cite web|url=https://www.apple.com/final-cut-pro/|archive-url=https://web.archive.org/web/20110608124345/http://www.apple.com/finalcutstudio/whats-new.html|url-status=dead|title=Final Cut Pro X|archive-date=June 8, 2011|website=Apple}}</ref> Until then, it was not possible for video editors to insert both [[EIA-608|CEA-608]] and [[CEA-708]] caption data into their tape masters. The typical workflow included first printing the SD or HD video to tape and sending it to a professional closed caption service company that had a stand-alone closed caption hardware encoder.

This new closed captioning workflow, known as [[Ecaptioning|e-Captioning]], involves making a proxy video from the non-linear system for import into third-party non-linear closed captioning software. Once the closed captioning software project is completed, it must export a closed caption file compatible with the [[non-linear editing system]].
In the case of Final Cut Pro 7, three different file formats can be accepted: a .SCC file (Scenarist Closed Caption file) for standard-definition video, a [[QuickTime]] 608 closed caption track (a special 608-coded track in the .mov file wrapper) for standard-definition video, and a QuickTime 708 closed caption track (a special 708-coded track in the .mov file wrapper) for high-definition video output.

Alternatively, [[Matrox]] video systems devised another mechanism for inserting closed caption data by allowing the video editor to include CEA-608 and CEA-708 in a discrete audio channel on the video editing timeline. This allows real-time preview of the captions while editing and is compatible with Final Cut Pro 6 and 7.<ref>{{Cite web|url=http://www.cpcweb.com/mxo2/|archive-url=https://web.archive.org/web/20100416034901/http://cpcweb.com/mxo2/|url-status=dead|title=CPC Closed Captioning & Subtitling Software for Matrox MXO2<!-- Bot generated title -->|archive-date=April 16, 2010}}</ref>

Other non-linear editing systems indirectly support closed captioning only in standard-definition Line 21. Video files on the editing timeline must be composited with a Line 21 VBI graphic layer, known in the industry as a "blackmovie", with closed caption data.<ref>{{Cite web|url=http://www.cpcweb.com/nle/|archive-url=https://web.archive.org/web/20100316161102/http://www.cpcweb.com/nle/|url-status=dead|title=CPC Closed Captioning & Subtitling Software for Non-linear Editors (NLEs)<!-- Bot generated title -->|archive-date=March 16, 2010}}</ref> Alternatively, video editors working with the DV25 and DV50 FireWire workflows must encode their DV .avi or .mov file with VAUX data that includes CEA-608 closed caption data.
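The .SCC format mentioned above is a plain-text container for EIA-608 data: after a <code>Scenarist_SCC V1.0</code> header, each line pairs a timecode with space-separated four-hex-digit words, each word being one parity-encoded 608 byte pair. The following simplified sketch parses such a file in Python; it does not decode the 608 data itself, ignores drop-frame versus non-drop timecode, and the sample caption line is constructed only for illustration.

<syntaxhighlight lang="python">
def parse_scc(text: str):
    """Parse a Scenarist Closed Caption (.scc) file into
    (timecode, [byte pairs]) tuples.

    Simplified: assumes a valid header, a tab between timecode and data,
    and well-formed lines; it does not decode the 608 data itself.
    """
    lines = text.strip().splitlines()
    if not lines or not lines[0].startswith("Scenarist_SCC"):
        raise ValueError("not an SCC file")
    cues = []
    for line in lines[1:]:
        line = line.strip()
        if not line:
            continue
        timecode, _, data = line.partition("\t")
        pairs = [(int(w[:2], 16), int(w[2:], 16)) for w in data.split()]
        cues.append((timecode, pairs))
    return cues


# Hypothetical example; the hex words are parity-encoded 608 byte pairs.
sample = "Scenarist_SCC V1.0\n\n00:00:01:00\t9420 9420 94ae 94ae c8e5 ecec ef80 942f 942f\n"
for tc, pairs in parse_scc(sample):
    print(tc, len(pairs), "byte pairs")
</syntaxhighlight>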
== Logo ==
The current and most familiar logo for closed captioning consists of two [[C]]s (for "closed captioned") inside a television screen. It was created at [[WGBH-TV|WGBH]]. The other logo, [[trademark]]ed by the [[National Captioning Institute]], is that of a simple geometric rendering of a [[television set]] merged with the tail of a [[speech balloon]]; two such versions exist – one with a tail on the left, the other with a tail on the right.<ref>{{Cite web|url=http://www.ncicap.org/ncilogo.asp|archive-url=https://web.archive.org/web/20080215152222/http://www.ncicap.org/ncilogo.asp|url-status=dead|title=National Captioning Institute Logos|archive-date=February 15, 2008}}</ref>

== See also ==
* [[Creative Commons license]] – uses a similar CC logo
* [[Fansub]]
* [[Same language subtitling]]
* [[Sign language]]
* [[Speech-to-text reporter]] (captioner), an occupation
* [[Subtitles]]
* [[Surtitles]]
* [[SAMI|Synchronized Accessible Media Interchange]] (SAMI) file format
* [[Synchronized Multimedia Integration Language]] (SMIL) file format

== References ==
=== Citations ===
{{Reflist}}

=== General and cited references ===
* ''Realtime Captioning, the VITAC Way'' by Amy Bowlen and Kathy DiLorenzo (no ISBN)
* [https://www.bbc.co.uk/accessibility/forproducts/guides/subtitles/ ''BBC Subtitles (Captions) Editorial Guidelines''] ({{Webarchive|url=https://web.archive.org/web/20191020222240/http://bbc.github.io/subtitle-guidelines/ |date=2019-10-20 }})
* ''Closed Captioning: Subtitling, Stenography, and the Digital Convergence of Text with Television'' by Gregory J. Downey ({{ISBN|978-0-8018-8710-9}})
* ''The Closed Captioning Handbook'' by [[Gary D. Robson]] ({{ISBN|0-240-80561-5}})
* ''Alternative Realtime Careers: A Guide to Closed Captioning and CART for Court Reporters'' by [[Gary D. Robson]] ({{ISBN|1-881859-51-7}})
* ''A New Civil Right: Telecommunications Equality for Deaf and Hard of Hearing Americans'' by Karen Peltz Strauss ({{ISBN|978-1-56368-291-9}})
* ''Enabling the Disabled'' by Michael Karagosian (no ISBN)

== External links ==
{{wiktionary|caption}}
{{Wikibooks |How to use a Motorola DVR |Setup#Closed Caption |Setting up closed captions }}
{{Commons category}}
* {{Cite web |title=A deafie's guide to accessing captions anywhere!|url=https://www.hearinglikeme.com/a-deafies-guide-to-accessing-captions-anywhere/|last=Parfitt|first=Ellie|website=Hearing Like Me|date=15 November 2018}}
* [https://web.archive.org/web/20040513111535/http://www.fcc.gov/cgb/dro/captioning_regs.html Closed Captioning of Video Programming – 47 C.F.R. 79.1] – From the [[Federal Communications Commission]] Consumer & Governmental Affairs Bureau
* [http://www.fcc.gov/cgb/consumerfacts/closedcaption.html FCC Consumer Facts on Closed Captioning] {{Webarchive |url=https://web.archive.org/web/20100810191556/http://www.fcc.gov/cgb/consumerfacts/closedcaption.html |date=2010-08-10}}
* [https://www.academia.edu/22218667/Teletext_for_the_deaf Alan Newell, Inventor of Closed Captioning, Teletext for the Deaf, 1982]
* [http://www.ericdigests.org/1995-1/tv.htm Closed Captioned TV: A Resource for ESL Literacy Education] {{Webarchive |url=https://web.archive.org/web/20110607070125/http://www.ericdigests.org/1995-1/tv.htm |date=2011-06-07}} – From the [[Education Resources Information Center]] Clearinghouse for ESL Literacy Education, Washington, D.C.
* [https://web.archive.org/web/20100626101858/http://tialumni.org/enews/kastner.asp Bill Kastner: The Man Behind Closed Captioning]
* [http://www.thecatalogblog.com/2017/07/20/sears-1980-20-hours-a-week-of-captioned-programs First Sears Telecaption adapter advertised in 1980 Sears catalog]
* [http://bbc.github.io/subtitle-guidelines/ BBC Best Practice Guidelines for Captioning and Subtitling (UK)] {{Webarchive |url=https://web.archive.org/web/20220715191550/https://bbc.github.io/subtitle-guidelines/ |date=2022-07-15}}
* [https://tech.ebu.ch/publications/tech3380 EBU-TT-D Subtitling (Captions) Distribution Format]

{{Video formats}}

[[Category:Subtitling]]
[[Category:Assistive technology]]
[[Category:Deafness]]
[[Category:Television terminology]]
[[Category:High-definition television]]
[[Category:Transcription (linguistics)]]

[[de:Untertitel#Technische Ausführungen]]