Translation memory
==History==
The 1970s were the infancy stage of TM systems, during which scholars carried on a preliminary round of exploratory discussion. The original idea for TM systems is often attributed{{according to whom|date=April 2018}} to Martin Kay's "Proper Place" paper,<ref>{{cite journal |last1=Kay |first1=Martin |title=The Proper Place of Men and Machines in Language Translation |journal=Machine Translation |date=March 1997 |volume=12 |issue=1–2 |pages=3–23 |doi=10.1023/A:1007911416676 |s2cid=207627954 }}</ref> although the details are not fully worked out there. The paper sketches the basic concept of the storage system: "The translator might start by issuing a command causing the system to display anything in the store that might be relevant to .... Before going on, he can examine past and future fragments of text that contain similar material". Kay's observation was influenced by Peter Arthern's suggestion that translators could use similar, already translated documents online. In his 1978 article,<ref>{{cite journal|title=Machine Translation and Computerized Terminology Systems: A Translator's Perspective |last1=Arthern |first1=Peter |journal=Translating and the Computer: Proceedings of a Seminar, London, 14th November, 1978 |date=1978 |url=http://www.mt-archive.info/Aslib-1978-Arthern.pdf |isbn=0444853022}}</ref> Arthern gave a full demonstration of what we call TM systems today:

{{quote|Any new text would be typed into a word processing station, and as it was being typed, the system would check this text against the earlier texts stored in its memory, together with its translation into all the other official languages [of the European Community]. ... One advantage over machine translation proper would be that all the passages so retrieved would be grammatically correct. In effect, we should be operating an electronic 'cut and stick' process which would, according to my calculations, save at least 15 per cent of the time which translators now employ in effectively producing translations.}}

The idea was incorporated into the ALPS (Automated Language Processing Systems) tools first developed by researchers at Brigham Young University. At that time the idea of TM systems was mixed up with a tool called "Repetitions Processing", which only aimed to find matched strings; only much later did the concept of translation memory as such come into being.

The real exploratory stage of TM systems was the 1980s. One of the first implementations of a TM system appeared in Sadler and Vendelmans' Bilingual Knowledge Bank. A Bilingual Knowledge Bank is a syntactically and referentially structured pair of corpora, one being a translation of the other, in which translation units are cross-coded between the corpora. Its aim was to develop a corpus-based general-purpose knowledge source for applications in machine translation and computer-aided translation (Sadler & Vendelman, 1987). Another important step was made by Brian Harris with his "bi-text". He defined the bi-text as "a single text in two dimensions" (1988), the source and target texts related by the activity of the translator through translation units, which echoes Sadler's Bilingual Knowledge Bank. In his work Harris proposed something like a TM system without using that name: a database of paired translations, searchable either by individual word or by "whole translation unit", in the latter case the search being allowed to retrieve similar rather than identical units.

TM technology only became commercially available on a wide scale in the late 1990s, through the efforts of several engineers and translators. Of note is the first TM tool, called Trados ([[Trados|SDL Trados]] nowadays).
In this tool, the user opens the source file and applies the translation memory, so that any "100% matches" (identical matches) or "fuzzy matches" (similar, but not identical matches) within the text are instantly extracted and placed within the target file. The "matches" suggested by the translation memory can then be either accepted or overridden with new alternatives. If a translation unit is manually updated, it is stored within the translation memory for future use, as well as for repetition in the current text. In a similar way, all segments in the target file without a "match" are translated manually and then automatically added to the translation memory.

In the 2000s, online translation services began incorporating TM. Machine translation services like [[Google Translate]], as well as professional and "hybrid" translation services provided by sites like [[Gengo]] and [[Translation Cloud#Ackuna|Ackuna]], incorporate databases of TM data supplied by translators and volunteers to make more efficient connections between languages and provide faster translation services to end-users.<ref>[https://techcrunch.com/2016/11/22/googles-ai-translation-tool-seems-to-have-invented-its-own-secret-internal-language/ Google's AI translation tool seems to have invented its own secret internal language] Devin Coldewey, TechCrunch, November 22, 2016</ref>
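The match-retrieval step described above can be sketched in a few lines of Python. This is a minimal illustration of the general idea, not the matching algorithm of Trados or any other product; the toy segment pairs, the `lookup` function, and the 0.75 fuzzy-match threshold are all assumptions chosen for the example.

```python
from difflib import SequenceMatcher

# Toy translation memory: source segments mapped to stored translations.
# These pairs and the threshold below are illustrative assumptions only.
TM = {
    "The file could not be opened.": "Die Datei konnte nicht geöffnet werden.",
    "Click OK to continue.": "Klicken Sie auf OK, um fortzufahren.",
}

def lookup(segment, tm=TM, fuzzy_threshold=0.75):
    """Return (match_type, score, suggestion) for a source segment."""
    if segment in tm:
        # "100% match": the segment is identical to a stored one.
        return ("100%", 1.0, tm[segment])
    best_score, best_src = 0.0, None
    for src in tm:
        # Character-level similarity between the new and stored segments.
        score = SequenceMatcher(None, segment, src).ratio()
        if score > best_score:
            best_score, best_src = score, src
    if best_score >= fuzzy_threshold:
        # "Fuzzy match": similar but not identical; offered as a suggestion.
        return ("fuzzy", best_score, tm[best_src])
    # No match: translate manually, then add the new pair to the memory.
    return ("none", best_score, None)

print(lookup("The file could not be opened.")[0])  # identical segment
print(lookup("The file could not be saved.")[0])   # similar segment
```

A real TM tool works on sentence-level segments and uses more sophisticated similarity measures, but the workflow is the one described above: exact matches are inserted, fuzzy matches are suggested for review, and unmatched segments are translated by hand and fed back into the memory.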