=== Neural MT === {{Main|Neural machine translation}} A [[deep learning]]-based approach to MT, [[neural machine translation]] has made rapid progress in recent years. However, the current consensus is that the so-called human parity achieved is not real, being based wholly on limited domains, language pairs, and certain test benchmarks,<ref>Antonio Toral, Sheila Castilho, Ke Hu, and Andy Way. 2018. Attaining the unattainable? Reassessing claims of human parity in neural machine translation. CoRR, abs/1808.10432.</ref> i.e., the claims lack statistical power.<ref>{{Cite arXiv |eprint=1906.09833 |first1=Yvette |last1=Graham |first2=Barry |last2=Haddow |title=Translationese in Machine Translation Evaluation |date=2019 |last3=Koehn |first3=Philipp |class=cs.CL }}</ref> Translations by neural MT tools like [[DeepL Translator]], which as of 2022 was thought to usually deliver the best machine translation results, typically still need post-editing by a human.<ref>{{cite journal |last1=Katsnelson |first1=Alla |title=Poor English skills? New AIs help researchers to write better |journal=Nature |pages=208–209 |language=en |doi=10.1038/d41586-022-02767-9 |date=29 August 2022 |volume=609 |issue=7925 |pmid=36038730 |bibcode=2022Natur.609..208K |s2cid=251931306 |doi-access=free }}</ref><ref>{{cite web |last1=Korab |first1=Petr |title=DeepL: An Exceptionally Magnificent Language Translator |url=https://towardsdatascience.com/deepl-an-exceptionally-magnificent-language-translator-78e86d8062d3 |website=Medium |access-date=9 January 2023 |language=en |date=18 February 2022}}</ref><ref>{{cite news |title=DeepL outperforms Google Translate – DW – 12/05/2018 |url=https://www.dw.com/en/deepl-cologne-based-startup-outperforms-google-translate/a-46581948 |access-date=9 January 2023 |work=Deutsche Welle |language=en}}</ref> Instead of training specialized translation models on parallel datasets, one can also [[Prompt engineering|directly prompt]] generative [[large language model]]s like [[Generative pre-trained transformer|GPT]] to translate a text.<ref name="Hendy2023">{{cite arXiv |last1=Hendy |first1=Amr |last2=Abdelrehim |first2=Mohamed |last3=Sharaf |first3=Amr |last4=Raunak |first4=Vikas |last5=Gabr |first5=Mohamed |last6=Matsushita |first6=Hitokazu |last7=Kim |first7=Young Jin |last8=Afify |first8=Mohamed |last9=Awadalla |first9=Hany |title=How Good Are GPT Models at Machine Translation? A Comprehensive Evaluation |date=2023-02-18 |eprint=2302.09210 |class=cs.CL}}</ref><ref>{{cite news |last1=Fadelli |first1=Ingrid |title=Study assesses the quality of AI literary translations by comparing them with human translations |url=https://techxplore.com/news/2022-11-quality-ai-literary-human.html |access-date=18 December 2022 |work=techxplore.com |language=en}}</ref><ref name="arxiv221014250">{{Cite arXiv |last1=Thai |first1=Katherine |last2=Karpinska |first2=Marzena |last3=Krishna |first3=Kalpesh |last4=Ray |first4=Bill |last5=Inghilleri |first5=Moira |last6=Wieting |first6=John |last7=Iyyer |first7=Mohit |title=Exploring Document-Level Literary Machine Translation with Parallel Paragraphs from World Literature |date=25 October 2022 |class=cs.CL |eprint=2210.14250 }}</ref> This approach is considered promising,<ref name="WMT2023">{{cite conference |last1=Kocmi |first1=Tom |last2=Avramidis |first2=Eleftherios |last3=Bawden |first3=Rachel |last4=Bojar |first4=Ondřej |last5=Dvorkovich |first5=Anton |last6=Federmann |first6=Christian |last7=Fishel |first7=Mark |last8=Freitag |first8=Markus |last9=Gowda |first9=Thamme |last10=Grundkiewicz |first10=Roman |last11=Haddow |first11=Barry |last12=Koehn |first12=Philipp |last13=Marie |first13=Benjamin |last14=Monz |first14=Christof |last15=Morishita |first15=Makoto |date=2023 |editor-last=Koehn |editor-first=Philipp |editor2-last=Haddow |editor2-first=Barry |editor3-last=Kocmi |editor3-first=Tom |editor4-last=Monz |editor4-first=Christof |title=Findings of the 2023 Conference on Machine Translation (WMT23): LLMs Are Here but Not Quite There Yet |url=https://aclanthology.org/2023.wmt-1.1 |journal=Proceedings of the Eighth Conference on Machine Translation |location=Singapore |publisher=Association for Computational Linguistics |pages=1–42 |doi=10.18653/v1/2023.wmt-1.1 |doi-access=free }}</ref> but is still more resource-intensive than specialized translation models.
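Prompt-based translation can be sketched as follows; this is a minimal illustration, not the method of any cited evaluation, and the prompt template and function name are illustrative assumptions (the cited studies experiment with a variety of prompt formats and models):

```python
def build_translation_prompt(text: str, source_lang: str, target_lang: str) -> str:
    """Construct a zero-shot translation prompt for a general-purpose LLM.

    No parallel training data is involved: the model is simply instructed,
    in natural language, to perform the translation task.
    """
    return (
        f"Translate the following {source_lang} text into {target_lang}. "
        f"Return only the translation.\n\n"
        f"{source_lang}: {text}\n{target_lang}:"
    )

# The resulting string would be sent to an instruction-tuned model via its
# API, and the model's completion taken as the translation.
prompt = build_translation_prompt("Le chat dort.", "French", "English")
print(prompt)
```

In few-shot variants, example source/target sentence pairs are prepended to the prompt in the same format before the sentence to be translated.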