
[NLP] 2020 Neural Machine Translation: A Compilation of Machine Translation Papers

Contents

1. Modeling Fluency and Faithfulness for Diverse Neural Machine Translation, AAAI 2020 [PDF] 摘要
2. Cross-Lingual Pre-Training Based Transfer for Zero-Shot Neural Machine Translation, AAAI 2020 [PDF] 摘要
3. Modality to Modality Translation: An Adversarial Representation Learning and Graph Fusion Network for Multimodal Fusion, AAAI 2020 [PDF] 摘要
4. Minimizing the Bag-of-Ngrams Difference for Non-Autoregressive Neural Machine Translation, AAAI 2020 [PDF] 摘要
5. GAN-Based Unpaired Chinese Character Image Translation via Skeleton Transformation and Stroke Rendering, AAAI 2020 [PDF] 摘要
6. Fast and Robust Face-to-Parameter Translation for Game Character Auto-Creation, AAAI 2020 [PDF] 摘要
7. Distilling Portable Generative Adversarial Networks for Image Translation, AAAI 2020 [PDF] 摘要
8. Benign Examples: Imperceptible Changes Can Enhance Image Translation Performance, AAAI 2020 [PDF] 摘要
9. Transductive Ensemble Learning for Neural Machine Translation, AAAI 2020 [PDF] 摘要
10. Fine-Tuning by Curriculum Learning for Non-Autoregressive Neural Machine Translation, AAAI 2020 [PDF] 摘要
11. One Homonym per Translation, AAAI 2020 [PDF] 摘要
12. Neuron Interaction Based Representation Composition for Neural Machine Translation, AAAI 2020 [PDF] 摘要
13. MetaMT, a Meta Learning Method Leveraging Multiple Domain Data for Low Resource Machine Translation, AAAI 2020 [PDF] 摘要
14. Neural Machine Translation with Joint Representation, AAAI 2020 [PDF] 摘要
15. Explicit Sentence Compression for Neural Machine Translation, AAAI 2020 [PDF] 摘要
16. Synchronous Speech Recognition and Speech-to-Text Translation with Interactive Decoding, AAAI 2020 [PDF] 摘要
17. Simplify-Then-Translate: Automatic Preprocessing for Black-Box Translation, AAAI 2020 [PDF] 摘要
18. Controlling Neural Machine Translation Formality with Synthetic Supervision, AAAI 2020 [PDF] 摘要
19. Translation-Based Matching Adversarial Network for Cross-Lingual Natural Language Inference, AAAI 2020 [PDF] 摘要
20. IntroVNMT: An Introspective Model for Variational Neural Machine Translation, AAAI 2020 [PDF] 摘要
21. Latent-Variable Non-Autoregressive Neural Machine Translation with Deterministic Inference Using a Delta Posterior, AAAI 2020 [PDF] 摘要
22. Evaluating the Cross-Lingual Effectiveness of Massively Multilingual Neural Machine Translation, AAAI 2020 [PDF] 摘要
23. Alignment-Enhanced Transformer for Constraining NMT with Pre-Specified Translations, AAAI 2020 [PDF] 摘要
24. Neural Semantic Parsing in Low-Resource Settings with Back-Translation and Meta-Learning, AAAI 2020 [PDF] 摘要
25. Generating Diverse Translation by Manipulating Multi-Head Attention, AAAI 2020 [PDF] 摘要
26. Unsupervised Neural Dialect Translation with Commonality and Diversity Modeling, AAAI 2020 [PDF] 摘要
27. Neural Machine Translation with Byte-Level Subwords, AAAI 2020 [PDF] 摘要
28. Bridging the Gap between Pre-Training and Fine-Tuning for End-to-End Speech Translation, AAAI 2020 [PDF] 摘要
29. Go From the General to the Particular: Multi-Domain Translation with Domain Transformation Networks, AAAI 2020 [PDF] 摘要
30. Acquiring Knowledge from Pre-Trained Model to Neural Machine Translation, AAAI 2020 [PDF] 摘要
31. Towards Making the Most of BERT in Neural Machine Translation, AAAI 2020 [PDF] 摘要
32. Visual Agreement Regularized Training for Multi-Modal Machine Translation, AAAI 2020 [PDF] 摘要
33. Improving Context-Aware Neural Machine Translation Using Self-Attentive Sentence Embedding, AAAI 2020 [PDF] 摘要
34. Reinforced Curriculum Learning on Pre-Trained Neural Machine Translation Models, AAAI 2020 [PDF] 摘要
35. Balancing Quality and Human Involvement: An Effective Approach to Interactive Neural Machine Translation, AAAI 2020 [PDF] 摘要
36. Semi-Supervised Text Simplification with Back-Translation and Asymmetric Denoising Autoencoders, AAAI 2020 [PDF] 摘要
37. Multimodal Structure-Consistent Image-to-Image Translation, AAAI 2020 [PDF] 摘要
38. Learning to Transfer: Unsupervised Domain Translation via Meta-Learning, AAAI 2020 [PDF] 摘要
39. Content Word Aware Neural Machine Translation, ACL 2020 [PDF] 摘要
40. Evaluating Explanation Methods for Neural Machine Translation, ACL 2020 [PDF] 摘要
41. Jointly Masked Sequence-to-Sequence Model for Non-Autoregressive Neural Machine Translation, ACL 2020 [PDF] 摘要
42. Learning Source Phrase Representations for Neural Machine Translation, ACL 2020 [PDF] 摘要
43. Multiscale Collaborative Deep Models for Neural Machine Translation, ACL 2020 [PDF] 摘要
44. Norm-Based Curriculum Learning for Neural Machine Translation, ACL 2020 [PDF] 摘要
45. Opportunistic Decoding with Timely Correction for Simultaneous Translation, ACL 2020 [PDF] 摘要
46. Multi-Hypothesis Machine Translation Evaluation, ACL 2020 [PDF] 摘要
47. Multimodal Quality Estimation for Machine Translation, ACL 2020 [PDF] 摘要
48. Bilingual Dictionary Based Neural Machine Translation without Using Parallel Sentences, ACL 2020 [PDF] 摘要
49. Boosting Neural Machine Translation with Similar Translations, ACL 2020 [PDF] 摘要
50. Character-Level Translation with Self-attention, ACL 2020 [PDF] 摘要
51. Enhancing Machine Translation with Dependency-Aware Self-Attention, ACL 2020 [PDF] 摘要
52. Improving Massively Multilingual Neural Machine Translation and Zero-Shot Translation, ACL 2020 [PDF] 摘要
53. It’s Easier to Translate out of English than into it: Measuring Neural Translation Difficulty by Cross-Mutual Information, ACL 2020 [PDF] 摘要
54. Language-aware Interlingua for Multilingual Neural Machine Translation, ACL 2020 [PDF] 摘要
55. On the Limitations of Cross-lingual Encoders as Exposed by Reference-Free Machine Translation Evaluation, ACL 2020 [PDF] 摘要
56. “You Sound Just Like Your Father” Commercial Machine Translation Systems Include Stylistic Biases, ACL 2020 [PDF] 摘要
57. MMPE: A Multi-Modal Interface for Post-Editing Machine Translation, ACL 2020 [PDF] 摘要
58. Multi-Domain Neural Machine Translation with Word-Level Adaptive Layer-wise Domain Mixing, ACL 2020 [PDF] 摘要
59. Improving Non-autoregressive Neural Machine Translation with Monolingual Data, ACL 2020 [PDF] 摘要
60. Phone Features Improve Speech Translation, ACL 2020 [PDF] 摘要
61. ENGINE: Energy-Based Inference Networks for Non-Autoregressive Machine Translation, ACL 2020 [PDF] 摘要
62. Leveraging Monolingual Data with Self-Supervision for Multilingual Neural Machine Translation, ACL 2020 [PDF] 摘要
63. On The Evaluation of Machine Translation Systems Trained With Back-Translation, ACL 2020 [PDF] 摘要
64. Simultaneous Translation Policies: From Fixed to Adaptive, ACL 2020 [PDF] 摘要
65. A Novel Graph-based Multi-modal Fusion Encoder for Neural Machine Translation, ACL 2020 [PDF] 摘要
66. Dynamic Programming Encoding for Subword Segmentation in Neural Machine Translation, ACL 2020 [PDF] 摘要
67. Learning to Recover from Multi-Modality Errors for Non-Autoregressive Neural Machine Translation, ACL 2020 [PDF] 摘要
68. On the Inference Calibration of Neural Machine Translation, ACL 2020 [PDF] 摘要
69. A Reinforced Generation of Adversarial Examples for Neural Machine Translation, ACL 2020 [PDF] 摘要
70. A Retrieve-and-Rewrite Initialization Method for Unsupervised Machine Translation, ACL 2020 [PDF] 摘要
71. A Simple and Effective Unified Encoder for Document-Level Machine Translation, ACL 2020 [PDF] 摘要
72. Does Multi-Encoder Help? A Case Study on Context-Aware Neural Machine Translation, ACL 2020 [PDF] 摘要
73. Knowledge Distillation for Multilingual Unsupervised Neural Machine Translation, ACL 2020 [PDF] 摘要
74. Lexically Constrained Neural Machine Translation with Levenshtein Transformer, ACL 2020 [PDF] 摘要
75. On Exposure Bias, Hallucination and Domain Shift in Neural Machine Translation, ACL 2020 [PDF] 摘要
76. Automatic Machine Translation Evaluation using Source Language Inputs and Cross-lingual Language Model, ACL 2020 [PDF] 摘要
77. Curriculum Pre-training for End-to-End Speech Translation, ACL 2020 [PDF] 摘要
78. SimulSpeech: End-to-End Simultaneous Speech to Text Translation, ACL 2020 [PDF] 摘要
79. Selecting Backtranslated Data from Multiple Sources for Improved Neural Machine Translation, ACL 2020 [PDF] 摘要
80. Modeling Word Formation in English–German Neural Machine Translation, ACL 2020 [PDF] 摘要
81. Multimodal Transformer for Multimodal Machine Translation, ACL 2020 [PDF] 摘要
82. Tangled up in BLEU: Reevaluating the Evaluation of Automatic Machine Translation Evaluation Metrics, ACL 2020 [PDF] 摘要
83. AdvAug: Robust Adversarial Augmentation for Neural Machine Translation, ACL 2020 [PDF] 摘要
84. Contextual Neural Machine Translation Improves Translation of Cataphoric Pronouns, ACL 2020 [PDF] 摘要
85. Improving Neural Machine Translation with Soft Template Prediction, ACL 2020 [PDF] 摘要
86. Tagged Back-translation Revisited: Why Does It Really Work?, ACL 2020 [PDF] 摘要
87. Worse WER, but Better BLEU? Leveraging Word Embedding as Intermediate in Multitask End-to-End Speech Translation, ACL 2020 [PDF] 摘要
88. Are we Estimating or Guesstimating Translation Quality?, ACL 2020 [PDF] 摘要
89. Document Translation vs. Query Translation for Cross-Lingual Information Retrieval in the Medical Domain, ACL 2020 [PDF] 摘要
90. Gender in Danger? Evaluating Speech Translation Technology on the MuST-SHE Corpus, ACL 2020 [PDF] 摘要
91. Uncertainty-Aware Curriculum Learning for Neural Machine Translation, ACL 2020 [PDF] 摘要
92. Speech Translation and the End-to-End Promise: Taking Stock of Where We Are, ACL 2020 [PDF] 摘要
93. Hard-Coded Gaussian Attention for Neural Machine Translation, ACL 2020 [PDF] 摘要
94. In Neural Machine Translation, What Does Transfer Learning Transfer?, ACL 2020 [PDF] 摘要
95. Learning a Multi-Domain Curriculum for Neural Machine Translation, ACL 2020 [PDF] 摘要
96. Reducing Gender Bias in Neural Machine Translation as a Domain Adaptation Problem, ACL 2020 [PDF] 摘要
97. Translationese as a Language in “Multilingual” NMT, ACL 2020 [PDF] 摘要
98. Using Context in Neural Machine Translation Training Objectives, ACL 2020 [PDF] 摘要
99. Variational Neural Machine Translation with Normalizing Flows, ACL 2020 [PDF] 摘要
100. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension, ACL 2020 [PDF] 摘要
101. Unsupervised Multimodal Neural Machine Translation with Pseudo Visual Pivoting, ACL 2020 [PDF] 摘要
102. Addressing Posterior Collapse with Mutual Information for Improved Variational Neural Machine Translation, ACL 2020 [PDF] 摘要
103. Balancing Training for Multilingual Neural Machine Translation, ACL 2020 [PDF] 摘要
104. Evaluating Robustness to Input Perturbations for Neural Machine Translation, ACL 2020 [PDF] 摘要
105. Regularized Context Gates on Transformer for Machine Translation, ACL 2020 [PDF] 摘要
106. CLIReval: Evaluating Machine Translation as a Cross-Lingual Information Retrieval Task, ACL 2020 [PDF] 摘要
107. ESPnet-ST: All-in-One Speech Translation Toolkit, ACL 2020 [PDF] 摘要
108. MMPE: A Multi-Modal Interface using Handwriting, Touch Reordering, and Speech Commands for Post-Editing Machine Translation, ACL 2020 [PDF] 摘要
109. Zero-shot North Korean to English Neural Machine Translation by Character Tokenization and Phoneme Decomposition, ACL 2020 [PDF] 摘要
110. Multi-Task Neural Model for Agglutinative Language Translation, ACL 2020 [PDF] 摘要
111. Efficient Neural Machine Translation for Low-Resource Languages via Exploiting Related Languages, ACL 2020 [PDF] 摘要
112. Pre-training via Leveraging Assisting Languages for Neural Machine Translation, ACL 2020 [PDF] 摘要
113. Checkpoint Reranking: An Approach to Select Better Hypothesis for Neural Machine Translation Systems, ACL 2020 [PDF] 摘要
114. Compositional Generalization by Factorizing Alignment and Translation, ACL 2020 [PDF] 摘要
115. Proceedings of the First Workshop on Automatic Simultaneous Translation, ACL 2020 [PDF] 摘要
116. Dynamic Sentence Boundary Detection for Simultaneous Translation, ACL 2020 [PDF] 摘要
117. End-to-End Speech Translation with Adversarial Training, ACL 2020 [PDF] 摘要
118. Robust Neural Machine Translation with ASR Errors, ACL 2020 [PDF] 摘要
119. Modeling Discourse Structure for Document-level Neural Machine Translation, ACL 2020 [PDF] 摘要
120. Proceedings of the 17th International Conference on Spoken Language Translation, ACL 2020 [PDF] 摘要
121. ON-TRAC Consortium for End-to-End and Simultaneous Speech Translation Challenge Tasks at IWSLT 2020, ACL 2020 [PDF] 摘要
122. Start-Before-End and End-to-End: Neural Speech Translation by AppTek and RWTH Aachen University, ACL 2020 [PDF] 摘要
123. KIT’s IWSLT 2020 SLT Translation System, ACL 2020 [PDF] 摘要
124. End-to-End Simultaneous Translation System for IWSLT2020 Using Modality Agnostic Meta-Learning, ACL 2020 [PDF] 摘要
125. DiDi Labs’ End-to-end System for the IWSLT 2020 Offline Speech TranslationTask, ACL 2020 [PDF] 摘要
126. End-to-End Offline Speech Translation System for IWSLT 2020 using Modality Agnostic Meta-Learning, ACL 2020 [PDF] 摘要
127. End-to-End Speech-Translation with Knowledge Distillation: FBK@IWSLT2020, ACL 2020 [PDF] 摘要
128. SRPOL’s System for the IWSLT 2020 End-to-End Speech Translation Task, ACL 2020 [PDF] 摘要
129. The University of Helsinki Submission to the IWSLT2020 Offline SpeechTranslation Task, ACL 2020 [PDF] 摘要
130. LIT Team’s System Description for Japanese-Chinese Machine Translation Task in IWSLT 2020, ACL 2020 [PDF] 摘要
131. OPPO’s Machine Translation System for the IWSLT 2020 Open Domain Translation Task, ACL 2020 [PDF] 摘要
132. Character Mapping and Ad-hoc Adaptation: Edinburgh’s IWSLT 2020 Open Domain Translation System, ACL 2020 [PDF] 摘要
133. CASIA’s System for IWSLT 2020 Open Domain Translation, ACL 2020 [PDF] 摘要
134. Deep Blue Sonics’ Submission to IWSLT 2020 Open Domain Translation Task, ACL 2020 [PDF] 摘要
135. University of Tsukuba’s Machine Translation System for IWSLT20 Open Domain Translation Task, ACL 2020 [PDF] 摘要
136. Xiaomi’s Submissions for IWSLT 2020 Open Domain Translation Task, ACL 2020 [PDF] 摘要
137. ISTIC’s Neural Machine Translation System for IWSLT’2020, ACL 2020 [PDF] 摘要
138. Octanove Labs’ Japanese-Chinese Open Domain Translation System, ACL 2020 [PDF] 摘要
139. NAIST’s Machine Translation Systems for IWSLT 2020 Conversational Speech Translation Task, ACL 2020 [PDF] 摘要
140. Generating Fluent Translations from Disfluent Text Without Access to Fluent References: IIT Bombay@IWSLT2020, ACL 2020 [PDF] 摘要
141. The HW-TSC Video Speech Translation System at IWSLT 2020, ACL 2020 [PDF] 摘要
142. ELITR Non-Native Speech Translation at IWSLT 2020, ACL 2020 [PDF] 摘要
143. Is 42 the Answer to Everything in Subtitling-oriented Speech Translation?, ACL 2020 [PDF] 摘要
144. Re-translation versus Streaming for Simultaneous Translation, ACL 2020 [PDF] 摘要
145. Towards Stream Translation: Adaptive Computation Time for Simultaneous Machine Translation, ACL 2020 [PDF] 摘要
146. Neural Simultaneous Speech Translation Using Alignment-Based Chunking, ACL 2020 [PDF] 摘要
147. From Speech-to-Speech Translation to Automatic Dubbing, ACL 2020 [PDF] 摘要
148. Joint Translation and Unit Conversion for End-to-end Localization, ACL 2020 [PDF] 摘要
149. How Human is Machine Translationese? Comparing Human and Machine Translations of Text and Speech, ACL 2020 [PDF] 摘要
150. Proceedings of the Fourth Workshop on Neural Generation and Translation, ACL 2020 [PDF] 摘要
151. Findings of the Fourth Workshop on Neural Generation and Translation, ACL 2020 [PDF] 摘要
152. Compressing Neural Machine Translation Models with 4-bit Precision, ACL 2020 [PDF] 摘要
153. The Unreasonable Volatility of Neural Machine Translation Models, ACL 2020 [PDF] 摘要
154. Distill, Adapt, Distill: Training Small, In-Domain Models for Neural Machine Translation, ACL 2020 [PDF] 摘要
155. Training and Inference Methods for High-Coverage Neural Machine Translation, ACL 2020 [PDF] 摘要
156. English-to-Japanese Diverse Translation by Combining Forward and Backward Outputs, ACL 2020 [PDF] 摘要
157. The ADAPT System Description for the STAPLE 2020 English-to-Portuguese Translation Task, ACL 2020 [PDF] 摘要
158. Exploring Model Consensus to Generate Translation Paraphrases, ACL 2020 [PDF] 摘要
159. Growing Together: Modeling Human Language Learning With n-Best Multi-Checkpoint Machine Translation, ACL 2020 [PDF] 摘要
160. Generating Diverse Translations via Weighted Fine-tuning and Hypotheses Filtering for the Duolingo STAPLE Task, ACL 2020 [PDF] 摘要
161. The JHU Submission to the 2020 Duolingo Shared Task on Simultaneous Translation and Paraphrase for Language Education, ACL 2020 [PDF] 摘要
162. Simultaneous paraphrasing and translation by fine-tuning Transformer models, ACL 2020 [PDF] 摘要
163. Efficient and High-Quality Neural Machine Translation with OpenNMT, ACL 2020 [PDF] 摘要
164. Edinburgh’s Submissions to the 2020 Machine Translation Efficiency Task, ACL 2020 [PDF] 摘要
165. Improving Document-Level Neural Machine Translation with Domain Adaptation, ACL 2020 [PDF] 摘要
166. Simultaneous Translation and Paraphrase for Language Education, ACL 2020 [PDF] 摘要
167. A Translation-Based Approach to Morphology Learning for Low Resource Languages, ACL 2020 [PDF] 摘要
168. FFR v1.1: Fon-French Neural Machine Translation, ACL 2020 [PDF] 摘要
169. Towards Mitigating Gender Bias in a decoder-based Neural Machine Translation model by Adding Contextual Information, ACL 2020 [PDF] 摘要
170. Multitask Models for Controlling the Complexity of Neural Machine Translation, ACL 2020 [PDF] 摘要
171. HausaMT v1.0: Towards English–Hausa Neural Machine Translation, ACL 2020 [PDF] 摘要
172. An Evaluation of Subword Segmentation Strategies for Neural Machine Translation of Morphologically Rich Languages, ACL 2020 [PDF] 摘要
173. On the Linguistic Representational Power of Neural Machine Translation Models, CL 2020 [PDF] 摘要
174. Unsupervised Word Translation with Adversarial Autoencoder, CL 2020 [PDF] 摘要
175. A Systematic Study of Inner-Attention-Based Sentence Representations in Multilingual Neural Machine Translation, CL 2020 [PDF] 摘要
176. Imitation Attacks and Defenses for Black-box Machine Translation Systems, EMNLP 2020 [PDF] 摘要
177. Shallow-to-Deep Training for Neural Machine Translation, EMNLP 2020 [PDF] 摘要
178. CSP: Code-Switching Pre-training for Neural Machine Translation, EMNLP 2020 [PDF] 摘要
179. Non-Autoregressive Machine Translation with Latent Alignments, EMNLP 2020 [PDF] 摘要
180. Language Model Prior for Low-Resource Neural Machine Translation, EMNLP 2020 [PDF] 摘要
181. Losing Heads in the Lottery: Pruning Transformer Attention in Neural Machine Translation, EMNLP 2020 [PDF] 摘要
182. Translationese in Machine Translation Evaluation, EMNLP 2020 [PDF] 摘要
183. Towards Enhancing Faithfulness for Neural Machine Translation, EMNLP 2020 [PDF] 摘要
184. Automatic Machine Translation Evaluation in Many Languages via Zero-Shot Paraphrasing, EMNLP 2020 [PDF] 摘要
185. Uncertainty-Aware Semantic Augmentation for Neural Machine Translation, EMNLP 2020 [PDF] 摘要
186. Iterative Domain-Repaired Back-Translation, EMNLP 2020 [PDF] 摘要
187. Self-Induced Curriculum Learning in Self-Supervised Neural Machine Translation, EMNLP 2020 [PDF] 摘要
188. Simultaneous Machine Translation with Visual Context, EMNLP 2020 [PDF] 摘要
189. Iterative Refinement in the Continuous Space for Non-Autoregressive Neural Machine Translation, EMNLP 2020 [PDF] 摘要
190. Dynamic Context Selection for Document-level Neural Machine Translation via Reinforcement Learning, EMNLP 2020 [PDF] 摘要
191. Dynamic Data Selection and Weighting for Iterative Back-Translation, EMNLP 2020 [PDF] 摘要
192. Multi-task Learning for Multilingual Neural Machine Translation, EMNLP 2020 [PDF] 摘要
193. Accurate Word Alignment Induction from Neural Machine Translation, EMNLP 2020 [PDF] 摘要
194. Token-level Adaptive Training for Neural Machine Translation, EMNLP 2020 [PDF] 摘要
195. Multi-Unit Transformers for Neural Machine Translation, EMNLP 2020 [PDF] 摘要
196. Translation Artifacts in Cross-lingual Transfer Learning, EMNLP 2020 [PDF] 摘要
197. Towards Detecting and Exploiting Disambiguation Biases in Neural Machine Translation, EMNLP 2020 [PDF] 摘要
198. Direct Segmentation Models for Streaming Speech Translation, EMNLP 2020 [PDF] 摘要
199. Not Low-Resource Anymore: Aligner Ensembling, Batch Filtering, and New Datasets for Bengali-English Machine Translation, EMNLP 2020 [PDF] 摘要
200. An Empirical Study of Generation Order for Machine Translation, EMNLP 2020 [PDF] 摘要
201. Distilling Multiple Domains for Neural Machine Translation, EMNLP 2020 [PDF] 摘要
202. Data Rejuvenation: Exploiting Inactive Training Examples for Neural Machine Translation, EMNLP 2020 [PDF] 摘要
203. Bridging Linguistic Typology and Multilingual Machine Translation with Multi-view Language Representations, EMNLP 2020 [PDF] 摘要
204. Improving Word Sense Disambiguation with Translations, EMNLP 2020 [PDF] 摘要
205. PyMT5: Multi-mode Translation of Natural Language and Python Code with Transformers, EMNLP 2020 [PDF] 摘要
206. Learning Adaptive Segmentation Policy for Simultaneous Translation, EMNLP 2020 [PDF] 摘要
207. ChrEn: Cherokee-English Machine Translation for Endangered Language Revitalization, EMNLP 2020 [PDF] 摘要
208. Generating Diverse Translation from Model Distribution with Dropout, EMNLP 2020 [PDF] 摘要
209. Pre-training Multilingual Neural Machine Translation by Leveraging Alignment Information, EMNLP 2020 [PDF] 摘要
210. Self-Paced Learning for Neural Machine Translation, EMNLP 2020 [PDF] 摘要
211. Simulated Multiple Reference Training Improves Low-Resource Machine Translation, EMNLP 2020 [PDF] 摘要
212. On the Sparsity of Neural Machine Translation Models, EMNLP 2020 [PDF] 摘要
213. Translation Quality Estimation by Jointly Learning to Score and Rank, EMNLP 2020 [PDF] 摘要
214. Incorporating a Local Translation Mechanism into Non-autoregressive Translation, EMNLP 2020 [PDF] 摘要
215. Language Adapters for Zero Shot Neural Machine Translation, EMNLP 2020 [PDF] 摘要
216. Effectively Pretraining a Speech Translation Decoder with Machine Translation Data, EMNLP 2020 [PDF] 摘要
217. Long-Short Term Masking Transformer: A Simple but Effective Baseline for Document-level Neural Machine Translation, EMNLP 2020 [PDF] 摘要
218. Fully Quantized Transformer for Machine Translation, EMNLP 2020 [PDF] 摘要
219. Improving Grammatical Error Correction with Machine Translation Pairs, EMNLP 2020 [PDF] 摘要
220. Fixed Encoder Self-Attention Patterns in Transformer-Based Machine Translation, EMNLP 2020 [PDF] 摘要
221. Multi-Agent Mutual Learning at Sentence-Level and Token-Level for Neural Machine Translation, EMNLP 2020 [PDF] 摘要
222. Active Learning Approaches to Enhancing Neural Machine Translation: An Empirical Study, EMNLP 2020 [PDF] 摘要
223. Adversarial Subword Regularization for Robust Neural Machine Translation, EMNLP 2020 [PDF] 摘要
224. Automatically Identifying Gender Issues in Machine Translation using Perturbations, EMNLP 2020 [PDF] 摘要
225. Dual Reconstruction: a Unifying Objective for Semi-Supervised Neural Machine Translation, EMNLP 2020 [PDF] 摘要
226. Participatory Research for Low-resourced Machine Translation: A Case Study in African Languages, EMNLP 2020 [PDF] 摘要
227. Computer Assisted Translation with Neural Quality Estimation and Auotmatic Post-Editing, EMNLP 2020 [PDF] 摘要
228. On Romanization for Model Transfer Between Scripts in Neural Machine Translation, EMNLP 2020 [PDF] 摘要
229. Adaptive Feature Selection for End-to-End Speech Translation, EMNLP 2020 [PDF] 摘要
230. PharmMT: A Neural Machine Translation Approach to Simplify Prescription Directions, EMNLP 2020 [PDF] 摘要
231. Graph-to-Tree Neural Networks for Learning Structured Input-Output Translation with Applications to Semantic Parsing and Math Word Problem, EMNLP 2020 [PDF] 摘要
232. On Long-Tailed Phenomena in Neural Machine Translation, EMNLP 2020 [PDF] 摘要
233. A Multilingual View of Unsupervised Machine Translation, EMNLP 2020 [PDF] 摘要
234. KoBE: Knowledge-Based Machine Translation Evaluation, EMNLP 2020 [PDF] 摘要
235. Improving Target-side Lexical Transfer in Multilingual Neural Machine Translation, EMNLP 2020 [PDF] 摘要
236. The Box is in the Pen: Evaluating Commonsense Reasoning in Neural Machine Translation, EMNLP 2020 [PDF] 摘要
237. It’s not a Non-Issue: Negation as a Source of Error in Machine Translation, EMNLP 2020 [PDF] 摘要
238. Fluent and Low-latency Simultaneous Speech-to-Speech Translation with Self-adaptive Training, EMNLP 2020 [PDF] 摘要
239. Finding the Optimal Vocabulary Size for Neural Machine Translation, EMNLP 2020 [PDF] 摘要
240. Reference Language based Unsupervised Neural Machine Translation, EMNLP 2020 [PDF] 摘要
241. Assessing Human-Parity in Machine Translation on the Segment Level, EMNLP 2020 [PDF] 摘要
242. Factorized Transformer for Multi-Domain Neural Machine Translation, EMNLP 2020 [PDF] 摘要
243. Vocabulary Adaptation for Domain Adaptation in Neural Machine Translation, EMNLP 2020 [PDF] 摘要
244. Training Flexible Depth Model by Multi-Task Learning for Neural Machine Translation, EMNLP 2020 [PDF] 摘要
245. Extremely Low Bit Transformer Quantization for On-Device Neural Machine Translation, EMNLP 2020 [PDF] 摘要
246. On the Weaknesses of Reinforcement Learning for Neural Machine Translation, ICLR 2020 [PDF] 摘要
247. Understanding Knowledge Distillation in Non-autoregressive Machine Translation, ICLR 2020 [PDF] 摘要
248. U-GAT-IT: Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation, ICLR 2020 [PDF] 摘要
249. Incorporating BERT into Neural Machine Translation, ICLR 2020 [PDF] 摘要
250. Neural Machine Translation with Universal Visual Representation, ICLR 2020 [PDF] 摘要
251. A Latent Morphology Model for Open-Vocabulary Neural Machine Translation, ICLR 2020 [PDF] 摘要
252. Mirror-Generative Neural Machine Translation, ICLR 2020 [PDF] 摘要
253. Lexical-Constraint-Aware Neural Machine Translation via Data Augmentation, IJCAI 2020 [PDF] 摘要
254. Modeling Voting for System Combination in Machine Translation, IJCAI 2020 [PDF] 摘要
255. Task-Level Curriculum Learning for Non-Autoregressive Neural Machine Translation, IJCAI 2020 [PDF] 摘要
256. Neural Machine Translation with Error Correction, IJCAI 2020 [PDF] 摘要
257. Efficient Context-Aware Neural Machine Translation with Layer-Wise Weighting and Input-Aware Gating, IJCAI 2020 [PDF] 摘要
258. Towards Making the Most of Context in Neural Machine Translation, IJCAI 2020 [PDF] 摘要
259. Knowledge Graphs Enhanced Neural Machine Translation, IJCAI 2020 [PDF] 摘要
260. Bridging the Gap between Training and Inference for Neural Machine Translation (Extended Abstract), IJCAI 2020 [PDF] 摘要
261. Membership Inference Attacks on Sequence-to-Sequence Models: Is My Data In Your Machine Translation System?, TACL 2020 [PDF] 摘要
262. Better Document-Level Machine Translation with Bayes’ Rule, TACL 2020 [PDF] 摘要
263. Reproducible and Efficient Benchmarks for Hyperparameter Optimization of Neural Machine Translation Systems, TACL 2020 [PDF] 摘要

Abstracts

1. Modeling Fluency and Faithfulness for Diverse Neural Machine Translation [PDF] 返回目录
  AAAI 2020. AAAI Technical Track: AI and the Web
  Yang Feng, Wanying Xie, Shuhao Gu, Chenze Shao, Wen Zhang, Zhengxin Yang, Dong Yu
Neural machine translation models usually adopt the teacher forcing strategy for training, which requires that the predicted sequence match the ground truth word by word and forces the probability of each prediction to approach a 0-1 distribution. However, this strategy assigns the entire probability mass to the ground truth word and ignores other words in the target vocabulary, even when the ground truth word cannot dominate the distribution. To address this problem of teacher forcing, we propose a method that introduces an evaluation module to guide the distribution of the prediction. The evaluation module assesses each prediction from the perspectives of fluency and faithfulness, encouraging the model to generate words that connect fluently with the past and future translation and at the same time tend to form a translation equivalent in meaning to the source. Experiments on multiple translation tasks show that our method achieves significant improvements over strong baselines.
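
As a concrete illustration of the idea above, here is a minimal PyTorch-style sketch (not the authors' implementation) of replacing the hard 0-1 teacher-forcing target with a soft target shaped by an evaluation module; the `evaluator_scores` tensor and the fixed mixing weight are stand-in assumptions for the learned fluency/faithfulness scores.

```python
import torch
import torch.nn.functional as F

def soft_target_loss(logits, gold_ids, evaluator_scores, mix=0.5):
    """KL divergence between the model's prediction and a soft target that mixes
    the one-hot gold word with words preferred by an (assumed) evaluation module."""
    one_hot = F.one_hot(gold_ids, num_classes=logits.size(-1)).float()
    eval_dist = F.softmax(evaluator_scores, dim=-1)       # evaluator's preference over the vocabulary
    target = mix * one_hot + (1.0 - mix) * eval_dist      # soften the 0-1 distribution
    log_probs = F.log_softmax(logits, dim=-1)
    return F.kl_div(log_probs, target, reduction="batchmean")

if __name__ == "__main__":
    B, V = 4, 100                          # 4 predictions, toy vocabulary of 100 words
    logits = torch.randn(B, V)             # model outputs
    gold = torch.randint(0, V, (B,))       # ground-truth word ids
    scores = torch.randn(B, V)             # hypothetical evaluator scores
    print(soft_target_loss(logits, gold, scores).item())
```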

2. Cross-Lingual Pre-Training Based Transfer for Zero-Shot Neural Machine Translation [PDF] 返回目录
  AAAI 2020. AAAI Technical Track: AI and the Web
  Baijun Ji, Zhirui Zhang, Xiangyu Duan, Min Zhang, Boxing Chen, Weihua Luo
Transfer learning between different language pairs has shown its effectiveness for Neural Machine Translation (NMT) in low-resource scenarios. However, existing transfer methods involving a common target language are far from successful in the extreme scenario of zero-shot translation, due to the language space mismatch between the transferor (the parent model) and the transferee (the child model) on the source side. To address this challenge, we propose an effective transfer learning approach based on cross-lingual pre-training. Our key idea is to make all source languages share the same feature space and thus enable a smooth transition for zero-shot translation. To this end, we introduce one monolingual pre-training method and two bilingual pre-training methods to obtain a universal encoder for different languages. Once the universal encoder is constructed, the parent model built on this encoder is trained with large-scale annotated data and then directly applied in the zero-shot translation scenario. Experiments on two public datasets show that our approach significantly outperforms a strong pivot-based baseline and various multilingual NMT approaches.

3. Modality to Modality Translation: An Adversarial Representation Learning and Graph Fusion Network for Multimodal Fusion [PDF] 返回目录
  AAAI 2020. AAAI Technical Track: AI and the Web
  Sijie Mai, Haifeng Hu, Songlong Xing
Learning a joint embedding space for various modalities is of vital importance for multimodal fusion. Mainstream modality fusion approaches fail to achieve this goal, leaving a modality gap that heavily affects cross-modal fusion. In this paper, we propose a novel adversarial encoder-decoder-classifier framework to learn a modality-invariant embedding space. Since the distributions of the various modalities differ in nature, to reduce the modality gap we translate the distributions of the source modalities into that of the target modality via their respective encoders using adversarial training. Furthermore, we impose additional constraints on the embedding space by introducing a reconstruction loss and a classification loss. We then fuse the encoded representations using a hierarchical graph neural network that explicitly explores unimodal, bimodal, and trimodal interactions in multiple stages. Our method achieves state-of-the-art performance on multiple datasets. Visualization of the learned embeddings suggests that the joint embedding space learned by our method is discriminative.

4. Minimizing the Bag-of-Ngrams Difference for Non-Autoregressive Neural Machine Translation [PDF] 返回目录
  AAAI 2020. AAAI Technical Track: AI and the Web
  Chenze Shao, Jinchao Zhang, Yang Feng, Fandong Meng, Jie Zhou
Non-Autoregressive Neural Machine Translation (NAT) achieves significant decoding speedup by generating target words independently and simultaneously. However, in the context of non-autoregressive translation, the word-level cross-entropy loss cannot model the target-side sequential dependency properly, leading to a weak correlation with translation quality. As a result, NAT tends to generate disfluent translations with over-translation and under-translation errors. In this paper, we propose to train NAT to minimize the Bag-of-Ngrams (BoN) difference between the model output and the reference sentence. The bag-of-ngrams training objective is differentiable and can be efficiently calculated, which encourages NAT to capture the target-side sequential dependency and correlates well with translation quality. We validate our approach on three translation tasks and show that it substantially outperforms the NAT baseline by about 5.0 BLEU points on WMT14 En↔De and about 2.5 BLEU points on WMT16 En↔Ro.
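
The toy sketch below only illustrates the objective: under the position-wise independence that NAT assumes, expected bigram counts of the model output can be compared with the reference counts via an L1 distance. The dense V x V count matrices are feasible only for tiny vocabularies; the paper uses an efficient formulation restricted to the n-grams of the reference.

```python
import torch

def expected_bigram_counts(probs):
    # probs: [T, V] per-position word distributions (positions assumed independent)
    return sum(torch.outer(probs[t], probs[t + 1]) for t in range(probs.size(0) - 1))

def reference_bigram_counts(ref_ids, vocab_size):
    counts = torch.zeros(vocab_size, vocab_size)
    for a, b in zip(ref_ids[:-1], ref_ids[1:]):
        counts[a, b] += 1.0
    return counts

def bon_l1_loss(probs, ref_ids):
    model_counts = expected_bigram_counts(probs)
    ref_counts = reference_bigram_counts(ref_ids, probs.size(-1))
    return (model_counts - ref_counts).abs().sum()

if __name__ == "__main__":
    T, V = 6, 20                                       # toy sequence length and vocabulary
    probs = torch.softmax(torch.randn(T, V), dim=-1)
    ref = torch.randint(0, V, (T,)).tolist()
    print(bon_l1_loss(probs, ref).item())
```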

5. GAN-Based Unpaired Chinese Character Image Translation via Skeleton Transformation and Stroke Rendering [PDF] 返回目录
  AAAI 2020. AAAI Technical Track: Applications
  Yiming Gao, Jiangqin Wu
The automatic style translation of Chinese characters (CH-Char) is a challenging problem. Different from English or general artistic style transfer, Chinese characters contain a large number of glyphs with complicated content and characteristic styles. Early methods for CH-Char synthesis are inefficient and require manual intervention. Recently, some GAN-based methods have been proposed for font generation. The supervised GAN-based methods require numerous image pairs, which are difficult to obtain for many chirography styles. In addition, unsupervised methods often produce blurred and incorrect strokes. Therefore, in this work, we propose a three-stage Generative Adversarial Network (GAN) architecture for multi-chirography image translation, which is divided into skeleton extraction, skeleton transformation, and stroke rendering with unpaired training data. Specifically, we first propose a fast skeleton extraction method (ENet). Secondly, we utilize the extracted skeleton and the original image to train a GAN model, RNet (a stroke rendering network), to learn how to render the skeleton with stroke details in the target style. Finally, the pre-trained RNet is employed to assist another GAN model, TNet (a skeleton transformation network), in learning to transform the skeleton structure on the unlabeled skeleton set. We demonstrate the validity of our method on two chirography datasets we established.

6. Fast and Robust Face-to-Parameter Translation for Game Character Auto-Creation [PDF] 返回目录
  AAAI 2020. AAAI Technical Track: Game Playing and Interactive Entertainment
  Tianyang Shi, Zhengxia Zou, Yi Yuan, Changjie Fan
With the rapid development of Role-Playing Games (RPGs), players are now allowed to edit the facial appearance of their in-game characters according to their preferences rather than using default templates. This paper proposes a game character auto-creation framework that generates in-game characters according to a player's input face photo. Different from previous methods that are designed based on neural style transfer or monocular 3D face reconstruction, we re-formulate the character auto-creation process from a different point of view: by predicting a large set of physically meaningful facial parameters under a self-supervised learning paradigm. Instead of updating facial parameters iteratively at the input end of the renderer as suggested by previous methods, which is time-consuming, we introduce a facial parameter translator so that the creation can be done efficiently through a single forward propagation from the face embeddings to the parameters, with a considerable 1000x computational speedup. Despite its high efficiency, our method preserves interactivity: users are allowed to optionally fine-tune the facial parameters of the created character according to their needs. Our approach also shows better robustness than previous methods, especially for photos with head-pose variance. Comparison results and ablation analysis on seven public face verification datasets suggest the effectiveness of our method.

7. Distilling Portable Generative Adversarial Networks for Image Translation [PDF] 返回目录
  AAAI 2020. AAAI Technical Track: Machine Learning
  Hanting Chen, Yunhe Wang, Han Shu, Changyuan Wen, Chunjing Xu, Boxin Shi, Chao Xu, Chang Xu
Although Generative Adversarial Networks (GANs) have been widely used in various image-to-image translation tasks, they can hardly be applied on mobile devices due to their heavy computation and storage costs. Traditional network compression methods focus on visual recognition tasks and rarely deal with generation tasks. Inspired by knowledge distillation, we train a student generator with fewer parameters by inheriting the low-level and high-level information from the original heavy teacher generator. To promote the capability of the student generator, we include a student discriminator to measure the distances between real images and the images generated by the student and teacher generators. An adversarial learning process is therefore established to optimize the student generator and the student discriminator. Qualitative and quantitative experiments on benchmark datasets demonstrate that the proposed method can learn portable generative models with strong performance.
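
A rough, self-contained sketch of the distillation objective described above; the layer sizes, the L1 "inheritance" term, and the loss weight are assumptions, and the discriminator's own update and the full image-translation pipeline are omitted.

```python
import torch
import torch.nn as nn

teacher_g = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 64))  # heavy teacher generator
student_g = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 64))    # portable student generator
student_d = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))     # student discriminator

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(student_g.parameters(), lr=1e-3)

x = torch.randn(8, 64)                                   # stand-in batch of flattened "images"
with torch.no_grad():
    teacher_out = teacher_g(x)                           # teacher knowledge, fixed during distillation
student_out = student_g(x)
distill = (student_out - teacher_out).abs().mean()       # inherit the teacher's outputs
adv = bce(student_d(student_out), torch.ones(8, 1))      # try to fool the student discriminator
loss = distill + 0.1 * adv
opt_g.zero_grad(); loss.backward(); opt_g.step()         # the discriminator step is omitted here
print(float(loss))
```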

8. Benign Examples: Imperceptible Changes Can Enhance Image Translation Performance [PDF] 返回目录
  AAAI 2020. AAAI Technical Track: Machine Learning
  Vignesh Srinivasan, Klaus-Robert Müller, Wojciech Samek, Shinichi Nakajima
Unpaired image-to-image domain translation involves the task of transferring an image in one domain to another domain without having pairs of data for supervision. Several methods have been proposed to address this task using Generative Adversarial Networks (GANs) and a cycle consistency constraint enforcing that the translated image be mapped back to the original domain. This way, a Deep Neural Network (DNN) learns a mapping such that the input training distribution transferred to the target domain matches the target training distribution. However, not all test images are expected to fall inside the data manifold in the input space where the DNN has learned to perform the mapping very well. Such images can map poorly to the target domain. In this paper, we propose to perform Langevin dynamics, which makes subtle changes in the input space that bring such images close to the data manifold, producing benign examples. The effect is a significant improvement of the mapped image in the target domain. We also show that the score function estimated by a denoising autoencoder (DAE) can in practice be replaced with any autoencoding structure, which most image-to-image translation methods contain intrinsically due to the cycle consistency constraint. Thus, no additional training is required. We show the advantages of our approach for several state-of-the-art image-to-image domain translation models. Quantitative evaluation shows that our proposed method leads to a substantial increase in accuracy with respect to the target label under multiple state-of-the-art image classifiers, while a qualitative user study shows that our method better represents the target domain, achieving higher human preference scores.
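
A toy sketch of the test-time refinement, assuming some autoencoder `ae` is available (the paper observes that the autoencoding path already present in cycle-consistent translation models can play this role): the reconstruction residual `ae(x) - x` serves as a surrogate score in a Langevin-style update. The step count, step size, and noise scale are illustrative.

```python
import torch
import torch.nn as nn

def langevin_refine(x, ae, steps=10, step_size=0.05, noise_scale=0.01):
    """Nudge inputs toward the data manifold using the autoencoder residual as a surrogate score."""
    for _ in range(steps):
        with torch.no_grad():
            score = ae(x) - x                                    # DAE residual ~ direction toward the manifold
        x = x + step_size * score + noise_scale * torch.randn_like(x)
    return x

if __name__ == "__main__":
    ae = nn.Sequential(nn.Linear(32, 8), nn.ReLU(), nn.Linear(8, 32))  # stand-in autoencoder
    x = torch.randn(4, 32)                                             # stand-in flattened "images"
    print(langevin_refine(x, ae).shape)
```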

9. Transductive Ensemble Learning for Neural Machine Translation [PDF] 返回目录
  AAAI 2020. AAAI Technical Track: Machine Learning
  Yiren Wang, Lijun Wu, Yingce Xia, Tao Qin, ChengXiang Zhai, Tie-Yan Liu
Ensemble learning, which aggregates multiple diverse models for inference, is a common practice to improve the accuracy of machine learning tasks. However, it has been observed that the conventional ensemble methods only bring marginal improvement for neural machine translation (NMT) when individual models are strong or there are a large number of individual models. In this paper, we study how to effectively aggregate multiple NMT models under the transductive setting where the source sentences of the test set are known. We propose a simple yet effective approach named transductive ensemble learning (TEL), in which we use all individual models to translate the source test set into the target language space and then finetune a strong model on the translated synthetic corpus. We conduct extensive experiments on different settings (with/without monolingual data) and different language pairs (English↔{German, Finnish}). The results show that our approach boosts strong individual models with significant improvement and benefits a lot from more individual models. Specifically, we achieve the state-of-the-art performances on the WMT2016-2018 English↔German translations.
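
Procedurally, TEL is simple enough to state in a few lines. The sketch below is only a schematic under obvious assumptions: the string functions stand in for real NMT systems, and `finetune` stands in for continued training of the strongest model on the pooled synthetic pairs.

```python
from typing import Callable, List, Tuple

def transductive_ensemble(sources: List[str],
                          models: List[Callable[[str], str]],
                          finetune: Callable[[List[Tuple[str, str]]], None]) -> None:
    """Translate the test sources with every individual model, pool the synthetic
    pairs, and hand them to a fine-tuning routine for the strongest model."""
    synthetic = [(src, model(src)) for model in models for src in sources]
    finetune(synthetic)

if __name__ == "__main__":
    fake_models = [lambda s: s.upper(), lambda s: s[::-1]]   # stand-ins for trained NMT models
    pooled = []
    transductive_ensemble(["ein test", "noch einer"], fake_models, pooled.extend)
    print(pooled)                                            # synthetic (source, translation) pairs
```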

10. Fine-Tuning by Curriculum Learning for Non-Autoregressive Neural Machine Translation [PDF] 返回目录
  AAAI 2020. AAAI Technical Track: Natural Language Processing
  Junliang Guo, Xu Tan, Linli Xu, Tao Qin, Enhong Chen, Tie-Yan Liu
Non-autoregressive translation (NAT) models remove the dependence on previous target tokens and generate all target tokens in parallel, resulting in significant inference speedup but at the cost of inferior translation accuracy compared to autoregressive translation (AT) models. Considering that AT models have higher accuracy and are easier to train than NAT models, and that both share the same model configurations, a natural idea for improving the accuracy of NAT models is to transfer a well-trained AT model to an NAT model through fine-tuning. However, since AT and NAT models differ greatly in training strategy, straightforward fine-tuning does not work well. In this work, we introduce curriculum learning into fine-tuning for NAT. Specifically, we design a curriculum in the fine-tuning process to progressively switch the training from autoregressive generation to non-autoregressive generation. Experiments on four benchmark translation datasets show that the proposed method achieves good improvements (more than 1 BLEU point) over previous NAT baselines in terms of translation accuracy, and greatly speeds up inference (more than 10 times) over AT baselines.
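
One way to picture the curriculum is as a schedule over the decoder inputs: early in fine-tuning the decoder still sees shifted gold tokens (autoregressive style), and these are progressively replaced with mask tokens (non-autoregressive style). The linear pacing and the masking mechanics below are illustrative assumptions, not the paper's exact recipe.

```python
import torch

MASK_ID = 0  # assumed id of the mask token

def curriculum_decoder_inputs(gold, progress):
    """gold: [B, T] target ids; progress in [0, 1] is the fraction of fine-tuning completed."""
    prev_tokens = torch.roll(gold, shifts=1, dims=1)   # AT-style inputs: previous gold tokens
    prev_tokens[:, 0] = MASK_ID
    replace = torch.rand_like(gold, dtype=torch.float) < progress
    return torch.where(replace, torch.full_like(gold, MASK_ID), prev_tokens)

if __name__ == "__main__":
    gold = torch.randint(1, 100, (2, 8))
    for progress in (0.0, 0.5, 1.0):                   # start AT-like, end fully NAT-like
        print(progress, curriculum_decoder_inputs(gold, progress)[0].tolist())
```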

11. One Homonym per Translation [PDF] 返回目录
  AAAI 2020. AAAI Technical Track: Natural Language Processing
  Bradley Hauer, Grzegorz Kondrak
The study of homonymy is vital to resolving fundamental problems in lexical semantics. In this paper, we propose four hypotheses that characterize the unique behavior of homonyms in the context of translations, discourses, collocations, and sense clusters. We present a new annotated homonym resource that allows us to test our hypotheses on existing WSD resources. The results of the experiments provide strong empirical evidence for the hypotheses. This study represents a step towards a computational method for distinguishing between homonymy and polysemy, and constructing a definitive inventory of coarse-grained senses.

12. Neuron Interaction Based Representation Composition for Neural Machine Translation [PDF] 返回目录
  AAAI 2020. AAAI Technical Track: Natural Language Processing
  Jian Li, Xing Wang, Baosong Yang, Shuming Shi, Michael R. Lyu, Zhaopeng Tu
Recent NLP studies reveal that substantial linguistic information can be attributed to single neurons, i.e., individual dimensions of the representation vectors. We hypothesize that modeling strong interactions among neurons helps to better capture complex information by composing the linguistic properties embedded in individual neurons. Starting from this intuition, we propose a novel approach to compose representations learned by different components in neural machine translation (e.g., multi-layer networks or multi-head attention), based on modeling strong interactions among neurons in the representation vectors. Specifically, we leverage bilinear pooling to model pairwise multiplicative interactions among individual neurons, and a low-rank approximation to make the model computationally feasible. We further propose extended bilinear pooling to incorporate first-order representations. Experiments on WMT14 English⇒German and English⇒French translation tasks show that our model consistently improves performances over the SOTA Transformer baseline. Further analyses demonstrate that our approach indeed captures more syntactic and semantic information as expected.
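
The core operation is easy to sketch: a low-rank bilinear pooling of two representation vectors, extended with their first-order terms. The fusion point, dimensions, and output projection below are assumptions for illustration rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class LowRankBilinearPooling(nn.Module):
    def __init__(self, dim, rank):
        super().__init__()
        self.u = nn.Linear(dim, rank, bias=False)    # low-rank factor for x
        self.v = nn.Linear(dim, rank, bias=False)    # low-rank factor for y
        self.out = nn.Linear(rank + 2 * dim, dim)    # "extended": append the first-order representations

    def forward(self, x, y):
        pairwise = self.u(x) * self.v(y)             # low-rank pairwise multiplicative neuron interactions
        return self.out(torch.cat([pairwise, x, y], dim=-1))

if __name__ == "__main__":
    pool = LowRankBilinearPooling(dim=512, rank=128)
    x, y = torch.randn(2, 10, 512), torch.randn(2, 10, 512)   # e.g., outputs of two layers or heads
    print(pool(x, y).shape)                                   # torch.Size([2, 10, 512])
```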

13. MetaMT, a Meta Learning Method Leveraging Multiple Domain Data for Low Resource Machine Translation [PDF] 返回目录
  AAAI 2020. AAAI Technical Track: Natural Language Processing
  Rumeng Li, Xun Wang, Hong Yu
Neural machine translation (NMT) models have achieved state-of-the-art translation quality when a large quantity of parallel corpora is available. However, their performance suffers significantly when it comes to domain-specific translation, where training data are usually scarce. In this paper, we present a novel NMT model with a new word embedding transition technique for fast domain adaptation. We propose to split the parameters of the model into two groups: model parameters and meta parameters. The former are used to model the translation while the latter are used to adjust the representational space so that the model generalizes to different domains. We mimic the domain adaptation of the machine translation model to low-resource domains using multiple translation tasks on different domains. A new training strategy based on meta-learning is developed along with the proposed model to update the model parameters and meta parameters alternately. Experiments on datasets from different domains show substantial improvements in NMT performance with a limited amount of data.

14. Neural Machine Translation with Joint Representation [PDF] 返回目录
  AAAI 2020. AAAI Technical Track: Natural Language Processing
  Yanyang Li, Qiang Wang, Tong Xiao, Tongran Liu, Jingbo Zhu
Though the early successes of Statistical Machine Translation (SMT) systems are attributed in part to the explicit modelling of the interaction between any two source and target units, e.g., alignment, recent Neural Machine Translation (NMT) systems resort to attention, which only partially encodes this interaction, for efficiency. In this paper, we employ a Joint Representation that fully accounts for each possible interaction. We sidestep the inefficiency issue by refining representations with the proposed efficient attention operation. The resulting Reformer models offer a new Sequence-to-Sequence modelling paradigm besides the Encoder-Decoder framework and outperform the Transformer baseline by about 1 BLEU point on both the small-scale IWSLT14 German-English, English-German, and IWSLT15 Vietnamese-English tasks and the large-scale NIST12 Chinese-English translation task. We also propose a systematic model scaling approach, allowing the Reformer model to beat the state-of-the-art Transformer on IWSLT14 German-English and NIST12 Chinese-English with about 50% fewer parameters. The code is publicly available at https://github.com/lyy1994/reformer.

15. Explicit Sentence Compression for Neural Machine Translation [PDF] 返回目录
  AAAI 2020. AAAI Technical Track: Natural Language Processing
  Zuchao Li, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita, Zhuosheng Zhang, Hai Zhao
State-of-the-art Transformer-based neural machine translation (NMT) systems still follow a standard encoder-decoder framework, in which source sentence representation can be done well by an encoder with a self-attention mechanism. Though a Transformer-based encoder may effectively capture general information in its resulting source sentence representation, the backbone information, which stands for the gist of the sentence, is not specifically focused on. In this paper, we propose an explicit sentence compression method to enhance the source sentence representation for NMT. In practice, an explicit sentence compression objective is used to learn the backbone information of a sentence. We propose three ways, namely backbone source-side fusion, target-side fusion, and both-side fusion, to integrate the compressed sentence into NMT. Our empirical tests on the WMT English-to-French and English-to-German translation tasks show that the proposed sentence compression method significantly improves translation performance over strong baselines.
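
Of the three integration strategies, source-side fusion is the easiest to sketch: gate the original encoder states with a summary of the compressed (backbone) sentence. The gating form and the pooled summary are assumptions; target-side and both-side fusion are not shown.

```python
import torch
import torch.nn as nn

class SourceSideFusion(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, enc_states, backbone_summary):
        # enc_states: [B, T, D]; backbone_summary: [B, D] (e.g., a pooled encoding of the compressed sentence)
        expanded = backbone_summary.unsqueeze(1).expand_as(enc_states)
        g = torch.sigmoid(self.gate(torch.cat([enc_states, expanded], dim=-1)))
        return g * enc_states + (1 - g) * expanded            # gated mix of original and backbone information

if __name__ == "__main__":
    fusion = SourceSideFusion(dim=512)
    enc = torch.randn(2, 9, 512)
    backbone = torch.randn(2, 512)
    print(fusion(enc, backbone).shape)                        # torch.Size([2, 9, 512])
```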

16. Synchronous Speech Recognition and Speech-to-Text Translation with Interactive Decoding [PDF] 返回目录
  AAAI 2020. AAAI Technical Track: Natural Language Processing
  Yuchen Liu, Jiajun Zhang, Hao Xiong, Long Zhou, Zhongjun He, Hua Wu, Haifeng Wang, Chengqing Zong
Speech-to-text translation (ST), which translates source language speech into target language text, has attracted intensive attention in recent years. Compared to the traditional pipeline system, the end-to-end ST model has potential benefits of lower latency, smaller model size, and less error propagation. However, it is notoriously difficult to implement such a model without transcriptions as an intermediate representation. Existing works generally apply multi-task learning to improve translation quality by jointly training end-to-end ST along with automatic speech recognition (ASR). However, the different tasks in this method cannot utilize information from each other, which limits the improvement. Other works propose a two-stage model where the second model can use the hidden state from the first one, but its cascaded manner greatly affects the efficiency of the training and inference processes. In this paper, we propose a novel interactive attention mechanism which enables ASR and ST to perform synchronously and interactively in a single model. Specifically, the generation of transcriptions and translations relies not only on its previous outputs but also on the outputs predicted in the other task. Experiments on TED speech translation corpora show that our proposed model outperforms strong baselines on speech translation quality and achieves better speech recognition performance as well.

17. Simplify-Then-Translate: Automatic Preprocessing for Black-Box Translation [PDF] 返回目录
  AAAI 2020. AAAI Technical Track: Natural Language Processing
  Sneha Mehta, Bahareh Azarnoush, Boris Chen, Avneesh Saluja, Vinith Misra, Ballav Bihani, Ritwik Kumar
Black-box machine translation systems have proven incredibly useful for a variety of applications yet by design are hard to adapt, tune to a specific domain, or build on top of. In this work, we introduce a method to improve such systems via automatic pre-processing (APP) using sentence simplification. We first propose a method to automatically generate a large in-domain paraphrase corpus through back-translation with a black-box MT system, which is used to train a paraphrase model that “simplifies” the original sentence to be more conducive for translation. The model is used to preprocess source sentences of multiple low-resource language pairs. We show that this preprocessing leads to better translation performance as compared to non-preprocessed source sentences. We further perform side-by-side human evaluation to verify that translations of the simplified sentences are better than the original ones. Finally, we provide some guidance on recommended language pairs for generating the simplification model corpora by investigating the relationship between ease of translation of a language pair (as measured by BLEU) and quality of the resulting simplification model from back-translations of this language pair (as measured by SARI), and tie this into the downstream task of low-resource translation.

18. Controlling Neural Machine Translation Formality with Synthetic Supervision [PDF] 返回目录
  AAAI 2020. AAAI Technical Track: Natural Language Processing
  Xing Niu, Marine Carpuat
This work aims to produce translations that convey source language content at a formality level that is appropriate for a particular audience. Framing this problem as a neural sequence-to-sequence task ideally requires training triplets consisting of a bilingual sentence pair labeled with target language formality. However, in practice, available training examples are limited to English sentence pairs of different styles, and bilingual parallel sentences of unknown formality. We introduce a novel training scheme for multi-task models that automatically generates synthetic training triplets by inferring the missing element on the fly, thus enabling end-to-end training. Comprehensive automatic and human assessments show that our best model outperforms existing models by producing translations that better match desired formality levels while preserving the source meaning.

19. Translation-Based Matching Adversarial Network for Cross-Lingual Natural Language Inference [PDF] 返回目录
  AAAI 2020. AAAI Technical Track: Natural Language Processing
  Kunxun Qi, Jianfeng Du
Cross-lingual natural language inference is a fundamental task in cross-lingual natural language understanding, widely addressed by neural models recently. Existing neural model based methods either align sentence embeddings between source and target languages, heavily relying on annotated parallel corpora, or exploit pre-trained cross-lingual language models that, once fine-tuned on a single language, find it hard to transfer knowledge to another language. To resolve these limitations of existing methods, this paper proposes an adversarial training framework to enhance both pre-trained models and classical neural models for cross-lingual natural language inference. It trains on the union of data in the source language and data in the target language, learning language-invariant features to improve the inference performance. Experimental results on the XNLI benchmark demonstrate that three popular neural models enhanced by the proposed framework significantly outperform the original models.

20. IntroVNMT: An Introspective Model for Variational Neural Machine Translation [PDF] 返回目录
  AAAI 2020. AAAI Technical Track: Natural Language Processing
  Xin Sheng, Linli Xu, Junliang Guo, Jingchang Liu, Ruoyu Zhao, Yinlong Xu
We propose a novel introspective model for variational neural machine translation (IntroVNMT) in this paper, inspired by the recent successful application of introspective variational autoencoder (IntroVAE) in high quality image synthesis. Different from the vanilla variational NMT model, IntroVNMT is capable of improving itself introspectively by evaluating the quality of the generated target sentences according to the high-level latent variables of the real and generated target sentences. As a consequence of introspective training, the proposed model is able to discriminate between the generated and real sentences of the target language via the latent variables generated by the encoder of the model. In this way, IntroVNMT is able to generate more realistic target sentences in practice. In the meantime, IntroVNMT inherits the advantages of the variational autoencoders (VAEs), and the model training process is more stable than the generative adversarial network (GAN) based models. Experimental results on different translation tasks demonstrate that the proposed model can achieve significant improvements over the vanilla variational NMT model.

21. Latent-Variable Non-Autoregressive Neural Machine Translation with Deterministic Inference Using a Delta Posterior [PDF] 返回目录
  AAAI 2020. AAAI Technical Track: Natural Language Processing
  Raphael Shu, Jason Lee, Hideki Nakayama, Kyunghyun Cho
Although neural machine translation models have reached high translation quality, their autoregressive nature makes inference difficult to parallelize and leads to high translation latency. Inspired by recent refinement-based approaches, we propose LaNMT, a latent-variable non-autoregressive model with continuous latent variables and a deterministic inference procedure. In contrast to existing approaches, we use a deterministic inference algorithm to find the target sequence that maximizes the lower bound on the log-probability. During inference, the translation length automatically adapts itself. Our experiments show that the lower bound can be greatly increased by running the inference algorithm, resulting in significantly improved translation quality. Our proposed model closes the performance gap between non-autoregressive and autoregressive approaches on the ASPEC Ja-En dataset with 8.6x faster decoding. On the WMT'14 En-De dataset, our model narrows the gap with the autoregressive baseline to 2.0 BLEU points with a 12.5x speedup. By decoding multiple initial latent variables in parallel and rescoring with a teacher model, the proposed model further brings the gap down to 1.0 BLEU point on the WMT'14 En-De task with a 6.8x speedup.
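
The deterministic inference loop itself is compact: initialize the latent at the prior mean, decode all positions in parallel, re-encode the decoded target to refresh the latent (the delta posterior collapses to its mean), and iterate. The tiny linear modules below are placeholders for the real prior, decoder, and posterior networks, and length prediction is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim, vocab, tgt_len = 16, 50, 7
prior_mean = nn.Linear(32, latent_dim)           # stand-in for the prior p(z | x), mean only
decoder = nn.Linear(latent_dim, vocab)           # stand-in position-wise decoder p(y_t | z)
posterior_mean = nn.Linear(vocab, latent_dim)    # stand-in for the posterior q(z | x, y), mean only

def deterministic_inference(src_repr, iters=4):
    z = prior_mean(src_repr)                              # [tgt_len, latent_dim]
    for _ in range(iters):
        y_ids = decoder(z).argmax(-1)                     # decode every position in parallel
        y_onehot = F.one_hot(y_ids, vocab).float()
        z = posterior_mean(y_onehot)                      # refresh the latent from the current output
    return decoder(z).argmax(-1)                          # final parallel decode

if __name__ == "__main__":
    src = torch.randn(tgt_len, 32)                        # stand-in source representation
    print(deterministic_inference(src).tolist())
```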

22. Evaluating the Cross-Lingual Effectiveness of Massively Multilingual Neural Machine Translation [PDF] 返回目录
  AAAI 2020. AAAI Technical Track: Natural Language Processing
  Aditya Siddhant, Melvin Johnson, Henry Tsai, Naveen Ari, Jason Riesa, Ankur Bapna, Orhan Firat, Karthik Raman
The recently proposed massively multilingual neural machine translation (NMT) system has been shown to be capable of translating over 100 languages to and from English within a single model (Aharoni, Johnson, and Firat 2019). Its improved translation performance on low resource languages hints at potential cross-lingual transfer capability for downstream tasks. In this paper, we evaluate the cross-lingual effectiveness of representations from the encoder of a massively multilingual NMT model on 5 downstream classification and sequence labeling tasks covering a diverse set of over 50 languages. We compare against a strong baseline, multilingual BERT (mBERT) (Devlin et al. 2018), in different cross-lingual transfer learning scenarios and show gains in zero-shot transfer in 4 out of these 5 tasks.

23. Alignment-Enhanced Transformer for Constraining NMT with Pre-Specified Translations [PDF] 返回目录
  AAAI 2020. AAAI Technical Track: Natural Language Processing
  Kai Song, Kun Wang, Heng Yu, Yue Zhang, Zhongqiang Huang, Weihua Luo, Xiangyu Duan, Min Zhang
We investigate the task of constraining NMT with pre-specified translations, which has practical significance for a number of research and industrial applications. Existing works impose pre-specified translations as lexical constraints during decoding, which are based on word alignments derived from target-to-source attention weights. However, multiple recent studies have found that word alignment derived from generic attention heads in the Transformer is unreliable. We address this problem by introducing a dedicated head in the multi-head Transformer architecture to capture external supervision signals. Results on five language pairs show that our method is highly effective in constraining NMT with pre-specified translations, consistently outperforming previous methods in translation quality.
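One plausible reading of the "dedicated head capturing external supervision signals" is a loss term that pulls a single attention head toward alignments produced by an external aligner. The numpy sketch below illustrates such a term; it is an assumption for illustration, not the authors' exact formulation.

```python
import numpy as np

def alignment_head_loss(attn, alignment, eps=1e-9):
    """Cross-entropy between one attention head's weights and an external
    word alignment.

    attn:      (tgt_len, src_len) attention of the dedicated head, rows sum to 1.
    alignment: (tgt_len, src_len) 0/1 matrix from an external aligner.
    """
    # normalize each target row of the alignment into a reference distribution
    ref = alignment / np.maximum(alignment.sum(axis=1, keepdims=True), 1)
    # ignore target words with no aligned source word
    mask = alignment.sum(axis=1) > 0
    ce = -(ref * np.log(attn + eps)).sum(axis=1)
    return ce[mask].mean() if mask.any() else 0.0
```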

24. Neural Semantic Parsing in Low-Resource Settings with Back-Translation and Meta-Learning [PDF] 返回目录
  AAAI 2020. AAAI Technical Track: Natural Language Processing
  Yibo Sun, Duyu Tang, Nan Duan, Yeyun Gong, Xiaocheng Feng, Bing Qin, Daxin Jiang
Neural semantic parsing has achieved impressive results in recent years, yet its success relies on the availability of large amounts of supervised data. Our goal is to learn a neural semantic parser when only prior knowledge about a limited number of simple rules is available, without access to either annotated programs or execution results. Our approach is initialized by rules, and improved in a back-translation paradigm using generated question-program pairs from the semantic parser and the question generator. A phrase table with frequent mapping patterns is automatically derived, also updated as training progresses, to measure the quality of generated instances. We train the model with model-agnostic meta-learning to guarantee the accuracy and stability on examples covered by rules, and meanwhile acquire the versatility to generalize well on examples uncovered by rules. Results on three benchmark datasets with different domains and programs show that our approach incrementally improves the accuracy. On WikiSQL, our best model is comparable to the state-of-the-art system learned from denotations.

25. Generating Diverse Translation by Manipulating Multi-Head Attention [PDF] 返回目录
  AAAI 2020. AAAI Technical Track: Natural Language Processing
  Zewei Sun, Shujian Huang, Hao-Ran Wei, Xinyu Dai, Jiajun Chen
The Transformer model (Vaswani et al. 2017) has been widely used in machine translation tasks and has obtained state-of-the-art results. In this paper, we report an interesting phenomenon in its encoder-decoder multi-head attention: different attention heads of the final decoder layer align to different word translation candidates. We empirically verify this discovery and propose a method to generate diverse translations by manipulating heads. Furthermore, we make use of these diverse translations with the back-translation technique for better data augmentation. Experimental results show that our method generates diverse translations without a severe drop in translation quality. Experiments also show that back-translation with these diverse translations brings a significant improvement in performance on translation tasks. An auxiliary experiment on a conversational response generation task confirms the effect of diversity as well.
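A rough illustration of the head-manipulation idea: keep only one head of the final decoder layer's cross-attention before the output projection, then decode once per retained head to obtain different candidates. The tensor layout below is an assumption for illustration.

```python
import numpy as np

def manipulate_heads(head_outputs, keep):
    """head_outputs: (num_heads, tgt_len, d_head) per-head cross-attention
    outputs at the final decoder layer; `keep` is the index of the head to
    emphasize. Zeroing out the other heads before the output projection is
    one simple way to push decoding toward that head's preferred candidates."""
    masked = np.zeros_like(head_outputs)
    masked[keep] = head_outputs[keep]
    # concatenate heads per target position, as the output projection expects
    return masked.transpose(1, 0, 2).reshape(head_outputs.shape[1], -1)

# Decoding once per retained head yields a set of diverse hypotheses that can
# then be fed to back-translation for data augmentation.
```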

26. Unsupervised Neural Dialect Translation with Commonality and Diversity Modeling [PDF] 返回目录
  AAAI 2020. AAAI Technical Track: Natural Language Processing
  Yu Wan, Baosong Yang, Derek F. Wong, Lidia S. Chao, Haihua Du, Ben C. H. Ao
As a special machine translation task, dialect translation has two main characteristics: 1) a lack of parallel training corpora; and 2) similar grammar on the two sides of the translation. In this paper, we investigate how to exploit the commonality and diversity between dialects so as to build unsupervised translation models that access only monolingual data. Specifically, we leverage pivot-private embedding, layer coordination, and parameter sharing to sufficiently model commonality and diversity between source and target, ranging from the lexical, through the syntactic, to the semantic level. To examine the effectiveness of the proposed models, we collect a 20-million-sentence monolingual corpus for each of Mandarin and Cantonese, the official language and the most widely used dialect in China, respectively. Experimental results reveal that our methods outperform rule-based simplified-traditional Chinese conversion and conventional unsupervised translation models by over 12 BLEU points.

27. Neural Machine Translation with Byte-Level Subwords [PDF] 返回目录
  AAAI 2020. AAAI Technical Track: Natural Language Processing
  Changhan Wang, Kyunghyun Cho, Jiatao Gu
Almost all existing machine translation models are built on top of character-based vocabularies: characters, subwords or words. However, rare characters from noisy text or character-rich languages such as Japanese and Chinese can unnecessarily take up vocabulary slots and limit the vocabulary's compactness. Representing text at the level of bytes and using the 256-byte set as the vocabulary is a potential solution to this issue, but high computational cost has so far prevented it from being widely deployed or used in practice. In this paper, we investigate byte-level subwords, specifically byte-level BPE (BBPE), which is more compact than a character vocabulary, has no out-of-vocabulary tokens, and is more efficient than using pure bytes alone. We claim that contextualizing BBPE embeddings is necessary, which can be implemented by a convolutional or recurrent layer. Our experiments show that BBPE has comparable performance to BPE while its vocabulary is only 1/8 the size of BPE's. In the multilingual setting, BBPE maximizes vocabulary sharing across many languages and achieves better translation quality. Moreover, we show that BBPE enables transferring models between languages with non-overlapping character sets.
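The byte-level representation itself is straightforward: every sentence is mapped to its UTF-8 byte sequence over a fixed 256-symbol base vocabulary, and BPE merges are then learned over these byte tokens exactly as they would be over characters. A minimal sketch of the byte-ification step (token format is illustrative):

```python
def to_byte_tokens(text):
    """Map a sentence to a sequence of byte-valued tokens (0-255).
    Any Unicode string is covered by the fixed 256-symbol base vocabulary,
    so there are no out-of-vocabulary characters."""
    return [f"<{b:02X}>" for b in text.encode("utf-8")]

def from_byte_tokens(tokens):
    """Invert the mapping, dropping any byte sequence that is not valid UTF-8."""
    data = bytes(int(t[1:-1], 16) for t in tokens)
    return data.decode("utf-8", errors="ignore")

print(to_byte_tokens("译"))   # ['<E8>', '<AF>', '<91>'] -- three bytes, one character
```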

28. Bridging the Gap between Pre-Training and Fine-Tuning for End-to-End Speech Translation [PDF] 返回目录
  AAAI 2020. AAAI Technical Track: Natural Language Processing
  Chengyi Wang, Yu Wu, Shujie Liu, Zhenglu Yang, Ming Zhou
End-to-end speech translation, a hot topic in recent years, aims to translate a segment of audio into a specific language with an end-to-end model. Conventional approaches employ multi-task learning and pre-training methods for this task, but they suffer from the huge gap between pre-training and fine-tuning. To address these issues, we propose a Tandem Connectionist Encoding Network (TCEN) which bridges the gap by reusing all subnets in fine-tuning, keeping the roles of subnets consistent, and pre-training the attention module. Furthermore, we propose two simple but effective methods to guarantee the speech encoder outputs and the MT encoder inputs are consistent in terms of semantic representation and sequence length. Experimental results show that our model leads to significant improvements in En-De and En-Fr translation irrespective of the backbones.

29. Go From the General to the Particular: Multi-Domain Translation with Domain Transformation Networks [PDF] 返回目录
  AAAI 2020. AAAI Technical Track: Natural Language Processing
  Yong Wang, Longyue Wang, Shuming Shi, Victor O. K. Li, Zhaopeng Tu
The key challenge of multi-domain translation lies in simultaneously encoding both the general knowledge shared across domains and the particular knowledge distinctive to each domain in a unified model. Previous work shows that the standard neural machine translation (NMT) model, trained on mixed-domain data, generally captures the general knowledge but misses the domain-specific knowledge. In response to this problem, we augment the NMT model with additional domain transformation networks that transform the general representations into domain-specific representations, which are subsequently fed to the NMT decoder. To guarantee the knowledge transformation, we also propose two complementary supervision signals by leveraging the power of knowledge distillation and adversarial learning. Experimental results on several language pairs, covering both balanced and unbalanced multi-domain translation, demonstrate the effectiveness and universality of the proposed approach. Encouragingly, the proposed unified model achieves results comparable to the fine-tuning approach, which requires multiple models to preserve the particular knowledge. Further analyses reveal that the domain transformation networks successfully capture the domain-specific knowledge as expected.

30. Acquiring Knowledge from Pre-Trained Model to Neural Machine Translation [PDF] 返回目录
  AAAI 2020. AAAI Technical Track: Natural Language Processing
  Rongxiang Weng, Heng Yu, Shujian Huang, Shanbo Cheng, Weihua Luo
Pre-training and fine-tuning have achieved great success in the natural language processing field. The standard paradigm for exploiting them has two steps: first, pre-train a model, e.g. BERT, on large-scale unlabeled monolingual data; then, fine-tune the pre-trained model with labeled data from downstream tasks. However, in neural machine translation (NMT), the training objective of the bilingual task is far removed from that of the monolingual pre-trained model. This gap means that fine-tuning alone cannot fully utilize the prior language knowledge. In this paper, we propose an Apt framework for acquiring knowledge from pre-trained models for NMT. The proposed approach includes two modules: 1) a dynamic fusion mechanism that fuses task-specific features adapted from general knowledge into the NMT network, and 2) a knowledge distillation paradigm to learn language knowledge continuously during the NMT training process. The proposed approach integrates suitable knowledge from pre-trained models to improve NMT. Experimental results on WMT English-to-German, German-to-English and Chinese-to-English machine translation tasks show that our model outperforms strong baselines and the fine-tuning counterparts.

31. Towards Making the Most of BERT in Neural Machine Translation [PDF] 返回目录
  AAAI 2020. AAAI Technical Track: Natural Language Processing
  Jiacheng Yang, Mingxuan Wang, Hao Zhou, Chengqi Zhao, Weinan Zhang, Yong Yu, Lei Li
GPT-2 and BERT demonstrate the effectiveness of using pre-trained language models (LMs) on various natural language processing tasks. However, LM fine-tuning often suffers from catastrophic forgetting when applied to resource-rich tasks. In this work, we introduce a concerted training framework (CTnmt) that is key to integrating pre-trained LMs into neural machine translation (NMT). Our proposed CTnmt consists of three techniques: a) asymptotic distillation to ensure that the NMT model retains the previously pre-trained knowledge; b) a dynamic switching gate to avoid catastrophic forgetting of pre-trained knowledge; and c) a strategy to adjust the learning paces according to a scheduled policy. Our machine translation experiments show CTnmt gains of up to 3 BLEU points on the WMT14 English-German language pair, which even surpasses the previous state-of-the-art pre-training-aided NMT by 1.4 BLEU points. On the large WMT14 English-French task with 40 million sentence pairs, our base model still significantly improves upon the state-of-the-art Transformer-big model by more than 1 BLEU point.
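As a hedged sketch of what a "dynamic switching gate" between a pre-trained LM and the NMT encoder might look like (the exact parameterization in CTnmt may differ), an element-wise sigmoid gate can decide how much of each representation to pass through:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def switching_gate(h_lm, h_nmt, W_lm, W_nmt, b):
    """Element-wise gate deciding, per position and dimension, how much of the
    pre-trained LM representation to let through versus the NMT encoder state.

    h_lm, h_nmt: (seq_len, d_model) hidden states from a BERT-like LM / NMT encoder.
    W_lm, W_nmt: (d_model, d_model) projection matrices; b: (d_model,) bias.
    """
    g = sigmoid(h_lm @ W_lm + h_nmt @ W_nmt + b)
    return g * h_lm + (1.0 - g) * h_nmt
```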

32. Visual Agreement Regularized Training for Multi-Modal Machine Translation [PDF] 返回目录
  AAAI 2020. AAAI Technical Track: Natural Language Processing
  Pengcheng Yang, Boxing Chen, Pei Zhang, Xu Sun
Multi-modal machine translation aims at translating the source sentence into a different language in the presence of the paired image. Previous work suggests that additional visual information only provides dispensable help to translation, which is needed in several very special cases such as translating ambiguous words. To make better use of visual information, this work presents visual agreement regularized training. The proposed approach jointly trains the source-to-target and target-to-source translation models and encourages them to share the same focus on the visual information when generating semantically equivalent visual words (e.g. “ball” in English and “ballon” in French). Besides, a simple yet effective multi-head co-attention model is also introduced to capture interactions between visual and textual features. The results show that our approaches can outperform competitive baselines by a large margin on the Multi30k dataset. Further analysis demonstrates that the proposed regularized training can effectively improve the agreement of attention on the image, leading to better use of visual information.

33. Improving Context-Aware Neural Machine Translation Using Self-Attentive Sentence Embedding [PDF] 返回目录
  AAAI 2020. AAAI Technical Track: Natural Language Processing
  Hyeongu Yun, Yongkeun Hwang, Kyomin Jung
Fully Attentional Networks (FANs) like the Transformer (Vaswani et al. 2017) have shown superior results in Neural Machine Translation (NMT) and have become a solid baseline for translation tasks. More recent studies have also reported that additional contextual sentences improve the translation quality of NMT models (Voita et al. 2018; Müller et al. 2018; Zhang et al. 2018). However, those studies treat multiple context sentences as a single long concatenated sentence, which may cause the models to suffer from high computational complexity and long-range dependency problems. In this paper, we propose the Hierarchical Context Encoder (HCE), which exploits multiple context sentences separately using a hierarchical FAN structure. Our proposed encoder first abstracts sentence-level information from preceding sentences in a self-attentive way, and then hierarchically encodes context-level information. Through extensive experiments, we observe that our HCE records the best BLEU scores on English-German, English-Turkish, and English-Korean corpora. In addition, our HCE records the best performance on a crowd-sourced test set designed to evaluate how well an encoder can exploit contextual information. Finally, evaluation on an English-Korean pronoun resolution test suite also shows that our HCE properly exploits contextual information.

34. Reinforced Curriculum Learning on Pre-Trained Neural Machine Translation Models [PDF] 返回目录
  AAAI 2020. AAAI Technical Track: Natural Language Processing
  Mingjun Zhao, Haijiang Wu, Di Niu, Xiaoli Wang
The competitive performance of neural machine translation (NMT) critically relies on large amounts of training data. However, acquiring high-quality translation pairs requires expert knowledge and is costly. Therefore, how to best utilize a given dataset of samples with diverse quality and characteristics becomes an important yet understudied question in NMT. Curriculum learning methods have been introduced to NMT to optimize a model's performance by prescribing the data input order, based on heuristics such as the assessment of noise and difficulty levels. However, existing methods require training from scratch, while in practice most NMT models are pre-trained on big data already. Moreover, as heuristics, they do not generalize well. In this paper, we aim to learn a curriculum for improving a pre-trained NMT model by re-selecting influential data samples from the original training set and formulate this task as a reinforcement learning problem. Specifically, we propose a data selection framework based on Deterministic Actor-Critic, in which a critic network predicts the expected change of model performance due to a certain sample, while an actor network learns to select the best sample out of a random batch of samples presented to it. Experiments on several translation datasets show that our method can further improve the performance of NMT when original batch training reaches its ceiling, without using additional new training data, and significantly outperforms several strong baseline methods.

35. Balancing Quality and Human Involvement: An Effective Approach to Interactive Neural Machine Translation [PDF] 返回目录
  AAAI 2020. AAAI Technical Track: Natural Language Processing
  Tianxiang Zhao, Lemao Liu, Guoping Huang, Huayang Li, Yingling Liu, Guiquan Liu, Shuming Shi
Conventional interactive machine translation typically requires a human translator to validate every generated target word, even though most of them are correct in the advanced neural machine translation (NMT) scenario. Previous studies have exploited confidence-based approaches to address the intensive human involvement issue, requesting human guidance only for a small number of words with low confidence. However, such approaches do not take the history of human involvement into account, and they optimize the models only for translation quality while ignoring the cost of human involvement. In response to these pitfalls, we propose a novel interactive NMT model that explicitly accounts for the history of human involvement and is optimized towards two objectives corresponding to translation quality and the cost of human involvement, respectively. Specifically, the model jointly predicts a target word and a decision on whether to request human guidance, based on both the partial translation and the history of human involvement. Since there are no explicit signals on when to request human guidance in the bilingual corpus, we optimize the model with reinforcement learning, which enables it to accurately predict when to request human guidance. Simulated and real experiments show that the proposed model achieves higher translation quality with similar or less human involvement than the confidence-based baseline.

36. Semi-Supervised Text Simplification with Back-Translation and Asymmetric Denoising Autoencoders [PDF] 返回目录
  AAAI 2020. AAAI Technical Track: Natural Language Processing
  Yanbin Zhao, Lu Chen, Zhi Chen, Kai Yu
Text simplification (TS) rephrases long sentences into simplified variants while preserving the inherent semantics. Traditional sequence-to-sequence models rely heavily on the quantity and quality of parallel sentences, which limits their applicability across languages and domains. This work investigates how to leverage large amounts of unpaired corpora for the TS task. We adopt the back-translation architecture from unsupervised neural machine translation, including denoising autoencoders for language modeling and automatic generation of parallel data by iterative back-translation. However, it is non-trivial to generate appropriate complex-simple pairs if we directly treat the simple and complex corpora as two different languages, since the two types of sentences are quite similar and it is hard for the model to capture their distinguishing characteristics. To tackle this problem, we propose asymmetric denoising methods for sentences of different complexity. When modeling simple and complex sentences with autoencoders, we introduce different types of noise into the training process. This method significantly improves simplification performance. Our model can be trained in both an unsupervised and a semi-supervised manner. Automatic and human evaluations show that our unsupervised model outperforms previous systems, and with limited supervision, our model performs competitively with multiple state-of-the-art simplification systems.
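A minimal sketch of asymmetric denoising, assuming for illustration that the two autoencoders simply use corruption of different strength (the specific noise types in the paper may differ):

```python
import random

def drop_and_shuffle(tokens, p_drop, max_shuffle):
    """Denoising-autoencoder style corruption: randomly drop tokens and
    locally shuffle the rest within a small window."""
    kept = [t for t in tokens if random.random() > p_drop] or tokens[:1]
    keys = [i + random.uniform(0, max_shuffle) for i in range(len(kept))]
    return [t for _, t in sorted(zip(keys, kept))]

def add_noise(tokens, is_complex):
    # Asymmetric setting (illustrative values): complex sentences receive
    # heavier corruption than simple ones, so the two autoencoders learn
    # complexity-specific reconstruction behaviour.
    if is_complex:
        return drop_and_shuffle(tokens, p_drop=0.2, max_shuffle=3.0)
    return drop_and_shuffle(tokens, p_drop=0.05, max_shuffle=1.0)
```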

37. Multimodal Structure-Consistent Image-to-Image Translation [PDF] 返回目录
  AAAI 2020. AAAI Technical Track: Vision
  Che-Tsung Lin, Yen-Yi Wu, Po-Hao Hsu, Shang-Hong Lai
Unpaired image-to-image translation is proven quite effective in boosting a CNN-based object detector for a different domain by means of data augmentation that can well preserve the image-objects in the translated images. Recently, multimodal GAN (Generative Adversarial Network) models have been proposed and were expected to further boost the detector accuracy by generating a diverse collection of images in the target domain, given only a single/labelled image in the source domain. However, images generated by multimodal GANs would achieve even worse detection accuracy than the ones by a unimodal GAN with better object preservation. In this work, we introduce cycle-structure consistency for generating diverse and structure-preserved translated images across complex domains, such as between day and night, for object detector training. Qualitative results show that our model, Multimodal AugGAN, can generate diverse and realistic images for the target domain. For quantitative comparisons, we evaluate other competing methods and ours by using the generated images to train YOLO, Faster R-CNN and FCN models and prove that our model achieves significant improvement and outperforms other methods on the detection accuracies and the FCN scores. Also, we demonstrate that our model could provide more diverse object appearances in the target domain through comparison on the perceptual distance metric.

38. Learning to Transfer: Unsupervised Domain Translation via Meta-Learning [PDF] 返回目录
  AAAI 2020. AAAI Technical Track: Vision
  Jianxin Lin, Yijun Wang, Zhibo Chen, Tianyu He
Unsupervised domain translation has recently achieved impressive performance with Generative Adversarial Networks (GANs) and sufficient (unpaired) training data. However, existing domain translation frameworks are built in a disposable way: past learning experience is ignored and the obtained model cannot be adapted to a newly arriving domain. In this work, we approach unsupervised domain translation from a meta-learning perspective. We propose a model called Meta-Translation GAN (MT-GAN) to find a good initialization for translation models. In the meta-training procedure, MT-GAN is explicitly trained with a primary translation task and a synthesized dual translation task. A cycle-consistency meta-optimization objective is designed to ensure generalization ability. We demonstrate the effectiveness of our model on ten diverse two-domain translation tasks and multiple face identity translation tasks. We show that our proposed approach significantly outperforms existing domain translation methods when each domain contains no more than ten training samples.

39. Content Word Aware Neural Machine Translation [PDF] 返回目录
  ACL 2020.
  Kehai Chen, Rui Wang, Masao Utiyama, Eiichiro Sumita
Neural machine translation (NMT) encodes the source sentence in a universal way to generate the target sentence word by word. However, NMT does not consider the importance of each word to the sentence meaning; for example, some words (i.e., content words) express more important meaning than others (i.e., function words). To address this limitation, we first utilize word frequency information to distinguish between content and function words in a sentence, and then design a content-word-aware NMT model to improve translation performance. Empirical results on the WMT14 English-to-German, WMT14 English-to-French, and WMT17 Chinese-to-English translation tasks show that the proposed methods significantly improve the performance of Transformer-based NMT.
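A minimal sketch of the frequency-based distinction, assuming the most frequent word types are treated as function words and the remainder as content words (the threshold is illustrative):

```python
from collections import Counter

def split_content_function(corpus, function_ratio=0.001):
    """Treat the most frequent word types as function words and the rest as
    content words, using corpus frequency as the only signal (a simplification
    of the frequency-based distinction described above).

    corpus: iterable of tokenized sentences (lists of words)."""
    counts = Counter(w for sent in corpus for w in sent)
    vocab = [w for w, _ in counts.most_common()]
    cutoff = max(1, int(len(vocab) * function_ratio))
    function_words = set(vocab[:cutoff])
    content_words = set(vocab[cutoff:])
    return content_words, function_words
```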

40. Evaluating Explanation Methods for Neural Machine Translation [PDF] 返回目录
  ACL 2020.
  Jierui Li, Lemao Liu, Huayang Li, Guanlin Li, Guoping Huang, Shuming Shi
Recently, many efforts have been devoted to interpreting black-box NMT models, but little progress has been made on metrics to evaluate explanation methods. Word Alignment Error Rate can be used as such a metric that matches human understanding; however, it cannot measure explanation methods on target words that are not aligned to any source word. This paper therefore makes an initial attempt to evaluate explanation methods from an alternative viewpoint. To this end, it proposes a principled metric based on fidelity with regard to the predictive behavior of the NMT model. As the exact computation of this metric is intractable, we employ an efficient approach as its approximation. On six standard translation tasks, we quantitatively evaluate several explanation methods in terms of the proposed metric and reveal some valuable findings about these explanation methods in our experiments.

41. Jointly Masked Sequence-to-Sequence Model for Non-Autoregressive Neural Machine Translation [PDF] 返回目录
  ACL 2020.
  Junliang Guo, Linli Xu, Enhong Chen
The masked language model has received remarkable attention due to its effectiveness on various natural language processing tasks. However, few works have adopted this technique in sequence-to-sequence models. In this work, we introduce a jointly masked sequence-to-sequence model and explore its application to non-autoregressive neural machine translation (NAT). Specifically, we first empirically study the functionalities of the encoder and the decoder in NAT models, and find that the encoder plays a more important role than the decoder in translation quality. Therefore, we propose to train the encoder more rigorously by masking the encoder input during training. As for the decoder, we propose to train it with consecutive masking of the decoder input and an n-gram loss function to alleviate the problem of translating duplicate words. The two types of masks are applied to the model jointly at the training stage. We conduct experiments on five benchmark machine translation tasks, and our model achieves 27.69/32.24 BLEU on the WMT14 English-German/German-English tasks with a 5+ times speedup over an autoregressive model.
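The two mask types can be sketched as simple data-preparation functions: random masking of encoder input tokens and consecutive span masking of the decoder input. The mask probability and span ratio below are illustrative assumptions, not the paper's settings.

```python
import random

MASK = "<mask>"

def mask_encoder_input(src_tokens, p=0.15):
    """Randomly replace source tokens with a mask symbol."""
    return [MASK if random.random() < p else t for t in src_tokens]

def mask_decoder_input(tgt_tokens, span_ratio=0.3):
    """Mask a consecutive span of the decoder input; the model is trained to
    recover the span, which discourages translating the same word twice."""
    span_len = max(1, int(len(tgt_tokens) * span_ratio))
    start = random.randint(0, max(0, len(tgt_tokens) - span_len))
    out = list(tgt_tokens)
    out[start:start + span_len] = [MASK] * span_len
    return out
```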

42. Learning Source Phrase Representations for Neural Machine Translation [PDF] 返回目录
  ACL 2020.
  Hongfei Xu, Josef van Genabith, Deyi Xiong, Qiuhui Liu, Jingyi Zhang
The Transformer translation model (Vaswani et al., 2017) based on a multi-head attention mechanism can be computed effectively in parallel and has significantly pushed forward the performance of Neural Machine Translation (NMT). Though intuitively the attentional network can connect distant words via shorter network paths than RNNs, empirical analysis demonstrates that it still has difficulty in fully capturing long-distance dependencies (Tang et al., 2018). Considering that modeling phrases instead of words has significantly improved the Statistical Machine Translation (SMT) approach through the use of larger translation blocks (“phrases”) and its reordering ability, modeling NMT at phrase level is an intuitive proposal to help the model capture long-distance relationships. In this paper, we first propose an attentive phrase representation generation mechanism which is able to generate phrase representations from corresponding token representations. In addition, we incorporate the generated phrase representations into the Transformer translation model to enhance its ability to capture long-distance relationships. In our experiments, we obtain significant improvements on the WMT 14 English-German and English-French tasks on top of the strong Transformer baseline, which shows the effectiveness of our approach. Our approach helps Transformer Base models perform at the level of Transformer Big models, and even significantly better for long sentences, but with substantially fewer parameters and training steps. The fact that phrase representations help even in the big setting further supports our conjecture that they make a valuable contribution to long-distance relations.

43. Multiscale Collaborative Deep Models for Neural Machine Translation [PDF] 返回目录
  ACL 2020.
  Xiangpeng Wei, Heng Yu, Yue Hu, Yue Zhang, Rongxiang Weng, Weihua Luo
Recent evidence reveals that Neural Machine Translation (NMT) models with deeper neural networks can be more effective but are difficult to train. In this paper, we present a MultiScale Collaborative (MSC) framework to ease the training of NMT models that are substantially deeper than those used previously. We explicitly boost gradient back-propagation from top to bottom levels by introducing a block-scale collaboration mechanism into deep NMT models. Then, instead of forcing the whole encoder stack to directly learn a desired representation, we let each encoder block learn a fine-grained representation and enhance it by encoding spatial dependencies using context-scale collaboration. We provide empirical evidence showing that MSC nets are easy to optimize and can obtain translation-quality improvements from considerably increased depth. On IWSLT translation tasks with three translation directions, our extremely deep models (with 72-layer encoders) surpass strong baselines by +2.2 to +3.1 BLEU points. In addition, our deep MSC achieves a BLEU score of 30.56 on the WMT14 English-to-German task, significantly outperforming state-of-the-art deep NMT models. We have included the source code in the supplementary materials.

44. Norm-Based Curriculum Learning for Neural Machine Translation [PDF] 返回目录
  ACL 2020.
  Xuebo Liu, Houtim Lai, Derek F. Wong, Lidia S. Chao
A neural machine translation (NMT) system is expensive to train, especially in high-resource settings, and the issue worsens as NMT architectures become deeper and wider. In this paper, we aim to improve the efficiency of training NMT by introducing a novel norm-based curriculum learning method. We use the norm (i.e., the length or modulus) of a word embedding as a measure of 1) the difficulty of the sentence, 2) the competence of the model, and 3) the weight of the sentence. The norm-based sentence difficulty combines the advantages of both linguistically motivated and model-based sentence difficulties: it is easy to determine and contains learning-dependent features. The norm-based model competence lets NMT learn the curriculum in a fully automated way, while the norm-based sentence weight further enhances the learning of the NMT model's vector representations. Experimental results on the WMT'14 English-German and WMT'17 Chinese-English translation tasks demonstrate that the proposed method outperforms strong baselines in terms of BLEU (+1.17/+1.56) and training speedup (2.22x/3.33x).
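A minimal sketch of the norm-based difficulty measure, assuming sentence difficulty is aggregated from per-word embedding norms and that easier (lower-norm) sentences are presented first:

```python
import numpy as np

def sentence_difficulty(sentence, embeddings):
    """Norm-based difficulty: average L2 norm of the word vectors in the
    sentence (one simple aggregation; the paper also derives model competence
    and sentence weights from the same quantity)."""
    norms = [np.linalg.norm(embeddings[w]) for w in sentence if w in embeddings]
    return float(np.mean(norms)) if norms else 0.0

def order_curriculum(corpus, embeddings):
    # Present easier (lower-norm) sentences first, harder ones later.
    return sorted(corpus, key=lambda s: sentence_difficulty(s, embeddings))
```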

45. Opportunistic Decoding with Timely Correction for Simultaneous Translation [PDF] 返回目录
  ACL 2020.
  Renjie Zheng, Mingbo Ma, Baigong Zheng, Kaibo Liu, Liang Huang
Simultaneous translation has many important application scenarios and has recently attracted much attention from both academia and industry. Most existing frameworks, however, have difficulty balancing translation quality and latency, i.e., the decoding policy is usually either too aggressive or too conservative. We propose an opportunistic decoding technique with timely correction ability, which always (over-)generates a certain amount of extra words at each step to keep the audience on track with the latest information. At the same time, it corrects, in a timely fashion, mistakes in the previously overgenerated words when observing more source context, to ensure high translation quality. Experiments show our technique achieves a substantial reduction in latency and up to a +3.1 increase in BLEU, with a revision rate under 8%, on Chinese-to-English and English-to-Chinese translation.

46. Multi-Hypothesis Machine Translation Evaluation [PDF] 返回目录
  ACL 2020.
  Marina Fomicheva, Lucia Specia, Francisco Guzmán
Reliably evaluating Machine Translation (MT) through automated metrics is a long-standing problem. One of the main challenges is the fact that multiple outputs can be equally valid. Attempts to minimise this issue include metrics that relax the matching of MT output and reference strings, and the use of multiple references. The latter has been shown to significantly improve the performance of evaluation metrics. However, collecting multiple references is expensive, and in practice a single reference is generally used. In this paper, we propose an alternative approach: instead of modelling linguistic variation in human references, we exploit the MT model's uncertainty to generate multiple diverse translations and use these (i) as surrogates for reference translations, (ii) to obtain a quantification of translation variability that complements existing metric scores, or (iii) to replace references altogether. We show that for a number of popular evaluation metrics, our variability estimates lead to substantial improvements in correlation with human judgements of quality, by up to 15%.

47. Multimodal Quality Estimation for Machine Translation [PDF] 返回目录
  ACL 2020.
  Shu Okabe, Frédéric Blain, Lucia Specia
We propose approaches to Quality Estimation (QE) for Machine Translation that explore both text and visual modalities for Multimodal QE. We compare various multimodality integration and fusion strategies. For both sentence-level and document-level predictions, we show that state-of-the-art neural and feature-based QE frameworks obtain better results when using the additional modality.

48. Bilingual Dictionary Based Neural Machine Translation without Using Parallel Sentences [PDF] 返回目录
  ACL 2020.
  Xiangyu Duan, Baijun Ji, Hao Jia, Min Tan, Min Zhang, Boxing Chen, Weihua Luo, Yue Zhang
In this paper, we propose a new machine translation (MT) task that uses no parallel sentences but can refer to a ground-truth bilingual dictionary. Motivated by a monolingual speaker's ability to learn to translate by looking up a bilingual dictionary, we propose this task to see how much potential an MT system can attain using a bilingual dictionary and large-scale monolingual corpora while remaining independent of parallel sentences. We propose anchored training (AT) to tackle the task. AT uses the bilingual dictionary to establish anchoring points that close the gap between the source and target languages. Experiments on various language pairs show that our approaches are significantly better than various baselines, including dictionary-based word-by-word translation, dictionary-supervised cross-lingual word embedding transformation, and unsupervised MT. On distant language pairs where unsupervised MT struggles, AT performs remarkably better, achieving performance comparable to supervised SMT trained on more than 4M parallel sentences.

49. Boosting Neural Machine Translation with Similar Translations [PDF] 返回目录
  ACL 2020.
  Jitao Xu, Josep Crego, Jean Senellart
This paper explores data augmentation methods for training Neural Machine Translation to make use of similar translations, in a way comparable to how a human translator employs fuzzy matches. In particular, we show how we can simply present the neural model with information from both the source and target sides of the fuzzy matches, and we also extend the notion of similarity to include semantically related translations retrieved using distributed sentence representations. We show that translations based on fuzzy matching provide the model with “copy” information, while translations based on embedding similarities tend to extend the translation “context”. Results indicate that the effects of both kinds of similar sentences add up to further boost accuracy, combine naturally with model fine-tuning, and provide dynamic adaptation for unseen translation pairs. Tests on multiple data sets and domains show consistent accuracy improvements. To foster research around these techniques, we also release an open-source toolkit with an efficient and flexible fuzzy-match implementation.

50. Character-Level Translation with Self-attention [PDF] 返回目录
  ACL 2020.
  Yingqiang Gao, Nikola I. Nikolov, Yuhuang Hu, Richard H.R. Hahnloser
We explore the suitability of self-attention models for character-level neural machine translation. We test the standard transformer model, as well as a novel variant in which the encoder block combines information from nearby characters using convolutions. We perform extensive experiments on WMT and UN datasets, testing both bilingual and multilingual translation to English using up to three input languages (French, Spanish, and Chinese). Our transformer variant consistently outperforms the standard transformer at the character-level and converges faster while learning more robust character-level alignments.

51. Enhancing Machine Translation with Dependency-Aware Self-Attention [PDF] 返回目录
  ACL 2020.
  Emanuele Bugliarello, Naoaki Okazaki
Most neural machine translation models only rely on pairs of parallel sentences, assuming syntactic information is automatically learned by an attention mechanism. In this work, we investigate different approaches to incorporate syntactic knowledge in the Transformer model and also propose a novel, parameter-free, dependency-aware self-attention mechanism that improves its translation quality, especially for long sentences and in low-resource scenarios. We show the efficacy of each approach on WMT English-German and English-Turkish, and WAT English-Japanese translation tasks.

52. Improving Massively Multilingual Neural Machine Translation and Zero-Shot Translation [PDF] 返回目录
  ACL 2020.
  Biao Zhang, Philip Williams, Ivan Titov, Rico Sennrich
Massively multilingual models for neural machine translation (NMT) are theoretically attractive, but often underperform bilingual models and deliver poor zero-shot translations. In this paper, we explore ways to improve them. We argue that multilingual NMT requires stronger modeling capacity to support language pairs with varying typological characteristics, and overcome this bottleneck via language-specific components and deepening NMT architectures. We identify the off-target translation issue (i.e. translating into a wrong target language) as the major source of the inferior zero-shot performance, and propose random online backtranslation to enforce the translation of unseen training language pairs. Experiments on OPUS-100 (a novel multilingual dataset with 100 languages) show that our approach substantially narrows the performance gap with bilingual models in both one-to-many and many-to-many settings, and improves zero-shot performance by ~10 BLEU, approaching conventional pivot-based methods.

53. It’s Easier to Translate out of English than into it: Measuring Neural Translation Difficulty by Cross-Mutual Information [PDF] 返回目录
  ACL 2020.
  Emanuele Bugliarello, Sabrina J. Mielke, Antonios Anastasopoulos, Ryan Cotterell, Naoaki Okazaki
The performance of neural machine translation systems is commonly evaluated in terms of BLEU. However, due to its reliance on target language properties and generation, the BLEU metric does not allow an assessment of which translation directions are more difficult to model. In this paper, we propose cross-mutual information (XMI): an asymmetric information-theoretic metric of machine translation difficulty that exploits the probabilistic nature of most neural machine translation models. XMI allows us to better evaluate the difficulty of translating text into the target language while controlling for the difficulty of the target-side generation component independent of the translation task. We then present the first systematic and controlled study of cross-lingual translation difficulties using modern neural translation systems. Code for replicating our experiments is available online at https://github.com/e-bug/nmt-difficulty.
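Concretely, XMI can be computed from two cross-entropies: the target text scored by an unconditional target-side language model and the same text scored by the MT model given the source. The sketch below assumes per-token natural-log probabilities are already available from both models:

```python
import math

def cross_entropy_bits(log_probs):
    """Average negative log-probability per token, converted from nats to bits."""
    return -sum(log_probs) / (len(log_probs) * math.log(2))

def xmi(lm_log_probs, mt_log_probs):
    """Cross-mutual information of a translation direction, following the
    description above: how many bits per token the source saves the MT model
    relative to an unconditional target-side language model. Higher values
    mean the source is more helpful, i.e. the direction is easier to model."""
    return cross_entropy_bits(lm_log_probs) - cross_entropy_bits(mt_log_probs)
```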

54. Language-aware Interlingua for Multilingual Neural Machine Translation [PDF] 返回目录
  ACL 2020.
  Changfeng Zhu, Heng Yu, Shanbo Cheng, Weihua Luo
Multilingual neural machine translation (NMT) has led to impressive accuracy improvements in low-resource scenarios by sharing common linguistic information across languages. However, the traditional multilingual model fails to capture the diversity and specificity of different languages, resulting in inferior performance compared with individual models that are sufficiently trained. In this paper, we incorporate a language-aware interlingua into the Encoder-Decoder architecture. The interlingual network enables the model to learn a language-independent representation from the semantic spaces of different languages, while still allowing for language-specific specialization of a particular language-pair. Experiments show that our proposed method achieves remarkable improvements over state-of-the-art multilingual NMT baselines and produces comparable performance with strong individual models.

55. On the Limitations of Cross-lingual Encoders as Exposed by Reference-Free Machine Translation Evaluation [PDF] 返回目录
  ACL 2020.
  Wei Zhao, Goran Glavaš, Maxime Peyrard, Yang Gao, Robert West, Steffen Eger
Evaluation of cross-lingual encoders is usually performed either via zero-shot cross-lingual transfer in supervised downstream tasks or via unsupervised cross-lingual textual similarity. In this paper, we concern ourselves with reference-free machine translation (MT) evaluation where we directly compare source texts to (sometimes low-quality) system translations, which represents a natural adversarial setup for multilingual encoders. Reference-free evaluation holds the promise of web-scale comparison of MT systems. We systematically investigate a range of metrics based on state-of-the-art cross-lingual semantic representations obtained with pretrained M-BERT and LASER. We find that they perform poorly as semantic encoders for reference-free MT evaluation and identify their two key limitations, namely, (a) a semantic mismatch between representations of mutual translations and, more prominently, (b) the inability to punish “translationese”, i.e., low-quality literal translations. We propose two partial remedies: (1) post-hoc re-alignment of the vector spaces and (2) coupling of semantic-similarity based metrics with target-side language modeling. In segment-level MT evaluation, our best metric surpasses reference-based BLEU by 5.7 correlation points.

56. “You Sound Just Like Your Father” Commercial Machine Translation Systems Include Stylistic Biases [PDF] 返回目录
  ACL 2020.
  Dirk Hovy, Federico Bianchi, Tommaso Fornaciari
The main goal of machine translation has been to convey the correct content; stylistic considerations have been at best secondary. We show that, as a consequence, the outputs of three commercial machine translation systems (Bing, DeepL, Google) make demographically diverse samples from five languages “sound” older and more male than the original. Our findings suggest that translation models reflect demographic bias in the training data. This opens up interesting new research avenues in machine translation to take stylistic considerations into account.

57. MMPE: A Multi-Modal Interface for Post-Editing Machine Translation [PDF] 返回目录
  ACL 2020.
  Nico Herbig, Tim Düwel, Santanu Pal, Kalliopi Meladaki, Mahsa Monshizadeh, Antonio Krüger, Josef van Genabith
Current advances in machine translation (MT) increase the need for translators to switch from traditional translation to post-editing (PE) of machine-translated text, a process that saves time and reduces errors. This affects the design of translation interfaces, as the task changes from mainly generating text to correcting errors within otherwise helpful translation proposals. Since this paradigm shift offers potential for modalities other than mouse and keyboard, we present MMPE, the first prototype to combine traditional input modes with pen, touch, and speech modalities for PE of MT. The results of an evaluation with professional translators suggest that pen and touch interaction are suitable for deletion and reordering tasks, while they are of limited use for longer insertions. On the other hand, speech and multi-modal combinations of select & speech are considered suitable for replacements and insertions but offer less potential for deletion and reordering. Overall, participants were enthusiastic about the new modalities and saw them as good extensions to mouse & keyboard, but not as a complete substitute.

58. Multi-Domain Neural Machine Translation with Word-Level Adaptive Layer-wise Domain Mixing [PDF] 返回目录
  ACL 2020.
  Haoming Jiang, Chen Liang, Chong Wang, Tuo Zhao
Many multi-domain neural machine translation (NMT) models achieve knowledge transfer by enforcing one encoder to learn shared embedding across domains. However, this design lacks adaptation to individual domains. To overcome this limitation, we propose a novel multi-domain NMT model using individual modules for each domain, on which we apply word-level, adaptive and layer-wise domain mixing. We first observe that words in a sentence are often related to multiple domains. Hence, we assume each word has a domain proportion, which indicates its domain preference. Then word representations are obtained by mixing their embedding in individual domains based on their domain proportions. We show this can be achieved by carefully designing multi-head dot-product attention modules for different domains, and eventually taking weighted averages of their parameters by word-level layer-wise domain proportions. Through this, we can achieve effective domain knowledge sharing and capture fine-grained domain-specific knowledge as well. Our experiments show that our proposed model outperforms existing ones in several NMT tasks.
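A minimal sketch of word-level domain mixing, assuming each word has a per-domain proportion and a separate embedding table per domain (the data structures are illustrative, not the paper's implementation):

```python
import numpy as np

def mixed_embedding(word, domain_embeddings, domain_proportions, dim):
    """domain_embeddings: {domain: {word: np.ndarray of shape (dim,)}}
    domain_proportions:  {domain: {word: weight}}, weights summing to 1 per word.
    The representation of a word is the proportion-weighted mixture of its
    per-domain embeddings, so domain-ambiguous words draw on several domains."""
    vec = np.zeros(dim)
    for domain, table in domain_embeddings.items():
        weight = domain_proportions.get(domain, {}).get(word, 0.0)
        if weight > 0.0 and word in table:
            vec += weight * table[word]
    return vec
```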

59. Improving Non-autoregressive Neural Machine Translation with Monolingual Data [PDF] 返回目录
  ACL 2020.
  Jiawei Zhou, Phillip Keung
Non-autoregressive (NAR) neural machine translation is usually done via knowledge distillation from an autoregressive (AR) model. Under this framework, we leverage large monolingual corpora to improve the NAR model’s performance, with the goal of transferring the AR model’s generalization ability while preventing overfitting. On top of a strong NAR baseline, our experimental results on the WMT14 En-De and WMT16 En-Ro news translation tasks confirm that monolingual data augmentation consistently improves the performance of the NAR model to approach the teacher AR model’s performance, yields comparable or better results than the best non-iterative NAR methods in the literature and helps reduce overfitting in the training process.

60. Phone Features Improve Speech Translation [PDF] 返回目录
  ACL 2020.
  Elizabeth Salesky, Alan W Black
End-to-end models for speech translation (ST) more tightly couple speech recognition (ASR) and machine translation (MT) than a traditional cascade of separate ASR and MT models, with simpler model architectures and the potential for reduced error propagation. Their performance is often assumed to be superior, though in many conditions this is not yet the case. We compare cascaded and end-to-end models across high, medium, and low-resource conditions, and show that cascades remain stronger baselines. Further, we introduce two methods to incorporate phone features into ST models. We show that these features improve both architectures, closing the gap between end-to-end models and cascades, and outperforming previous academic work – by up to 9 BLEU on our low-resource setting.

61. ENGINE: Energy-Based Inference Networks for Non-Autoregressive Machine Translation [PDF] 返回目录
  ACL 2020.
  Lifu Tu, Richard Yuanzhe Pang, Sam Wiseman, Kevin Gimpel
We propose to train a non-autoregressive machine translation model to minimize the energy defined by a pretrained autoregressive model. In particular, we view our non-autoregressive translation system as an inference network (Tu and Gimpel, 2018) trained to minimize the autoregressive teacher energy. This contrasts with the popular approach of training a non-autoregressive model on a distilled corpus consisting of the beam-searched outputs of such a teacher model. Our approach, which we call ENGINE (ENerGy-based Inference NEtworks), achieves state-of-the-art non-autoregressive results on the IWSLT 2014 DE-EN and WMT 2016 RO-EN datasets, approaching the performance of autoregressive models.

62. Leveraging Monolingual Data with Self-Supervision for Multilingual Neural Machine Translation [PDF] 返回目录
  ACL 2020.
  Aditya Siddhant, Ankur Bapna, Yuan Cao, Orhan Firat, Mia Chen, Sneha Kudugunta, Naveen Arivazhagan, Yonghui Wu
Over the last few years two promising research directions in low-resource neural machine translation (NMT) have emerged. The first focuses on utilizing high-resource languages to improve the quality of low-resource languages via multilingual NMT. The second direction employs monolingual data with self-supervision to pre-train translation models, followed by fine-tuning on small amounts of supervised data. In this work, we join these two lines of research and demonstrate the efficacy of monolingual data with self-supervision in multilingual NMT. We offer three major results: (i) Using monolingual data significantly boosts the translation quality of low-resource languages in multilingual models. (ii) Self-supervision improves zero-shot translation quality in multilingual models. (iii) Leveraging monolingual data with self-supervision provides a viable path towards adding new languages to multilingual models, getting up to 33 BLEU on ro-en translation without any parallel data or back-translation.

63. On The Evaluation of Machine Translation Systems Trained With Back-Translation [PDF] 返回目录
  ACL 2020.
  Sergey Edunov, Myle Ott, Marc’Aurelio Ranzato, Michael Auli
Back-translation is a widely used data augmentation technique which leverages target monolingual data. However, its effectiveness has been challenged since automatic metrics such as BLEU only show significant improvements for test examples where the source itself is a translation, or translationese. This is believed to be due to translationese inputs better matching the back-translated training data. In this work, we show that this conjecture is not empirically supported and that back-translation improves translation quality of both naturally occurring text as well as translationese according to professional human translators. We provide empirical evidence to support the view that back-translation is preferred by humans because it produces more fluent outputs. BLEU cannot capture human preferences because references are translationese when source sentences are natural text. We recommend complementing BLEU with a language model score to measure fluency.

64. Simultaneous Translation Policies: From Fixed to Adaptive [PDF] 返回目录
  ACL 2020.
  Baigong Zheng, Kaibo Liu, Renjie Zheng, Mingbo Ma, Hairong Liu, Liang Huang
Adaptive policies are better than fixed policies for simultaneous translation, since they can flexibly balance the tradeoff between translation quality and latency based on the current context information. But previous methods for obtaining adaptive policies either rely on a complicated training process or underperform simple fixed policies. We design an algorithm to achieve adaptive policies via a simple heuristic composition of a set of fixed policies. Experiments on Chinese -> English and German -> English show that our adaptive policies outperform fixed ones by up to 4 BLEU points at the same latency, and, more surprisingly, even surpass the BLEU score of full-sentence translation in greedy mode (and come very close to beam mode), but with much lower latency.
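As an illustration of composing fixed policies into an adaptive one, the sketch below uses a simple confidence-threshold rule over a set of wait-k models; the concrete heuristic in the paper may differ, and `models[k].next_token(...)` is a hypothetical interface.

```python
def adaptive_policy(source_stream, models, threshold=0.6, k_min=1, k_max=9):
    """Compose fixed wait-k policies into an adaptive one (illustrative rule):
    at each step, consult the model for the current lag k; if it is confident
    enough, WRITE a target token (moving toward a smaller lag), otherwise READ
    another source token (moving toward a larger lag)."""
    src, tgt, k = [], [], k_max
    for token in source_stream:
        src.append(token)                       # READ
        while len(src) - len(tgt) >= k:
            word, prob = models[k].next_token(src, tgt)
            if prob < threshold and k < k_max:
                k += 1                          # not confident: wait for more source
                break
            tgt.append(word)                    # WRITE
            k = max(k_min, k - 1)
    for _ in range(2 * len(src)):               # flush the rest, with a length guard
        word, _ = models[k_min].next_token(src, tgt)
        if word == "</s>":
            break
        tgt.append(word)
    return tgt
```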

65. A Novel Graph-based Multi-modal Fusion Encoder for Neural Machine Translation [PDF] 返回目录
  ACL 2020.
  Yongjing Yin, Fandong Meng, Jinsong Su, Chulun Zhou, Zhengyuan Yang, Jie Zhou, Jiebo Luo
Multi-modal neural machine translation (NMT) aims to translate source sentences into a target language paired with images. However, dominant multi-modal NMT models do not fully exploit fine-grained semantic correspondences between semantic units of different modalities, which have potential to refine multi-modal representation learning. To deal with this issue, in this paper, we propose a novel graph-based multi-modal fusion encoder for NMT. Specifically, we first represent the input sentence and image using a unified multi-modal graph, which captures various semantic relationships between multi-modal semantic units (words and visual objects). We then stack multiple graph-based multi-modal fusion layers that iteratively perform semantic interactions to learn node representations. Finally, these representations provide an attention-based context vector for the decoder. We evaluate our proposed encoder on the Multi30K datasets. Experimental results and in-depth analysis show the superiority of our multi-modal NMT model.

66. Dynamic Programming Encoding for Subword Segmentation in Neural Machine Translation [PDF] 返回目录
  ACL 2020.
  Xuanli He, Gholamreza Haffari, Mohammad Norouzi
This paper introduces Dynamic Programming Encoding (DPE), a new segmentation algorithm for tokenizing sentences into subword units. We view the subword segmentation of output sentences as a latent variable that should be marginalized out for learning and inference. A mixed character-subword transformer is proposed, which enables exact log marginal likelihood estimation and exact MAP inference to find target segmentations with maximum posterior probability. DPE uses a lightweight mixed character-subword transformer as a means of pre-processing parallel data to segment output sentences using dynamic programming. Empirical results on machine translation suggest that DPE is effective for segmenting output sentences and can be combined with BPE dropout for stochastic segmentation of source sentences. DPE achieves an average improvement of 0.9 BLEU over BPE (Sennrich et al., 2016) and an average improvement of 0.55 BLEU over BPE dropout (Provilkov et al., 2019) on several WMT datasets including English <=> (German, Romanian, Estonian, Finnish, Hungarian).
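The MAP inference step can be pictured as a Viterbi-style dynamic program over all subword segmentations. In DPE the per-subword scores come from the mixed character-subword transformer; in the sketch below a plain dictionary of log-probabilities stands in for that model, so only the dynamic program itself is faithful to the description above.

```python
def best_segmentation(word, vocab_scores, max_len=8):
    """Viterbi-style dynamic program over subword segmentations: best[i] holds
    the score of the best segmentation of word[:i]. `vocab_scores` is a dict
    {subword: log_prob} standing in for the contextual scores used in DPE."""
    n = len(word)
    best = [float("-inf")] * (n + 1)
    back = [0] * (n + 1)
    best[0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(0, i - max_len), i):
            piece = word[j:i]
            if piece in vocab_scores and best[j] + vocab_scores[piece] > best[i]:
                best[i] = best[j] + vocab_scores[piece]
                back[i] = j
    # recover the segmentation by walking the backpointers
    pieces, i = [], n
    while i > 0:
        pieces.append(word[back[i]:i])
        i = back[i]
    return pieces[::-1]

print(best_segmentation("unrelated",
                        {"un": -1.0, "related": -2.0, "rel": -3.0, "ated": -3.0,
                         "u": -5.0, "n": -5.0, "r": -5.0, "e": -5.0, "l": -5.0,
                         "a": -5.0, "t": -5.0, "d": -5.0}))
# ['un', 'related']
```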

67. Learning to Recover from Multi-Modality Errors for Non-Autoregressive Neural Machine Translation [PDF] 返回目录
  ACL 2020.
  Qiu Ran, Yankai Lin, Peng Li, Jie Zhou
Non-autoregressive neural machine translation (NAT) predicts the entire target sequence simultaneously and significantly accelerates the inference process. However, NAT discards the dependency information in a sentence, and thus inevitably suffers from the multi-modality problem: the target tokens may come from different possible translations, often causing repeated or missing tokens. To alleviate this problem, we propose a novel semi-autoregressive model, RecoverSAT, which generates a translation as a sequence of segments. The segments are generated simultaneously, while each segment is predicted token by token. By dynamically determining segment length and deleting repetitive segments, RecoverSAT is capable of recovering from repetition and missing-token errors. Experimental results on three widely used benchmark datasets show that our proposed model achieves more than a 4x speedup while maintaining performance comparable to the corresponding autoregressive model.

68. On the Inference Calibration of Neural Machine Translation [PDF] 返回目录
  ACL 2020.
  Shuo Wang, Zhaopeng Tu, Shuming Shi, Yang Liu
Confidence calibration, which aims to make model predictions equal to the true correctness measures, is important for neural machine translation (NMT) because it is able to offer useful indicators of translation errors in the generated output. While prior studies have shown that NMT models trained with label smoothing are well-calibrated on the ground-truth training data, we find that miscalibration still remains a severe challenge for NMT during inference due to the discrepancy between training and inference. By carefully designing experiments on three language pairs, our work provides in-depth analyses of the correlation between calibration and translation performance as well as linguistic properties of miscalibration and reports a number of interesting findings that might help humans better analyze, understand and improve NMT models. Based on these observations, we further propose a new graduated label smoothing method that can improve both inference calibration and translation performance.

69. A Reinforced Generation of Adversarial Examples for Neural Machine Translation [PDF] 返回目录
  ACL 2020.
  Wei Zou, Shujian Huang, Jun Xie, Xinyu Dai, Jiajun Chen
Neural machine translation systems tend to fail on less decent inputs despite their significant efficacy, which may seriously harm the credibility of these systems; fathoming how and when neural-based systems fail in such cases is critical for industrial maintenance. Instead of collecting and analyzing bad cases using limited handcrafted error features, here we investigate this issue by generating adversarial examples via a new paradigm based on reinforcement learning. Our paradigm can expose pitfalls for a given performance metric, e.g., BLEU, and can target any given neural machine translation architecture. We conduct adversarial attack experiments on two mainstream neural machine translation architectures, RNN-search and Transformer. The results show that our method efficiently produces stable attacks with meaning-preserving adversarial examples. We also present a qualitative and quantitative analysis of the attack's preference pattern, demonstrating its capability for pitfall exposure.

70. A Retrieve-and-Rewrite Initialization Method for Unsupervised Machine Translation [PDF] 返回目录
  ACL 2020.
  Shuo Ren, Yu Wu, Shujie Liu, Ming Zhou, Shuai Ma
The commonly used framework for unsupervised machine translation builds initial translation models of both translation directions, and then performs iterative back-translation to jointly boost their translation performance. The initialization stage is very important since bad initialization may wrongly squeeze the search space, and too much noise introduced in this stage may hurt the final performance. In this paper, we propose a novel retrieval and rewriting based method to better initialize unsupervised translation models. We first retrieve semantically comparable sentences from monolingual corpora of two languages and then rewrite the target side to minimize the semantic gap between the source and retrieved targets with a designed rewriting model. The rewritten sentence pairs are used to initialize SMT models which are used to generate pseudo data for two NMT models, followed by the iterative back-translation. Experiments show that our method can build better initial unsupervised translation models and improve the final translation performance by over 4 BLEU scores. Our code is released at https://github.com/Imagist-Shuo/RRforUNMT.git.

71. A Simple and Effective Unified Encoder for Document-Level Machine Translation [PDF] 返回目录
  ACL 2020.
  Shuming Ma, Dongdong Zhang, Ming Zhou
Most of the existing models for document-level machine translation adopt dual-encoder structures: the representations of the source sentences and the document-level contexts are modeled with two separate encoders. Although these models can make use of the document-level contexts, they do not fully model the interaction between the contexts and the source sentences, and they cannot directly adapt to recent pre-trained models (e.g., BERT), which encode multiple sentences with a single encoder. In this work, we propose a simple and effective unified encoder that can outperform the dual-encoder baseline models in terms of BLEU and METEOR scores. Moreover, the pre-trained models can further boost the performance of our proposed model.

72. Does Multi-Encoder Help? A Case Study on Context-Aware Neural Machine Translation [PDF] 返回目录
  ACL 2020.
  Bei Li, Hui Liu, Ziyang Wang, Yufan Jiang, Tong Xiao, Jingbo Zhu, Tongran Liu, Changliang Li
In encoder-decoder neural models, multiple encoders are generally used to represent the contextual information in addition to the individual sentence. In this paper, we investigate multi-encoder approaches in document-level neural machine translation (NMT). Surprisingly, we find that the context encoder not only encodes the surrounding sentences but also behaves as a noise generator. This makes us rethink the real benefits of the multi-encoder approach in context-aware translation: some of the improvements come from robust training. We compare several methods that introduce noise and/or a well-tuned dropout setup into the training of these encoders. Experimental results show that noisy training plays an important role in multi-encoder-based NMT, especially when the training data is small. Also, we establish a new state of the art on the IWSLT Fr-En task by careful use of noise generation and dropout methods.

73. Knowledge Distillation for Multilingual Unsupervised Neural Machine Translation [PDF] 返回目录
  ACL 2020.
  Haipeng Sun, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita, Tiejun Zhao
Unsupervised neural machine translation (UNMT) has recently achieved remarkable results for several language pairs. However, it can only translate between a single language pair and cannot produce translation results for multiple language pairs at the same time. That is, research on multilingual UNMT has been limited. In this paper, we empirically introduce a simple method to translate between thirteen languages using a single encoder and a single decoder, making use of multilingual data to improve UNMT for all language pairs. On the basis of the empirical findings, we propose two knowledge distillation methods to further enhance multilingual UNMT performance. Our experiments on a dataset with English translated to and from twelve other languages (including three language families and six language branches) show remarkable results, surpassing strong unsupervised individual baselines while achieving promising performance between non-English language pairs in zero-shot translation scenarios and alleviating poor performance in low-resource language pairs.

74. Lexically Constrained Neural Machine Translation with Levenshtein Transformer [PDF] 返回目录
  ACL 2020.
  Raymond Hendy Susanto, Shamil Chollampatt, Liling Tan
This paper proposes a simple and effective algorithm for incorporating lexical constraints in neural machine translation. Previous work either required re-training existing models with the lexical constraints or incorporating them during beam search decoding with significantly higher computational overheads. Leveraging the flexibility and speed of a recently proposed Levenshtein Transformer model (Gu et al., 2019), our method injects terminology constraints at inference time without any impact on decoding speed. Our method does not require any modification to the training procedure and can be easily applied at runtime with custom dictionaries. Experiments on English-German WMT datasets show that our approach improves an unconstrained baseline and previous approaches.

75. On Exposure Bias, Hallucination and Domain Shift in Neural Machine Translation [PDF] 返回目录
  ACL 2020.
  Chaojun Wang, Rico Sennrich
The standard training algorithm in neural machine translation (NMT) suffers from exposure bias, and alternative algorithms have been proposed to mitigate this. However, the practical impact of exposure bias is under debate. In this paper, we link exposure bias to another well-known problem in NMT, namely the tendency to generate hallucinations under domain shift. In experiments on three datasets with multiple test domains, we show that exposure bias is partially to blame for hallucinations, and that training with Minimum Risk Training, which avoids exposure bias, can mitigate this. Our analysis explains why exposure bias is more problematic under domain shift, and also links exposure bias to the beam search problem, i.e. performance deterioration with increasing beam size. Our results provide a new justification for methods that reduce exposure bias: even if they do not increase performance on in-domain test sets, they can increase model robustness to domain shift.
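For reference, Minimum Risk Training minimizes the expected cost of sampled translations under the model's distribution renormalized over the sample set; the snippet below is a minimal sketch of that objective, with costs such as 1 - sentence-BLEU supplied externally.

```python
import math

def expected_risk(sample_logprobs, sample_costs, alpha=0.005):
    """Minimum Risk Training objective over a set of sampled translations:
    renormalize the (scaled) model scores over the samples and weight each
    candidate's cost, e.g. 1 - sentence-BLEU, by that probability.
    `alpha` is the usual sharpness/smoothing hyperparameter."""
    scaled = [alpha * lp for lp in sample_logprobs]
    m = max(scaled)
    weights = [math.exp(s - m) for s in scaled]   # numerically stable softmax
    z = sum(weights)
    q = [w / z for w in weights]                  # q(y|x) restricted to the samples
    return sum(qi * ci for qi, ci in zip(q, sample_costs))

# Toy example: three sampled translations with model log-probs and costs.
logps = [-12.3, -14.1, -13.0]
costs = [0.25, 0.60, 0.40]                        # e.g. 1 - sentence BLEU
print(expected_risk(logps, costs))
```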

76. Automatic Machine Translation Evaluation using Source Language Inputs and Cross-lingual Language Model [PDF] 返回目录
  ACL 2020.
  Kosuke Takahashi, Katsuhito Sudoh, Satoshi Nakamura
We propose an automatic evaluation method for machine translation that uses source language sentences as additional pseudo references. The proposed method evaluates a translation hypothesis with a regression model that takes the paired source, reference, and hypothesis sentences together as input. A pretrained large-scale cross-lingual language model encodes the input into sentence-pair vectors, and the model predicts a human evaluation score from those vectors. Our experiments show that our proposed method using a Cross-lingual Language Model (XLM) trained with a translation language modeling (TLM) objective achieves a higher correlation with human judgments than a baseline method that uses only hypothesis and reference sentences. Additionally, using source sentences in our proposed method is confirmed to improve the evaluation performance.
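A minimal sketch of the overall setup, assuming xlm-roberta-base from the HuggingFace transformers library as a convenient stand-in for the XLM encoder used in the paper: sentence-pair vectors for (source, hypothesis) and (reference, hypothesis) are concatenated and fed to a linear regression head.

```python
# Not the authors' code: a generic pair-encoding regressor for illustration.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class PairRegressor(nn.Module):
    def __init__(self, name="xlm-roberta-base"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)
        hidden = self.encoder.config.hidden_size
        self.head = nn.Linear(2 * hidden, 1)      # predicts a scalar human score

    def encode_pair(self, tok, a, b):
        enc = tok(a, b, return_tensors="pt", padding=True, truncation=True)
        return self.encoder(**enc).last_hidden_state[:, 0]   # first-token vector

    def forward(self, tok, src, ref, hyp):
        v1 = self.encode_pair(tok, src, hyp)      # source vs. hypothesis
        v2 = self.encode_pair(tok, ref, hyp)      # reference vs. hypothesis
        return self.head(torch.cat([v1, v2], dim=-1)).squeeze(-1)

tok = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = PairRegressor()
score = model(tok, ["Der Hund bellt."], ["The dog barks."], ["The dog is barking."])
```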

77. Curriculum Pre-training for End-to-End Speech Translation [PDF] 返回目录
  ACL 2020.
  Chengyi Wang, Yu Wu, Shujie Liu, Ming Zhou, Zhenglu Yang
End-to-end speech translation poses a heavy burden on the encoder because it has to transcribe, understand, and learn cross-lingual semantics simultaneously. To obtain a powerful encoder, traditional methods pre-train it on ASR data to capture speech features. However, we argue that pre-training the encoder only through simple speech recognition is not enough, and high-level linguistic knowledge should be considered. Inspired by this, we propose a curriculum pre-training method that includes an elementary course for transcription learning and two advanced courses for understanding the utterance and mapping words in two languages. The difficulty of these courses is gradually increasing. Experiments show that our curriculum pre-training method leads to significant improvements on En-De and En-Fr speech translation benchmarks.

78. SimulSpeech: End-to-End Simultaneous Speech to Text Translation [PDF] 返回目录
  ACL 2020.
  Yi Ren, Jinglin Liu, Xu Tan, Chen Zhang, Tao Qin, Zhou Zhao, Tie-Yan Liu
In this work, we develop SimulSpeech, an end-to-end simultaneous speech to text translation system which translates speech in source language to text in target language concurrently. SimulSpeech consists of a speech encoder, a speech segmenter and a text decoder, where 1) the segmenter builds upon the encoder and leverages a connectionist temporal classification (CTC) loss to split the input streaming speech in real time, 2) the encoder-decoder attention adopts a wait-k strategy for simultaneous translation. SimulSpeech is more challenging than previous cascaded systems (with simultaneous automatic speech recognition (ASR) and simultaneous neural machine translation (NMT)). We introduce two novel knowledge distillation methods to ensure the performance: 1) Attention-level knowledge distillation transfers the knowledge from the multiplication of the attention matrices of simultaneous NMT and ASR models to help the training of the attention mechanism in SimulSpeech; 2) Data-level knowledge distillation transfers the knowledge from the full-sentence NMT model and also reduces the complexity of data distribution to help on the optimization of SimulSpeech. Experiments on MuST-C English-Spanish and English-German spoken language translation datasets show that SimulSpeech achieves reasonable BLEU scores and lower delay compared to full-sentence end-to-end speech to text translation (without simultaneous translation), and better performance than the two-stage cascaded simultaneous translation model in terms of BLEU scores and translation delay.
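The wait-k read/write policy mentioned above can be illustrated with a few lines of scheduling logic; the sketch below uses a hypothetical translate_step callback and ignores the CTC-based segmenter and the knowledge distillation described in the paper.

```python
def wait_k_schedule(source_tokens, k, translate_step):
    """Minimal wait-k policy sketch: first READ k source tokens, then alternate
    WRITE (emit one target token given the prefix read so far) and READ until
    the source is exhausted, then finish writing. `translate_step` is a
    hypothetical callback that returns the next target token (or None to stop)."""
    target, read = [], 0
    while True:
        if read < min(k + len(target), len(source_tokens)):
            read += 1                                           # READ action
            continue
        token = translate_step(source_tokens[:read], target)    # WRITE action
        if token is None:
            break
        target.append(token)
    return target

# Toy usage: a dummy "translator" that copies source tokens one by one.
src = "a b c d e".split()
copy = lambda prefix, tgt: prefix[len(tgt)] if len(tgt) < len(src) else None
print(wait_k_schedule(src, k=2, translate_step=copy))
```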

79. Selecting Backtranslated Data from Multiple Sources for Improved Neural Machine Translation [PDF] 返回目录
  ACL 2020.
  Xabier Soto, Dimitar Shterionov, Alberto Poncelas, Andy Way
Machine translation (MT) has benefited from using synthetic training data originating from translating monolingual corpora, a technique known as backtranslation. Combining backtranslated data from different sources has led to better results than when using such data in isolation. In this work we analyse the impact that data translated with rule-based, phrase-based statistical and neural MT systems has on new MT systems. We use a real-world low-resource use-case (Basque-to-Spanish in the clinical domain) as well as a high-resource language pair (German-to-English) to test different scenarios with backtranslation and employ data selection to optimise the synthetic corpora. We exploit different data selection strategies in order to reduce the amount of data used, while at the same time maintaining high-quality MT systems. We further tune the data selection method by taking into account the quality of the MT systems used for backtranslation and lexical diversity of the resulting corpora. Our experiments show that incorporating backtranslated data from different sources can be beneficial, and that availing of data selection can yield improved performance.

80. Modeling Word Formation in English–German Neural Machine Translation [PDF] 返回目录
  ACL 2020.
  Marion Weller-Di Marco, Alexander Fraser
This paper studies strategies to model word formation in NMT using rich linguistic information, namely a word segmentation approach that goes beyond splitting into substrings by considering fusional morphology. Our linguistically sound segmentation is combined with a method for target-side inflection to accommodate modeling word formation. The best system variants employ source-side morphological analysis and model complex target-side words, improving over a standard system.

81. Multimodal Transformer for Multimodal Machine Translation [PDF] 返回目录
  ACL 2020.
  Shaowei Yao, Xiaojun Wan
Multimodal Machine Translation (MMT) aims to introduce information from other modality, generally static images, to improve the translation quality. Previous works propose various incorporation methods, but most of them do not consider the relative importance of multiple modalities. Equally treating all modalities may encode too much useless information from less important modalities. In this paper, we introduce the multimodal self-attention in Transformer to solve the issues above in MMT. The proposed method learns the representation of images based on the text, which avoids encoding irrelevant information in images. Experiments and visualization analysis demonstrate that our model benefits from visual information and substantially outperforms previous works and competitive baselines in terms of various metrics.

82. Tangled up in BLEU: Reevaluating the Evaluation of Automatic Machine Translation Evaluation Metrics [PDF] 返回目录
  ACL 2020.
  Nitika Mathur, Timothy Baldwin, Trevor Cohn
Automatic metrics are fundamental for the development and evaluation of machine translation systems. Judging whether, and to what extent, automatic metrics concur with the gold standard of human evaluation is not a straightforward problem. We show that current methods for judging metrics are highly sensitive to the translations used for assessment, particularly the presence of outliers, which often leads to falsely confident conclusions about a metric’s efficacy. Finally, we turn to pairwise system ranking, developing a method for thresholding performance improvement under an automatic metric against human judgements, which allows quantification of type I versus type II errors incurred, i.e., insignificant human differences in system quality that are accepted, and significant human differences that are rejected. Together, these findings suggest improvements to the protocols for metric evaluation and system performance evaluation in machine translation.

83. AdvAug: Robust Adversarial Augmentation for Neural Machine Translation [PDF] 返回目录
  ACL 2020.
  Yong Cheng, Lu Jiang, Wolfgang Macherey, Jacob Eisenstein
In this paper, we propose a new adversarial augmentation method for Neural Machine Translation (NMT). The main idea is to minimize the vicinal risk over virtual sentences sampled from two vicinity distributions, of which the crucial one is a novel vicinity distribution for adversarial sentences that describes a smooth interpolated embedding space centered around observed training sentence pairs. We then discuss our approach, AdvAug, to train NMT models using the embeddings of virtual sentences in sequence-to-sequence learning. Experiments on Chinese-English, English-French, and English-German translation benchmarks show that AdvAug achieves significant improvements over the Transformer (up to 4.9 BLEU points), and substantially outperforms other data augmentation techniques (e.g., back-translation) without using extra corpora.
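As a simplified stand-in for sampling from such a vicinity distribution, the sketch below interpolates the embeddings of two observed sentence pairs with a Beta-distributed coefficient (mixup-style); AdvAug's adversarial vicinity is omitted.

```python
import torch

def vicinal_mix(src_emb_a, src_emb_b, tgt_emb_a, tgt_emb_b, alpha=0.2):
    """Sketch of sampling a 'virtual' sentence pair from a vicinity distribution:
    interpolate the embedding sequences of two observed training pairs with a
    Beta-distributed coefficient (mixup-style). AdvAug additionally builds a
    vicinity of adversarial sentences; that part is not shown here."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    src = lam * src_emb_a + (1 - lam) * src_emb_b
    tgt = lam * tgt_emb_a + (1 - lam) * tgt_emb_b
    return src, tgt, lam

# Toy usage with random embeddings of shape [seq_len, dim] (padded to equal length).
sa, sb = torch.randn(7, 512), torch.randn(7, 512)
ta, tb = torch.randn(9, 512), torch.randn(9, 512)
mixed_src, mixed_tgt, lam = vicinal_mix(sa, sb, ta, tb)
```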

84. Contextual Neural Machine Translation Improves Translation of Cataphoric Pronouns [PDF] 返回目录
  ACL 2020.
  KayYen Wong, Sameen Maruf, Gholamreza Haffari
The advent of context-aware NMT has resulted in promising improvements in the overall translation quality and specifically in the translation of discourse phenomena such as pronouns. Previous works have mainly focused on the use of past sentences as context with a focus on anaphora translation. In this work, we investigate the effect of future sentences as context by comparing the performance of a contextual NMT model trained with the future context to the one trained with the past context. Our experiments and evaluation, using generic and pronoun-focused automatic metrics, show that the use of future context not only achieves significant improvements over the context-agnostic Transformer, but also demonstrates comparable and in some cases improved performance over its counterpart trained on past context. We also perform an evaluation on a targeted cataphora test suite and report significant gains over the context-agnostic Transformer in terms of BLEU.

85. Improving Neural Machine Translation with Soft Template Prediction [PDF] 返回目录
  ACL 2020.
  Jian Yang, Shuming Ma, Dongdong Zhang, Zhoujun Li, Ming Zhou
Although neural machine translation (NMT) has achieved significant progress in recent years, most previous NMT models depend only on the source text to generate translations. Inspired by the success of template-based and syntax-based approaches in other fields, we propose to use templates extracted from tree structures as soft target templates to guide the translation procedure. In order to learn the syntactic structure of the target sentences, we adopt constituency-based parse trees to generate candidate templates. We incorporate the template information into the encoder-decoder framework to jointly utilize the templates and source text. Experiments show that our model significantly outperforms the baseline models on four benchmarks and demonstrates the effectiveness of soft target templates.

86. Tagged Back-translation Revisited: Why Does It Really Work? [PDF] 返回目录
  ACL 2020.
  Benjamin Marie, Raphael Rubino, Atsushi Fujita
In this paper, we show that neural machine translation (NMT) systems trained on large back-translated data overfit some characteristics of machine-translated texts. Such NMT systems better translate human-produced translations, i.e., translationese, but may largely worsen the translation quality of original texts. Our analysis reveals that adding a simple tag to back-translations prevents this quality degradation and improves the overall translation quality on average, by helping the NMT system distinguish back-translated data from original parallel data during training. We also show that, in contrast to high-resource configurations, NMT systems trained in low-resource settings are much less prone to overfitting back-translations. We conclude that back-translations in the training data should always be tagged, especially when the origin of the text to be translated is unknown.
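The tagging itself is a one-line data transformation; a minimal sketch, assuming a reserved <BT> token that is added to the vocabulary:

```python
def tag_back_translations(parallel, back_translated, tag="<BT>"):
    """Sketch of tagged back-translation: prepend a reserved token to the source
    side of every back-translated pair so the NMT model can tell synthetic data
    from genuine parallel data during training."""
    tagged = [(f"{tag} {src}", tgt) for src, tgt in back_translated]
    return parallel + tagged

genuine = [("ein Hund bellt", "a dog barks")]
synthetic = [("der Hund bellte laut", "the dog barked loudly")]  # source side is MT output
print(tag_back_translations(genuine, synthetic))
```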

87. Worse WER, but Better BLEU? Leveraging Word Embedding as Intermediate in Multitask End-to-End Speech Translation [PDF] 返回目录
  ACL 2020.
  Shun-Po Chuang, Tzu-Wei Sung, Alexander H. Liu, Hung-yi Lee
Speech translation (ST) aims to learn transformations from speech in the source language to text in the target language. Previous works show that multitask learning improves ST performance: the recognition decoder generates the text of the source language, and the translation decoder obtains the final translations based on the output of the recognition decoder. Because it matters more that the output of the recognition decoder carries the correct semantics than that it is exactly accurate, we propose to improve the multitask ST model by utilizing word embeddings as the intermediate representation.

88. Are we Estimating or Guesstimating Translation Quality? [PDF] 返回目录
  ACL 2020.
  Shuo Sun, Francisco Guzmán, Lucia Specia
Recent advances in pre-trained multilingual language models lead to state-of-the-art results on the task of quality estimation (QE) for machine translation. A carefully engineered ensemble of such models won the QE shared task at WMT19. Our in-depth analysis, however, shows that the success of using pre-trained language models for QE is over-estimated due to three issues we observed in current QE datasets: (i) the distributions of quality scores are imbalanced and skewed towards good quality scores; (ii) QE models can perform well on these datasets while looking at only source or translated sentences; (iii) they contain statistical artifacts that correlate well with human-annotated QE labels. Our findings suggest that although QE models might capture fluency of translated sentences and complexity of source sentences, they cannot model adequacy of translations effectively.

89. Document Translation vs. Query Translation for Cross-Lingual Information Retrieval in the Medical Domain [PDF] 返回目录
  ACL 2020.
  Shadi Saleh, Pavel Pecina
We present a thorough comparison of two principal approaches to Cross-Lingual Information Retrieval: document translation (DT) and query translation (QT). Our experiments are conducted using the cross-lingual test collection produced within the CLEF eHealth information retrieval tasks in 2013–2015, containing English documents and queries in several European languages. We exploit the Statistical Machine Translation (SMT) and Neural Machine Translation (NMT) paradigms and train several domain-specific and task-specific machine translation systems to translate the non-English queries into English (for the QT approach) and the English documents into all the query languages (for the DT approach). The results show that the quality of QT by SMT is sufficient to outperform the retrieval results of the DT approach for all the languages. NMT further boosts translation quality and retrieval quality for both QT and DT for most languages, but QT still provides generally better retrieval results than DT.

90. Gender in Danger? Evaluating Speech Translation Technology on the MuST-SHE Corpus [PDF] 返回目录
  ACL 2020.
  Luisa Bentivogli, Beatrice Savoldi, Matteo Negri, Mattia A. Di Gangi, Roldano Cattoni, Marco Turchi
Translating from languages without productive grammatical gender like English into gender-marked languages is a well-known difficulty for machines. This difficulty is also due to the fact that the training data on which models are built typically reflect the asymmetries of natural languages, gender bias included. Exclusively fed with textual data, machine translation is intrinsically constrained by the fact that the input sentence does not always contain clues about the gender identity of the referred human entities. But what happens with speech translation, where the input is an audio signal? Can audio provide additional information to reduce gender bias? We present the first thorough investigation of gender bias in speech translation, contributing with: i) the release of a benchmark useful for future studies, and ii) the comparison of different technologies (cascade and end-to-end) on two language directions (English-Italian/French).

91. Uncertainty-Aware Curriculum Learning for Neural Machine Translation [PDF] 返回目录
  ACL 2020.
  Yikai Zhou, Baosong Yang, Derek F. Wong, Yu Wan, Lidia S. Chao
Neural machine translation (NMT) has been shown to benefit from curriculum learning, which presents examples in an easy-to-hard order at different training stages. The keys lie in the assessment of data difficulty and model competence. We propose uncertainty-aware curriculum learning, which is motivated by the intuition that: 1) the higher the uncertainty in a translation pair, the more complex and rarer the information it contains; and 2) the end of the decline in model uncertainty indicates the completeness of the current training stage. Specifically, we use the cross-entropy of an example as its data difficulty and exploit the variance of the distributions over the network weights to represent the model uncertainty. Extensive experiments on various translation tasks reveal that our approach outperforms the strong baseline and related methods in both translation quality and convergence speed. Quantitative analyses reveal that the proposed strategy offers NMT the ability to automatically govern its learning schedule.
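A minimal sketch of the easy-to-hard scheduling idea, using only the data-difficulty half of the method (per-example cross-entropy) and a fixed number of stages; the model-uncertainty criterion for advancing stages is not modeled here.

```python
def build_curriculum(examples, difficulty_fn, num_stages=4):
    """Sketch of an easy-to-hard curriculum: score each training example with a
    difficulty function (the paper uses the per-example cross-entropy as data
    uncertainty) and expose progressively larger, harder slices of the sorted
    data at each stage. Uncertainty-based stage switching is omitted."""
    scored = sorted(examples, key=difficulty_fn)
    stage_size = max(1, len(scored) // num_stages)
    return [scored[:min(len(scored), (i + 1) * stage_size)] for i in range(num_stages)]

# Toy usage: difficulty approximated by a (hypothetical) token-level cross-entropy.
data = [("short easy pair", 1.2), ("a much longer and rarer pair", 4.7),
        ("medium pair", 2.9), ("another hard pair", 5.3)]
stages = build_curriculum(data, difficulty_fn=lambda ex: ex[1])
```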

92. Speech Translation and the End-to-End Promise: Taking Stock of Where We Are [PDF] 返回目录
  ACL 2020.
  Matthias Sperber, Matthias Paulik
Over its three decade history, speech translation has experienced several shifts in its primary research themes; moving from loosely coupled cascades of speech recognition and machine translation, to exploring questions of tight coupling, and finally to end-to-end models that have recently attracted much attention. This paper provides a brief survey of these developments, along with a discussion of the main challenges of traditional approaches which stem from committing to intermediate representations from the speech recognizer, and from training cascaded models separately towards different objectives. Recent end-to-end modeling techniques promise a principled way of overcoming these issues by allowing joint training of all model components and removing the need for explicit intermediate representations. However, a closer look reveals that many end-to-end models fall short of solving these issues, due to compromises made to address data scarcity. This paper provides a unifying categorization and nomenclature that covers both traditional and recent approaches and that may help researchers by highlighting both trade-offs and open research questions.

93. Hard-Coded Gaussian Attention for Neural Machine Translation [PDF] 返回目录
  ACL 2020.
  Weiqiu You, Simeng Sun, Mohit Iyyer
Recent work has questioned the importance of the Transformer’s multi-headed attention for achieving high translation quality. We push further in this direction by developing a “hard-coded” attention variant without any learned parameters. Surprisingly, replacing all learned self-attention heads in the encoder and decoder with fixed, input-agnostic Gaussian distributions minimally impacts BLEU scores across four different language pairs. However, additionally hard-coding the cross attention (which connects the decoder to the encoder) significantly lowers BLEU, suggesting that it is more important than self-attention. Much of this BLEU drop can be recovered by adding just a single learned cross attention head to an otherwise hard-coded Transformer. Taken as a whole, our results offer insight into which components of the Transformer are actually important, which we hope will guide future work into the development of simpler and more efficient attention-based models.
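A small sketch of what an input-agnostic, hard-coded attention head can look like: each query position attends with a fixed Gaussian centered on a neighbouring position. The exact parameterization in the paper may differ.

```python
import numpy as np

def hard_coded_gaussian_attention(seq_len, center_offset=0, std=1.0):
    """Sketch of input-agnostic 'hard-coded' attention: for each query position
    i, the attention over positions j is a fixed Gaussian centered at
    i + center_offset (e.g. -1 or +1 for neighbouring tokens), with no learned
    parameters."""
    positions = np.arange(seq_len)
    attn = np.zeros((seq_len, seq_len))
    for i in range(seq_len):
        logits = -((positions - (i + center_offset)) ** 2) / (2 * std ** 2)
        weights = np.exp(logits)
        attn[i] = weights / weights.sum()
    return attn       # [seq_len, seq_len], each row sums to 1

print(hard_coded_gaussian_attention(5, center_offset=1).round(2))
```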

94. In Neural Machine Translation, What Does Transfer Learning Transfer? [PDF] 返回目录
  ACL 2020.
  Alham Fikri Aji, Nikolay Bogoychev, Kenneth Heafield, Rico Sennrich
Transfer learning improves quality for low-resource machine translation, but it is unclear what exactly it transfers. We perform several ablation studies that limit information transfer, then measure the quality impact across three language pairs to gain a black-box understanding of transfer learning. Word embeddings play an important role in transfer learning, particularly if they are properly aligned. Although transfer learning can be performed without embeddings, results are sub-optimal. In contrast, transferring only the embeddings but nothing else yields catastrophic results. We then investigate diagonal alignments with auto-encoders over real languages and randomly generated sequences, finding even randomly generated sequences as parents yield noticeable but smaller gains. Finally, transfer learning can eliminate the need for a warm-up phase when training transformer models in high resource language pairs.

95. Learning a Multi-Domain Curriculum for Neural Machine Translation [PDF] 返回目录
  ACL 2020.
  Wei Wang, Ye Tian, Jiquan Ngiam, Yinfei Yang, Isaac Caswell, Zarana Parekh
Most data selection research in machine translation focuses on improving a single domain. We perform data selection for multiple domains at once. This is achieved by carefully introducing instance-level domain-relevance features and automatically constructing a training curriculum to gradually concentrate on multi-domain relevant and noise-reduced data batches. Both the choice of features and the use of curriculum are crucial for balancing and improving all domains, including out-of-domain. In large-scale experiments, the multi-domain curriculum simultaneously reaches or outperforms the individual performance and brings solid gains over no-curriculum training.

96. Reducing Gender Bias in Neural Machine Translation as a Domain Adaptation Problem [PDF] 返回目录
  ACL 2020.
  Danielle Saunders, Bill Byrne
Training data for NLP tasks often exhibits gender bias in that fewer sentences refer to women than to men. In Neural Machine Translation (NMT), gender bias has been shown to reduce translation quality, particularly when the target language has grammatical gender. The recent WinoMT challenge set allows us to measure this effect directly (Stanovsky et al., 2019). Ideally, we would reduce system bias by simply debiasing all data prior to training, but achieving this effectively is itself a challenge. Rather than attempt to create a ‘balanced’ dataset, we use transfer learning on a small set of trusted, gender-balanced examples. This approach gives strong and consistent improvements in gender debiasing with much less computational cost than training from scratch. A known pitfall of transfer learning on new domains is ‘catastrophic forgetting’, which we address at adaptation and inference time. During adaptation we show that Elastic Weight Consolidation allows a performance trade-off between general translation quality and bias reduction. At inference time we propose a lattice-rescoring scheme which outperforms all systems evaluated in Stanovsky et al. (2019) on WinoMT, with no degradation of general test set BLEU. We demonstrate our approach translating from English into three languages with varied linguistic properties and data availability.

97. Translationese as a Language in “Multilingual” NMT [PDF] 返回目录
  ACL 2020.
  Parker Riley, Isaac Caswell, Markus Freitag, David Grangier
Machine translation has an undesirable propensity to produce “translationese” artifacts, which can lead to higher BLEU scores while being liked less by human raters. Motivated by this, we model translationese and original (i.e. natural) text as separate languages in a multilingual model, and pose the question: can we perform zero-shot translation between original source text and original target text? There is no data with original source and original target, so we train a sentence-level classifier to distinguish translationese from original target text, and use this classifier to tag the training data for an NMT model. Using this technique we bias the model to produce more natural outputs at test time, yielding gains in human evaluation scores on both accuracy and fluency. Additionally, we demonstrate that it is possible to bias the model to produce translationese and game the BLEU score, increasing it while decreasing human-rated quality. We analyze these outputs using metrics measuring the degree of translationese, and present an analysis of the volatility of heuristic-based train-data tagging.

98. Using Context in Neural Machine Translation Training Objectives [PDF] 返回目录
  ACL 2020.
  Danielle Saunders, Felix Stahlberg, Bill Byrne
We present Neural Machine Translation (NMT) training using document-level metrics with batch-level documents. Previous sequence-objective approaches to NMT training focus exclusively on sentence-level metrics like sentence BLEU which do not correspond to the desired evaluation metric, typically document BLEU. Meanwhile research into document-level NMT training focuses on data or model architecture rather than training procedure. We find that each of these lines of research has a clear space in it for the other, and propose merging them with a scheme that allows a document-level evaluation metric to be used in the NMT training objective. We first sample pseudo-documents from sentence samples. We then approximate the expected document BLEU gradient with Monte Carlo sampling for use as a cost function in Minimum Risk Training (MRT). This two-level sampling procedure gives NMT performance gains over sequence MRT and maximum-likelihood training. We demonstrate that training is more robust for document-level metrics than with sequence metrics. We further demonstrate improvements on NMT with TER and Grammatical Error Correction (GEC) using GLEU, both metrics used at the document level for evaluations.

99. Variational Neural Machine Translation with Normalizing Flows [PDF] 返回目录
  ACL 2020.
  Hendra Setiawan, Matthias Sperber, Udhyakumar Nallasamy, Matthias Paulik
Variational Neural Machine Translation (VNMT) is an attractive framework for modeling the generation of target translations, conditioned not only on the source sentence but also on some latent random variables. The latent variable modeling may introduce useful statistical dependencies that can improve translation accuracy. Unfortunately, learning informative latent variables is non-trivial, as the latent space can be prohibitively large, and the latent codes are prone to be ignored by many translation models at training time. Previous works impose strong assumptions on the distribution of the latent code and limit the choice of the NMT architecture. In this paper, we propose to apply the VNMT framework to the state-of-the-art Transformer and introduce a more flexible approximate posterior based on normalizing flows. We demonstrate the efficacy of our proposal under both in-domain and out-of-domain conditions, significantly outperforming strong baselines.

100. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension [PDF] 返回目录
  ACL 2020.
  Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, Luke Zettlemoyer
We present BART, a denoising autoencoder for pretraining sequence-to-sequence models. BART is trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. It uses a standard Transformer-based neural machine translation architecture which, despite its simplicity, can be seen as generalizing BERT (due to the bidirectional encoder), GPT (with the left-to-right decoder), and other recent pretraining schemes. We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of sentences and using a novel in-filling scheme, where spans of text are replaced with a single mask token. BART is particularly effective when fine-tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa on GLUE and SQuAD, and achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 3.5 ROUGE. BART also provides a 1.1 BLEU increase over a back-translation system for machine translation, with only target language pretraining. We also replicate other pretraining schemes within the BART framework, to understand their effect on end-task performance.
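Two of the noising functions can be sketched in a few lines; the text-infilling details below (span sampling, edge cases such as length-0 spans) are simplified relative to the paper.

```python
import random
import numpy as np

def sentence_permutation(sentences):
    """BART-style sentence permutation: shuffle the sentence order."""
    shuffled = sentences[:]
    random.shuffle(shuffled)
    return shuffled

def text_infilling(tokens, mask_ratio=0.3, poisson_lam=3, mask_token="<mask>"):
    """Sketch of BART-style text infilling: sample span lengths from a Poisson
    distribution and replace each span with a single <mask> token, until roughly
    `mask_ratio` of the tokens have been masked. Edge cases are simplified."""
    tokens = tokens[:]
    budget = int(len(tokens) * mask_ratio)
    while budget > 0 and tokens:
        span = int(np.clip(np.random.poisson(poisson_lam), 1, min(budget, len(tokens))))
        start = random.randrange(0, len(tokens) - span + 1)
        tokens[start:start + span] = [mask_token]
        budget -= span
    return tokens

np.random.seed(0); random.seed(0)
print(text_infilling("the quick brown fox jumps over the lazy dog".split()))
```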

101. Unsupervised Multimodal Neural Machine Translation with Pseudo Visual Pivoting [PDF] 返回目录
  ACL 2020.
  Po-Yao Huang, Junjie Hu, Xiaojun Chang, Alexander Hauptmann
Unsupervised machine translation (MT) has recently achieved impressive results with monolingual corpora only. However, it is still challenging to associate source-target sentences in the latent space. As people speaking different languages biologically share similar visual systems, the potential of achieving better alignment through visual content is promising yet under-explored in unsupervised multimodal MT (MMT). In this paper, we investigate how to utilize visual content for disambiguation and for promoting latent space alignment in unsupervised MMT. Our model employs multimodal back-translation and features pseudo visual pivoting, in which we learn a shared multilingual visual-semantic embedding space and incorporate visually-pivoted captioning as additional weak supervision. The experimental results on the widely used Multi30K dataset show that the proposed model significantly improves over the state-of-the-art methods and generalizes well when images are not available at test time.

102. Addressing Posterior Collapse with Mutual Information for Improved Variational Neural Machine Translation [PDF] 返回目录
  ACL 2020.
  Arya D. McCarthy, Xian Li, Jiatao Gu, Ning Dong
This paper proposes a simple and effective approach to address the problem of posterior collapse in conditional variational autoencoders (CVAEs). It thus improves performance of machine translation models that use noisy or monolingual data, as well as in conventional settings. Extending Transformer and conditional VAEs, our proposed latent variable model measurably prevents posterior collapse by (1) using a modified evidence lower bound (ELBO) objective which promotes mutual information between the latent variable and the target, and (2) guiding the latent variable with an auxiliary bag-of-words prediction task. As a result, the proposed model yields improved translation quality compared to existing variational NMT models on WMT Ro↔En and De↔En. With latent variables being effectively utilized, our model demonstrates improved robustness over non-latent Transformer in handling uncertainty: exploiting noisy source-side monolingual data (up to +3.2 BLEU), and training with weakly aligned web-mined parallel data (up to +4.7 BLEU).

103. Balancing Training for Multilingual Neural Machine Translation [PDF] 返回目录
  ACL 2020.
  Xinyi Wang, Yulia Tsvetkov, Graham Neubig
When training multilingual machine translation (MT) models that can translate to/from multiple languages, we are faced with imbalanced training sets: some languages have much more training data than others. Standard practice is to up-sample less resourced languages to increase representation, and the degree of up-sampling has a large effect on the overall performance. In this paper, we propose a method that instead automatically learns how to weight training data through a data scorer that is optimized to maximize performance on all test languages. Experiments on two sets of languages under both one-to-many and many-to-one MT settings show our method not only consistently outperforms heuristic baselines in terms of average performance, but also offers flexible control over the performance of which languages are optimized.

104. Evaluating Robustness to Input Perturbations for Neural Machine Translation [PDF] 返回目录
  ACL 2020.
  Xing Niu, Prashant Mathur, Georgiana Dinu, Yaser Al-Onaizan
Neural Machine Translation (NMT) models are sensitive to small perturbations in the input. Robustness to such perturbations is typically measured using translation quality metrics such as BLEU on the noisy input. This paper proposes additional metrics which measure the relative degradation and changes in translation when small perturbations are added to the input. We focus on a class of models employing subword regularization to address robustness and perform extensive evaluations of these models using the robustness measures proposed. Results show that our proposed metrics reveal a clear trend of improved robustness to perturbations when subword regularization methods are used.
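A rough sketch of the kind of measurement involved: relative quality degradation under perturbation, plus a simple consistency score between the clean and noisy outputs. These are illustrative formulations, not the paper's exact metrics.

```python
def relative_degradation(quality_clean, quality_noisy):
    """Sketch of a robustness measure: the relative drop in translation quality
    (e.g. BLEU) when the same model translates a perturbed input instead of the
    clean one. The paper proposes related but more detailed metrics."""
    return (quality_clean - quality_noisy) / max(quality_clean, 1e-9)

def output_consistency(clean_tokens, noisy_tokens):
    """A simple symmetric token-overlap score between the translation of the
    clean input and the translation of the perturbed input (illustrative only)."""
    a, b = set(clean_tokens), set(noisy_tokens)
    return 2 * len(a & b) / max(len(a) + len(b), 1)

print(relative_degradation(32.4, 28.1))               # ~0.13 relative BLEU drop
print(output_consistency("the dog barks".split(), "the dog is barking".split()))
```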

105. Regularized Context Gates on Transformer for Machine Translation [PDF] 返回目录
  ACL 2020.
  Xintong Li, Lemao Liu, Rui Wang, Guoping Huang, Max Meng
Context gates are effective for controlling the contributions from the source and target contexts in recurrent neural network (RNN) based neural machine translation (NMT). However, it is challenging to extend them to the advanced Transformer architecture, which is more complicated than an RNN. This paper first provides a method to identify source and target contexts and then introduces a gate mechanism to control the source and target contributions in the Transformer. In addition, to further reduce the bias problem in the gate mechanism, this paper proposes a regularization method to guide the learning of the gates with supervision automatically generated using pointwise mutual information. Extensive experiments on 4 translation datasets demonstrate that the proposed model obtains an average gain of 1.0 BLEU over a strong Transformer baseline.
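A minimal sketch of a context gate on top of precomputed source and target context vectors; the PMI-based regularization from the paper is not shown.

```python
import torch
import torch.nn as nn

class ContextGate(nn.Module):
    """Sketch of a context gate: a sigmoid gate computed from the concatenated
    source and target contexts decides, per dimension, how much of each context
    flows into the decoder state."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, source_context, target_context):
        g = torch.sigmoid(self.gate(torch.cat([source_context, target_context], dim=-1)))
        return g * source_context + (1.0 - g) * target_context

gate = ContextGate(512)
fused = gate(torch.randn(8, 20, 512), torch.randn(8, 20, 512))   # [batch, len, dim]
```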

106. CLIReval: Evaluating Machine Translation as a Cross-Lingual Information Retrieval Task [PDF] 返回目录
  ACL 2020. System Demonstrations
  Shuo Sun, Suzanna Sia, Kevin Duh
We present CLIReval, an easy-to-use toolkit for evaluating machine translation (MT) with the proxy task of cross-lingual information retrieval (CLIR). Contrary to what the project name might suggest, CLIReval does not actually require any annotated CLIR dataset. Instead, it automatically transforms translations and references used in MT evaluations into a synthetic CLIR dataset; it then sets up a standard search engine (Elasticsearch) and computes various information retrieval metrics (e.g., mean average precision) by treating the translations as documents to be retrieved. The idea is to gauge the quality of MT by its impact on the document translation approach to CLIR. As a case study, we run CLIReval on the “metrics shared task” of WMT2019; while this extrinsic metric is not intended to replace popular intrinsic metrics such as BLEU, results suggest CLIReval is competitive in many language pairs in terms of correlation to human judgments of quality. CLIReval is publicly available at https://github.com/ssun32/CLIReval.

107. ESPnet-ST: All-in-One Speech Translation Toolkit [PDF] 返回目录
  ACL 2020. System Demonstrations
  Hirofumi Inaguma, Shun Kiyono, Kevin Duh, Shigeki Karita, Nelson Yalta, Tomoki Hayashi, Shinji Watanabe
We present ESPnet-ST, which is designed for the quick development of speech-to-speech translation systems in a single framework. ESPnet-ST is a new project inside the end-to-end speech processing toolkit ESPnet, which integrates or newly implements automatic speech recognition, machine translation, and text-to-speech functions for speech translation. We provide all-in-one recipes including data pre-processing, feature extraction, training, and decoding pipelines for a wide range of benchmark datasets. Our reproducible results can match or even outperform the current state-of-the-art performance, and these pre-trained models are downloadable. The toolkit is publicly available at https://github.com/espnet/espnet.

108. MMPE: A Multi-Modal Interface using Handwriting, Touch Reordering, and Speech Commands for Post-Editing Machine Translation [PDF] 返回目录
  ACL 2020. System Demonstrations
  Nico Herbig, Santanu Pal, Tim Düwel, Kalliopi Meladaki, Mahsa Monshizadeh, Vladislav Hnatovskiy, Antonio Krüger, Josef van Genabith
The shift from traditional translation to post-editing (PE) of machine-translated (MT) text can save time and reduce errors, but it also affects the design of translation interfaces, as the task changes from mainly generating text to correcting errors within otherwise helpful translation proposals. Since this paradigm shift offers potential for modalities other than mouse and keyboard, we present MMPE, the first prototype to combine traditional input modes with pen, touch, and speech modalities for PE of MT. Users can directly cross out or hand-write new text, drag and drop words for reordering, or use spoken commands to update the text in place. All text manipulations are logged in an easily interpretable format to simplify subsequent translation process research. The results of an evaluation with professional translators suggest that pen and touch interaction are suitable for deletion and reordering tasks, while speech and multi-modal combinations of select & speech are considered suitable for replacements and insertions. Overall, experiment participants were enthusiastic about the new modalities and saw them as useful extensions to mouse & keyboard, but not as a complete substitute.

109. Zero-shot North Korean to English Neural Machine Translation by Character Tokenization and Phoneme Decomposition [PDF] 返回目录
  ACL 2020. Student Research Workshop
  Hwichan Kim, Tosho Hirasawa, Mamoru Komachi
The primary limitation of North Korean to English translation is the lack of a parallel corpus; therefore, high translation accuracy cannot be achieved. To address this problem, we propose a zero-shot approach using South Korean data, which are remarkably similar to North Korean data. We train a neural machine translation model after tokenizing a South Korean text at the character level and decomposing characters into phonemes. We demonstrate that our method can effectively learn North Korean to English translation and improve the BLEU scores by +1.01 points in comparison with the baseline.
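The character-to-phoneme step can be illustrated with the standard Unicode jamo decomposition of precomposed Hangul syllables; this sketch shows the arithmetic only and is not the authors' preprocessing code.

```python
def decompose_hangul(text):
    """Sketch of decomposing precomposed Hangul syllables (U+AC00..U+D7A3) into
    their constituent jamo (initial consonant, vowel, optional final consonant)
    using the standard Unicode arithmetic. Non-Hangul characters pass through."""
    out = []
    for ch in text:
        code = ord(ch)
        if 0xAC00 <= code <= 0xD7A3:
            idx = code - 0xAC00
            lead, vowel, tail = idx // 588, (idx % 588) // 28, idx % 28
            out.append(chr(0x1100 + lead))    # initial consonant (choseong)
            out.append(chr(0x1161 + vowel))   # vowel (jungseong)
            if tail:
                out.append(chr(0x11A7 + tail))  # final consonant (jongseong)
        else:
            out.append(ch)
    return "".join(out)

print(decompose_hangul("한국"))   # each syllable decomposed into its jamo
```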

110. Multi-Task Neural Model for Agglutinative Language Translation [PDF] 返回目录
  ACL 2020. Student Research Workshop
  Yirong Pan, Xiao Li, Yating Yang, Rui Dong
Neural machine translation (NMT) has recently achieved impressive performance by using large-scale parallel corpora. However, it struggles in the low-resource and morphologically rich scenarios of the agglutinative language translation task. Inspired by the finding that monolingual data can greatly improve NMT performance, we propose a multi-task neural model that jointly learns to perform bi-directional translation and agglutinative language stemming. Our approach employs a shared encoder and decoder to train a single model without changing the standard NMT architecture, instead adding a token before each source-side sentence to specify the desired target output of the two different tasks. Experimental results on Turkish-English and Uyghur-Chinese show that our proposed approach can significantly improve translation performance on agglutinative languages by using a small amount of monolingual data.

111. Efficient Neural Machine Translation for Low-Resource Languages via Exploiting Related Languages [PDF] 返回目录
  ACL 2020. Student Research Workshop
  Vikrant Goyal, Sourav Kumar, Dipti Misra Sharma
A large percentage of the world’s population speaks a language of the Indian subcontinent, comprising languages from both Indo-Aryan (e.g. Hindi, Punjabi, Gujarati, etc.) and Dravidian (e.g. Tamil, Telugu, Malayalam, etc.) families. A universal characteristic of Indian languages is their complex morphology, which, when combined with the general lack of sufficient quantities of high-quality parallel data, can make developing machine translation (MT) systems for these languages difficult. Neural Machine Translation (NMT) is a rapidly advancing MT paradigm and has shown promising results for many language pairs, especially in large training data scenarios. Since the condition of large parallel corpora is not met for Indian-English language pairs, we present our efforts towards building efficient NMT systems between Indian languages (specifically Indo-Aryan languages) and English via efficiently exploiting parallel data from the related languages. We propose a technique called Unified Transliteration and Subword Segmentation to leverage language similarity while exploiting parallel data from related language pairs. We also propose a Multilingual Transfer Learning technique to leverage parallel data from multiple related languages to assist translation for low resource language pair of interest. Our experiments demonstrate an overall average improvement of 5 BLEU points over the standard Transformer-based NMT baselines.

112. Pre-training via Leveraging Assisting Languages for Neural Machine Translation [PDF] 返回目录
  ACL 2020. Student Research Workshop
  Haiyue Song, Raj Dabre, Zhuoyuan Mao, Fei Cheng, Sadao Kurohashi, Eiichiro Sumita
Sequence-to-sequence (S2S) pre-training using large monolingual data is known to improve performance for various S2S NLP tasks. However, large monolingual corpora might not always be available for the languages of interest (LOI). Thus, we propose to exploit monolingual corpora of other languages to complement the scarcity of monolingual corpora for the LOI. We utilize script mapping (Chinese to Japanese) to increase the similarity (number of cognates) between the monolingual corpora of helping languages and LOI. An empirical case study of low-resource Japanese-English neural machine translation (NMT) reveals that leveraging large Chinese and French monolingual corpora can help overcome the shortage of Japanese and English monolingual corpora, respectively, for S2S pre-training. Using only Chinese and French monolingual corpora, we were able to improve Japanese-English translation quality by up to 8.5 BLEU in low-resource scenarios.

113. Checkpoint Reranking: An Approach to Select Better Hypothesis for Neural Machine Translation Systems [PDF] 返回目录
  ACL 2020. Student Research Workshop
  Vinay Pandramish, Dipti Misra Sharma
In this paper, we propose a method for re-ranking the outputs of Neural Machine Translation (NMT) systems. After training an NMT baseline system, we select the outputs of a few of the last training iterations as the N-best list; these iteration outputs have an oracle score up to 1.01 BLEU points higher than the last iteration of the trained system. We come up with a ranking mechanism that relies solely on the decoder’s ability to generate distinct tokens, without using any language model or additional data. With this method, we achieve a translation improvement of up to +0.16 BLEU points over the baseline. We also evaluate our approach when applying a coverage penalty during training. With a moderate coverage penalty, the oracle scores are up to +0.99 BLEU points higher than the final iteration, and our algorithm gives an improvement of up to +0.17 BLEU points. With an excessive penalty, translation quality decreases compared to the baseline system; still, oracle scores increase by up to +1.30 BLEU points, and the re-ranking algorithm yields an improvement of up to +0.15 BLEU points. The proposed re-ranking method is generic and can be extended to other language pairs.
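A minimal sketch of a data-free reranker in this spirit, scoring each hypothesis by its ratio of distinct tokens; the exact scoring used in the paper may differ.

```python
def rerank_by_distinct_tokens(hypotheses):
    """Sketch of a data-free, LM-free reranker: score each hypothesis by its
    ratio of distinct tokens to total tokens (penalizing degenerate repetition)
    and return the hypotheses best-first."""
    def score(hyp):
        tokens = hyp.split()
        return len(set(tokens)) / max(len(tokens), 1)
    return sorted(hypotheses, key=score, reverse=True)

nbest = ["the dog barks loudly",
         "the the the dog barks",
         "dog barks loudly in the yard"]
print(rerank_by_distinct_tokens(nbest)[0])
```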

114. Compositional Generalization by Factorizing Alignment and Translation [PDF] 返回目录
  ACL 2020. Student Research Workshop
  Jacob Russin, Jason Jo, Randall O’Reilly, Yoshua Bengio
Standard methods in deep learning for natural language processing fail to capture the compositional structure of human language that allows for systematic generalization outside of the training distribution. However, human learners readily generalize in this way, e.g. by applying known grammatical rules to novel words. Inspired by work in cognitive science suggesting a functional distinction between systems for syntactic and semantic processing, we implement a modification to an existing approach in neural machine translation, imposing an analogous separation between alignment and translation. The resulting architecture substantially outperforms standard recurrent networks on the SCAN dataset, a compositional generalization task, without any additional supervision. Our work suggests that learning to align and to translate in separate modules may be a useful heuristic for capturing compositional structure.

115. Proceedings of the First Workshop on Automatic Simultaneous Translation [PDF] 返回目录
  ACL 2020. the First Workshop on Automatic Simultaneous Translation
  Hua Wu, Collin Cherry, Liang Huang, Zhongjun He, Mark Liberman, James Cross, Yang Liu


116. Dynamic Sentence Boundary Detection for Simultaneous Translation [PDF] 返回目录
  ACL 2020. the First Workshop on Automatic Simultaneous Translation
  Ruiqing Zhang, Chuanqiang Zhang
Simultaneous translation is a great challenge in which translation starts before the source sentence is finished. Most studies take transcription as input and focus on balancing translation quality and latency for each sentence. However, most ASR systems cannot provide accurate sentence boundaries in real time. Thus, segmenting the streaming words into sentences before translation is a key problem. In this paper, we propose a novel method for sentence boundary detection that treats it as a multi-class classification task under the end-to-end pre-training framework. Experiments show significant improvements in terms of both translation quality and latency.

117. End-to-End Speech Translation with Adversarial Training [PDF] 返回目录
  ACL 2020. the First Workshop on Automatic Simultaneous Translation
  Xuancai Li, Chen Kehai, Tiejun Zhao, Muyun Yang
End-to-end speech translation usually leverages audio-to-text parallel data to train a speech translation model, which has shown impressive results on various speech translation tasks. Due to the high cost of collecting audio-to-text parallel data, speech translation is a natural low-resource translation scenario, which greatly hinders its improvement. In this paper, we propose a new adversarial training method that leverages target monolingual data to relieve the low-resource shortcoming of speech translation. In our method, the existing speech translation model is considered a Generator that produces target-language output, and another neural Discriminator is used to guide the distinction between the outputs of the speech translation model and true target monolingual sentences. Experimental results on the CCMT 2019-BSTC dataset speech translation task demonstrate that the proposed method can significantly improve the performance of the end-to-end speech translation system.

118. Robust Neural Machine Translation with ASR Errors [PDF] 返回目录
  ACL 2020. the First Workshop on Automatic Simultaneous Translation
  Haiyang Xue, Yang Feng, Shuhao Gu, Wei Chen
In many practical applications, neural machine translation systems have to deal with input from automatic speech recognition (ASR) systems, which may contain a certain number of errors. This leads to two problems which degrade translation performance. One is the discrepancy between the training and testing data, and the other is that translation errors caused by the input errors may ruin the whole translation. In this paper, we propose a method to handle these two problems so as to generate translations that are robust to ASR errors. First, we simulate ASR errors in the training data so that the data distribution in training and testing is consistent. Second, we focus on ASR errors involving homophones and words with similar pronunciation, and make use of their pronunciation information to help the translation model recover from the input errors. Experiments on two Chinese-English data sets show that our method is more robust to input errors and can significantly outperform the strong Transformer baseline.

119. Modeling Discourse Structure for Document-level Neural Machine Translation [PDF] 返回目录
  ACL 2020. the First Workshop on Automatic Simultaneous Translation
  Junxuan Chen, Xiang Li, Jiarui Zhang, Chulun Zhou, Jianwei Cui, Bin Wang, Jinsong Su
Recently, document-level neural machine translation (NMT) has become a hot topic in the machine translation community. Despite this success, most existing studies ignore the discourse structure information of the input document to be translated, which has been shown to be effective in other tasks. In this paper, we propose to improve document-level NMT with the aid of discourse structure information. Our encoder is based on a hierarchical attention network (HAN) (Miculicich et al., 2018). Specifically, we first parse the input document to obtain its discourse structure. Then, we introduce a Transformer-based path encoder to embed the discourse structure information of each word. Finally, we combine the discourse structure information with the word embedding before it is fed into the encoder. Experimental results on the English-to-German dataset show that our model can significantly outperform both Transformer and Transformer+HAN.

120. Proceedings of the 17th International Conference on Spoken Language Translation [PDF] 返回目录
  ACL 2020. the 17th International Conference on Spoken Language Translation
  Marcello Federico, Alex Waibel, Kevin Knight, Satoshi Nakamura, Hermann Ney, Jan Niehues, Sebastian Stüker, Dekai Wu, Joseph Mariani, Francois Yvon


121. ON-TRAC Consortium for End-to-End and Simultaneous Speech Translation Challenge Tasks at IWSLT 2020 [PDF] 返回目录
  ACL 2020. the 17th International Conference on Spoken Language Translation
  Maha Elbayad, Ha Nguyen, Fethi Bougares, Natalia Tomashenko, Antoine Caubrière, Benjamin Lecouteux, Yannick Estève, Laurent Besacier
This paper describes the ON-TRAC Consortium translation systems developed for two challenge tracks featured in the Evaluation Campaign of IWSLT 2020, offline speech translation and simultaneous speech translation. ON-TRAC Consortium is composed of researchers from three French academic laboratories: LIA (Avignon Université), LIG (Université Grenoble Alpes), and LIUM (Le Mans Université). Attention-based encoder-decoder models, trained end-to-end, were used for our submissions to the offline speech translation track. Our contributions focused on data augmentation and ensembling of multiple models. In the simultaneous speech translation track, we build on Transformer-based wait-k models for the text-to-text subtask. For speech-to-text simultaneous translation, we attach a wait-k MT system to a hybrid ASR system. We propose an algorithm to control the latency of the ASR+MT cascade and achieve a good latency-quality trade-off on both subtasks.

122. Start-Before-End and End-to-End: Neural Speech Translation by AppTek and RWTH Aachen University [PDF] 返回目录
  ACL 2020. the 17th International Conference on Spoken Language Translation
  Parnia Bahar, Patrick Wilken, Tamer Alkhouli, Andreas Guta, Pavel Golik, Evgeny Matusov, Christian Herold
AppTek and RWTH Aachen University team up to participate in the offline and simultaneous speech translation tracks of IWSLT 2020. For the offline task, we create both cascaded and end-to-end speech translation systems, paying attention to careful data selection and weighting. In the cascaded approach, we combine high-quality hybrid automatic speech recognition (ASR) with Transformer-based neural machine translation (NMT). Our end-to-end direct speech translation systems benefit from pretraining of adapted encoder and decoder components, as well as from synthetic data and fine-tuning, and are thus able to compete with cascaded systems in terms of MT quality. For simultaneous translation, we utilize a novel architecture that makes dynamic decisions, learned from parallel data, to determine when to continue reading input or to generate output words. Experiments with speech and text input show that even at low latency this architecture leads to superior translation results.

123. KIT’s IWSLT 2020 SLT Translation System [PDF] 返回目录
  ACL 2020. the 17th International Conference on Spoken Language Translation
  Ngoc-Quan Pham, Felix Schneider, Tuan-Nam Nguyen, Thanh-Le Ha, Thai Son Nguyen, Maximilian Awiszus, Sebastian Stüker, Alexander Waibel
This paper describes KIT’s submissions to the IWSLT 2020 Speech Translation evaluation campaign. We first participate in the simultaneous translation task, in which our simultaneous models are Transformer based and can be efficiently trained to obtain low latency with minimal compromise in quality. On the offline speech translation task, we applied our new Speech Transformer architecture to end-to-end speech translation. The resulting model provides translation quality that is competitive with a complicated cascade. The latter still has the upper hand, thanks to its ability to transparently access the transcription and to resegment the inputs to avoid fragmentation.

124. End-to-End Simultaneous Translation System for IWSLT2020 Using Modality Agnostic Meta-Learning [PDF] 返回目录
  ACL 2020. the 17th International Conference on Spoken Language Translation
  Hou Jeung Han, Mohd Abbas Zaidi, Sathish Reddy Indurthi, Nikhil Kumar Lakumarapu, Beomseok Lee, Sangha Kim
In this paper, we describe end-to-end simultaneous speech-to-text and text-to-text translation systems submitted to the IWSLT 2020 online translation challenge. The systems are built by adding wait-k and meta-learning approaches to the Transformer architecture, and are evaluated on different latency regimes. The simultaneous text-to-text translation system achieved a BLEU score of 26.38, compared to the competition baseline score of 14.17, on the low latency regime (Average Latency ≤ 3). The simultaneous speech-to-text system improves the BLEU score by 7.7 points over the competition baseline for the low latency regime (Average Latency ≤ 1000).

125. DiDi Labs’ End-to-end System for the IWSLT 2020 Offline Speech Translation Task [PDF] 返回目录
  ACL 2020. the 17th International Conference on Spoken Language Translation
  Arkady Arkhangorodsky, Yiqi Huang, Amittai Axelrod
This paper describes the system that was submitted by DiDi Labs to the offline speech translation task for IWSLT 2020. We trained an end-to-end system that translates audio from English TED talks to German text, without producing intermediate English text. We use the S-Transformer architecture and train on the MuST-C dataset. We also describe several additional experiments that were attempted but did not yield improved results.

126. End-to-End Offline Speech Translation System for IWSLT 2020 using Modality Agnostic Meta-Learning [PDF] 返回目录
  ACL 2020. the 17th International Conference on Spoken Language Translation
  Nikhil Kumar Lakumarapu, Beomseok Lee, Sathish Reddy Indurthi, Hou Jeung Han, Mohd Abbas Zaidi, Sangha Kim
In this paper, we describe the system submitted to the IWSLT 2020 Offline Speech Translation Task. We adopt the Transformer architecture coupled with the meta-learning approach to build our end-to-end Speech-to-Text Translation (ST) system. Our meta-learning approach tackles the data scarcity of the ST task by leveraging the data available from Automatic Speech Recognition (ASR) and Machine Translation (MT) tasks. The meta-learning approach combined with synthetic data augmentation techniques improves the model performance significantly and achieves BLEU scores of 24.58, 27.51, and 27.61 on IWSLT test 2015, MuST-C test, and Europarl-ST test sets respectively.

127. End-to-End Speech-Translation with Knowledge Distillation: FBK@IWSLT2020 [PDF] 返回目录
  ACL 2020. the 17th International Conference on Spoken Language Translation
  Marco Gaido, Mattia A. Di Gangi, Matteo Negri, Marco Turchi
This paper describes FBK’s participation in the IWSLT 2020 offline speech translation (ST) task. The task evaluates systems’ ability to translate English TED talks audio into German texts. The test talks are provided in two versions: one contains the data already segmented with automatic tools and the other is the raw data without any segmentation. Participants can decide whether to work on custom segmentation or not. We used the provided segmentation. Our system is an end-to-end model based on an adaptation of the Transformer for speech data. Its training process is the main focus of this paper and it is based on: i) transfer learning (ASR pretraining and knowledge distillation), ii) data augmentation (SpecAugment, time stretch and synthetic data), iii) combining synthetic and real data marked as different domains, and iv) multi-task learning using the CTC loss. Finally, after the training with word-level knowledge distillation is complete, our ST models are fine-tuned using label smoothed cross entropy. Our best model scored 29 BLEU on the MuST-C En-De test set, which is an excellent result compared to recent papers, and 23.7 BLEU on the same data segmented with VAD, showing the need for researching solutions addressing this specific data condition.

128. SRPOL’s System for the IWSLT 2020 End-to-End Speech Translation Task [PDF] 返回目录
  ACL 2020. the 17th International Conference on Spoken Language Translation
  Tomasz Potapczyk, Pawel Przybysz
We took part in the offline end-to-end English to German TED lectures translation task. We based our solution on our last year’s submission. We used a slightly altered Transformer architecture with a ResNet-like convolutional layer preparing the audio input for the Transformer encoder. To improve the model’s translation quality we introduced two regularization techniques and trained on a machine-translated Librispeech corpus in addition to the iwslt-corpus, TED-LIUM 2 and MuST-C corpora. Our best model scored almost 3 BLEU higher than last year’s model. To segment the 2020 test set we used exactly the same procedure as last year.

129. The University of Helsinki Submission to the IWSLT2020 Offline Speech Translation Task [PDF] 返回目录
  ACL 2020. the 17th International Conference on Spoken Language Translation
  Raúl Vázquez, Mikko Aulamo, Umut Sulubacak, Jörg Tiedemann
This paper describes the University of Helsinki Language Technology group’s participation in the IWSLT 2020 offline speech translation task, addressing the translation of English audio into German text. In line with this year’s task objective, we train both cascade and end-to-end systems for spoken language translation. We opt for an end-to-end multitasking architecture with shared internal representations and a cascade approach that follows a standard procedure consisting of ASR, correction, and MT stages. We also describe the experiments that served as a basis for the submitted systems. Our experiments reveal that multitasking training with shared internal representations is not only possible but allows for knowledge-transfer across modalities.

130. LIT Team’s System Description for Japanese-Chinese Machine Translation Task in IWSLT 2020 [PDF] 返回目录
  ACL 2020. the 17th International Conference on Spoken Language Translation
  Yimeng Zhuang, Yuan Zhang, Lijie Wang
This paper describes the LIT Team’s submission to the IWSLT 2020 open domain translation task, focusing primarily on the Japanese-to-Chinese translation direction. Our system is based on the organizers’ baseline system, but we do further work to improve the Transformer baseline through careful data pre-processing. We manage to obtain significant improvements, and this paper aims to share some data processing experience gained in this translation task. Large-scale back-translation on monolingual corpora is also investigated. In addition, we try shared and separate word embeddings, and compare different token granularities, such as the sub-word level. Our Japanese-to-Chinese translation system achieves a BLEU score of 34.0 and ranks 2nd among all participating systems.

131. OPPO’s Machine Translation System for the IWSLT 2020 Open Domain Translation Task [PDF] 返回目录
  ACL 2020. the 17th International Conference on Spoken Language Translation
  Qian Zhang, Xiaopu Li, Dawei Dang, Tingxun Shi, Di Ai, Zhengshan Xue, Jie Hao
In this paper, we describe our machine translation system for the Chinese-Japanese bidirectional translation task (i.e., the open domain translation task) of IWSLT 2020. Our model is based on the Transformer (Vaswani et al., 2017), with the help of many popular and widely proven data preprocessing and augmentation methods. Experiments show that these methods improve the baseline model steadily and significantly.

132. Character Mapping and Ad-hoc Adaptation: Edinburgh’s IWSLT 2020 Open Domain Translation System [PDF] 返回目录
  ACL 2020. the 17th International Conference on Spoken Language Translation
  Pinzhen Chen, Nikolay Bogoychev, Ulrich Germann
This paper describes the University of Edinburgh’s neural machine translation systems submitted to the IWSLT 2020 open domain Japanese↔Chinese translation task. On top of commonplace techniques like tokenisation and corpus cleaning, we explore character mapping and unsupervised decoding-time adaptation. Our techniques focus on leveraging the provided data, and we show the positive impact of each technique through the gradual improvement of BLEU.

133. CASIA’s System for IWSLT 2020 Open Domain Translation [PDF] 返回目录
  ACL 2020. the 17th International Conference on Spoken Language Translation
  Qian Wang, Yuchen Liu, Cong Ma, Yu Lu, Yining Wang, Long Zhou, Yang Zhao, Jiajun Zhang, Chengqing Zong
This paper describes CASIA’s system for the IWSLT 2020 open domain translation task. This year we participate in both the Chinese→Japanese and Japanese→Chinese translation tasks. Our system is a neural machine translation system based on the Transformer model. We augment the training data with knowledge distillation and back translation to improve the translation performance. Domain data classification and weighted domain model ensembling are introduced to generate the final translation result. We compare and analyze the performance on development data with different model settings and different data processing techniques.

134. Deep Blue Sonics’ Submission to IWSLT 2020 Open Domain Translation Task [PDF] 返回目录
  ACL 2020. the 17th International Conference on Spoken Language Translation
  Enmin Su, Yi Ren
We present in this report our submission to the IWSLT 2020 Open Domain Translation Task. We built a data pre-processing pipeline to efficiently handle large noisy web-crawled corpora, which boosts the BLEU score of a widely used Transformer model in this translation task. To tackle the open-domain nature of this task, back-translation is applied to further improve the translation performance.

135. University of Tsukuba’s Machine Translation System for IWSLT20 Open Domain Translation Task [PDF] 返回目录
  ACL 2020. the 17th International Conference on Spoken Language Translation
  Hongyi Cui, Yizhen Wei, Shohei Iida, Takehito Utsuro, Masaaki Nagata
In this paper, we introduce the University of Tsukuba’s submission to the IWSLT20 Open Domain Translation Task. We participate in both the Chinese→Japanese and Japanese→Chinese directions. For both directions, our machine translation systems are based on the Transformer architecture. Several techniques are integrated in order to boost the performance of our models: data filtering, large-scale noised training, model ensembling, reranking and postprocessing. Consequently, our efforts achieve BLEU scores of 33.0 for Chinese→Japanese translation and 32.3 for Japanese→Chinese translation.

136. Xiaomi’s Submissions for IWSLT 2020 Open Domain Translation Task [PDF] 返回目录
  ACL 2020. the 17th International Conference on Spoken Language Translation
  Yuhui Sun, Mengxue Guo, Xiang Li, Jianwei Cui, Bin Wang
This paper describes Xiaomi’s submissions to the IWSLT 2020 shared open domain translation task for the Chinese<->Japanese language pair. We explore different model ensembling strategies based on recent Transformer variants. We further strengthen our systems via several effective techniques, such as data filtering, data selection, tagged back translation, domain adaptation, knowledge distillation, and re-ranking. Our resulting Chinese->Japanese primary system ranked second in terms of character-level BLEU score among all submissions. Our resulting Japanese->Chinese primary system also achieved a competitive performance.

137. ISTIC’s Neural Machine Translation System for IWSLT’2020 [PDF] 返回目录
  ACL 2020. the 17th International Conference on Spoken Language Translation
  Jiaze Wei, Wenbin Liu, Zhenfeng Wu, You Pan, Yanqing He
This paper introduces the technical details of the machine translation system of the Institute of Scientific and Technical Information of China (ISTIC) for the 17th International Conference on Spoken Language Translation (IWSLT 2020). ISTIC participated in both translation tasks of the Open Domain Translation track: the Japanese-to-Chinese MT task and the Chinese-to-Japanese MT task. The paper mainly elaborates on the model framework, data preprocessing methods and decoding strategies adopted in our system. In addition, the system performance on the development set is given under different settings.

138. Octanove Labs’ Japanese-Chinese Open Domain Translation System [PDF] 返回目录
  ACL 2020. the 17th International Conference on Spoken Language Translation
  Masato Hagiwara
This paper describes Octanove Labs’ submission to the IWSLT 2020 open domain translation challenge. In order to build a high-quality Japanese-Chinese neural machine translation (NMT) system, we use a combination of 1) parallel corpus filtering and 2) back-translation. We have shown that, by using heuristic rules and learned classifiers, the size of the parallel data can be reduced by 70% to 90% without much impact on the final MT performance. We have also shown that including the artificially generated parallel data through back-translation further boosts the metric by 17% to 27%, while self-training contributes little. Aside from a small number of parallel sentences annotated for filtering, no external resources have been used to build our system.

139. NAIST’s Machine Translation Systems for IWSLT 2020 Conversational Speech Translation Task [PDF] 返回目录
  ACL 2020. the 17th International Conference on Spoken Language Translation
  Ryo Fukuda, Katsuhito Sudoh, Satoshi Nakamura
This paper describes NAIST’s NMT system submitted to the IWSLT 2020 conversational speech translation task. We focus on the translation of disfluent speech transcripts that include ASR errors and non-grammatical utterances. We tried a domain adaptation method that transfers the style of out-of-domain data (United Nations Parallel Corpus) to be more like the in-domain data (Fisher transcripts). Our results showed that the NMT model with domain adaptation outperformed a baseline. In addition, a slight improvement from the style transfer was observed.

140. Generating Fluent Translations from Disfluent Text Without Access to Fluent References: IIT Bombay@IWSLT2020 [PDF] 返回目录
  ACL 2020. the 17th International Conference on Spoken Language Translation
  Nikhil Saini, Jyotsana Khatri, Preethi Jyothi, Pushpak Bhattacharyya
Machine translation systems perform reasonably well when the input is well-formed speech or text. Conversational speech is spontaneous and inherently consists of many disfluencies. Producing fluent translations of disfluent source text would typically require parallel disfluent to fluent training data. However, fluent translations of spontaneous speech are an additional resource that is tedious to obtain. This work describes the submission of IIT Bombay to the Conversational Speech Translation challenge at IWSLT 2020. We specifically tackle the problem of disfluency removal in disfluent-to-fluent text-to-text translation assuming no access to fluent references during training. Common patterns of disfluency are extracted from disfluent references and a noise induction model is used to simulate them starting from a clean monolingual corpus. This synthetically constructed dataset is then considered as a proxy for labeled data during training. We also make use of additional fluent text in the target language to help generate fluent translations. This work uses no fluent references during training and beats a baseline model by a margin of 4.21 and 3.11 BLEU points where the baseline uses disfluent and fluent references, respectively. Index Terms- disfluency removal, machine translation, noise induction, leveraging monolingual data, denoising for disfluency removal.

141. The HW-TSC Video Speech Translation System at IWSLT 2020 [PDF] 返回目录
  ACL 2020. the 17th International Conference on Spoken Language Translation
  Minghan Wang, Hao Yang, Yao Deng, Ying Qin, Lizhi Lei, Daimeng Wei, Hengchao Shang, Ning Xie, Xiaochun Li, Jiaxian Guo
The paper presents details of our system for the IWSLT Video Speech Translation evaluation. The system works in a cascade form and contains three modules: 1) a proprietary ASR system; 2) a disfluency correction system that aims to remove interregnums and other disfluent expressions with a fine-tuned BERT and a series of rule-based algorithms; and 3) an NMT system based on the Transformer and trained on a massive publicly available corpus.

142. ELITR Non-Native Speech Translation at IWSLT 2020 [PDF] 返回目录
  ACL 2020. the 17th International Conference on Spoken Language Translation
  Dominik Macháček, Jonáš Kratochvíl, Sangeet Sagar, Matúš Žilinec, Ondřej Bojar, Thai-Son Nguyen, Felix Schneider, Philip Williams, Yuekun Yao
This paper is an ELITR system submission for the non-native speech translation task at IWSLT 2020. We describe systems for offline ASR, real-time ASR, and our cascaded approach to offline SLT and real-time SLT. We select our primary candidates from a pool of pre-existing systems, develop a new end-to-end general ASR system, and train a hybrid ASR system on non-native speech. The provided small validation set prevents us from carrying out a complex validation, but we submit all the unselected candidates for contrastive evaluation on the test set.

143. Is 42 the Answer to Everything in Subtitling-oriented Speech Translation? [PDF] 返回目录
  ACL 2020. the 17th International Conference on Spoken Language Translation
  Alina Karakanta, Matteo Negri, Marco Turchi
Subtitling is becoming increasingly important for disseminating information, given the enormous amounts of audiovisual content becoming available daily. Although Neural Machine Translation (NMT) can speed up the process of translating audiovisual content, large manual effort is still required for transcribing the source language, and for spotting and segmenting the text into proper subtitles. Creating proper subtitles in terms of timing and segmentation highly depends on information present in the audio (utterance duration, natural pauses). In this work, we explore two methods for applying Speech Translation (ST) to subtitling, a) a direct end-to-end and b) a classical cascade approach. We discuss the benefit of having access to the source language speech for improving the conformity of the generated subtitles to the spatial and temporal subtitling constraints and show that length is not the answer to everything in the case of subtitling-oriented ST.

144. Re-translation versus Streaming for Simultaneous Translation [PDF] 返回目录
  ACL 2020. the 17th International Conference on Spoken Language Translation
  Naveen Arivazhagan, Colin Cherry, Wolfgang Macherey, George Foster
There has been great progress in improving streaming machine translation, a simultaneous paradigm where the system appends to a growing hypothesis as more source content becomes available. We study a related problem in which revisions to the hypothesis beyond strictly appending words are permitted. This is suitable for applications such as live captioning an audio feed. In this setting, we compare custom streaming approaches to re-translation, a straightforward strategy where each new source token triggers a distinct translation from scratch. We find re-translation to be as good or better than state-of-the-art streaming systems, even when operating under constraints that allow very few revisions. We attribute much of this success to a previously proposed data-augmentation technique that adds prefix-pairs to the training data, which alongside wait-k inference forms a strong baseline for streaming translation. We also highlight re-translation’s ability to wrap arbitrarily powerful MT systems with an experiment showing large improvements from an upgrade to its base model.
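
A rough sketch of the prefix-pair augmentation referred to above, under the simplifying assumption that target prefixes are cut proportionally to the source prefix length (the published technique is more careful than this):

```python
def prefix_pairs(src_tokens, tgt_tokens):
    """Create (source-prefix, target-prefix) training pairs whose target
    length is roughly proportional to the source prefix length."""
    pairs = []
    for i in range(1, len(src_tokens) + 1):
        j = max(1, round(i * len(tgt_tokens) / len(src_tokens)))
        pairs.append((src_tokens[:i], tgt_tokens[:j]))
    return pairs

# Each pair can be added to the training data to teach prefix translation.
for s, t in prefix_pairs("wie geht es dir".split(), "how are you".split()):
    print(s, "->", t)
```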

145. Towards Stream Translation: Adaptive Computation Time for Simultaneous Machine Translation [PDF] 返回目录
  ACL 2020. the 17th International Conference on Spoken Language Translation
  Felix Schneider, Alexander Waibel
Simultaneous machine translation systems rely on a policy to schedule read and write operations in order to begin translating a source sentence before it is complete. In this paper, we demonstrate the use of Adaptive Computation Time (ACT) as an adaptive, learned policy for simultaneous machine translation using the Transformer model, and as a more numerically stable alternative to Monotonic Infinite Lookback Attention (MILk). We achieve state-of-the-art results in terms of latency-quality trade-offs. We also propose a method to use our model on unsegmented input, i.e. without sentence boundaries, simulating the condition of translating output from automatic speech recognition. We present first benchmark results on this task.

146. Neural Simultaneous Speech Translation Using Alignment-Based Chunking [PDF] 返回目录
  ACL 2020. the 17th International Conference on Spoken Language Translation
  Patrick Wilken, Tamer Alkhouli, Evgeny Matusov, Pavel Golik
In simultaneous machine translation, the objective is to determine when to produce a partial translation given a continuous stream of source words, with a trade-off between latency and quality. We propose a neural machine translation (NMT) model that makes dynamic decisions when to continue feeding on input or generate output words. The model is composed of two main components: one to dynamically decide on ending a source chunk, and another that translates the consumed chunk. We train the components jointly and in a manner consistent with the inference conditions. To generate chunked training data, we propose a method that utilizes word alignment while also preserving enough context. We compare models with bidirectional and unidirectional encoders of different depths, both on real speech and text input. Our results on the IWSLT 2020 English-to-German task outperform a wait-k baseline by 2.6 to 3.7% BLEU absolute.
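
A rough sketch of how chunk boundaries could be derived from word alignments, assuming a boundary is allowed wherever no alignment link crosses it; the context-preservation constraints described in the paper are omitted:

```python
def monotone_chunks(alignment, src_len):
    """alignment: set of (src_idx, tgt_idx) links, 0-based.
    Returns source positions after which a chunk may end, i.e. positions
    that no alignment link crosses."""
    boundaries = []
    for i in range(src_len - 1):
        left_max = max((t for s, t in alignment if s <= i), default=-1)
        right_min = min((t for s, t in alignment if s > i), default=float("inf"))
        if left_max < right_min:     # no link crosses this boundary
            boundaries.append(i)
    return boundaries

# Toy example: source words 2 and 3 reorder, so no boundary between them.
links = {(0, 0), (1, 1), (2, 3), (3, 2)}
print(monotone_chunks(links, 4))  # [0, 1]
```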

147. From Speech-to-Speech Translation to Automatic Dubbing [PDF] 返回目录
  ACL 2020. the 17th International Conference on Spoken Language Translation
  Marcello Federico, Robert Enyedi, Roberto Barra-Chicote, Ritwik Giri, Umut Isik, Arvindh Krishnaswamy, Hassan Sawaf
We present enhancements to a speech-to-speech translation pipeline in order to perform automatic dubbing. Our architecture features neural machine translation generating output of preferred length, prosodic alignment of the translation with the original speech segments, neural text-to-speech with fine-tuning of the duration of each utterance, and, finally, audio rendering that enriches the text-to-speech output with background noise and reverberation extracted from the original audio. We report and discuss results of a first subjective evaluation of automatic dubbing of excerpts of TED Talks from English into Italian, which measures the perceived naturalness of automatic dubbing and the relative importance of each proposed enhancement.

148. Joint Translation and Unit Conversion for End-to-end Localization [PDF] 返回目录
  ACL 2020. the 17th International Conference on Spoken Language Translation
  Georgiana Dinu, Prashant Mathur, Marcello Federico, Stanislas Lauly, Yaser Al-Onaizan
A variety of natural language tasks require processing of textual data which contains a mix of natural language and formal languages such as mathematical expressions. In this paper, we take unit conversions as an example and propose a data augmentation technique which leads to models learning both the translation and conversion tasks, as well as how to adequately switch between them, for end-to-end localization.

149. How Human is Machine Translationese? Comparing Human and Machine Translations of Text and Speech [PDF] 返回目录
  ACL 2020. the 17th International Conference on Spoken Language Translation
  Yuri Bizzoni, Tom S Juzek, Cristina España-Bonet, Koel Dutta Chowdhury, Josef van Genabith, Elke Teich
Translationese is a phenomenon present in human translations, simultaneous interpreting, and even machine translations. Some translationese features tend to appear in simultaneous interpreting with higher frequency than in human text translation, but the reasons for this are unclear. This study analyzes translationese patterns in translation, interpreting, and machine translation outputs in order to explore possible reasons. In our analysis we – (i) detail two non-invasive ways of detecting translationese and (ii) compare translationese across human and machine translations from text and speech. We find that machine translation shows traces of translationese, but does not reproduce the patterns found in human translation, offering support to the hypothesis that such patterns are due to the model (human vs machine) rather than to the data (written vs spoken).

150. Proceedings of the Fourth Workshop on Neural Generation and Translation [PDF] 返回目录
  ACL 2020. the Fourth Workshop on Neural Generation and Translation
  Alexandra Birch, Andrew Finch, Hiroaki Hayashi, Kenneth Heafield, Marcin Junczys-Dowmunt, Ioannis Konstas, Xian Li, Graham Neubig, Yusuke Oda


151. Findings of the Fourth Workshop on Neural Generation and Translation [PDF] 返回目录
  ACL 2020. the Fourth Workshop on Neural Generation and Translation
  Kenneth Heafield, Hiroaki Hayashi, Yusuke Oda, Ioannis Konstas, Andrew Finch, Graham Neubig, Xian Li, Alexandra Birch
We describe the findings of the Fourth Workshop on Neural Generation and Translation, held in concert with the annual conference of the Association for Computational Linguistics (ACL 2020). First, we summarize the research trends of papers presented in the proceedings. Second, we describe the results of the three shared tasks: 1) efficient neural machine translation (NMT), where participants were tasked with creating NMT systems that are both accurate and efficient; 2) document-level generation and translation (DGT), where participants were tasked with developing systems that generate summaries from structured data, potentially with assistance from text in another language; and 3) the STAPLE task, the creation of as many plausible translations of a given input text as possible. This last shared task was organised by Duolingo.

152. Compressing Neural Machine Translation Models with 4-bit Precision [PDF] 返回目录
  ACL 2020. the Fourth Workshop on Neural Generation and Translation
  Alham Fikri Aji, Kenneth Heafield
Neural Machine Translation (NMT) is resource-intensive. We design a quantization procedure to compress NMT models so that they better fit devices with limited hardware capability. We use logarithmic quantization, instead of the more commonly used fixed-point quantization, based on the empirical fact that the parameter distribution is not uniform. We find that biases do not take a lot of memory and show that biases can be left uncompressed to improve the overall quality without affecting the compression rate. We also propose to use an error-feedback mechanism during retraining, to preserve the compressed model as a stale gradient. We empirically show that NMT models based on the Transformer or RNN architecture can be compressed to 4-bit precision without any noticeable quality degradation. Models can be compressed down to binary precision, albeit with lower quality. The RNN architecture seems to be more robust to compression than the Transformer.
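
A toy NumPy sketch of logarithmic quantization (a sign plus a small set of power-of-two magnitudes); the retraining with error feedback described above is not shown, and the details here are illustrative rather than the paper's exact procedure:

```python
import numpy as np

def log_quantize(w, bits=4):
    """Quantize weights to sign * 2^e, using 2**(bits-1) exponent levels
    anchored at the largest magnitude in the tensor."""
    levels = 2 ** (bits - 1)
    sign = np.sign(w)
    mag = np.abs(w)
    max_exp = np.floor(np.log2(mag.max()))
    exp = np.clip(np.round(np.log2(np.maximum(mag, 1e-12))),
                  max_exp - levels + 1, max_exp)
    return sign * (2.0 ** exp) * (mag > 0)   # exact zeros stay zero

w = np.random.randn(4, 4).astype(np.float32)
print(log_quantize(w))
```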

153. The Unreasonable Volatility of Neural Machine Translation Models [PDF] 返回目录
  ACL 2020. the Fourth Workshop on Neural Generation and Translation
  Marzieh Fadaee, Christof Monz
Recent works have shown that Neural Machine Translation (NMT) models achieve impressive performance; however, questions about understanding the behavior of these models remain unanswered. We investigate the unexpected volatility of NMT models where the input is semantically and syntactically correct. We discover that with trivial modifications of source sentences, we can identify cases where unexpected changes happen in the translation and, in the worst case, lead to mistranslations. This volatile behavior of translating extremely similar sentences in surprisingly different ways highlights the underlying generalization problem of current NMT models. We find that both RNN and Transformer models display volatile behavior in 26% and 19% of sentence variations, respectively.

154. Distill, Adapt, Distill: Training Small, In-Domain Models for Neural Machine Translation [PDF] 返回目录
  ACL 2020. the Fourth Workshop on Neural Generation and Translation
  Mitchell Gordon, Kevin Duh
We explore best practices for training small, memory efficient machine translation models with sequence-level knowledge distillation in the domain adaptation setting. While both domain adaptation and knowledge distillation are widely-used, their interaction remains little understood. Our large-scale empirical results in machine translation (on three language pairs with three domains each) suggest distilling twice for best performance: once using general-domain data and again using in-domain data with an adapted teacher.

155. Training and Inference Methods for High-Coverage Neural Machine Translation [PDF] 返回目录
  ACL 2020. the Fourth Workshop on Neural Generation and Translation
  Michael Yang, Yixin Liu, Rahul Mayuranath
In this paper, we introduce a system built for the Duolingo Simultaneous Translation And Paraphrase for Language Education (STAPLE) shared task at the 4th Workshop on Neural Generation and Translation (WNGT 2020). We participated in the English-to-Japanese track with a Transformer model pretrained on the JParaCrawl corpus and fine-tuned in two steps on the JESC corpus and then the (smaller) Duolingo training corpus. First, we find it essential to deliberately expose the model to higher-quality translations more often during training for optimal translation performance. For inference, encouraging a small amount of diversity with Diverse Beam Search to improve translation coverage yielded a marginal improvement over regular Beam Search. Finally, using an auxiliary filtering model to filter out unlikely candidates from Beam Search improves performance further. We achieve a weighted F1 score of 27.56% on our own test set, outperforming the STAPLE AWS translations baseline score of 4.31%.

156. English-to-Japanese Diverse Translation by Combining Forward and Backward Outputs [PDF] 返回目录
  ACL 2020. the Fourth Workshop on Neural Generation and Translation
  Masahiro Kaneko, Aizhan Imankulova, Tosho Hirasawa, Mamoru Komachi
We introduce our TMU system submitted to the English-to-Japanese (En→Ja) track of the Simultaneous Translation And Paraphrase for Language Education (STAPLE) shared task at the 4th Workshop on Neural Generation and Translation (WNGT 2020). In most cases, machine translation systems generate a single output for an input sentence; however, in order to assist language learners with better and more diverse feedback, it is helpful to create a machine translation system that is able to produce diverse translations of each input sentence. Creating such systems would normally require complex modifications to a model to ensure the diversity of outputs. In this paper, we investigate whether it is possible to create such systems in a simple way and whether they can produce the desired diverse outputs. In particular, we combine the outputs from forward and backward neural machine translation (NMT) models. Our system achieved third place in the En→Ja track, despite adopting only a simple approach.

157. The ADAPT System Description for the STAPLE 2020 English-to-Portuguese Translation Task [PDF] 返回目录
  ACL 2020. the Fourth Workshop on Neural Generation and Translation
  Rejwanul Haque, Yasmin Moslem, Andy Way
This paper describes the ADAPT Centre’s submission to STAPLE (Simultaneous Translation and Paraphrase for Language Education) 2020, a shared task of the 4th Workshop on Neural Generation and Translation (WNGT), for the English-to-Portuguese translation task. In this shared task, the participants were asked to produce high-coverage sets of plausible translations given English prompts (input source sentences). We present our English-to-Portuguese machine translation (MT) models that were built applying various strategies, e.g. data and sentence selection, monolingual MT for generating alternative translations, and combining multiple n-best translations. Our experiments show that adding the aforementioned techniques to the baseline yields an excellent performance in the English-to-Portuguese translation task.

158. Exploring Model Consensus to Generate Translation Paraphrases [PDF] 返回目录
  ACL 2020. the Fourth Workshop on Neural Generation and Translation
  Zhenhao Li, Marina Fomicheva, Lucia Specia
This paper describes our submission to the 2020 Duolingo Shared Task on Simultaneous Translation And Paraphrase for Language Education (STAPLE). This task focuses on improving the ability of neural MT systems to generate diverse translations. Our submission explores various methods, including N-best translation, Monte Carlo dropout, Diverse Beam Search, Mixture of Experts, Ensembling, and Lexical Substitution. Our main submission is based on the integration of multiple translations from multiple methods using Consensus Voting. Experiments show that the proposed approach achieves a considerable degree of diversity without introducing noisy translations. Our final submission achieves a 0.5510 weighted F1 score on the blind test set for the English-Portuguese track.
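
The consensus-voting combination can be pictured as keeping only translations proposed by enough of the candidate-generation methods; a minimal sketch in which the threshold and inputs are hypothetical:

```python
from collections import Counter

def consensus_vote(candidate_sets, min_votes=2):
    """candidate_sets: one set of candidate translations per method
    (n-best, MC dropout, diverse beam search, ...). Keep translations
    proposed by at least `min_votes` methods."""
    votes = Counter()
    for cands in candidate_sets:
        votes.update(set(cands))
    return [t for t, v in votes.most_common() if v >= min_votes]

methods = [
    {"i am here", "i'm here"},
    {"i am here", "here i am"},
    {"i'm here", "i am here"},
]
print(consensus_vote(methods))  # ['i am here', "i'm here"]
```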

159. Growing Together: Modeling Human Language Learning With n-Best Multi-Checkpoint Machine Translation [PDF] 返回目录
  ACL 2020. the Fourth Workshop on Neural Generation and Translation
  El Moatez Billah Nagoudi, Muhammad Abdul-Mageed, Hasan Cavusoglu
We describe our submission to the 2020 Duolingo Shared Task on Simultaneous Translation And Paraphrase for Language Education (STAPLE). We view MT models at various training stages (i.e., checkpoints) as human learners at different levels. Hence, we employ an ensemble of multiple checkpoints of the same model to generate translation sequences with various levels of fluency. From each checkpoint of our best model, we sample n-best sequences (n=10) with a beam width of 100. We achieve a 37.57 macro F1 with a 6-checkpoint model ensemble on the official shared task test data, outperforming a baseline Amazon translation system at 21.30 macro F1 and ultimately demonstrating the utility of our intuitive method.

160. Generating Diverse Translations via Weighted Fine-tuning and Hypotheses Filtering for the Duolingo STAPLE Task [PDF] 返回目录
  ACL 2020. the Fourth Workshop on Neural Generation and Translation
  Sweta Agrawal, Marine Carpuat
This paper describes the University of Maryland’s submission to the Duolingo Shared Task on Simultaneous Translation And Paraphrase for Language Education (STAPLE). Unlike the standard machine translation task, STAPLE requires generating a set of outputs for a given input sequence, aiming to cover the space of translations produced by language learners. We adapt neural machine translation models to this requirement by (a) generating n-best translation hypotheses from a model fine-tuned on learner translations, oversampled to reflect the distribution of learner responses, and (b) filtering hypotheses using a feature-rich binary classifier that directly optimizes a close approximation of the official evaluation metric. Combination of systems that use these two strategies achieves F1 scores of 53.9% and 52.5% on Vietnamese and Portuguese, respectively ranking 2nd and 4th on the leaderboard.

161. The JHU Submission to the 2020 Duolingo Shared Task on Simultaneous Translation and Paraphrase for Language Education [PDF] 返回目录
  ACL 2020. the Fourth Workshop on Neural Generation and Translation
  Huda Khayrallah, Jacob Bremerman, Arya D. McCarthy, Kenton Murray, Winston Wu, Matt Post
This paper presents the Johns Hopkins University submission to the 2020 Duolingo Shared Task on Simultaneous Translation and Paraphrase for Language Education (STAPLE). We participated in all five language tasks, placing first in each. Our approach involved a language-agnostic pipeline of three components: (1) building strong machine translation systems on general-domain data, (2) fine-tuning on Duolingo-provided data, and (3) generating n-best lists which are then filtered with various score-based techniques. In addition to the language-agnostic pipeline, we attempted a number of linguistically-motivated approaches, with, unfortunately, little success. We also find that improving BLEU performance of the beam-search generated translation does not necessarily improve on the task metric, weighted macro F1 of an n-best list.

162. Simultaneous paraphrasing and translation by fine-tuning Transformer models [PDF] 返回目录
  ACL 2020. the Fourth Workshop on Neural Generation and Translation
  Rakesh Chada
This paper describes the third place submission to the shared task on simultaneous translation and paraphrasing for language education at the 4th workshop on Neural Generation and Translation (WNGT) for ACL 2020. The final system leverages pre-trained translation models and uses a Transformer architecture combined with an oversampling strategy to achieve a competitive performance. This system significantly outperforms the baseline on Hungarian (27% absolute improvement in Weighted Macro F1 score) and Portuguese (33% absolute improvement) languages.

163. Efficient and High-Quality Neural Machine Translation with OpenNMT [PDF] 返回目录
  ACL 2020. the Fourth Workshop on Neural Generation and Translation
  Guillaume Klein, Dakun Zhang, Clément Chouteau, Josep Crego, Jean Senellart
This paper describes the OpenNMT submissions to the WNGT 2020 efficiency shared task. We explore training and acceleration of Transformer models with various sizes that are trained in a teacher-student setup. We also present a custom and optimized C++ inference engine that enables fast CPU and GPU decoding with few dependencies. By combining additional optimizations and parallelization techniques, we create small, efficient, and high-quality neural machine translation models.

164. Edinburgh’s Submissions to the 2020 Machine Translation Efficiency Task [PDF] 返回目录
  ACL 2020. the Fourth Workshop on Neural Generation and Translation
  Nikolay Bogoychev, Roman Grundkiewicz, Alham Fikri Aji, Maximiliana Behnke, Kenneth Heafield, Sidharth Kashyap, Emmanouil-Ioannis Farsarakis, Mateusz Chudyk
We participated in all tracks of the Workshop on Neural Generation and Translation 2020 Efficiency Shared Task: single-core CPU, multi-core CPU, and GPU. At the model level, we use teacher-student training with a variety of student sizes, tie embeddings and sometimes layers, use the Simpler Simple Recurrent Unit, and introduce head pruning. On GPUs, we used 16-bit floating-point tensor cores. On CPUs, we customized 8-bit quantization and multiple processes with affinity for the multi-core setting. To reduce model size, we experimented with 4-bit log quantization but use floats at runtime. In the shared task, most of our submissions were Pareto optimal with respect to the trade-off between time and quality.

165. Improving Document-Level Neural Machine Translation with Domain Adaptation [PDF] 返回目录
  ACL 2020. the Fourth Workshop on Neural Generation and Translation
  Sami Ul Haq, Sadaf Abdul Rauf, Arslan Shoukat, Noor-e- Hira
Recent studies have shown that the translation quality of NMT systems can be improved by providing document-level contextual information. In general, sentence-based NMT models are extended to capture contextual information from large-scale document-level corpora, which are difficult to acquire. Domain adaptation, on the other hand, promises to adapt components of already developed systems by exploiting limited in-domain data. This paper presents FJWU’s system submission at WNGT; we specifically participated in the document-level MT task for German-English translation. Our system is based on a context-aware Transformer model developed on top of the original NMT architecture by integrating contextual information using attention networks. Our experimental results show that providing previous sentences as context significantly improves the BLEU score compared to a strong NMT baseline. We also studied the impact of domain adaptation on document-level translation and were able to improve results by adapting the systems according to the testing domain.

166. Simultaneous Translation and Paraphrase for Language Education [PDF] 返回目录
  ACL 2020. the Fourth Workshop on Neural Generation and Translation
  Stephen Mayhew, Klinton Bicknell, Chris Brust, Bill McDowell, Will Monroe, Burr Settles
We present the task of Simultaneous Translation and Paraphrasing for Language Education (STAPLE). Given a prompt in one language, the goal is to generate a diverse set of correct translations that language learners are likely to produce. This is motivated by the need to create and maintain large, high-quality sets of acceptable translations for exercises in a language-learning application, and synthesizes work spanning machine translation, MT evaluation, automatic paraphrasing, and language education technology. We developed a novel corpus with unique properties for five languages (Hungarian, Japanese, Korean, Portuguese, and Vietnamese), and report on the results of a shared task challenge which attracted 20 teams to solve the task. In our meta-analysis, we focus on three aspects of the resulting systems: external training corpus selection, model architecture and training decisions, and decoding and filtering strategies. We find that strong systems start with a large amount of generic training data, and then fine-tune with in-domain data, sampled according to our provided learner response frequencies.

167. A Translation-Based Approach to Morphology Learning for Low Resource Languages [PDF] 返回目录
  ACL 2020.
  Tewodros Gebreselassie, Amanuel Mersha, Michael Gasser
“Low resource languages” usually refers to languages that lack corpora and basic tools such as part-of-speech taggers. But a significant number of such languages do benefit from the availability of relatively complex linguistic descriptions of phonology, morphology, and syntax, as well as dictionaries. A further category, probably the majority of the world’s languages, suffers from the lack of even these resources. In this paper, we investigate the possibility of learning the morphology of such a language by relying on its close relationship to a language with more resources. Specifically, we use a transfer-based approach to learn the morphology of the severely under-resourced language Gofa, starting with a neural morphological generator for the closely related language, Wolaytta. Both languages are members of the Omotic family, spoken in southwestern Ethiopia, and, like other Omotic languages, both are morphologically complex. We first create a finite-state transducer for morphological analysis and generation for Wolaytta, based on relatively complete linguistic descriptions and lexicons for the language. Next, we train an encoder-decoder neural network on the task of morphological generation for Wolaytta, using data generated by the FST. Such a network takes a root and a set of grammatical features as input and generates a word form as output. We then elicit Gofa translations of a small set of Wolaytta words from bilingual speakers. Finally, we retrain the decoder of the Wolaytta network, using a small set of Gofa target words that are translations of the Wolaytta outputs of the original network. The evaluation shows that the transfer network performs better than a separate encoder-decoder network trained on a larger set of Gofa words. We conclude with implications for the learning of morphology for severely under-resourced languages in regions where there are related languages with more resources.

168. FFR v1.1: Fon-French Neural Machine Translation [PDF] 返回目录
  ACL 2020.
  Chris Chinenye Emezue, Femi Pancrace Bonaventure Dossou
All over the world and especially in Africa, researchers are putting efforts into building Neural Machine Translation (NMT) systems to help tackle the language barriers in Africa, a continent of over 2000 different languages. However, the low-resourceness, diacritical, and tonal complexities of African languages are major issues being faced. The FFR project is a major step towards creating a robust translation model from Fon, a very low-resource and tonal language, to French, for research and public use. In this paper, we introduce FFR Dataset, a corpus of Fon-to-French translations, describe the diacritical encoding process, and introduce our FFR v1.1 model, trained on the dataset. The dataset and model are made publicly available, to promote collaboration and reproducibility.

169. Towards Mitigating Gender Bias in a decoder-based Neural Machine Translation model by Adding Contextual Information [PDF] 返回目录
  ACL 2020.
  Christine Basta, Marta R. Costa-jussà, José A. R. Fonollosa
Gender bias negatively impacts many natural language processing applications, including machine translation (MT). The motivation behind this work is to study whether recent proposed MT techniques are significantly contributing to attenuate biases in document-level and gender-balanced data. For the study, we consider approaches of adding the previous sentence and the speaker information, implemented in a decoder-based neural MT system. We show improvements both in translation quality (+1 BLEU point) as well as in gender bias mitigation on WinoMT (+5% accuracy).

170. Multitask Models for Controlling the Complexity of Neural Machine Translation [PDF] 返回目录
  ACL 2020.
  Sweta Agrawal, Marine Carpuat
We introduce a machine translation task where the output is aimed at audiences of different levels of target language proficiency. We collect a novel dataset of news articles available in English and Spanish and written for diverse reading grade levels. We leverage this dataset to train multitask sequence to sequence models that translate Spanish into English targeted at an easier reading grade level than the original Spanish. We show that multitask models outperform pipeline approaches that translate and simplify text independently.

171. HausaMT v1.0: Towards English–Hausa Neural Machine Translation [PDF] 返回目录
  ACL 2020.
  Adewale Akinfaderin
Neural Machine Translation (NMT) for low-resource languages suffers from low performance because of the lack of large amounts of parallel data and language diversity. To contribute to ameliorating this problem, we built a baseline model for English–Hausa machine translation, which is considered a task for a low-resource language. The Hausa language is the second largest Afro–Asiatic language in the world after Arabic, and it is the third largest language for trading across a large swath of West African countries, after English and French. In this paper, we curated different datasets containing a Hausa–English parallel corpus for our translation. We trained baseline models and evaluated their performance using Recurrent and Transformer encoder–decoder architectures with two tokenization approaches: standard word-level tokenization and Byte Pair Encoding (BPE) subword tokenization.

172. An Evaluation of Subword Segmentation Strategies for Neural Machine Translation of Morphologically Rich Languages [PDF] 返回目录
  ACL 2020.
  Aquia Richburg, Ramy Eskander, Smaranda Muresan, Marine Carpuat
Byte-Pair Encoding (BPE) (Sennrich et al., 2016) has become a standard pre-processing step when building neural machine translation systems. However, it is not clear whether this is an optimal strategy in all settings. We conduct a controlled comparison of subword segmentation strategies for translating two low-resource morphologically rich languages (Swahili and Turkish) into English. We show that segmentations based on a unigram language model (Kudo, 2018) yield comparable BLEU and better recall for translating rare source words than BPE.
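
A comparison along these lines can be reproduced with the SentencePiece toolkit, which implements both BPE and unigram-LM segmentation; a sketch assuming a recent sentencepiece Python package and placeholder file names:

```python
import sentencepiece as spm

# Train both segmentation models on the same (placeholder) source-side corpus.
for model_type in ("bpe", "unigram"):
    spm.SentencePieceTrainer.train(
        input="train.sw",            # placeholder path to the Swahili side
        model_prefix=f"sw_{model_type}",
        vocab_size=8000,
        model_type=model_type,
    )

sp_bpe = spm.SentencePieceProcessor(model_file="sw_bpe.model")
sp_uni = spm.SentencePieceProcessor(model_file="sw_unigram.model")
sentence = "nimefurahi kukutana nawe"
print(sp_bpe.encode(sentence, out_type=str))   # BPE segmentation
print(sp_uni.encode(sentence, out_type=str))   # unigram-LM segmentation
```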

173. On the Linguistic Representational Power of Neural Machine Translation Models [PDF] 返回目录
  CL 2020.
  Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, James Glass
Despite the recent success of deep neural networks in natural language processing and other spheres of artificial intelligence, their interpretability remains a challenge. We analyze the representations learned by neural machine translation (NMT) models at various levels of granularity and evaluate their quality through relevant extrinsic properties. In particular, we seek answers to the following questions: (i) How accurately is word structure captured within the learned representations, which is an important aspect in translating morphologically rich languages? (ii) Do the representations capture long-range dependencies, and effectively handle syntactically divergent languages? (iii) Do the representations capture lexical semantics? We conduct a thorough investigation along several parameters: (i) Which layers in the architecture capture each of these linguistic phenomena; (ii) How does the choice of translation unit (word, character, or subword unit) impact the linguistic properties captured by the underlying representations? (iii) Do the encoder and decoder learn differently and independently? (iv) Do the representations learned by multilingual NMT models capture the same amount of linguistic information as their bilingual counterparts? Our data-driven, quantitative evaluation illuminates important aspects in NMT models and their ability to capture various linguistic phenomena. We show that deep NMT models trained in an end-to-end fashion, without being provided any direct supervision during the training process, learn a non-trivial amount of linguistic information. Notable findings include the following observations: (i) Word morphology and part-of-speech information are captured at the lower layers of the model; (ii) In contrast, lexical semantics or non-local syntactic and semantic dependencies are better represented at the higher layers of the model; (iii) Representations learned using characters are more informed about word-morphology compared to those learned using subword units; and (iv) Representations learned by multilingual models are richer compared to bilingual models.

174. Unsupervised Word Translation with Adversarial Autoencoder [PDF] 返回目录
  CL 2020.
  Tasnim Mohiuddin, Shafiq Joty
Crosslingual word embeddings learned from monolingual embeddings have a crucial role in many downstream tasks, ranging from machine translation to transfer learning. Adversarial training has shown impressive success in learning crosslingual embeddings and the associated word translation task without any parallel data by mapping monolingual embeddings to a shared space. However, recent work has shown superior performance for non-adversarial methods in more challenging language pairs. In this article, we investigate adversarial autoencoder for unsupervised word translation and propose two novel extensions to it that yield more stable training and improved results. Our method includes regularization terms to enforce cycle consistency and input reconstruction, and puts the target encoders as an adversary against the corresponding discriminator. We use two types of refinement procedures sequentially after obtaining the trained encoders and mappings from the adversarial training, namely, refinement with Procrustes solution and refinement with symmetric re-weighting. Extensive experimentations with high- and low-resource languages from two different data sets show that our method achieves better performance than existing adversarial and non-adversarial approaches and is also competitive with the supervised system. Along with performing comprehensive ablation studies to understand the contribution of different components of our adversarial model, we also conduct a thorough analysis of the refinement procedures to understand their effects.

175. A Systematic Study of Inner-Attention-Based Sentence Representations in Multilingual Neural Machine Translation [PDF] 返回目录
  CL 2020.
  Raúl Vázquez, Alessandro Raganato, Mathias Creutz, Jörg Tiedemann
Neural machine translation has considerably improved the quality of automatic translations by learning good representations of input sentences. In this article, we explore a multilingual translation model capable of producing fixed-size sentence representations by incorporating an intermediate crosslingual shared layer, which we refer to as attention bridge. This layer exploits the semantics from each language and develops into a language-agnostic meaning representation that can be efficiently used for transfer learning. We systematically study the impact of the size of the attention bridge and the effect of including additional languages in the model. In contrast to related previous work, we demonstrate that there is no conflict between translation performance and the use of sentence representations in downstream tasks. In particular, we show that larger intermediate layers not only improve translation quality, especially for long sentences, but also push the accuracy of trainable classification tasks. Nevertheless, shorter representations lead to increased compression that is beneficial in non-trainable similarity tasks. Similarly, we show that trainable downstream tasks benefit from multilingual models, whereas additional language signals do not improve performance in non-trainable benchmarks. This is an important insight that helps to properly design models for specific applications. Finally, we also include an in-depth analysis of the proposed attention bridge and its ability to encode linguistic properties. We carefully analyze the information that is captured by individual attention heads and identify interesting patterns that explain the performance of specific settings in linguistic probing tasks.

176. Imitation Attacks and Defenses for Black-box Machine Translation Systems [PDF] 返回目录
  EMNLP 2020. Long Paper
  Eric Wallace, Mitchell Stern, Dawn Song
Adversaries may look to steal or attack black-box NLP systems, either for financial gain or to exploit model errors. One setting of particular interest is machine translation (MT), where models have high commercial value and errors can be costly. We investigate possible exploitations of black-box MT systems and explore a preliminary defense against such threats. We first show that MT systems can be stolen by querying them with monolingual sentences and training models to imitate their outputs. Using simulated experiments, we demonstrate that MT model stealing is possible even when imitation models have different input data or architectures than their target models. Applying these ideas, we train imitation models that reach within 0.6 BLEU of three production MT systems on both high-resource and low-resource language pairs. We then leverage the similarity of our imitation models to transfer adversarial examples to the production systems. We use gradient-based attacks that expose inputs which lead to semantically-incorrect translations, dropped content, and vulgar model outputs. To mitigate these vulnerabilities, we propose a defense that modifies translation outputs in order to misdirect the optimization of imitation models. This defense degrades the adversary's BLEU score and attack success rate at some cost in the defender's BLEU and inference speed.

177. Shallow-to-Deep Training for Neural Machine Translation [PDF] 返回目录
  EMNLP 2020. Long Paper
  Bei Li, Ziyang Wang, Hui Liu, Yufan Jiang, Quan Du, Tong Xiao, Huizhen Wang, Jingbo Zhu
Deep encoders have been proven to be effective in improving neural machine translation (NMT) systems, but training an extremely deep encoder is time consuming. Moreover, why deep models help NMT is an open question. In this paper, we investigate the behavior of a well-tuned deep Transformer system. We find that stacking layers is helpful in improving the representation ability of NMT models and adjacent layers perform similarly. This inspires us to develop a shallow-to-deep training method that learns deep models by stacking shallow models. In this way, we successfully train a Transformer system with a 54-layer encoder. Experimental results on WMT’16 English-German and WMT’14 English-French translation tasks show that it is 1.4 times faster than training from scratch.

178. CSP: Code-Switching Pre-training for Neural Machine Translation [PDF] 返回目录
  EMNLP 2020. Long Paper
  Zhen Yang, Bojie Hu, ambyera han, shen huang, Qi Ju
This paper proposes a new pre-training method, called Code-Switching Pre-training (CSP for short) for Neural Machine Translation (NMT). Unlike the traditional pre-training method, which randomly masks some fragments of the input sentence, the proposed CSP randomly replaces some words in the source sentence with their translation words in the target language. Specifically, we first perform lexicon induction with unsupervised word embedding mapping between the source and target languages, and then randomly replace some words in the input sentence with their translation words according to the extracted translation lexicons. CSP adopts the encoder-decoder framework: its encoder takes the code-mixed sentence as input, and its decoder predicts the replaced fragment of the input sentence. In this way, CSP is able to pre-train the NMT model by explicitly making the most of the alignment information extracted from the source and target monolingual corpora. Additionally, we relieve the pretrain-finetune discrepancy caused by artificial symbols like [mask]. To verify the effectiveness of the proposed method, we conduct extensive experiments on unsupervised and supervised NMT. Experimental results show that CSP achieves significant improvements over baselines without pre-training or with other pre-training methods.
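
A minimal sketch of the code-switching replacement step described above, assuming a toy bilingual lexicon; the lexicon and replacement ratio below are illustrative stand-ins for the translation lexicon that CSP induces with unsupervised embedding mapping:

```python
import random

def code_switch(tokens, lexicon, ratio=0.15, seed=None):
    """Randomly replace source tokens with their target-language translations."""
    rng = random.Random(seed)
    mixed, replaced = [], []
    for i, tok in enumerate(tokens):
        if tok in lexicon and rng.random() < ratio:
            mixed.append(lexicon[tok])   # substitute the target-side translation
            replaced.append((i, tok))    # the decoder is trained to predict these
        else:
            mixed.append(tok)
    return mixed, replaced

# Toy usage with a hypothetical German-English lexicon.
lexicon = {"Haus": "house", "Katze": "cat", "liest": "reads"}
src = ["die", "Katze", "liest", "im", "Haus"]
print(code_switch(src, lexicon, ratio=0.5, seed=0))
```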

179. Non-Autoregressive Machine Translation with Latent Alignments [PDF] 返回目录
  EMNLP 2020. Long Paper
  Chitwan Saharia, William Chan, Saurabh Saxena, Mohammad Norouzi
This paper presents two strong methods, CTC and Imputer, for non-autoregressive machine translation that model latent alignments with dynamic programming. We revisit CTC for machine translation and demonstrate that a simple CTC model can achieve state-of-the-art for single-step non-autoregressive machine translation, contrary to what prior work indicates. In addition, we adapt the Imputer model for non-autoregressive machine translation and demonstrate that Imputer with just 4 generation steps can match the performance of an autoregressive Transformer baseline. Our latent alignment models are simpler than many existing non-autoregressive translation baselines; for example, we do not require target length prediction or re-scoring with an autoregressive model. On the competitive WMT'14 En→De task, our CTC model achieves 25.7 BLEU with a single generation step, while Imputer achieves 27.5 BLEU with 2 generation steps, and 28.0 BLEU with 4 generation steps. This compares favourably to the autoregressive Transformer baseline at 27.8 BLEU.

180. Language Model Prior for Low-Resource Neural Machine Translation [PDF] 返回目录
  EMNLP 2020. Long Paper
  Christos Baziotis, Barry Haddow, Alexandra Birch
The scarcity of large parallel corpora is an important obstacle for neural machine translation. A common solution is to exploit the knowledge of language models (LM) trained on abundant monolingual data. In this work, we propose a novel approach to incorporate a LM as prior in a neural translation model (TM). Specifically, we add a regularization term, which pushes the output distributions of the TM to be probable under the LM prior, while avoiding wrong predictions when the TM "disagrees" with the LM. This objective relates to knowledge distillation, where the LM can be viewed as teaching the TM about the target language. The proposed approach does not compromise decoding speed, because the LM is used only at training time, unlike previous work that requires it during inference. We present an analysis of the effects that different methods have on the distributions of the TM. Results on two low-resource machine translation datasets show clear improvements even with limited monolingual data.
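
A minimal sketch of the general idea of adding an LM-prior regularizer to the usual translation loss; the KL form, temperature, and weight below are assumptions for illustration, not the paper's exact objective:

```python
import torch
import torch.nn.functional as F

def loss_with_lm_prior(tm_logits, lm_logits, gold_ids, pad_id, lam=0.5, tau=2.0):
    """tm_logits, lm_logits: [batch, tgt_len, vocab]; gold_ids: [batch, tgt_len]."""
    ce = F.cross_entropy(tm_logits.view(-1, tm_logits.size(-1)),
                         gold_ids.view(-1), ignore_index=pad_id)
    # Regularizer: push the TM's output distribution towards the LM prior.
    kl = F.kl_div(F.log_softmax(tm_logits / tau, dim=-1),
                  F.softmax(lm_logits / tau, dim=-1),
                  reduction="batchmean")
    return ce + lam * kl

# Toy tensors, only to show the call signature.
B, T, V = 2, 5, 100
loss = loss_with_lm_prior(torch.randn(B, T, V), torch.randn(B, T, V),
                          torch.randint(0, V, (B, T)), pad_id=0)
print(loss.item())
```

Since the LM only appears in this training-time term, decoding is unchanged, which matches the abstract's point about inference speed.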

181. Losing Heads in the Lottery: Pruning Transformer Attention in Neural Machine Translation [PDF] 返回目录
  EMNLP 2020. Long Paper
  Maximiliana Behnke, Kenneth Heafield
The attention mechanism is the crucial component of the transformer architecture. Recent research shows that most attention heads are not confident in their decisions and can be pruned. However, removing them before training a model results in lower quality. In this paper, we apply the lottery ticket hypothesis to prune heads in the early stages of training. Our experiments on machine translation show that it is possible to remove up to three-quarters of attention heads from transformer-big during early training with an average -0.1 change in BLEU for Turkish→English. The pruned model is 1.5 times as fast at inference, albeit at the cost of longer training. Our method is complementary to other approaches, such as teacher-student, with English→German student model gaining an additional 10% speed-up with 75% encoder attention removed and 0.2 BLEU loss.
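
As a rough illustration of how pruned heads can be masked out, the sketch below gates each head's output; the lottery-ticket criterion the paper uses to decide which heads to prune early in training is not reproduced here:

```python
import torch
import torch.nn as nn

class HeadGate(nn.Module):
    """Per-head gate: multiplying a head's output by 0 effectively prunes it."""
    def __init__(self, n_heads):
        super().__init__()
        self.register_buffer("mask", torch.ones(n_heads))

    def prune(self, head_ids):
        self.mask[list(head_ids)] = 0.0

    def forward(self, head_outputs):            # [batch, n_heads, seq_len, d_head]
        return head_outputs * self.mask.view(1, -1, 1, 1)

gate = HeadGate(n_heads=8)
gate.prune([1, 5, 6])                           # drop three heads early in training
out = gate(torch.randn(2, 8, 10, 64))           # pruned heads now contribute zeros
```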

182. Translationese in Machine Translation Evaluation [PDF] 返回目录
  EMNLP 2020. Long Paper
  Yvette Graham, Barry Haddow, Philipp Koehn
The term translationese has been used to describe features of translated text, and in this paper, we provide detailed analysis of potential adverse effects of translationese on machine translation evaluation. Our analysis shows differences in conclusions drawn from evaluations that include translationese in test data compared to experiments that tested only with text originally composed in that language. For this reason we recommend that reverse-created test data be omitted from future machine translation test sets. In addition, we provide a re-evaluation of a past machine translation evaluation claiming human-parity of MT. One important issue not previously considered is statistical power of significance tests applied to comparison of human and machine translation. Since the very aim of past evaluations was investigation of ties between human and MT systems, power analysis is of particular importance, to avoid, for example, claims of human parity simply corresponding to Type II error resulting from the application of a low-powered test. We provide detailed analysis of tests used in such evaluations to provide an indication of a suitable minimum sample size for future studies.

183. Towards Enhancing Faithfulness for Neural Machine Translation [PDF] 返回目录
  EMNLP 2020. Long Paper
  Rongxiang Weng, Heng Yu, Xiangpeng Wei, Weihua Luo
Neural machine translation (NMT) has achieved great success due to the ability to generate high-quality sentences. Compared with human translations, one of the drawbacks of current NMT is that translations are not usually faithful to the input, e.g., omitting information or generating unrelated fragments, which inevitably decreases the overall quality, especially for human readers. In this paper, we propose a novel training strategy with a multi-task learning paradigm to build a faithfulness enhanced NMT model (named FEnmt). During the NMT training process, we sample a subset from the training set and translate them to get fragments that have been mistranslated. Afterward, the proposed multi-task learning paradigm is employed on both encoder and decoder to guide NMT to correctly translate these fragments. Both automatic and human evaluations verify that our FEnmt could improve translation quality by effectively reducing unfaithful translations.

184. Automatic Machine Translation Evaluation in Many Languages via Zero-Shot Paraphrasing [PDF] 返回目录
  EMNLP 2020. Long Paper
  Brian Thompson, Matt Post
We frame the task of machine translation evaluation as one of scoring machine translation output with a sequence-to-sequence paraphraser, conditioned on a human reference. We propose training the paraphraser as a multilingual NMT system, treating paraphrasing as a zero-shot translation task (e.g., Czech to Czech). This results in the paraphraser’s output mode being centered around a copy of the input sequence, which represents the best case scenario where the MT system output matches a human reference. Our method is simple and intuitive, and does not require human judgements for training. Our single model (trained in 39 languages) outperforms or statistically ties with all prior metrics on the WMT 2019 segment-level shared metrics task in all languages (excluding Gujarati where the model had no training data). We also explore using our model for the task of quality estimation as a metric—conditioning on the source instead of the reference—and find that it significantly outperforms every submission to the WMT 2019 shared task on quality estimation in every language pair.

185. Uncertainty-Aware Semantic Augmentation for Neural Machine Translation [PDF] 返回目录
  EMNLP 2020. Long Paper
  Xiangpeng Wei, Heng Yu, Yue Hu, Rongxiang Weng, Luxi Xing, Weihua Luo
As a sequence-to-sequence generation task, neural machine translation (NMT) naturally contains intrinsic uncertainty, where a single sentence in one language has multiple valid counterparts in the other. However, the dominant methods for NMT only observe one of them from the parallel corpora for the model training but have to deal with adequate variations under the same meaning at inference. This leads to a discrepancy of the data distribution between the training and the inference phases. To address this problem, we propose uncertainty-aware semantic augmentation, which explicitly captures the universal semantic information among multiple semantically-equivalent source sentences and enhances the hidden representations with this information for better translations. Extensive experiments on various translation tasks reveal that our approach significantly outperforms the strong baselines and the existing methods.

186. Iterative Domain-Repaired Back-Translation [PDF] 返回目录
  EMNLP 2020. Long Paper
  Hao-Ran Wei, Zhirui Zhang, Boxing Chen, Weihua Luo
In this paper, we focus on the domain-specific translation with low resources, where in-domain parallel corpora are scarce or nonexistent. One common and effective strategy for this case is exploiting in-domain monolingual data with the back-translation method. However, the synthetic parallel data is very noisy because they are generated by imperfect out-of-domain systems, resulting in the poor performance of domain adaptation. To address this issue, we propose a novel iterative domain-repaired back-translation framework, which introduces the Domain-Repair (DR) model to refine translations in synthetic bilingual data. To this end, we construct corresponding data for the DR model training by round-trip translating the monolingual sentences, and then design the unified training framework to optimize paired DR and NMT models jointly. Experiments on adapting NMT models between specific domains and from the general domain to specific domains demonstrate the effectiveness of our proposed approach, achieving 15.79 and 4.47 BLEU improvements on average over unadapted models and back-translation.

187. Self-Induced Curriculum Learning in Self-Supervised Neural Machine Translation [PDF] 返回目录
  EMNLP 2020. Long Paper
  Dana Ruiter, Josef van Genabith, Cristina España-Bonet
Self-supervised neural machine translation (SSNMT) jointly learns to identify and select suitable training data from comparable (rather than parallel) corpora and to translate, in a way that the two tasks support each other in a virtuous circle. In this study, we provide an in-depth analysis of the sampling choices the SSNMT model makes during training. We show how, without it having been told to do so, the model self-selects samples of increasing (i) complexity and (ii) task-relevance in combination with (iii) performing a denoising curriculum. We observe that the dynamics of the mutual-supervision signals of both system internal representation types are vital for the extraction and translation performance. We show that in terms of the Gunning-Fog Readability index, SSNMT starts extracting and learning from Wikipedia data suitable for high school students and quickly moves towards content suitable for first year undergraduate students.

188. Simultaneous Machine Translation with Visual Context [PDF] 返回目录
  EMNLP 2020. Long Paper
  Ozan Caglayan, Julia Ive, Veneta Haralampieva, Pranava Madhyastha, Loïc Barrault, Lucia Specia
Simultaneous machine translation (SiMT) aims to translate a continuous input text stream into another language with the lowest latency and highest quality possible. The translation thus has to start with an incomplete source text, which is read progressively, creating the need for anticipation. In this paper, we seek to understand whether the addition of visual information can compensate for the missing source context. To this end, we analyse the impact of different multimodal approaches and visual features on state-of-the-art SiMT frameworks. Our results show that visual context is helpful and that visually-grounded models based on explicit object region information are much better than commonly used global features, reaching up to 3 BLEU points improvement under low latency scenarios. Our qualitative analysis illustrates cases where only the multimodal systems are able to translate correctly from English into gender-marked languages, as well as deal with differences in word order, such as adjective-noun placement between English and French.

189. Iterative Refinement in the Continuous Space for Non-Autoregressive Neural Machine Translation [PDF] 返回目录
  EMNLP 2020. Long Paper
  Jason Lee, Raphael Shu, Kyunghyun Cho
We propose an efficient inference procedure for non-autoregressive machine translation that iteratively refines translation purely in the continuous space. Given a continuous latent variable model for machine translation (Shu et al., 2020), we train an inference network to approximate the gradient of the marginal log probability of the target sentence, using the latent variable instead. This allows us to use gradient-based optimization to find the target sentence at inference time that approximately maximizes its marginal probability. As each refinement step only involves computation in the latent space of low dimensionality (we use 8 in our experiments), we avoid computational overhead incurred by existing non-autoregressive inference procedures that often refine in token space. We compare our approach to a recently proposed EM-like inference procedure (Shu et al., 2020) that optimizes in a hybrid space, consisting of both discrete and continuous variables. We evaluate our approach on WMT’14 En→De, WMT’16 Ro→En and IWSLT’16 De→En, and observe two advantages over the EM-like inference: (1) it is computationally efficient, i.e. each refinement step is twice as fast, and (2) it is more effective, resulting in higher marginal probabilities and BLEU scores with the same number of refinement steps. On WMT’14 En→De, for instance, our approach is able to decode 6.2 times faster than the autoregressive model with minimal degradation to translation quality (0.9 BLEU).

190. Dynamic Context Selection for Document-level Neural Machine Translation via Reinforcement Learning [PDF] 返回目录
  EMNLP 2020. Long Paper
  Xiaomian Kang, Yang Zhao, Jiajun Zhang, Chengqing Zong
Document-level neural machine translation has yielded attractive improvements. However, the majority of existing methods roughly use all context sentences in a fixed scope. They neglect the fact that different source sentences need different sizes of context. To address this problem, we propose an effective approach to select dynamic context so that the document-level translation model can utilize the more useful selected context sentences to produce better translations. Specifically, we introduce a selection module that is independent of the translation module to score each candidate context sentence. Then, we propose two strategies to explicitly select a variable number of context sentences and feed them into the translation module. We train the two modules end-to-end via reinforcement learning. A novel reward is proposed to encourage the selection and utilization of dynamic context sentences. Experiments demonstrate that our approach can select adaptive context sentences for different source sentences, and significantly improves the performance of document-level translation methods.

191. Dynamic Data Selection and Weighting for Iterative Back-Translation [PDF] 返回目录
  EMNLP 2020. Long Paper
  Zi-Yi Dou, Antonios Anastasopoulos, Graham Neubig
Back-translation has proven to be an effective method to utilize monolingual data in neural machine translation (NMT), and iteratively conducting back-translation can further improve the model performance. Selecting which monolingual data to back-translate is crucial, as we require that the resulting synthetic data are of high quality and reflect the target domain. To achieve these two goals, data selection and weighting strategies have been proposed, with a common practice being to select samples close to the target domain but also dissimilar to the average general-domain text. In this paper, we provide insights into this commonly used approach and generalize it to a dynamic curriculum learning strategy, which is applied to iterative back-translation models. In addition, we propose weighting strategies based on both the current quality of the sentence and its improvement over the previous iteration. We evaluate our models on domain adaptation, low-resource, and high-resource MT settings and on two language pairs. Experimental results demonstrate that our methods achieve improvements of up to 1.8 BLEU points over competitive baselines.

192. Multi-task Learning for Multilingual Neural Machine Translation [PDF] 返回目录
  EMNLP 2020. Long Paper
  Yiren Wang, ChengXiang Zhai, Hany Hassan
While monolingual data has been shown to be useful in improving bilingual neural machine translation (NMT), effectively and efficiently leveraging monolingual data for Multilingual NMT (MNMT) systems is a less explored area. In this work, we propose a multi-task learning (MTL) framework that jointly trains the model with the translation task on bitext data and two denoising tasks on the monolingual data. We conduct extensive empirical studies on MNMT systems with 10 language pairs from WMT datasets. We show that the proposed approach can effectively improve the translation quality for both high-resource and low-resource languages by a large margin, achieving significantly better results than the individual bilingual models. We also demonstrate the efficacy of the proposed approach in the zero-shot setup for language pairs without bitext training data. Furthermore, we show the effectiveness of MTL over pre-training approaches for both NMT and cross-lingual transfer learning NLU tasks; the proposed approach outperforms massive-scale models trained on a single task.

193. Accurate Word Alignment Induction from Neural Machine Translation [PDF] 返回目录
  EMNLP 2020. Long Paper
  Yun Chen, Yang Liu, Guanhua Chen, Xin Jiang, Qun Liu
Despite its original goal to jointly learn to align and translate, prior research suggests that Transformer captures poor word alignments through its attention mechanism. In this paper, we show that attention weights do capture accurate word alignments and propose two novel word alignment induction methods Shift-Att and Shift-AET. The main idea is to induce alignments at the step when the to-be-aligned target token is the decoder input rather than the decoder output as in previous work. Shift-Att is an interpretation method that induces alignments from the attention weights of Transformer and does not require parameter update or architecture change. Shift-AET extracts alignments from an additional alignment module which is tightly integrated into Transformer and trained in isolation with supervision from symmetrized Shift-Att alignments. Experiments on three publicly available datasets demonstrate that both methods perform better than their corresponding neural baselines and Shift-AET significantly outperforms GIZA++ by 1.4-4.8 AER points.
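
A rough sketch of the induction rule behind Shift-Att as described above: read the cross-attention row for the step at which the to-be-aligned target token is the decoder input, then take the argmax over source positions. The choice of layer and the bidirectional symmetrization used in the paper are omitted:

```python
import numpy as np

def induce_alignment(attn):
    """attn: [tgt_len, src_len] cross-attention weights from one decoder layer."""
    tgt_len, src_len = attn.shape
    links = []
    for j in range(tgt_len):
        step = min(j + 1, tgt_len - 1)   # step where target token j is the decoder input
        i = int(np.argmax(attn[step]))   # most-attended source position at that step
        links.append((i, j))             # source position i aligned to target token j
    return links

attn = np.random.dirichlet(np.ones(6), size=7)  # toy 7 (target) x 6 (source) matrix
print(induce_alignment(attn))
```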

194. Token-level Adaptive Training for Neural Machine Translation [PDF] 返回目录
  EMNLP 2020. Long Paper
  Shuhao Gu, Jinchao Zhang, Fandong Meng, Yang Feng, Wanying Xie, Jie Zhou, Dong Yu
There exists a token imbalance phenomenon in natural language as different tokens appear with different frequencies, which leads to different learning difficulties for tokens in Neural Machine Translation (NMT). The vanilla NMT model usually adopts trivial equal-weighted objectives for target tokens with different frequencies and tends to generate more high-frequency tokens and fewer low-frequency tokens compared with the golden token distribution. However, low-frequency tokens may carry critical semantic information that will affect the translation quality once they are neglected. In this paper, we explore target token-level adaptive objectives based on token frequencies to assign an appropriate weight to each target token during training. The aim is that meaningful but relatively low-frequency words are assigned larger weights in the objective, encouraging the model to pay more attention to these tokens. Our method yields consistent improvements in translation quality on ZH-EN, EN-RO, and EN-DE translation tasks, especially on sentences that contain more low-frequency tokens, where we obtain 1.68, 1.02, and 0.52 BLEU increases compared with the baseline, respectively. Further analyses show that our method can also improve the lexical diversity of translation.
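
A minimal sketch of frequency-based token weighting, where rarer target tokens receive larger loss weights; the log-based weighting function below is just one plausible form, not necessarily the one used in the paper:

```python
import math
from collections import Counter

def token_weights(corpus_tokens, floor=1.0, scale=1.0):
    """Map each token to a weight in [floor, floor + scale]; rarest tokens get the largest."""
    freq = Counter(corpus_tokens)
    total = sum(freq.values())
    return {
        tok: floor + scale * (-math.log(count / total)) / (-math.log(1 / total))
        for tok, count in freq.items()
    }

weights = token_weights(["the", "the", "the", "cat", "sat", "cat"])
print(weights)   # 'sat' (the rarest token) receives the largest weight
```

During training, each target token's cross-entropy term would then be multiplied by its weight before summing the loss.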

195. Multi-Unit Transformers for Neural Machine Translation [PDF] 返回目录
  EMNLP 2020. Long Paper
  Jianhao Yan, Fandong Meng, Jie Zhou
Transformer models achieve remarkable success in Neural Machine Translation. Many efforts have been devoted to deepening the Transformer by stacking several units (i.e., a combination of Multihead Attentions and FFN) in a cascade, while the investigation over multiple parallel units draws little attention. In this paper, we propose the Multi-Unit Transformer (MUTE), which aims to promote the expressiveness of the Transformer by introducing diverse and complementary units. Specifically, we use several parallel units and show that modeling with multiple units improves model performance and introduces diversity. Further, to better leverage the advantage of the multi-unit setting, we design a biased module and a sequential dependency that guide and encourage complementariness among different units. Experimental results on three machine translation tasks, the NIST Chinese-to-English, WMT'14 English-to-German and WMT'18 Chinese-to-English, show that the MUTE models significantly outperform the Transformer-Base, by up to +1.52, +1.90 and +1.10 BLEU points, with only a mild drop in inference speed (about 3.1%). In addition, our methods also surpass the Transformer-Big model, with only 54% of its parameters. These results demonstrate the effectiveness of the MUTE, as well as its efficiency in both the inference process and parameter usage.

196. Translation Artifacts in Cross-lingual Transfer Learning [PDF] 返回目录
  EMNLP 2020. Long Paper
  Mikel Artetxe, Gorka Labaka, Eneko Agirre
Both human and machine translation play a central role in cross-lingual transfer learning: many multilingual datasets have been created through professional translation services, and using machine translation to translate either the test set or the training set is a widely used transfer technique. In this paper, we show that such translation process can introduce subtle artifacts that have a notable impact in existing cross-lingual models. For instance, in natural language inference, translating the premise and the hypothesis independently can reduce the lexical overlap between them, which current models are highly sensitive to. We show that some previous findings in cross-lingual transfer learning need to be reconsidered in the light of this phenomenon. Based on the gained insights, we also improve the state-of-the-art in XNLI for the translate-test and zero-shot approaches by 4.3 and 2.8 points, respectively.

197. Towards Detecting and Exploiting Disambiguation Biases in Neural Machine Translation [PDF] 返回目录
  EMNLP 2020. Long Paper
  Denis Emelin, Ivan Titov, Rico Sennrich
Word sense disambiguation is a well-known source of translation errors in NMT. We posit that some of the incorrect disambiguation choices are due to models' over-reliance on dataset artifacts found in training data, specifically superficial word co-occurrences, rather than a deeper understanding of the source text. We introduce a method for the prediction of disambiguation errors based on statistical data properties, demonstrating its effectiveness across several domains and model types. Moreover, we develop a simple adversarial attack strategy that minimally perturbs sentences in order to elicit disambiguation errors to further probe the robustness of translation models. Our findings indicate that disambiguation robustness varies substantially between domains and that different models trained on the same data are vulnerable to different attacks.

198. Direct Segmentation Models for Streaming Speech Translation [PDF] 返回目录
  EMNLP 2020. Long Paper
  Javier Iranzo-Sánchez, Adrià Giménez Pastor, Joan Albert Silvestre-Cerdà, Pau Baquero-Arnal, Jorge Civera Saiz, Alfons Juan
The cascade approach to Speech Translation (ST) is based on a pipeline that concatenates an Automatic Speech Recognition (ASR) system followed by a Machine Translation (MT) system. These systems are usually connected by a segmenter that splits the ASR output into hopefully semantically self-contained chunks to be fed into the MT system. This is especially challenging in the case of streaming ST, where latency requirements must also be taken into account. This work proposes novel segmentation models for streaming ST that incorporate not only textual, but also acoustic information to decide when the ASR output is split into a chunk. An extensive and thorough experimental setup is carried out on the Europarl-ST dataset to prove the contribution of acoustic information to the performance of the segmentation model in terms of BLEU score in a streaming ST scenario. Finally, comparative results with previous work also show the superiority of the segmentation models proposed in this work.

199. Not Low-Resource Anymore: Aligner Ensembling, Batch Filtering, and New Datasets for Bengali-English Machine Translation [PDF] 返回目录
  EMNLP 2020. Long Paper
  Tahmid Hasan, Abhik Bhattacharjee, Kazi Samin, Masum Hasan, Madhusudan Basak, M. Sohel Rahman, Rifat Shahriyar
Despite being the seventh most widely spoken language in the world, Bengali has received much less attention in machine translation literature due to being low in resources. Most publicly available parallel corpora for Bengali are not large enough, and have rather poor quality, mostly because of incorrect sentence alignments resulting from erroneous sentence segmentation, and also because of a high volume of noise present in them. In this work, we build a customized sentence segmenter for Bengali and propose two novel methods for parallel corpus creation on low-resource setups: aligner ensembling and batch filtering. With the segmenter and the two methods combined, we compile a high-quality Bengali-English parallel corpus comprising 2.75 million sentence pairs, more than 2 million of which were not available before. Training neural models, we achieve an improvement of more than 9 BLEU over previous approaches to Bengali-English machine translation. We also evaluate on a new test set of 1000 pairs made with extensive quality control. We release the segmenter, parallel corpus, and the evaluation set, thus elevating Bengali from its low-resource status. To the best of our knowledge, this is the first large-scale study on Bengali-English machine translation. We believe our study will pave the way for future research on Bengali-English machine translation as well as other low-resource languages. Our data and code are available at https://github.com/csebuetnlp/banglanmt.

200. An Empirical Study of Generation Order for Machine Translation [PDF] 返回目录
  EMNLP 2020. Long Paper
  William Chan, Mitchell Stern, Jamie Kiros, Jakob Uszkoreit
In this work, we present an empirical study of generation order for machine translation. Building on recent advances in insertion-based modeling, we first introduce a soft order-reward framework that enables us to train models to follow arbitrary oracle generation policies. We then make use of this framework to explore a large variety of generation orders, including uninformed orders, location-based orders, frequency-based orders, content-based orders, and model-based orders. Curiously, we find that for the WMT'14 English→German and WMT'18 English→Chinese translation tasks, order does not have a substantial impact on output quality. Moreover, for English→German, we even discover that unintuitive orderings such as alphabetical and shortest-first can match the performance of a standard Transformer, suggesting that traditional left-to-right generation may not be necessary to achieve high performance.

201. Distilling Multiple Domains for Neural Machine Translation [PDF] 返回目录
  EMNLP 2020. Long Paper
  Anna Currey, Prashant Mathur, Georgiana Dinu
Neural machine translation achieves impressive results in high-resource conditions, but performance often suffers when the input domain is low-resource. The standard practice of adapting a separate model for each domain of interest does not scale well in practice from both a quality perspective (brittleness under domain shift) as well as a cost perspective (added maintenance and inference complexity). In this paper, we propose a framework for training a single multi-domain neural machine translation model that is able to translate several domains without increasing inference time or memory usage. We show that this model can improve translation on both high- and low-resource domains over strong multi-domain baselines. In addition, our proposed model is effective when domain labels are unknown during training, as well as robust under noisy data conditions.

202. Data Rejuvenation: Exploiting Inactive Training Examples for Neural Machine Translation [PDF] 返回目录
  EMNLP 2020. Long Paper
  Wenxiang Jiao, Xing Wang, Shilin He, Irwin King, Michael Lyu, Zhaopeng Tu
Large-scale training datasets lie at the core of the recent success of neural machine translation (NMT) models. However, the complex patterns and potential noises in the large-scale data make training NMT models difficult. In this work, we explore identifying the inactive training examples which contribute less to the model performance, and show that the existence of inactive examples depends on the data distribution. We further introduce data rejuvenation to improve the training of NMT models on large-scale datasets by exploiting inactive examples. The proposed framework consists of three phases. First, we train an identification model on the original training data, and use it to distinguish inactive examples and active examples by their sentence-level output probabilities. Then, we train a rejuvenation model on the active examples, which is used to re-label the inactive examples with forward-translation. Finally, the rejuvenated examples and the active examples are combined to train the final NMT model. Experimental results on WMT14 English-German and English-French datasets show that the proposed data rejuvenation consistently and significantly improves performance for several strong NMT models. Extensive analyses reveal that our approach stabilizes and accelerates the training process of NMT models, resulting in final models with better generalization capability.

203. Bridging Linguistic Typology and Multilingual Machine Translation with Multi-view Language Representations [PDF] 返回目录
  EMNLP 2020. Long Paper
  Arturo Oncevay, Barry Haddow, Alexandra Birch
Sparse language vectors from linguistic typology databases and learned embeddings from tasks like multilingual machine translation have been investigated in isolation, without analysing how they could benefit from each other's language characterisation. We propose to fuse both views using singular vector canonical correlation analysis and study what kind of information is induced from each source. By inferring typological features and language phylogenies, we observe that our representations embed typology and strengthen correlations with language relationships. We then take advantage of our multi-view language vector space for multilingual machine translation, where we achieve competitive overall translation accuracy in tasks that require information about language similarities, such as language clustering and ranking candidates for multilingual transfer. With our method, we can easily project and assess new languages without expensive retraining of massive multilingual or ranking models, which are major disadvantages of related approaches.

204. Improving Word Sense Disambiguation with Translations [PDF] 返回目录
  EMNLP 2020. Long Paper
  Yixing Luan, Bradley Hauer, Lili Mou, Grzegorz Kondrak
It has been conjectured that multilingual information can help monolingual word sense disambiguation (WSD). However, existing WSD systems rarely consider multilingual information, and no effective method has been proposed for improving WSD by generating translations. In this paper, we present a novel approach that improves the performance of a base WSD system using machine translation. Since our approach is language independent, we perform WSD experiments on several languages. The results demonstrate that our methods can consistently improve the performance of WSD systems, and obtain state-of-the-art results in both English and multilingual WSD. To facilitate the use of lexical translation information, we also propose BABALIGN, a precise bitext alignment algorithm which is guided by multilingual lexical correspondences from BabelNet.

205. PyMT5: Multi-mode Translation of Natural Language and Python Code with Transformers [PDF] 返回目录
  EMNLP 2020. Long Paper
  Colin Clement, Dawn Drain, Jonathan Timcheck, Alexey Svyatkovskiy, Neel Sundaresan
Simultaneously modeling source code and natural language has many exciting applications in automated software development and understanding. Pursuant to achieving such technology, we introduce PyMT5, the Python method text-to-text transfer transformer, which is trained to translate between all pairs of Python method feature combinations: a single model that can both predict whole methods from natural language documentation strings (docstrings) and summarize code into docstrings of any common style. We present an analysis and modeling effort of a large-scale parallel corpus of 26 million Python methods and 7.7 million method-docstring pairs, demonstrating that for docstring and method generation, PyMT5 outperforms similarly-sized auto-regressive language models (GPT2) which were English pre-trained or randomly initialized. On the CodeSearchNet test set, our best model predicts 92.1% syntactically correct method bodies, achieved a BLEU score of 8.59 for method generation and 16.3 for docstring generation (summarization), and achieved a ROUGE-L F-score of 24.8 for method generation and 36.7 for docstring generation.

206. Learning Adaptive Segmentation Policy for Simultaneous Translation [PDF] 返回目录
  EMNLP 2020. Long Paper
  Ruiqing Zhang, Chuanqiang Zhang, Zhongjun He, Hua Wu, Haifeng Wang
Balancing accuracy and latency is a great challenge for simultaneous translation. To achieve high accuracy, the model usually needs to wait for more streaming text before translation, which results in increased latency. However, keeping low latency would probably hurt accuracy. Therefore, it is essential to segment the ASR output into appropriate units for translation. Inspired by human interpreters, we propose a novel adaptive segmentation policy for simultaneous translation. The policy learns to segment the source text by considering possible translations produced by the translation model, maintaining consistency between the segmentation and translation. Experimental results on Chinese-English and German-English translation show that our method achieves a better accuracy-latency trade-off over recently proposed state-of-the-art methods.

207. ChrEn: Cherokee-English Machine Translation for Endangered Language Revitalization [PDF] 返回目录
  EMNLP 2020. Long Paper
  Shiyue Zhang, Benjamin Frey, Mohit Bansal
Cherokee is a highly endangered Native American language spoken by the Cherokee people. The Cherokee culture is deeply embedded in its language. However, there are approximately only 2,000 fluent first language Cherokee speakers remaining in the world and the number is declining every year. To help save this endangered language, we introduce ChrEn, a Cherokee-English parallel dataset, to facilitate machine translation research between Cherokee and English. Compared to some popular machine translation language pairs, ChrEn is extremely low-resource, only containing 14k sentence pairs in total. We split our parallel data in ways that facilitate both in-domain and out-of-domain evaluation. We also collect 5k Cherokee monolingual data to enable semi-supervised learning. Besides these datasets, we propose several Cherokee-English and English-Cherokee machine translation systems. We compare SMT (phrase-based) versus NMT (RNN-based and Transformer-based) systems; supervised versus semi-supervised (via language model, back-translation, and BERT/Multilingual-BERT) methods; as well as transfer learning versus multilingual joint training with 4 other languages. Our best results are 15.8/12.7 BLEU for in-domain and 6.5/5.0 BLEU for out-of-domain Chr-En/EnChr translations, respectively; and we hope that our dataset and systems will encourage future work by the community for Cherokee language revitalization.

208. Generating Diverse Translation from Model Distribution with Dropout [PDF] 返回目录
  EMNLP 2020. Long Paper
  Xuanfu Wu, Yang Feng, Chenze Shao
Despite the improvement of translation quality, neural machine translation (NMT) often suffers from the lack of diversity in its generation. In this paper, we propose to generate diverse translations by deriving a large number of possible models with Bayesian modelling and sampling models from them for inference. The possible models are obtained by applying concrete dropout to the NMT model and each of them has specific confidence for its prediction, which corresponds to a posterior model distribution under specific training data in the principle of Bayesian modeling. With variational inference, the posterior model distribution can be approximated with a variational distribution, from which the final models for inference are sampled. We conducted experiments on Chinese-English and English-German translation tasks and the results show that our method makes a better trade-off between diversity and accuracy.
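
A minimal sketch of the sampling side of this idea: keep dropout active at inference and decode several times, so each pass corresponds to a different sampled model. The `translate` function is a hypothetical stand-in for a real decoding routine, and the paper's concrete dropout and variational approximation are not reproduced here:

```python
import torch

def enable_dropout(model):
    """Switch only the dropout layers back to training mode."""
    for m in model.modules():
        if isinstance(m, torch.nn.Dropout):
            m.train()

def diverse_translations(model, src, translate, n_samples=5):
    model.eval()
    enable_dropout(model)              # keep dropout stochastic during decoding
    outputs = []
    with torch.no_grad():
        for _ in range(n_samples):     # each pass uses a different dropout mask,
            outputs.append(translate(model, src))  # i.e. a different sampled model
    return outputs
```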

209. Pre-training Multilingual Neural Machine Translation by Leveraging Alignment Information [PDF] 返回目录
  EMNLP 2020. Long Paper
  Zehui Lin, Xiao Pan, Mingxuan Wang, Xipeng Qiu, Jiangtao Feng, Hao Zhou, Lei Li
We investigate the following question for machine translation (MT): can we develop a single universal MT model to serve as the common seed and obtain derivative and improved models on arbitrary language pairs? We propose mRASP, an approach to pre-train a universal multilingual neural machine translation model. Our key idea in mRASP is its novel technique of random aligned substitution, which brings words and phrases with similar meanings across multiple languages closer in the representation space. We pre-train a mRASP model on 32 language pairs jointly with only public datasets. The model is then fine-tuned on downstream language pairs to obtain specialized MT models. We carry out extensive experiments on 42 translation directions across diverse settings, including low-, medium-, and rich-resource pairs, as well as transfer to exotic language pairs. Experimental results demonstrate that mRASP achieves significant performance improvement compared to directly training on those target pairs. It is the first work to verify that multiple low-resource language pairs can be utilized to improve rich-resource MT. Surprisingly, mRASP is even able to improve the translation quality on exotic languages that never occur in the pre-training corpus. Code, data, and pre-trained models are available at https://github.com/linzehui/mRASP.

210. Self-Paced Learning for Neural Machine Translation [PDF] 返回目录
  EMNLP 2020. Short Paper
  Yu Wan, Baosong Yang, Derek F. Wong, Yikai Zhou, Lidia S. Chao, Haibo Zhang, Boxing Chen
Recent studies have proven that the training of neural machine translation (NMT) can be facilitated by mimicking the learning process of humans. Nevertheless, the achievements of such curriculum learning rely on the quality of an artificial schedule drawn up with handcrafted features, e.g., sentence length or word rarity. We ameliorate this procedure in a more flexible manner by proposing self-paced learning, where the NMT model is allowed to 1) automatically quantify the learning confidence over training examples; and 2) flexibly govern its learning via regulating the loss in each iteration step. Experimental results over multiple translation tasks demonstrate that the proposed model yields better performance than strong baselines and those models trained with human-designed curricula, in terms of both translation quality and convergence speed.

211. Simulated Multiple Reference Training Improves Low-Resource Machine Translation [PDF] 返回目录
  EMNLP 2020. Short Paper
  Huda Khayrallah, Brian Thompson, Matt Post, Philipp Koehn
Many valid translations exist for a given sentence, yet machine translation (MT) is trained with a single reference translation, exacerbating data sparsity in low-resource settings. We introduce Simulated Multiple Reference Training (SMRT), a novel MT training method that approximates the full space of possible translations by sampling a paraphrase of the reference sentence from a paraphraser and training the MT model to predict the paraphraser’s distribution over possible tokens. We demonstrate the effectiveness of SMRT in low-resource settings when translating to English, with improvements of 1.2 to 7.0 BLEU. We also find SMRT is complementary to back-translation.

212. On the Sparsity of Neural Machine Translation Models [PDF] 返回目录
  EMNLP 2020. Short Paper
  Yong Wang, Longyue Wang, Victor Li, Zhaopeng Tu
Modern neural machine translation (NMT) models employ a large number of parameters, which leads to serious over-parameterization and typically causes the underutilization of computational resources. In response to this problem, we empirically investigate whether the redundant parameters can be reused to achieve better performance. Experiments and analyses are systematically conducted on different datasets and NMT architectures. We show that: 1) the pruned parameters can be rejuvenated to improve the baseline model by up to +0.8 BLEU points; 2) the rejuvenated parameters are reallocated to enhance the ability of modeling low-level lexical information.
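
A rough sketch of the rejuvenation idea: locate the lowest-magnitude (redundant) weights and re-initialize them so they can be retrained rather than discarded. The threshold and re-initialization choices below are illustrative only, not the paper's exact recipe:

```python
import torch

def rejuvenate_(param, prune_ratio=0.3, init_std=0.02):
    """Re-initialize the `prune_ratio` smallest-magnitude entries of a weight tensor."""
    k = int(param.numel() * prune_ratio)
    if k == 0:
        return
    threshold = param.detach().abs().flatten().kthvalue(k).values
    mask = param.detach().abs() <= threshold          # the redundant weights
    with torch.no_grad():
        param[mask] = torch.randn(int(mask.sum()), device=param.device) * init_std

layer = torch.nn.Linear(8, 8)
rejuvenate_(layer.weight)        # then continue training the whole model as usual
```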

213. Translation Quality Estimation by Jointly Learning to Score and Rank [PDF] 返回目录
  EMNLP 2020. Short Paper
  Jingyi Zhang, Josef van Genabith
The translation quality estimation (QE) task, particularly the QE as a Metric task, aims to evaluate the general quality of a translation based on the translation and the source sentence without using reference translations. Supervised learning of this QE task requires human evaluation of translation quality as training data. Human evaluation of translation quality can be performed in different ways, including assigning an absolute score to a translation or ranking different translations. In order to make use of different types of human evaluation data for supervised learning, we present a multi-task learning QE model that jointly learns two tasks: score a translation and rank two translations. Our QE model exploits cross-lingual sentence embeddings from pre-trained multilingual language models. We obtain new state-of-the-art results on the WMT 2019 QE as a Metric task and outperform sentBLEU on the WMT 2019 Metrics task.

214. Incorporating a Local Translation Mechanism into Non-autoregressive Translation [PDF] 返回目录
  EMNLP 2020. Short Paper
  Xiang Kong, Zhisong Zhang, Eduard Hovy
In this work, we introduce a novel local autoregressive translation (LAT) mechanism into non-autoregressive translation (NAT) models so as to capture local dependencies among target outputs. Specifically, for each target decoding position, instead of only one token, we predict a short sequence of tokens in an autoregressive way. We further design an efficient merging algorithm to align and merge the output pieces into one final output sequence. We integrate LAT into the conditional masked language model (CMLM) (Ghazvininejad et al., 2019) and similarly adopt iterative decoding. Empirical results on five translation tasks show that compared with CMLM, our method achieves comparable or better performance with fewer decoding iterations, bringing a 2.5x speedup. Further analysis indicates that our method reduces repeated translations and performs better at longer sentences. Our code will be released to the public.

215. Language Adapters for Zero Shot Neural Machine Translation [PDF] 返回目录
  EMNLP 2020. Short Paper
  Jerin Philip, Alexandre Berard, Matthias Gallé, Laurent Besacier
We propose a novel adapter layer formalism for adapting multilingual models. They are more parameter-efficient than existing adapter layers while obtaining as good or better performance. The layers are specific to one language (as opposed to bilingual adapters), which allows composing them and generalizing to unseen language pairs. In this zero-shot setting, they obtain a median improvement of +2.77 BLEU points over a strong 20-language multilingual Transformer baseline trained on TED talks.
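
A minimal sketch of a per-language adapter: a small bottleneck MLP with a residual connection, instantiated once per language so that source- and target-side adapters can be composed for unseen pairs. Dimensions and placement are assumptions for illustration:

```python
import torch
import torch.nn as nn

class LanguageAdapter(nn.Module):
    """Bottleneck MLP with a residual connection, one instance per language."""
    def __init__(self, d_model=512, bottleneck=64):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)

    def forward(self, x):                      # x: [batch, seq_len, d_model]
        return x + self.up(torch.relu(self.down(self.norm(x))))

# One adapter per language, selected by language code at run time.
adapters = nn.ModuleDict({lang: LanguageAdapter() for lang in ["en", "de", "fr"]})
h = torch.randn(2, 7, 512)                     # hidden states from a shared layer
h = adapters["de"](h)                          # apply the German adapter
```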

216. Effectively Pretraining a Speech Translation Decoder with Machine Translation Data [PDF] 返回目录
  EMNLP 2020. Short Paper
  Ashkan Alinejad, Anoop Sarkar
Directly translating from speech to text using an end-to-end approach is still challenging for many language pairs due to insufficient data. Although pretraining the encoder parameters using the Automatic Speech Recognition (ASR) task improves the results in low resource settings, attempting to use pretrained parameters from the Neural Machine Translation (NMT) task has been largely unsuccessful in previous works. In this paper, we will show that by using an adversarial regularizer, we can bring the encoder representations of the ASR and NMT tasks closer even though they are in different modalities, and how this helps us effectively use a pretrained NMT decoder for speech translation.

217. Long-Short Term Masking Transformer: A Simple but Effective Baseline for Document-level Neural Machine Translation [PDF] 返回目录
  EMNLP 2020. Short Paper
  Pei Zhang, Boxing Chen, Niyu Ge, Kai Fan
Many document-level neural machine translation (NMT) systems have explored the utility of context-aware architecture, usually requiring an increasing number of parameters and computational complexity. However, little attention has been paid to the baseline model. In this paper, we extensively study the pros and cons of the standard Transformer in document-level translation, and find that the auto-regressive property can simultaneously bring both the advantage of consistency and the disadvantage of error accumulation. Therefore, we propose a surprisingly simple long-short term masking self-attention on top of the standard Transformer to both effectively capture long-range dependence and reduce the propagation of errors. We examine our approach on two publicly available document-level datasets and achieve a strong result in BLEU while capturing discourse phenomena.

218. Fully Quantized Transformer for Machine Translation [PDF] 返回目录
  EMNLP 2020. Findings Short Paper
  Gabriele Prato, Ella Charlaix, Mehdi Rezagholizadeh
State-of-the-art neural machine translation methods employ massive amounts of parameters. Drastically reducing computational costs of such methods without affecting performance has been up to this point unsuccessful. To this end, we propose FullyQT: an all-inclusive quantization strategy for the Transformer. To the best of our knowledge, we are the first to show that it is possible to avoid any loss in translation quality with a fully quantized Transformer. Indeed, compared to full-precision, our 8-bit models score greater or equal BLEU on most tasks. Comparing ourselves to all previously proposed methods, we achieve state-of-the-art quantization results.

219. Improving Grammatical Error Correction with Machine Translation Pairs [PDF] 返回目录
  EMNLP 2020. Findings Short Paper
  Wangchunshu Zhou, Tao Ge, Chang Mu, Ke Xu, Furu Wei, Ming Zhou
We propose a novel data synthesis method to generate diverse error-corrected sentence pairs for improving grammatical error correction, which is based on a pair of machine translation models (e.g., Chinese to English) of different qualities (i.e., poor and good). The poor translation model can resemble the ESL (English as a second language) learner and tends to generate translations of low quality in terms of fluency and grammaticality, while the good translation model generally generates fluent and grammatically correct translations. With the pair of translation models, we can generate unlimited numbers of poor to good English sentence pairs from text in the source language (e.g., Chinese) of the translators. Our approach can generate various error-corrected patterns and nicely complement the other data synthesis approaches for GEC. Experimental results demonstrate the data generated by our approach can effectively help a GEC model to improve the performance and achieve the state-of-the-art single-model performance in BEA-19 and CoNLL-14 benchmark datasets.

220. Fixed Encoder Self-Attention Patterns in Transformer-Based Machine Translation [PDF] 返回目录
  EMNLP 2020. Findings Short Paper
  Alessandro Raganato, Yves Scherrer, Jörg Tiedemann
Transformer-based models have brought a radical change to neural machine translation. A key feature of the Transformer architecture is the so-called multi-head attention mechanism, which allows the model to focus simultaneously on different parts of the input. However, recent works have shown that most attention heads learn simple, and often redundant, positional patterns. In this paper, we propose to replace all but one attention head of each encoder layer with simple fixed – non-learnable – attentive patterns that are solely based on position and do not require any external knowledge. Our experiments with different data sizes and multiple language pairs show that fixing the attention heads on the encoder side of the Transformer at training time does not impact the translation quality and even increases BLEU scores by up to 3 points in low-resource scenarios.
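
A minimal sketch of building a fixed, non-learnable attention pattern (for example, an "attend to the previous token" head) that could replace the softmax weights of one encoder head; the actual set of hand-designed patterns studied in the paper is richer than this single helper:

```python
import torch

def fixed_offset_attention(seq_len, offset):
    """Each position attends entirely to position (i + offset), clamped in range."""
    attn = torch.zeros(seq_len, seq_len)
    for i in range(seq_len):
        j = min(max(i + offset, 0), seq_len - 1)
        attn[i, j] = 1.0
    return attn                    # rows sum to 1, like a softmax output

prev_head = fixed_offset_attention(6, offset=-1)   # "previous token" pattern
next_head = fixed_offset_attention(6, offset=+1)   # "next token" pattern
print(prev_head)
```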

221. Multi-Agent Mutual Learning at Sentence-Level and Token-Level for Neural Machine Translation [PDF] 返回目录
  EMNLP 2020. Findings Short Paper
  Baohao Liao, Yingbo Gao, Hermann Ney
Mutual learning, where multiple agents learn collaboratively and teach one another, has been shown to be an effective way to distill knowledge for image classification tasks. In this paper, we extend mutual learning to the machine translation task and operate at both the sentence-level and the token-level. Firstly, we co-train multiple agents by using the same parallel corpora. After convergence, each agent selects and learns its poorly predicted tokens from other agents. The poorly predicted tokens are determined by the acceptance-rejection sampling algorithm. Our experiments show that sequential mutual learning at the sentence-level and the token-level improves the results cumulatively. Absolute improvements compared to strong baselines are obtained on various translation tasks. On the IWSLT’14 German-English task, we get a new state-of-the-art BLEU score of 37.0. We also report a competitive result, 29.9 BLEU score, on the WMT’14 English-German task.

222. Active Learning Approaches to Enhancing Neural Machine Translation: An Empirical Study [PDF] 返回目录
  EMNLP 2020. Findings Short Paper
  Yuekai Zhao, Haoran Zhang, Shuchang Zhou, Zhihua Zhang
Active learning is an efficient approach for mitigating data dependency when training neural machine translation (NMT) models. In this paper, we explore new training frameworks by incorporating active learning into various techniques such as transfer learning and iterative back-translation (IBT) under a limited human translation budget. We design a word frequency based acquisition function and combine it with a strong uncertainty based method. The combined method steadily outperforms all other acquisition functions in various scenarios. As far as we know, we are the first to do a large-scale study on actively training Transformer for NMT. Specifically, with a human translation budget of only 20% of the original parallel corpus, we manage to surpass Transformer trained on the entire parallel corpus in three language pairs.
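
A rough sketch of combining a word-frequency acquisition score with an uncertainty score when ranking unlabeled source sentences for annotation; the specific acquisition functions and the combination rule here are assumptions, not the paper's exact formulation:

```python
import math
from collections import Counter

def rarity_score(sentence, freq, total):
    """Higher when the sentence contains many low-frequency words."""
    toks = sentence.split()
    return sum(-math.log((freq.get(t, 0) + 1) / (total + 1)) for t in toks) / len(toks)

def acquisition(sentences, freq, total, uncertainty):
    """uncertainty: dict sentence -> model uncertainty (e.g. mean token entropy)."""
    scored = [(0.5 * rarity_score(s, freq, total) + 0.5 * uncertainty[s], s)
              for s in sentences]
    return [s for _, s in sorted(scored, reverse=True)]   # most informative first

freq = Counter("the cat sat on the mat".split())
total = sum(freq.values())
pool = ["the cat sat", "quantum chromodynamics lecture"]
unc = {pool[0]: 0.2, pool[1]: 1.3}
print(acquisition(pool, freq, total, unc))
```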

223. Adversarial Subword Regularization for Robust Neural Machine Translation [PDF] 返回目录
  EMNLP 2020. Findings Short Paper
  Jungsoo Park, Mujeen Sung, Jinhyuk Lee, Jaewoo Kang
Exposing diverse subword segmentations to neural machine translation (NMT) models often improves the robustness of machine translation as NMT models can experience various subword candidates. However, the diversification of subword segmentations mostly relies on the pre-trained subword language models from which erroneous segmentations of unseen words are less likely to be sampled. In this paper, we present adversarial subword regularization (ADVSR) to study whether gradient signals during training can be a substitute criterion for exposing diverse subword segmentations. We experimentally show that our model-based adversarial samples effectively encourage NMT models to be less sensitive to segmentation errors and improve the performance of NMT models in low-resource and out-domain datasets.

224. Automatically Identifying Gender Issues in Machine Translation using Perturbations [PDF] 返回目录
  EMNLP 2020. Findings Short Paper
  Hila Gonen, Kellie Webster
The successful application of neural methods to machine translation has realized huge quality advances for the community. With these improvements, many have noted outstanding challenges, including the modeling and treatment of gendered language. While previous studies have identified issues using synthetic examples, we develop a novel technique to mine examples from real world data to explore challenges for deployed systems. We use our method to compile an evaluation benchmark spanning examples for four languages from three language families, which we publicly release to facilitate research. The examples in our benchmark expose where model representations are gendered, and the unintended consequences these gendered representations can have in downstream application.

225. Dual Reconstruction: a Unifying Objective for Semi-Supervised Neural Machine Translation [PDF] 返回目录
  EMNLP 2020. Findings Short Paper
  Weijia Xu, Xing Niu, Marine Carpuat
While Iterative Back-Translation and Dual Learning effectively incorporate monolingual training data in neural machine translation, they use different objectives and heuristic gradient approximation strategies, and have not been extensively compared. We introduce a novel dual reconstruction objective that provides a unified view of Iterative Back-Translation and Dual Learning. It motivates a theoretical analysis and controlled empirical study on German-English and Turkish-English tasks, which both suggest that Iterative Back-Translation is more effective than Dual Learning despite its relative simplicity.

226. Participatory Research for Low-resourced Machine Translation: A Case Study in African Languages [PDF] 返回目录
  EMNLP 2020. Findings Short Paper
  Wilhelmina Nekoto, Vukosi Marivate, Tshinondiwa Matsila, Timi Fasubaa, Taiwo Fagbohungbe, Solomon Oluwole Akinola, Shamsuddeen Muhammad, Salomon Kabongo Kabenamualu, Salomey Osei, Freshia Sackey, Rubungo Andre Niyongabo, Ricky Macharm, Perez Ogayo, Orevaoghene Ahia, Musie Meressa Berhe, Mofetoluwa Adeyemi, Masabata Mokgesi-Selinga, Lawrence Okegbemi, Laura Martinus, Kolawole Tajudeen, Kevin Degila, Kelechi Ogueji, Kathleen Siminyu, Julia Kreutzer, Jason Webster, Jamiil Toure Ali, Jade Abbott, Iroro Orife, Ignatius Ezeani, Idris Abdulkadir Dangana, Herman Kamper, Hady Elsahar, Goodness Duru, Ghollah Kioko, Murhabazi Espoir, Elan van Biljon, Daniel Whitenack, Christopher Onyefuluchi, Chris Chinenye Emezue, Bonaventure F. P. Dossou, Blessing Sibanda, Blessing Bassey, Ayodele Olabiyi, Arshath Ramkilowan, Alp Öktem, Adewale Akinfaderin, Abdallah Bashir
Research in NLP lacks geographic diversity, and the question of how NLP can be scaled to low-resourced languages has not yet been adequately solved. ‘Low-resourced’-ness is a complex problem going beyond data availability and reflects systemic problems in society. In this paper, we focus on the task of Machine Translation (MT), which plays a crucial role in information accessibility and communication worldwide. Despite immense improvements in MT over the past decade, MT is centered around a few high-resourced languages. As MT researchers cannot solve the problem of low-resourcedness alone, we propose participatory research as a means to involve all necessary agents required in the MT development process. We demonstrate the feasibility and scalability of participatory research with a case study on MT for African languages. Its implementation leads to a collection of novel translation datasets and MT benchmarks for over 30 languages, with human evaluations for a third of them, and enables participants without formal training to make a unique scientific contribution. Benchmarks, models, data, code, and evaluation results are released at https://github.com/masakhane-io/masakhane-mt.

227. Computer Assisted Translation with Neural Quality Estimation and Automatic Post-Editing [PDF] 返回目录
  EMNLP 2020. Findings Short Paper
  Ke Wang, Jiayi Wang, Niyu Ge, Yangbin Shi, Yu Zhao, Kai Fan
With the advent of neural machine translation, there has been a marked shift towards leveraging and consuming machine translation results. However, the gap between machine translation systems and human translators still needs to be closed manually by post-editing. In this paper, we propose an end-to-end deep learning framework for quality estimation and automatic post-editing of machine translation output. Our goal is to provide error correction suggestions and to further relieve the burden of human translators through an interpretable model. To imitate the behavior of human translators, we design three efficient delegation modules (quality estimation, generative post-editing, and atomic operation post-editing) and construct a hierarchical model based on them. We examine this approach on the English–German dataset from the WMT 2017 APE shared task, and our experimental results achieve state-of-the-art performance. We also verify in a human evaluation that certified translators can significantly expedite their post-editing process with our model.

228. On Romanization for Model Transfer Between Scripts in Neural Machine Translation [PDF] 返回目录
  EMNLP 2020. Findings Short Paper
  Chantal Amrhein, Rico Sennrich
Transfer learning is a popular strategy to improve the quality of low-resource machine translation. For an optimal transfer of the embedding layer, the child and parent model should share a substantial part of the vocabulary. This is not the case when transferring to languages with a different script. We explore the benefit of romanization in this scenario. Our results show that romanization entails information loss and is thus not always superior to simpler vocabulary transfer methods, but can improve the transfer between related languages with different scripts. We compare two romanization tools and find that they exhibit different degrees of information loss, which affects translation quality. Finally, we extend romanization to the target side, showing that this can be a successful strategy when coupled with a simple deromanization model.

229. Adaptive Feature Selection for End-to-End Speech Translation [PDF] 返回目录
  EMNLP 2020. Findings Short Paper
  Biao Zhang, Ivan Titov, Barry Haddow, Rico Sennrich
Information in speech signals is not evenly distributed, making it an additional challenge for end-to-end (E2E) speech translation (ST) to learn to focus on informative features. In this paper, we propose adaptive feature selection (AFS) for encoder-decoder based E2E ST. We first pre-train an ASR encoder and apply AFS to dynamically estimate the importance of each encoded speech feature to ASR. An ST encoder, stacked on top of the ASR encoder, then receives the filtered features from the (frozen) ASR encoder. We take L0DROP (Zhang et al., 2020) as the backbone for AFS, and adapt it to sparsify speech features with respect to both temporal and feature dimensions. Results on the LibriSpeech En-Fr and MuST-C benchmarks show that AFS facilitates learning of ST by pruning out ~84% of temporal features, yielding an average translation gain of ~1.3-1.6 BLEU and a decoding speedup of ~1.4x. In particular, AFS reduces the performance gap compared to the cascade baseline, and outperforms it on LibriSpeech En-Fr with a BLEU score of 18.56 (without data augmentation).

230. PharmMT: A Neural Machine Translation Approach to Simplify Prescription Directions [PDF] 返回目录
  EMNLP 2020. Findings Short Paper
  Jiazhao Li, Corey Lester, Xinyan Zhao, Yuting Ding, Yun Jiang, V.G.Vinod Vydiswaran
The language used by physicians and health professionals in prescription directions includes medical jargon and implicit directives and causes much confusion among patients. Human intervention to simplify the language at the pharmacies may introduce additional errors that can lead to potentially severe health outcomes. We propose a novel machine translation-based approach, PharmMT, to automatically and reliably simplify prescription directions into patient-friendly language, thereby significantly reducing pharmacist workload. We evaluate the proposed approach over a dataset consisting of over 530K prescriptions obtained from a large mail-order pharmacy. The end-to-end system achieves a BLEU score of 60.27 against the reference directions generated by pharmacists, a 39.6% relative improvement over the rule-based normalization. Pharmacists judged 94.3% of the simplified directions as usable as-is or with minimal changes. This work demonstrates the feasibility of a machine translation-based tool for simplifying prescription directions in real-life.

231. Graph-to-Tree Neural Networks for Learning Structured Input-Output Translation with Applications to Semantic Parsing and Math Word Problem [PDF] 返回目录
  EMNLP 2020. Findings Short Paper
  Shucheng Li, Lingfei Wu, Shiwei Feng, Fangli Xu, Fengyuan Xu, Sheng Zhong
The celebrated Seq2Seq technique and its numerous variants achieve excellent performance on many tasks such as neural machine translation, semantic parsing, and math word problem solving. However, these models either only consider input objects as sequences while ignoring important structural information for encoding, or they simply treat output objects as sequence outputs instead of structural objects for decoding. In this paper, we present a novel Graph-to-Tree Neural Network, namely Graph2Tree, consisting of a graph encoder and a hierarchical tree decoder, which encodes an augmented graph-structured input and decodes a tree-structured output. In particular, we investigate our model on two problems, neural semantic parsing and math word problem solving. Our extensive experiments demonstrate that our Graph2Tree model outperforms or matches the performance of other state-of-the-art models on these tasks.

232. On Long-Tailed Phenomena in Neural Machine Translation [PDF] 返回目录
  EMNLP 2020. Findings Short Paper
  Vikas Raunak, Siddharth Dalmia, Vivek Gupta, Florian Metze
State-of-the-art Neural Machine Translation (NMT) models struggle with generating low-frequency tokens, tackling which remains a major challenge. The analysis of long-tailed phenomena in the context of structured prediction tasks is further hindered by the added complexities of search during inference. In this work, we quantitatively characterize such long-tailed phenomena at two levels of abstraction, namely, token classification and sequence generation. We propose a new loss function, the Anti-Focal loss, to better adapt model training to the structural dependencies of conditional text generation by incorporating the inductive biases of beam search in the training process. We show the efficacy of the proposed technique on a number of Machine Translation (MT) datasets, demonstrating that it leads to significant gains over cross-entropy across different language pairs, especially on the generation of low-frequency words. We have released the code to reproduce our results.

233. A Multilingual View of Unsupervised Machine Translation [PDF] 返回目录
  EMNLP 2020. Findings Short Paper
  Xavier Garcia, Pierre Foret, Thibault Sellam, Ankur Parikh
We present a probabilistic framework for multilingual neural machine translation that encompasses supervised and unsupervised setups, focusing on unsupervised translation. In addition to studying the vanilla case where there is only monolingual data available, we propose a novel setup where one language in the (source, target) pair is not associated with any parallel data, but there may exist auxiliary parallel data that contains the other. This auxiliary data can naturally be utilized in our probabilistic framework via a novel cross-translation loss term. Empirically, we show that our approach results in higher BLEU scores over state-of-the-art unsupervised models on the WMT’14 English-French, WMT’16 English-German, and WMT’16 English-Romanian datasets in most directions.

234. KoBE: Knowledge-Based Machine Translation Evaluation [PDF] 返回目录
  EMNLP 2020. Findings Short Paper
  Zorik Gekhman, Roee Aharoni, Genady Beryozkin, Markus Freitag, Wolfgang Macherey
We propose a simple and effective method for machine translation evaluation which does not require reference translations. Our approach is based on (1) grounding the entity mentions found in each source sentence and candidate translation against a large-scale multilingual knowledge base, and (2) measuring the recall of the grounded entities found in the candidate vs. those found in the source. Our approach achieves the highest correlation with human judgements on 9 out of the 18 language pairs from the WMT19 benchmark for evaluation without references, which is the largest number of wins for a single evaluation method on this task. On 4 language pairs, we also achieve higher correlation with human judgements than BLEU. To foster further research, we release a dataset containing 1.8 million grounded entity mentions across 18 language pairs from the WMT19 metrics track data.
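
The scoring idea described in the abstract (recall of grounded entities in the candidate against those in the source) can be sketched in a few lines. The snippet below assumes an entity linker has already mapped mentions to knowledge-base identifiers; the IDs shown are placeholders, and the no-entity convention is an assumption.

```python
from collections import Counter
from typing import List

def kobe_style_score(source_entity_ids: List[str], candidate_entity_ids: List[str]) -> float:
    """Recall of grounded entities: the fraction of KB entities grounded in the source
    sentence that are also grounded in the candidate translation."""
    src = Counter(source_entity_ids)
    cand = Counter(candidate_entity_ids)
    if not src:
        return 1.0  # assumption: a source with no grounded entities has nothing to miss
    matched = sum(min(count, cand[entity]) for entity, count in src.items())
    return matched / sum(src.values())

# Three grounded mentions in the source, two recovered in the candidate (placeholder KB ids).
print(kobe_style_score(["Q90", "Q142", "Q90"], ["Q90", "Q142"]))  # -> 0.666...
```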

235. Improving Target-side Lexical Transfer in Multilingual Neural Machine Translation [PDF] 返回目录
  EMNLP 2020. Findings Short Paper
  Luyu Gao, Xinyi Wang, Graham Neubig
To improve the performance of Neural Machine Translation (NMT) for low-resource languages (LRL), one effective strategy is to leverage parallel data from a related high-resource language (HRL). However, multilingual data has been found more beneficial for NMT models that translate from the LRL to a target language than the ones that translate into the LRLs. In this paper, we aim to improve the effectiveness of multilingual transfer for NMT models that translate into the LRL, by designing a better decoder word embedding. Extending upon a general-purpose multilingual encoding method Soft Decoupled Encoding (Wang et al., 2019), we propose DecSDE, an efficient character n-gram based embedding specifically designed for the NMT decoder. Our experiments show that DecSDE leads to consistent gains of up to 1.8 BLEU on translation from English to four different languages.

236. The Box is in the Pen: Evaluating Commonsense Reasoning in Neural Machine Translation [PDF] 返回目录
  EMNLP 2020. Findings Short Paper
  Jie He, Tao Wang, Deyi Xiong, Qun Liu
Does neural machine translation yield translations that are congenial with common sense? In this paper, we present a test suite to evaluate the commonsense reasoning capability of neural machine translation. The test suite consists of three test sets, covering lexical and contextless/contextual syntactic ambiguity that requires commonsense knowledge to resolve. We manually create 1,200 triples, each of which contains a source sentence and two contrastive translations, involving 7 different common sense types. Language models pretrained on large-scale corpora, such as BERT and GPT-2, achieve a commonsense reasoning accuracy of lower than 72% on target translations of this test suite. We conduct extensive experiments on the test suite to evaluate commonsense reasoning in neural machine translation and investigate factors that have an impact on this capability. Our experiments and analyses demonstrate that neural machine translation performs poorly on commonsense reasoning of the three ambiguity types in terms of both reasoning accuracy (≤60.1%) and reasoning consistency (≤31%). We will release our test suite as a machine translation commonsense reasoning testbed to promote future work in this direction.

237. It’s not a Non-Issue: Negation as a Source of Error in Machine Translation [PDF] 返回目录
  EMNLP 2020. Findings Short Paper
  Md Mosharaf Hossain, Antonios Anastasopoulos, Eduardo Blanco, Alexis Palmer
As machine translation (MT) systems progress at a rapid pace, questions of their adequacy linger. In this study we focus on negation, a universal, core property of human language that significantly affects the semantics of an utterance. We investigate whether translating negation is an issue for modern MT systems using 17 translation directions as test bed. Through thorough analysis, we find that indeed the presence of negation can significantly impact downstream quality, in some cases resulting in quality reductions of more than 60%. We also provide a linguistically motivated analysis that directly explains the majority of our findings. We release our annotations and code to replicate our analysis here: https://github.com/mosharafhossain/negation-mt.

238. Fluent and Low-latency Simultaneous Speech-to-Speech Translation with Self-adaptive Training [PDF] 返回目录
  EMNLP 2020. Findings Short Paper
  Renjie Zheng, Mingbo Ma, Baigong Zheng, Kaibo Liu, Jiahong Yuan, Kenneth Church, Liang Huang
Simultaneous speech-to-speech translation is an extremely challenging but widely useful scenario that aims to generate target-language speech only a few seconds behind the source-language speech. In addition, we have to continuously translate speech consisting of multiple sentences, but all recent solutions merely focus on the single-sentence scenario. As a result, current approaches accumulate more and more latency in later sentences when the speaker talks faster, and introduce unnatural pauses into translated speech when the speaker talks slower. To overcome these issues, we propose Self-Adaptive Translation, which flexibly adjusts the length of translations to accommodate different source speech rates. At similar levels of translation quality (as measured by BLEU), our method generates more fluent target speech with lower latency than the baseline, in both Zh<->En directions.

239. Finding the Optimal Vocabulary Size for Neural Machine Translation [PDF] 返回目录
  EMNLP 2020. Findings Short Paper
  Thamme Gowda, Jonathan May
We cast neural machine translation (NMT) as a classification task in an autoregressive setting and analyze the limitations of both classification and autoregression components. Classifiers are known to perform better with balanced class distributions during training. Since the Zipfian nature of languages causes imbalanced classes, we explore its effect on NMT. We analyze the effect of various vocabulary sizes on NMT performance on multiple languages with many data sizes, and reveal an explanation for why certain vocabulary sizes are better than others.
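
As a rough illustration of the kind of analysis described, the snippet below computes a simple skew statistic for the token (class) distribution produced by a given segmentation; re-running it for different vocabulary sizes shows how the class balance faced by the NMT softmax changes. The specific statistic is an assumption, not necessarily the measure used in the paper.

```python
from collections import Counter
from typing import Dict, Iterable

def class_imbalance_stats(token_stream: Iterable[str]) -> Dict[str, float]:
    """How skewed is the token (class) distribution that the NMT softmax must model
    under a given segmentation? Returns the observed vocabulary size and the
    fraction of classes needed to cover 95% of all tokens."""
    counts = Counter(token_stream)
    total = sum(counts.values())
    running, classes_for_95 = 0, 0
    for i, c in enumerate(sorted(counts.values(), reverse=True), start=1):
        running += c
        if running / total >= 0.95:
            classes_for_95 = i
            break
    return {"observed_classes": float(len(counts)),
            "frac_classes_for_95pct_tokens": classes_for_95 / len(counts)}

# Re-run on the corpus after segmenting with different BPE merge counts to compare
# how the choice of vocabulary size changes class balance.
print(class_imbalance_stats("the cat sat on the mat the end".split()))
```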

240. Reference Language based Unsupervised Neural Machine Translation [PDF] 返回目录
  EMNLP 2020. Findings Short Paper
  Zuchao Li, Hai Zhao, Rui Wang, Masao Utiyama, Eiichiro Sumita
Exploiting a common language as an auxiliary has a long tradition in machine translation: in the absence of a source-to-target parallel corpus, supervised machine translation can still benefit from a well-resourced pivot language. The rise of unsupervised neural machine translation (UNMT) almost completely relieves the parallel corpus curse, though UNMT is still subject to unsatisfactory performance due to the vagueness of the clues available for its core back-translation training. Further enriching the idea of pivot translation by extending the use of parallel corpora beyond the source-target paradigm, we propose a new reference language-based framework for UNMT, RUNMT, in which the reference language only shares a parallel corpus with the source, but this corpus still provides a signal clear enough to help the reconstruction training of UNMT through a proposed reference agreement mechanism. Experimental results show that our methods improve the quality of UNMT over that of a strong baseline that uses only one auxiliary language, demonstrating the usefulness of the proposed reference language-based UNMT and establishing a good start for the community.

241. Assessing Human-Parity in Machine Translation on the Segment Level [PDF] 返回目录
  EMNLP 2020. Findings Short Paper
  Yvette Graham, Christian Federmann, Maria Eskevich, Barry Haddow
Recent machine translation shared tasks have shown top-performing systems to tie with or in some cases even outperform human translation. Such conclusions about system and human performance are, however, based on estimates aggregated from scores collected over large test sets of translations and unfortunately leave some remaining questions unanswered. For instance, the fact that a system significantly outperforms the human translator on average does not necessarily mean that it has done so for every translation in the test set. Are there source segments in evaluation test sets that still cause significant challenges for top-performing systems, and can such challenging segments go unnoticed due to the opacity of current human evaluation procedures? To provide insight into these issues we carefully inspect the outputs of top-performing systems in the most recent WMT-19 news translation shared task for all language pairs in which a system either tied or outperformed human translation. Our analysis provides a new method of identifying the remaining segments for which either machine or human performs poorly. For example, in our close inspection of WMT-19 English to German and German to English we discover the segments that disjointly proved a challenge for human and machine. For English to Russian, there were no segments included in our sample of translations that caused a significant challenge for the human translator, while we again identify the set of segments that caused issues for the top-performing system.

242. Factorized Transformer for Multi-Domain Neural Machine Translation [PDF] 返回目录
  EMNLP 2020. Findings Short Paper
  Yongchao Deng, Hongfei Yu, Heng Yu, Xiangyu Duan, Weihua Luo
Multi-Domain Neural Machine Translation (NMT) aims at building a single system that performs well on a range of target domains. However, along with the extreme diversity of cross-domain wording and phrasing style, the imperfections of training data distribution and the inherent defects of the current sequential learning process all contribute to making the task of multi-domain NMT very challenging. To mitigate these problems, we propose the Factorized Transformer, which consists of an in-depth factorization of the parameters of an NMT model, namely Transformer in this paper, into two categories: domain-shared ones that encode common cross-domain knowledge and domain-specific ones that are private for each constituent domain. We experiment with various designs of our model and conduct extensive validations on English to French open multi-domain dataset. Our approach achieves state-of-the-art performance and opens up new perspectives for multi-domain and open-domain applications.

243. Vocabulary Adaptation for Domain Adaptation in Neural Machine Translation [PDF] 返回目录
  EMNLP 2020. Findings Short Paper
  Shoetsu Sato, Jin Sakuma, Naoki Yoshinaga, Masashi Toyoda, Masaru Kitsuregawa
Neural network methods exhibit strong performance only in a few resource-rich domains. Practitioners therefore employ domain adaptation from resource-rich domains that are, in most cases, distant from the target domain. Domain adaptation between distant domains (e.g., movie subtitles and research papers), however, cannot be performed effectively due to mismatches in vocabulary; it will encounter many domain-specific words (e.g., “angstrom”) and words whose meanings shift across domains (e.g., “conductor”). In this study, aiming to solve these vocabulary mismatches in domain adaptation for neural machine translation (NMT), we propose vocabulary adaptation, a simple method for effective fine-tuning that adapts embedding layers in a given pretrained NMT model to the target domain. Prior to fine-tuning, our method replaces the embedding layers of the NMT model by projecting general word embeddings induced from monolingual data in a target domain onto a source-domain embedding space. Experimental results indicate that our method improves the performance of conventional fine-tuning by 3.86 and 3.28 BLEU points in En-Ja and De-En translation, respectively.
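
The abstract says the target-domain embeddings are projected onto the source-domain (pretrained NMT) embedding space before fine-tuning, but does not give the projection method. The sketch below uses an ordinary least-squares linear map fit on words the two vocabularies share, purely as an illustrative assumption; the shapes and anchor selection are toy choices.

```python
import numpy as np

def fit_projection(domain_emb: np.ndarray, nmt_emb: np.ndarray,
                   shared_idx_domain: np.ndarray, shared_idx_nmt: np.ndarray) -> np.ndarray:
    """Fit a least-squares linear map W that sends target-domain word vectors into the
    pretrained NMT embedding space, using words the two vocabularies share as anchors."""
    X = domain_emb[shared_idx_domain]          # (m, d_in)
    Y = nmt_emb[shared_idx_nmt]                # (m, d_out)
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)  # (d_in, d_out)
    return W

def adapted_embedding_table(domain_emb: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Projected target-domain vectors that replace the NMT embedding layer before fine-tuning."""
    return domain_emb @ W

# Toy shapes: 5k target-domain words, 8k NMT subwords, 1k shared anchor words.
rng = np.random.default_rng(0)
domain_emb, nmt_emb = rng.normal(size=(5000, 300)), rng.normal(size=(8000, 512))
anchors_d, anchors_n = np.arange(1000), np.arange(1000)
W = fit_projection(domain_emb, nmt_emb, anchors_d, anchors_n)
print(adapted_embedding_table(domain_emb, W).shape)  # (5000, 512)
```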

244. Training Flexible Depth Model by Multi-Task Learning for Neural Machine Translation [PDF] 返回目录
  EMNLP 2020. Findings Short Paper
  Qiang Wang, Tong Xiao, Jingbo Zhu
The standard neural machine translation model can only decode with the same depth configuration as training. Restricted by this feature, we have to deploy models of various sizes to maintain the same translation latency, because the hardware conditions on different terminal devices (e.g., mobile phones) may vary greatly. Such individual training leads to increased model maintenance costs and slower model iterations, especially for the industry. In this work, we propose to use multi-task learning to train a flexible depth model that can adapt to different depth configurations during inference. Experimental results show that our approach can simultaneously support decoding in 24 depth configurations and is superior to the individual training and another flexible depth model training method, LayerDrop.

245. Extremely Low Bit Transformer Quantization for On-Device Neural Machine Translation [PDF] 返回目录
  EMNLP 2020. Findings Short Paper
  Insoo Chung, Byeongwook Kim, Yoonjung Choi, Se Jung Kwon, Yongkweon Jeon, Baeseong Park, Sangha Kim, Dongsoo Lee
The deployment of the widely used Transformer architecture is challenging because of the heavy computation load and memory overhead during inference, especially when the target device is limited in computational resources, such as mobile or edge devices. Quantization is an effective technique to address such challenges. Our analysis shows that, for a given number of quantization bits, each block of the Transformer contributes to translation quality and inference computation in different manners. Moreover, even inside an embedding block, each word presents vastly different contributions. Correspondingly, we propose a mixed precision quantization strategy to represent Transformer weights with an extremely low number of bits (e.g., under 3 bits). For example, for each word in an embedding block, we assign different quantization bits based on statistical properties. Our quantized Transformer model achieves an 11.8× smaller model size than the baseline model, with a BLEU degradation of less than 0.5. We achieve an 8.3× reduction in run-time memory footprint and a 3.5× speedup (on a Galaxy N10+), such that our proposed compression strategy enables efficient implementation for on-device NMT.
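
As a toy illustration of mixed-precision embedding quantization, the snippet below assigns more bits to frequent words and fewer to rare ones, then applies uniform min-max quantization per embedding row. The bit-assignment heuristic, the bit widths, and the quantizer are assumptions; the paper's actual strategy may differ.

```python
import numpy as np

def assign_bits_by_frequency(word_freqs: np.ndarray, high: int = 4, low: int = 1,
                             top_fraction: float = 0.1) -> np.ndarray:
    """Toy heuristic: the most frequent words get `high` bits, the rest get `low` bits."""
    order = np.argsort(-word_freqs)
    bits = np.full(len(word_freqs), low, dtype=int)
    bits[order[: int(top_fraction * len(word_freqs))]] = high
    return bits

def quantize_row(row: np.ndarray, n_bits: int) -> np.ndarray:
    """Uniform (min-max) quantization of one embedding row to n_bits, returned dequantized."""
    levels = 2 ** n_bits - 1
    lo, hi = row.min(), row.max()
    scale = (hi - lo) / levels if hi > lo else 1.0
    return np.round((row - lo) / scale) * scale + lo

rng = np.random.default_rng(0)
emb = rng.normal(size=(1000, 64)).astype(np.float32)
bits = assign_bits_by_frequency(rng.zipf(1.3, 1000).astype(float))
emb_q = np.stack([quantize_row(row, b) for row, b in zip(emb, bits)])
print(emb_q.shape, bits[:10])
```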

246. On the Weaknesses of Reinforcement Learning for Neural Machine Translation [PDF] 返回目录
  ICLR 2020.
  Leshem Choshen, Lior Fox, Zohar Aizenbud, Omri Abend
Reinforcement learning (RL) is frequently used to increase performance in text generation tasks, including machine translation (MT), notably through the use of Minimum Risk Training (MRT) and Generative Adversarial Networks (GAN). However, little is known about what and how these methods learn in the context of MT. We prove that one of the most common RL methods for MT does not optimize the expected reward, as well as show that other methods take an infeasibly long time to converge. In fact, our results suggest that RL practices in MT are likely to improve performance only where the pre-trained parameters are already close to yielding the correct translation. Our findings further suggest that observed gains may be due to effects unrelated to the training signal, concretely, changes in the shape of the distribution curve.

247. Understanding Knowledge Distillation in Non-autoregressive Machine Translation [PDF] 返回目录
  ICLR 2020.
  Chunting Zhou, Jiatao Gu, Graham Neubig
Non-autoregressive machine translation (NAT) systems predict a sequence of output tokens in parallel, achieving substantial improvements in generation speed compared to autoregressive models. Existing NAT models usually rely on the technique of knowledge distillation, which creates the training data from a pretrained autoregressive model for better performance. Knowledge distillation is empirically useful, leading to large gains in accuracy for NAT models, but the reason for this success has, as of yet, been unclear. In this paper, we first design systematic experiments to investigate why knowledge distillation is crucial to NAT training. We find that knowledge distillation can reduce the complexity of data sets and help NAT to model the variations in the output data. Furthermore, a strong correlation is observed between the capacity of an NAT model and the optimal complexity of the distilled data for the best translation quality. Based on these findings, we further propose several approaches that can alter the complexity of data sets to improve the performance of NAT models. We achieve the state-of-the-art performance for the NAT-based models, and close the gap with the autoregressive baseline on WMT14 En-De benchmark.

248. U-GAT-IT: Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation [PDF] 返回目录
  ICLR 2020.
  Junho Kim, Minjae Kim, Hyeonwoo Kang, Kwanghee Lee
We propose a novel method for unsupervised image-to-image translation, which incorporates a new attention module and a new learnable normalization function in an end-to-end manner. The attention module guides our model to focus on the more important regions that distinguish the source and target domains, based on the attention map obtained by the auxiliary classifier. Unlike previous attention-based methods, which cannot handle the geometric changes between domains, our model can translate both images requiring holistic changes and images requiring large shape changes. Moreover, our new AdaLIN (Adaptive Layer-Instance Normalization) function helps our attention-guided model to flexibly control the amount of change in shape and texture via learned parameters depending on the dataset. Experimental results show the superiority of the proposed method compared to existing state-of-the-art models with a fixed network architecture and hyper-parameters.

249. Incorporating BERT into Neural Machine Translation [PDF] 返回目录
  ICLR 2020.
  Jinhua Zhu, Yingce Xia, Lijun Wu, Di He, Tao Qin, Wengang Zhou, Houqiang Li, Tie-Yan Liu
The recently proposed BERT (Devlin et al., 2019) has shown great power on a variety of natural language understanding tasks, such as text classification, reading comprehension, etc. However, how to effectively apply BERT to neural machine translation (NMT) has not been sufficiently explored. While BERT is more commonly used for fine-tuning than as a source of contextual embeddings in downstream language understanding tasks, our preliminary exploration shows that, in NMT, using BERT as a contextual embedding works better than using it for fine-tuning. This motivates us to think about how to better leverage BERT for NMT along this direction. We propose a new algorithm named the BERT-fused model, in which we first use BERT to extract representations for an input sequence, and then the representations are fused with each layer of the encoder and decoder of the NMT model through attention mechanisms. We conduct experiments on supervised (including sentence-level and document-level translation), semi-supervised and unsupervised machine translation, and achieve state-of-the-art results on seven benchmark datasets. Our code is available at https://github.com/bert-nmt/bert-nmt

250. Neural Machine Translation with Universal Visual Representation [PDF] 返回目录
  ICLR 2020.
  Zhuosheng Zhang, Kehai Chen, Rui Wang, Masao Utiyama, Eiichiro Sumita, Zuchao Li, Hai Zhao
Though visual information has been introduced for enhancing neural machine translation (NMT), its effectiveness strongly relies on the availability of large amounts of bilingual parallel sentence pairs with manual image annotations. In this paper, we present a universal visual representation learned over monolingual corpora with image annotations, which overcomes the lack of large-scale bilingual sentence-image pairs, thereby extending image applicability in NMT. In detail, a group of images with topics similar to the source sentence is retrieved from a light topic-image lookup table learned over the existing sentence-image pairs, and then encoded as image representations by a pre-trained ResNet. An attention layer with gated weighting is used to fuse the visual information and text information as input to the decoder for predicting target translations. In particular, the proposed method enables the visual information to be integrated into large-scale text-only NMT in addition to multimodal NMT. Experiments on four widely used translation datasets, including WMT'16 English-to-Romanian, WMT'14 English-to-German, WMT'14 English-to-French, and Multi30K, show that the proposed approach achieves significant improvements over strong baselines.
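
The fusion step described here (an attention layer with gated weighting over visual and textual information) can be illustrated with a much-simplified sketch: a sigmoid gate computed from the concatenated text state and pooled image representation decides how much visual information to add. The shapes and the gate parameterization below are assumptions, not the paper's architecture.

```python
import numpy as np

def gated_fusion(text_h: np.ndarray, image_h: np.ndarray,
                 W_g: np.ndarray, b_g: np.ndarray) -> np.ndarray:
    """A sigmoid gate over the concatenated states controls how much of the (pooled)
    image representation is added to the text representation."""
    gate = 1.0 / (1.0 + np.exp(-(np.concatenate([text_h, image_h]) @ W_g + b_g)))
    return text_h + gate * image_h

d = 4
rng = np.random.default_rng(0)
h_text, h_img = rng.normal(size=d), rng.normal(size=d)
W_g, b_g = 0.1 * rng.normal(size=(2 * d, d)), np.zeros(d)
print(gated_fusion(h_text, h_img, W_g, b_g))
```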

251. A Latent Morphology Model for Open-Vocabulary Neural Machine Translation [PDF] 返回目录
  ICLR 2020.
  Duygu Ataman, Wilker Aziz, Alexandra Birch
Translation into morphologically-rich languages challenges neural machine translation (NMT) models with extremely sparse vocabularies where atomic treatment of surface forms is unrealistic. This problem is typically addressed by either pre-processing words into subword units or performing translation directly at the level of characters. The former is based on word segmentation algorithms optimized using corpus-level statistics with no regard to the translation task. The latter learns directly from translation data but requires rather deep architectures. In this paper, we propose to translate words by modeling word formation through a hierarchical latent variable model which mimics the process of morphological inflection. Our model generates words one character at a time by composing two latent representations: a continuous one, aimed at capturing the lexical semantics, and a set of (approximately) discrete features, aimed at capturing the morphosyntactic function, which are shared among different surface forms. Our model achieves better accuracy in translation into three morphologically-rich languages than conventional open-vocabulary NMT methods, while also demonstrating a better generalization capacity under low to mid-resource settings.

252. Mirror-Generative Neural Machine Translation [PDF] 返回目录
  ICLR 2020.
  Zaixiang Zheng, Hao Zhou, Shujian Huang, Lei Li, Xin-Yu Dai, Jiajun Chen
Training neural machine translation models (NMT) requires a large amount of parallel corpus, which is scarce for many language pairs. However, raw non-parallel corpora are often easy to obtain. Existing approaches have not exploited the full potential of non-parallel bilingual data either in training or decoding. In this paper, we propose the mirror-generative NMT (MGNMT), a single unified architecture that simultaneously integrates the source to target translation model, the target to source translation model, and two language models. Both translation models and language models share the same latent semantic space, therefore both translation directions can learn from non-parallel data more effectively. Besides, the translation models and language models can collaborate together during decoding. Our experiments show that the proposed MGNMT consistently outperforms existing approaches in a variety of scenarios and language pairs, including resource-rich and low-resource languages.

253. Lexical-Constraint-Aware Neural Machine Translation via Data Augmentation [PDF] 返回目录
  IJCAI 2020.
  Guanhua Chen, Yun Chen, Yong Wang, Victor O. K. Li
Leveraging lexical constraints is extremely important in domain-specific machine translation and interactive machine translation. Previous studies mainly focus on extending the beam search algorithm or augmenting the training corpus by replacing source phrases with the corresponding target translations. These methods either suffer from heavy computation cost during inference or depend on the quality of a bilingual dictionary pre-specified by the user or constructed with statistical machine translation. In response to these problems, we present a conceptually simple and empirically effective data augmentation approach for lexically constrained neural machine translation. Specifically, we make constraint-aware training data by first randomly sampling phrases of the reference as constraints, and then packing them together into the source sentence with a separation symbol. Extensive experiments on several language pairs demonstrate that our approach achieves superior translation results over existing systems, improving the translation of constrained sentences without hurting the unconstrained ones.
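
The augmentation itself is simple enough to sketch: sample a few phrases from the reference as constraints and pack them onto the source sentence behind a separation symbol, leaving the target unchanged. The phrase-sampling scheme and the `<sep>` token below are illustrative choices; the paper's exact settings may differ.

```python
import random
from typing import List, Tuple

def make_constraint_aware_example(src_tokens: List[str], tgt_tokens: List[str],
                                  max_constraints: int = 2, sep: str = "<sep>"
                                  ) -> Tuple[List[str], List[str]]:
    """Randomly sample short phrases from the reference as constraints and append each
    to the source sentence behind a separation symbol; the target stays unchanged, so
    the model learns to copy the constraints into its output."""
    new_src = list(src_tokens)
    for _ in range(random.randint(0, max_constraints)):
        if len(tgt_tokens) < 2:
            break
        start = random.randrange(len(tgt_tokens) - 1)
        end = min(len(tgt_tokens), start + random.randint(1, 3))
        new_src += [sep] + tgt_tokens[start:end]
    return new_src, list(tgt_tokens)

random.seed(0)
src = "wir haben das Ergebnis geprüft".split()
tgt = "we checked the result".split()
print(make_constraint_aware_example(src, tgt))
```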

254. Modeling Voting for System Combination in Machine Translation [PDF] 返回目录
  IJCAI 2020.
  Xuancheng Huang, Jiacheng Zhang, Zhixing Tan, Derek F. Wong, Huanbo Luan, Jingfang Xu, Maosong Sun, Yang Liu
System combination is an important technique for combining the hypotheses of different machine translation systems to improve translation performance. Although early statistical approaches to system combination have been proven effective in analyzing the consensus between hypotheses, they suffer from the error propagation problem due to the use of pipelines. While this problem has been alleviated by end-to-end training of multi-source sequence-to-sequence models recently, these neural models do not explicitly analyze the relations between hypotheses and fail to capture their agreement because the attention to a word in a hypothesis is calculated independently, ignoring the fact that the word might occur in multiple hypotheses. In this work, we propose an approach to modeling voting for system combination in machine translation. The basic idea is to enable words in hypotheses from different systems to vote on words that are representative and should get involved in the generation process. This can be done by quantifying the influence of each voter and its preference for each candidate. Our approach combines the advantages of statistical and neural methods since it can not only analyze the relations between hypotheses but also allow for end-to-end training. Experiments show that our approach is capable of better taking advantage of the consensus between hypotheses and achieves significant improvements over state-of-the-art baselines on Chinese-English and English-German machine translation tasks.

255. Task-Level Curriculum Learning for Non-Autoregressive Neural Machine Translation [PDF] 返回目录
  IJCAI 2020.
  Jinglin Liu, Yi Ren, Xu Tan, Chen Zhang, Tao Qin, Zhou Zhao, Tie-Yan Liu
Non-autoregressive translation (NAT) achieves faster inference speed but at the cost of worse accuracy compared with autoregressive translation (AT). Since AT and NAT can share model structure, and AT is an easier task than NAT due to the explicit dependency on previous target-side tokens, a natural idea is to gradually shift model training from the easier AT task to the harder NAT task. To smooth the shift from AT training to NAT training, in this paper we introduce semi-autoregressive translation (SAT) as intermediate tasks. SAT contains a hyperparameter k, and each k value defines an SAT task with a different degree of parallelism. Specifically, SAT covers AT and NAT as its special cases: it reduces to AT when k=1 and to NAT when k=N (N is the length of the target sentence). We design curriculum schedules to gradually shift k from 1 to N, with different pacing functions and numbers of tasks trained at the same time. We call our method task-level curriculum learning for NAT (TCL-NAT). Experiments on the IWSLT14 De-En, IWSLT16 En-De, WMT14 En-De and De-En datasets show that TCL-NAT achieves significant accuracy improvements over previous NAT baselines and reduces the performance gap between NAT and AT models to 1-2 BLEU points, demonstrating the effectiveness of our proposed method.
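
A curriculum of this kind boils down to a pacing function that maps the training step to the current parallelism degree k; the sketch below shows a few plausible schedules. The concrete functions and constants are assumptions, not the ones used in the paper.

```python
import math

def pacing_k(step: int, total_steps: int, target_length: int, schedule: str = "linear") -> int:
    """Map the training step to the current SAT parallelism degree k:
    k=1 is plain AT, k=target_length is fully non-autoregressive."""
    progress = min(step / max(total_steps, 1), 1.0)
    if schedule == "linear":
        frac = progress
    elif schedule == "sqrt":          # move away from AT quickly, then slow down
        frac = math.sqrt(progress)
    else:                             # "quadratic": stay close to AT longer
        frac = progress ** 2
    return max(1, round(1 + frac * (target_length - 1)))

for s in (0, 25_000, 50_000, 100_000):
    print(s, pacing_k(s, total_steps=100_000, target_length=20, schedule="sqrt"))
```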

256. Neural Machine Translation with Error Correction [PDF] 返回目录
  IJCAI 2020.
  Kaitao Song, Xu Tan, Jianfeng Lu
Neural machine translation (NMT) generates the next target token conditioned on the previous ground-truth target tokens during training but on the previously generated target tokens during inference, which causes a discrepancy between training and inference as well as error propagation, and affects translation accuracy. In this paper, we introduce an error correction mechanism into NMT, which corrects the error information in the previously generated tokens to better predict the next token. Specifically, we introduce two-stream self-attention from XLNet into the NMT decoder, where the query stream is used to predict the next token, and meanwhile the content stream is used to correct the error information from the previously predicted tokens. We leverage scheduled sampling to simulate the prediction errors during training. Experiments on three IWSLT translation datasets and two WMT translation datasets demonstrate that our method achieves improvements over the Transformer baseline and scheduled sampling. Further experimental analyses also verify the effectiveness of our proposed error correction mechanism in improving translation quality.

257. Efficient Context-Aware Neural Machine Translation with Layer-Wise Weighting and Input-Aware Gating [PDF] 返回目录
  IJCAI 2020.
  Hongfei Xu, Deyi Xiong, Josef van Genabith, Qiuhui Liu
Existing Neural Machine Translation (NMT) systems are generally trained on a large amount of sentence-level parallel data, and during prediction sentences are independently translated, ignoring cross-sentence contextual information. This leads to inconsistency between translated sentences. In order to address this issue, context-aware models have been proposed. However, document-level parallel data constitutes only a small part of the parallel data available, and many approaches build context-aware models based on a pre-trained frozen sentence-level translation model in a two-step training manner. The computational cost of these approaches is usually high. In this paper, we propose to make the most of layers pre-trained on sentence-level data in contextual representation learning, reusing representations from the sentence-level Transformer and significantly reducing the cost of incorporating contexts in translation. We find that representations from shallow layers of a pre-trained sentence-level encoder play a vital role in source context encoding, and propose to perform source context encoding upon weighted combinations of pre-trained encoder layers' outputs. Instead of separately performing source context and input encoding, we propose to iteratively and jointly encode the source input and its contexts and to generate input-aware context representations with a cross-attention layer and a gating mechanism, which resets irrelevant information in context encoding. Our context-aware Transformer model outperforms the recent CADec [Voita et al., 2019c] on the English-Russian subtitle data and is about twice as fast in training and decoding.

258. Towards Making the Most of Context in Neural Machine Translation [PDF] 返回目录
  IJCAI 2020.
  Zaixiang Zheng, Xiang Yue, Shujian Huang, Jiajun Chen, Alexandra Birch
Document-level machine translation manages to outperform sentence-level models by a small margin, but has failed to be widely adopted. We argue that previous research did not make clear use of the global context, and propose a new document-level NMT framework that deliberately models the local context of each sentence with awareness of the global context of the document in both source and target languages. We specifically design the model to be able to deal with documents containing any number of sentences, including single sentences. This unified approach allows our model to be trained elegantly on standard datasets without needing to train on sentence-level and document-level data separately. Experimental results demonstrate that our model outperforms Transformer baselines and previous document-level NMT models with substantial margins of up to 2.1 BLEU on state-of-the-art baselines. We also provide analyses which show the benefit of context far beyond the neighboring two or three sentences, which previous studies have typically incorporated.

259. Knowledge Graphs Enhanced Neural Machine Translation [PDF] 返回目录
  IJCAI 2020.
  Yang Zhao, Jiajun Zhang, Yu Zhou, Chengqing Zong
Knowledge graphs (KGs) store much structured information on various entities, many of which are not covered by the parallel sentence pairs used for neural machine translation (NMT). To improve the translation quality of these entities, in this paper we propose a novel KG-enhanced NMT method. Specifically, we first induce new translation results for these entities by transforming the source and target KGs into a unified semantic space. We then generate adequate pseudo parallel sentence pairs that contain these induced entity pairs. Finally, the NMT model is jointly trained on the original and pseudo sentence pairs. Extensive experiments on Chinese-to-English and English-to-Japanese translation tasks demonstrate that our method significantly outperforms strong baseline models in translation quality, especially in handling the induced entities.

260. Bridging the Gap between Training and Inference for Neural Machine Translation (Extended Abstract) [PDF] 返回目录
  IJCAI 2020.
  Wen Zhang, Yang Feng, Qun Liu
Neural Machine Translation (NMT) generates target words sequentially by predicting the next word conditioned on the context words. At training time, it predicts with the ground truth words as context, while at inference it has to generate the entire sequence from scratch. This discrepancy in the fed context leads to error accumulation during translation. Furthermore, word-level training requires strict matching between the generated sequence and the ground truth sequence, which leads to overcorrection of different but reasonable translations. In this paper, we address these issues by sampling context words not only from the ground truth sequence but also from the predicted sequence during training. Experimental results on NIST Chinese->English and WMT2014 English->German translation tasks demonstrate that our method can achieve significant improvements on multiple data sets compared to strong baselines.
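
The core training trick (feeding a mix of ground-truth and model-predicted context words, with the mixing probability decayed over training) can be sketched as below. The inverse-sigmoid decay and the constant k are illustrative choices rather than the paper's exact schedule.

```python
import numpy as np

def mix_context(ground_truth_ids: np.ndarray, predicted_ids: np.ndarray,
                step: int, k: float = 10_000.0) -> np.ndarray:
    """Keep each ground-truth context token with a probability that decays over training;
    otherwise feed back the model's own prediction from the previous pass."""
    p_ground_truth = k / (k + np.exp(step / k))   # inverse-sigmoid decay (illustrative)
    keep = np.random.rand(*ground_truth_ids.shape) < p_ground_truth
    return np.where(keep, ground_truth_ids, predicted_ids)

gt = np.array([5, 17, 42, 3])     # ground-truth target ids for one sentence
pred = np.array([5, 17, 40, 3])   # the model's own predictions
print(mix_context(gt, pred, step=50_000))
```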

261. Membership Inference Attacks on Sequence-to-Sequence Models: Is My Data In Your Machine Translation System? [PDF] 返回目录
  TACL 2020.
  Sorami Hisamoto, Matt Post, Kevin Duh
Data privacy is an important issue for “machine learning as a service” providers. We focus on the problem of membership inference attacks: Given a data sample and black-box access to a model’s API, determine whether the sample existed in the model’s training data. Our contribution is an investigation of this problem in the context of sequence-to-sequence models, which are important in applications such as machine translation and video captioning. We define the membership inference problem for sequence generation, provide an open dataset based on state-of-the-art machine translation models, and report initial results on whether these models leak private information against several kinds of membership inference attacks.

262. Better Document-Level Machine Translation with Bayes’ Rule [PDF] 返回目录
  TACL 2020.
  Lei Yu, Laurent Sartran, Wojciech Stokowiec, Wang Ling, Lingpeng Kong, Phil Blunsom, Chris Dyer
We show that Bayes’ rule provides an effective mechanism for creating document translation models that can be learned from only parallel sentences and monolingual documents, a compelling benefit because parallel documents are not always available. In our formulation, the posterior probability of a candidate translation is the product of the unconditional (prior) probability of the candidate output document and the “reverse translation probability” of translating the candidate output back into the source language. Our proposed model uses a powerful autoregressive language model as the prior on target language documents, but it assumes that each sentence is translated independently from the target to the source language. Crucially, at test time, when a source document is observed, the document language model prior induces dependencies between the translations of the source sentences in the posterior. The model’s independence assumption not only enables efficient use of available data, but it additionally admits a practical left-to-right beam-search algorithm for carrying out inference. Experiments show that our model benefits from using cross-sentence context in the language model, and it outperforms existing document translation approaches.
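
The factorization described in the abstract can be written out explicitly; with a source document x = (x_1, ..., x_n) and a candidate translation y = (y_1, ..., y_n), the notation below is mine but follows the abstract directly.

```latex
% Document-level posterior: a target-document language-model prior times
% sentence-wise reverse (target-to-source) translation probabilities.
p(y \mid x) \;\propto\; \underbrace{p(y)}_{\text{document LM prior}}
\;\times\; \underbrace{\prod_{i=1}^{n} p(x_i \mid y_i)}_{\text{sentence-wise reverse translation}}
```

At decoding time, a left-to-right beam search over y scores each extension with both factors, which is what lets the document-level language model prior tie the otherwise independent sentence translations together.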

263. Reproducible and Efficient Benchmarks for Hyperparameter Optimization of Neural Machine Translation Systems [PDF] 返回目录
  TACL 2020.
  Xuan Zhang, Kevin Duh
Hyperparameter selection is a crucial part of building neural machine translation (NMT) systems across both academia and industry. Fine-grained adjustments to a model’s architecture or training recipe can mean the difference between a positive and negative research result or between a state-of-the-art and underperforming system. While recent literature has proposed methods for automatic hyperparameter optimization (HPO), there has been limited work on applying these methods to neural machine translation (NMT), due in part to the high costs associated with experiments that train large numbers of model variants. To facilitate research in this space, we introduce a lookup-based approach that uses a library of pre-trained models for fast, low cost HPO experimentation. Our contributions include (1) the release of a large collection of trained NMT models covering a wide range of hyperparameters, (2) the proposal of targeted metrics for evaluating HPO methods on NMT, and (3) a reproducible benchmark of several HPO methods against our model library, including novel graph-based and multiobjective methods.

Note: this paper list was compiled with the AC paper search tool.