
[NLP] 2019 Neural Machine Translation: A Compilation of Machine Translation Papers

Table of Contents

1. Dynamic Layer Aggregation for Neural Machine Translation with Routing-by-Agreement, AAAI 2019 [PDF] 摘要
2. DTMT: A Novel Deep Transition Architecture for Neural Machine Translation, AAAI 2019 [PDF] 摘要
3. Unsupervised Neural Machine Translation with SMT as Posterior Regularization, AAAI 2019 [PDF] 摘要
4. TransNFCM: Translation-Based Neural Fashion Compatibility Modeling, AAAI 2019 [PDF] 摘要
5. Regularizing Neural Machine Translation by Target-Bidirectional Agreement, AAAI 2019 [PDF] 摘要
6. Addressing the Under-Translation Problem from the Entropy Perspective, AAAI 2019 [PDF] 摘要
7. Exploiting Time-Series Image-to-Image Translation to Expand the Range of Wildlife Habitat Analysis, AAAI 2019 [PDF] 摘要
8. Controllable Image-to-Video Translation: A Case Study on Facial Expression Generation, AAAI 2019 [PDF] 摘要
9. Non-Autoregressive Neural Machine Translation with Enhanced Decoder Input, AAAI 2019 [PDF] 摘要
10. Non-Autoregressive Machine Translation with Auxiliary Regularization, AAAI 2019 [PDF] 摘要
11. Tied Transformers: Neural Machine Translation with Shared Encoder and Decoder, AAAI 2019 [PDF] 摘要
12. Recurrent Stacking of Layers for Compact Neural Machine Translation Models, AAAI 2019 [PDF] 摘要
13. Adapting Translation Models for Transcript Disfluency Detection, AAAI 2019 [PDF] 摘要
14. "Bilingual Expert" Can Find Translation Errors, AAAI 2019 [PDF] 摘要
15. PARABANK: Monolingual Bitext Generation and Sentential Paraphrasing via Lexically-Constrained Neural Machine Translation, AAAI 2019 [PDF] 摘要
16. Neural Machine Translation with Adequacy-Oriented Learning, AAAI 2019 [PDF] 摘要
17. Found in Translation: Learning Robust Joint Representations by Cyclic Translations between Modalities, AAAI 2019 [PDF] 摘要
18. Jointly Extracting Multiple Triplets with Multilayer Translation Constraints, AAAI 2019 [PDF] 摘要
19. Translating with Bilingual Topic Knowledge for Neural Machine Translation, AAAI 2019 [PDF] 摘要
20. Graph Based Translation Memory for Neural Machine Translation, AAAI 2019 [PDF] 摘要
21. Modeling Coherence for Discourse Neural Machine Translation, AAAI 2019 [PDF] 摘要
22. Identifying Semantics in Clinical Reports Using Neural Machine Translation, AAAI 2019 [PDF] 摘要
23. Unsupervised Pivot Translation for Distant Languages, ACL 2019 [PDF] 摘要
24. An Effective Approach to Unsupervised Machine Translation, ACL 2019 [PDF] 摘要
25. Effective Adversarial Regularization for Neural Machine Translation, ACL 2019 [PDF] 摘要
26. Revisiting Low-Resource Neural Machine Translation: A Case Study, ACL 2019 [PDF] 摘要
27. Domain Adaptive Inference for Neural Machine Translation, ACL 2019 [PDF] 摘要
28. When a Good Translation is Wrong in Context: Context-Aware Machine Translation Improves on Deixis, Ellipsis, and Lexical Cohesion, ACL 2019 [PDF] 摘要
29. A Compact and Language-Sensitive Multilingual Translation Method, ACL 2019 [PDF] 摘要
30. Unsupervised Parallel Sentence Extraction with Parallel Segment Detection Helps Machine Translation, ACL 2019 [PDF] 摘要
31. Unsupervised Bilingual Word Embedding Agreement for Unsupervised Neural Machine Translation, ACL 2019 [PDF] 摘要
32. Effective Cross-lingual Transfer of Neural Machine Translation Models without Shared Vocabularies, ACL 2019 [PDF] 摘要
33. Improved Zero-shot Neural Machine Translation via Ignoring Spurious Correlations, ACL 2019 [PDF] 摘要
34. Syntactically Supervised Transformers for Faster Neural Machine Translation, ACL 2019 [PDF] 摘要
35. Dynamically Composing Domain-Data Selection with Clean-Data Selection by “Co-Curricular Learning” for Neural Machine Translation, ACL 2019 [PDF] 摘要
36. On the Word Alignment from Neural Machine Translation, ACL 2019 [PDF] 摘要
37. Imitation Learning for Non-Autoregressive Neural Machine Translation, ACL 2019 [PDF] 摘要
38. Monotonic Infinite Lookback Attention for Simultaneous Machine Translation, ACL 2019 [PDF] 摘要
39. Evaluating Gender Bias in Machine Translation, ACL 2019 [PDF] 摘要
40. Neural Machine Translation with Reordering Embeddings, ACL 2019 [PDF] 摘要
41. Neural Fuzzy Repair: Integrating Fuzzy Matches into Neural Machine Translation, ACL 2019 [PDF] 摘要
42. Learning Deep Transformer Models for Machine Translation, ACL 2019 [PDF] 摘要
43. Generating Diverse Translations with Sentence Codes, ACL 2019 [PDF] 摘要
44. Self-Supervised Neural Machine Translation, ACL 2019 [PDF] 摘要
45. Exploring Phoneme-Level Speech Representations for End-to-End Speech Translation, ACL 2019 [PDF] 摘要
46. Putting Evaluation in Context: Contextual Embeddings Improve Machine Translation Evaluation, ACL 2019 [PDF] 摘要
47. Domain Adaptation of Neural Machine Translation by Lexicon Induction, ACL 2019 [PDF] 摘要
48. Reference Network for Neural Machine Translation, ACL 2019 [PDF] 摘要
49. Retrieving Sequential Information for Non-Autoregressive Neural Machine Translation, ACL 2019 [PDF] 摘要
50. STACL: Simultaneous Translation with Implicit Anticipation and Controllable Latency using Prefix-to-Prefix Framework, ACL 2019 [PDF] 摘要
51. Look Harder: A Neural Machine Translation Model with Hard Attention, ACL 2019 [PDF] 摘要
52. Robust Neural Machine Translation with Joint Textual and Phonetic Embedding, ACL 2019 [PDF] 摘要
53. Translating Translationese: A Two-Step Approach to Unsupervised Machine Translation, ACL 2019 [PDF] 摘要
54. Training Neural Machine Translation to Apply Terminology Constraints, ACL 2019 [PDF] 摘要
55. Sentence-Level Agreement for Neural Machine Translation, ACL 2019 [PDF] 摘要
56. Lattice-Based Transformer Encoder for Neural Machine Translation, ACL 2019 [PDF] 摘要
57. Shared-Private Bilingual Word Embeddings for Neural Machine Translation, ACL 2019 [PDF] 摘要
58. Robust Neural Machine Translation with Doubly Adversarial Inputs, ACL 2019 [PDF] 摘要
59. Bridging the Gap between Training and Inference for Neural Machine Translation, ACL 2019 [PDF] 摘要
60. Beyond BLEU: Training Neural Machine Translation with Semantic Similarity, ACL 2019 [PDF] 摘要
61. Simple and Effective Paraphrastic Similarity from Parallel Translations, ACL 2019 [PDF] 摘要
62. Unsupervised Question Answering by Cloze Translation, ACL 2019 [PDF] 摘要
63. Bilingual Lexicon Induction through Unsupervised Machine Translation, ACL 2019 [PDF] 摘要
64. Soft Contextual Data Augmentation for Neural Machine Translation, ACL 2019 [PDF] 摘要
65. Depth Growing for Neural Machine Translation, ACL 2019 [PDF] 摘要
66. Generalized Data Augmentation for Low-Resource Translation, ACL 2019 [PDF] 摘要
67. Better OOV Translation with Bilingual Terminology Mining, ACL 2019 [PDF] 摘要
68. Simultaneous Translation with Flexible Policy via Restricted Imitation Learning, ACL 2019 [PDF] 摘要
69. Target Conditioned Sampling: Optimizing Data Selection for Multilingual Neural Machine Translation, ACL 2019 [PDF] 摘要
70. Unsupervised Paraphrasing without Translation, ACL 2019 [PDF] 摘要
71. Reducing Word Omission Errors in Neural Machine Translation: A Contrastive Learning Approach, ACL 2019 [PDF] 摘要
72. Exploiting Sentential Context for Neural Machine Translation, ACL 2019 [PDF] 摘要
73. A Multi-Task Architecture on Relevance-based Neural Query Translation, ACL 2019 [PDF] 摘要
74. Latent Variable Model for Multi-modal Translation, ACL 2019 [PDF] 摘要
75. Lattice Transformer for Speech Translation, ACL 2019 [PDF] 摘要
76. Distilling Translations with Visual Awareness, ACL 2019 [PDF] 摘要
77. Paraphrases as Foreign Languages in Multilingual Neural Machine Translation, ACL 2019 [PDF] 摘要
78. Improving Mongolian-Chinese Neural Machine Translation with Morphological Noise, ACL 2019 [PDF] 摘要
79. Unsupervised Pretraining for Neural Machine Translation Using Elastic Weight Consolidation, ACL 2019 [PDF] 摘要
80. Attention over Heads: A Multi-Hop Attention for Neural Machine Translation, ACL 2019 [PDF] 摘要
81. From Bilingual to Multilingual Neural Machine Translation by Incremental Training, ACL 2019 [PDF] 摘要
82. Normalizing Non-canonical Turkish Texts Using Machine Translation Approaches, ACL 2019 [PDF] 摘要
83. English-Indonesian Neural Machine Translation for Spoken Language Domains, ACL 2019 [PDF] 摘要
84. Demonstration of a Neural Machine Translation System with Online Learning for Translators, ACL 2019 [PDF] 摘要
85. English-Ethiopian Languages Statistical Machine Translation, ACL 2019 [PDF] 摘要
86. Benchmarking Neural Machine Translation for Southern African Languages, ACL 2019 [PDF] 摘要
87. Assessing the Ability of Neural Machine Translation Models to Perform Syntactic Rewriting, ACL 2019 [PDF] 摘要
88. Creating a Corpus for Russian Data-to-Text Generation Using Neural Machine Translation and Post-Editing, ACL 2019 [PDF] 摘要
89. Building English-to-Serbian Machine Translation System for IMDb Movie Reviews, ACL 2019 [PDF] 摘要
90. Filling Gender & Number Gaps in Neural Machine Translation with Black-box Context Injection, ACL 2019 [PDF] 摘要
91. Equalizing Gender Bias in Neural Machine Translation with Word Embeddings Techniques, ACL 2019 [PDF] 摘要
92. On Measuring Gender Bias in Translation of Gender-neutral Pronouns, ACL 2019 [PDF] 摘要
93. Assessing Back-Translation as a Corpus Generation Strategy for non-English Tasks: A Study in Reading Comprehension and Word Sense Disambiguation, ACL 2019 [PDF] 摘要
94. An Evaluation of Language-Agnostic Inner-Attention-Based Representations in Machine Translation, ACL 2019 [PDF] 摘要
95. Auto-Encoding Variational Neural Machine Translation, ACL 2019 [PDF] 摘要
96. Incremental Domain Adaptation for Neural Machine Translation in Low-Resource Settings, ACL 2019 [PDF] 摘要
97. Morphology-aware Word-Segmentation in Dialectal Arabic Adaptation of Neural Machine Translation, ACL 2019 [PDF] 摘要
98. Translating Between Morphologically Rich Languages: An Arabic-to-Turkish Machine Translation System, ACL 2019 [PDF] 摘要
99. Unsupervised Compositional Translation of Multiword Expressions, ACL 2019 [PDF] 摘要
100. Proceedings of the Fourth Conference on Machine Translation (Volume 1: Research Papers), ACL 2019 [PDF] 摘要
101. Saliency-driven Word Alignment Interpretation for Neural Machine Translation, ACL 2019 [PDF] 摘要
102. Improving Zero-shot Translation with Language-Independent Constraints, ACL 2019 [PDF] 摘要
103. Incorporating Source Syntax into Transformer-Based Neural Machine Translation, ACL 2019 [PDF] 摘要
104. Generalizing Back-Translation in Neural Machine Translation, ACL 2019 [PDF] 摘要
105. Tagged Back-Translation, ACL 2019 [PDF] 摘要
106. The Effect of Translationese in Machine Translation Test Sets, ACL 2019 [PDF] 摘要
107. Customizing Neural Machine Translation for Subtitling, ACL 2019 [PDF] 摘要
108. Integration of Dubbing Constraints into Machine Translation, ACL 2019 [PDF] 摘要
109. Widening the Representation Bottleneck in Neural Machine Translation with Lexical Shortcuts, ACL 2019 [PDF] 摘要
110. A High-Quality Multilingual Dataset for Structured Documentation Translation, ACL 2019 [PDF] 摘要
111. Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), ACL 2019 [PDF] 摘要
112. Findings of the 2019 Conference on Machine Translation (WMT19), ACL 2019 [PDF] 摘要
113. Findings of the First Shared Task on Machine Translation Robustness, ACL 2019 [PDF] 摘要
114. The University of Edinburgh’s Submissions to the WMT19 News Translation Task, ACL 2019 [PDF] 摘要
115. GTCOM Neural Machine Translation Systems for WMT19, ACL 2019 [PDF] 摘要
116. Machine Translation with parfda, Moses, kenlm, nplm, and PRO, ACL 2019 [PDF] 摘要
117. LIUM’s Contributions to the WMT2019 News Translation Task: Data and Systems for German-French Language Pairs, ACL 2019 [PDF] 摘要
118. The University of Maryland’s Kazakh-English Neural Machine Translation System at WMT19, ACL 2019 [PDF] 摘要
119. DBMS-KU Interpolation for WMT19 News Translation Task, ACL 2019 [PDF] 摘要
120. The TALP-UPC Machine Translation Systems for WMT19 News Translation Task: Pivoting Techniques for Low Resource MT, ACL 2019 [PDF] 摘要
121. NICT’s Supervised Neural Machine Translation Systems for the WMT19 News Translation Task, ACL 2019 [PDF] 摘要
122. The University of Sydney’s Machine Translation System for WMT19, ACL 2019 [PDF] 摘要
123. The IIIT-H Gujarati-English Machine Translation System for WMT19, ACL 2019 [PDF] 摘要
124. Kingsoft’s Neural Machine Translation System for WMT19, ACL 2019 [PDF] 摘要
125. Evaluating the Supervised and Zero-shot Performance of Multi-lingual Translation Models, ACL 2019 [PDF] 摘要
126. The MLLP-UPV Supervised Machine Translation Systems for WMT19 News Translation Task, ACL 2019 [PDF] 摘要
127. Microsoft Translator at WMT 2019: Towards Large-Scale Document-Level Neural Machine Translation, ACL 2019 [PDF] 摘要
128. CUNI Systems for the Unsupervised News Translation Task in WMT 2019, ACL 2019 [PDF] 摘要
129. A Comparison on Fine-grained Pre-trained Embeddings for the WMT19 Chinese-English News Translation Task, ACL 2019 [PDF] 摘要
130. The NiuTrans Machine Translation Systems for WMT19, ACL 2019 [PDF] 摘要
131. Multi-Source Transformer for Kazakh-Russian-English Neural Machine Translation, ACL 2019 [PDF] 摘要
132. Incorporating Word and Subword Units in Unsupervised Machine Translation Using Language Model Rescoring, ACL 2019 [PDF] 摘要
133. JUMT at WMT2019 News Translation Task: A Hybrid Approach to Machine Translation for Lithuanian to English, ACL 2019 [PDF] 摘要
134. Johns Hopkins University Submission for WMT News Translation Task, ACL 2019 [PDF] 摘要
135. NICT’s Unsupervised Neural and Statistical Machine Translation Systems for the WMT19 News Translation Task, ACL 2019 [PDF] 摘要
136. PROMT Systems for WMT 2019 Shared Translation Task, ACL 2019 [PDF] 摘要
137. JU-Saarland Submission to the WMT2019 English–Gujarati Translation Shared Task, ACL 2019 [PDF] 摘要
138. Facebook FAIR’s WMT19 News Translation Task Submission, ACL 2019 [PDF] 摘要
139. eTranslation’s Submissions to the WMT 2019 News Translation Task, ACL 2019 [PDF] 摘要
140. Tilde’s Machine Translation Systems for WMT 2019, ACL 2019 [PDF] 摘要
141. Apertium-fin-eng–Rule-based Shallow Machine Translation for WMT 2019 Shared Task, ACL 2019 [PDF] 摘要
142. The RWTH Aachen University Machine Translation Systems for WMT 2019, ACL 2019 [PDF] 摘要
143. The Universitat d’Alacant Submissions to the English-to-Kazakh News Translation Task at WMT 2019, ACL 2019 [PDF] 摘要
144. Baidu Neural Machine Translation Systems for WMT19, ACL 2019 [PDF] 摘要
145. University of Tartu’s Multilingual Multi-domain WMT19 News Translation Shared Task Submission, ACL 2019 [PDF] 摘要
146. Neural Machine Translation for English–Kazakh with Morphological Segmentation and Synthetic Data, ACL 2019 [PDF] 摘要
147. The LMU Munich Unsupervised Machine Translation System for WMT19, ACL 2019 [PDF] 摘要
148. Combining Local and Document-Level Context: The LMU Munich Neural Machine Translation System at WMT19, ACL 2019 [PDF] 摘要
149. IITP-MT System for Gujarati-English News Translation Task at WMT 2019, ACL 2019 [PDF] 摘要
150. The University of Helsinki Submissions to the WMT19 News Translation Task, ACL 2019 [PDF] 摘要
151. The En-Ru Two-way Integrated Machine Translation System Based on Transformer, ACL 2019 [PDF] 摘要
152. DFKI-NMT Submission to the WMT19 News Translation Task, ACL 2019 [PDF] 摘要
153. Linguistic Evaluation of German-English Machine Translation Using a Test Suite, ACL 2019 [PDF] 摘要
154. Evaluating Conjunction Disambiguation on English-to-German and French-to-German WMT 2019 Translation Hypotheses, ACL 2019 [PDF] 摘要
155. The MuCoW Test Suite at WMT 2019: Automatically Harvested Multilingual Contrastive Word Sense Disambiguation Test Sets for Machine Translation, ACL 2019 [PDF] 摘要
156. SAO WMT19 Test Suite: Machine Translation of Audit Reports, ACL 2019 [PDF] 摘要
157. WMDO: Fluency-based Word Mover’s Distance for Machine Translation Evaluation, ACL 2019 [PDF] 摘要
158. Meteor++ 2.0: Adopt Syntactic Level Paraphrase Knowledge into Machine Translation Evaluation, ACL 2019 [PDF] 摘要
159. EED: Extended Edit Distance Measure for Machine Translation, ACL 2019 [PDF] 摘要
160. Filtering Pseudo-References by Paraphrasing for Automatic Evaluation of Machine Translation, ACL 2019 [PDF] 摘要
161. Naver Labs Europe’s Systems for the WMT19 Machine Translation Robustness Task, ACL 2019 [PDF] 摘要
162. NICT’s Supervised Neural Machine Translation Systems for the WMT19 Translation Robustness Task, ACL 2019 [PDF] 摘要
163. NTT’s Machine Translation Systems for WMT19 Robustness Task, ACL 2019 [PDF] 摘要
164. Robust Machine Translation with Domain Sensitive Pseudo-Sources: Baidu-OSU WMT19 MT Robustness Shared Task System Report, ACL 2019 [PDF] 摘要
165. Improving Robustness of Neural Machine Translation with Multi-task Learning, ACL 2019 [PDF] 摘要
166. Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2), ACL 2019 [PDF] 摘要
167. Findings of the WMT 2019 Biomedical Translation Shared Task: Evaluation for MEDLINE Abstracts and Biomedical Terminologies, ACL 2019 [PDF] 摘要
168. RTM Stacking Results for Machine Translation Performance Prediction, ACL 2019 [PDF] 摘要
169. Unbabel’s Participation in the WMT19 Translation Quality Estimation Shared Task, ACL 2019 [PDF] 摘要
170. Quality Estimation and Translation Metrics via Pre-trained Word and Sentence Embeddings, ACL 2019 [PDF] 摘要
171. SOURCE: SOURce-Conditional Elmo-style Model for Machine Translation Quality Estimation, ACL 2019 [PDF] 摘要
172. Terminology-Aware Segmentation and Domain Feature for the WMT19 Biomedical Translation Task, ACL 2019 [PDF] 摘要
173. Exploring Transfer Learning and Domain Data Selection for the Biomedical Translation, ACL 2019 [PDF] 摘要
174. Huawei’s NMT Systems for the WMT 2019 Biomedical Translation Task, ACL 2019 [PDF] 摘要
175. UCAM Biomedical Translation at WMT19: Transfer Learning Multi-domain Ensembles, ACL 2019 [PDF] 摘要
176. BSC Participation in the WMT Translation of Biomedical Abstracts, ACL 2019 [PDF] 摘要
177. The MLLP-UPV Spanish-Portuguese and Portuguese-Spanish Machine Translation Systems for WMT19 Similar Language Translation Task, ACL 2019 [PDF] 摘要
178. The TALP-UPC System for the WMT Similar Language Task: Statistical vs Neural Machine Translation, ACL 2019 [PDF] 摘要
179. Machine Translation from an Intercomprehension Perspective, ACL 2019 [PDF] 摘要
180. Utilizing Monolingual Data in NMT for Similar Languages: Submission to Similar Language Translation Task, ACL 2019 [PDF] 摘要
181. Neural Machine Translation: Hindi-Nepali, ACL 2019 [PDF] 摘要
182. NICT’s Machine Translation Systems for the WMT19 Similar Language Translation Task, ACL 2019 [PDF] 摘要
183. Panlingua-KMI MT System for Similar Language Translation Task at WMT 2019, ACL 2019 [PDF] 摘要
184. UDS–DFKI Submission to the WMT2019 Czech–Polish Similar Language Translation Shared Task, ACL 2019 [PDF] 摘要
185. Neural Machine Translation of Low-Resource and Similar Languages with Backtranslation, ACL 2019 [PDF] 摘要
186. The University of Helsinki Submissions to the WMT19 Similar Language Translation Task, ACL 2019 [PDF] 摘要
187. Incorporating Source-Side Phrase Structures into Neural Machine Translation, CL 2019 [PDF] 摘要
188. Contextualized Translations of Phrasal Verbs with Distributional Compositional Semantics and Monolingual Corpora, CL 2019 [PDF] 摘要
189. Explicit Cross-lingual Pre-training for Unsupervised Machine Translation, EMNLP 2019 [PDF] 摘要
190. Latent Part-of-Speech Sequences for Neural Machine Translation, EMNLP 2019 [PDF] 摘要
191. Improving Back-Translation with Uncertainty-based Confidence Estimation, EMNLP 2019 [PDF] 摘要
192. Towards Linear Time Neural Machine Translation with Capsule Networks, EMNLP 2019 [PDF] 摘要
193. Iterative Dual Domain Adaptation for Neural Machine Translation, EMNLP 2019 [PDF] 摘要
194. Multi-agent Learning for Neural Machine Translation, EMNLP 2019 [PDF] 摘要
195. Pivot-based Transfer Learning for Neural Machine Translation between Non-English Languages, EMNLP 2019 [PDF] 摘要
196. Context-Aware Monolingual Repair for Neural Machine Translation, EMNLP 2019 [PDF] 摘要
197. Multi-Granularity Self-Attention for Neural Machine Translation, EMNLP 2019 [PDF] 摘要
198. One Model to Learn Both: Zero Pronoun Prediction and Translation, EMNLP 2019 [PDF] 摘要
199. Dynamic Past and Future for Neural Machine Translation, EMNLP 2019 [PDF] 摘要
200. Revisit Automatic Error Detection for Wrong and Missing Translation – A Supervised Approach, EMNLP 2019 [PDF] 摘要
201. Towards Understanding Neural Machine Translation with Word Importance, EMNLP 2019 [PDF] 摘要
202. Multilingual Neural Machine Translation with Language Clustering, EMNLP 2019 [PDF] 摘要
203. Entity Projection via Machine Translation for Cross-Lingual NER, EMNLP 2019 [PDF] 摘要
204. Multilingual word translation using auxiliary languages, EMNLP 2019 [PDF] 摘要
205. Simpler and Faster Learning of Adaptive Policies for Simultaneous Translation, EMNLP 2019 [PDF] 摘要
206. Recurrent Positional Embedding for Neural Machine Translation, EMNLP 2019 [PDF] 摘要
207. Machine Translation for Machines: the Sentiment Classification Use Case, EMNLP 2019 [PDF] 摘要
208. HABLex: Human Annotated Bilingual Lexicons for Experiments in Machine Translation, EMNLP 2019 [PDF] 摘要
209. Handling Syntactic Divergence in Low-resource Machine Translation, EMNLP 2019 [PDF] 摘要
210. Speculative Beam Search for Simultaneous Translation, EMNLP 2019 [PDF] 摘要
211. Exploiting Multilingualism through Multistage Fine-Tuning for Low-Resource Neural Machine Translation, EMNLP 2019 [PDF] 摘要
212. Unsupervised Domain Adaptation for Neural Machine Translation with Domain-Aware Feature Embeddings, EMNLP 2019 [PDF] 摘要
213. Encoders Help You Disambiguate Word Senses in Neural Machine Translation, EMNLP 2019 [PDF] 摘要
214. Enhancing Context Modeling with a Query-Guided Capsule Network for Document-level Translation, EMNLP 2019 [PDF] 摘要
215. Simple, Scalable Adaptation for Neural Machine Translation, EMNLP 2019 [PDF] 摘要
216. Controlling Text Complexity in Neural Machine Translation, EMNLP 2019 [PDF] 摘要
217. Hierarchical Modeling of Global Context for Document-Level Neural Machine Translation, EMNLP 2019 [PDF] 摘要
218. Evaluating Pronominal Anaphora in Machine Translation: An Evaluation Measure and a Test Suite, EMNLP 2019 [PDF] 摘要
219. IMaT: Unsupervised Text Attribute Transfer via Iterative Matching and Translation, EMNLP 2019 [PDF] 摘要
220. The Challenges of Optimizing Machine Translation for Low Resource Cross-Language Information Retrieval, EMNLP 2019 [PDF] 摘要
221. A Multi-Pairwise Extension of Procrustes Analysis for Multilingual Word Translation, EMNLP 2019 [PDF] 摘要
222. Exploiting Monolingual Data at Scale for Neural Machine Translation, EMNLP 2019 [PDF] 摘要
223. Machine Translation With Weakly Paired Documents, EMNLP 2019 [PDF] 摘要
224. The Bottom-up Evolution of Representations in the Transformer: A Study with Machine Translation and Language Modeling Objectives, EMNLP 2019 [PDF] 摘要
225. Understanding Data Augmentation in Neural Machine Translation: Two Perspectives towards Generalization, EMNLP 2019 [PDF] 摘要
226. Simple and Effective Noisy Channel Modeling for Neural Machine Translation, EMNLP 2019 [PDF] 摘要
227. Hint-Based Training for Non-Autoregressive Machine Translation, EMNLP 2019 [PDF] 摘要
228. The FLORES Evaluation Datasets for Low-Resource Machine Translation: Nepali–English and Sinhala–English, EMNLP 2019 [PDF] 摘要
229. INMT: Interactive Neural Machine Translation Prediction, EMNLP 2019 [PDF] 摘要
230. Proceedings of the 6th Workshop on Asian Translation, EMNLP 2019 [PDF] 摘要
231. Overview of the 6th Workshop on Asian Translation, EMNLP 2019 [PDF] 摘要
232. Compact and Robust Models for Japanese-English Character-level Machine Translation, EMNLP 2019 [PDF] 摘要
233. Controlling Japanese Honorifics in English-to-Japanese Neural Machine Translation, EMNLP 2019 [PDF] 摘要
234. English to Hindi Multi-modal Neural Machine Translation and Hindi Image Captioning, EMNLP 2019 [PDF] 摘要
235. Supervised and Unsupervised Machine Translation for Myanmar-English and Khmer-English, EMNLP 2019 [PDF] 摘要
236. English-Myanmar Supervised and Unsupervised NMT: NICT’s Machine Translation Systems at WAT-2019, EMNLP 2019 [PDF] 摘要
237. UCSMNLP: Statistical Machine Translation for WAT 2019, EMNLP 2019 [PDF] 摘要
238. NTT Neural Machine Translation Systems at WAT 2019, EMNLP 2019 [PDF] 摘要
239. Neural Machine Translation System using a Content-equivalently Translated Parallel Corpus for the Newswire Translation Tasks at WAT 2019, EMNLP 2019 [PDF] 摘要
240. Facebook AI’s WAT19 Myanmar-English Translation Task Submission, EMNLP 2019 [PDF] 摘要
241. Combining Translation Memory with Neural Machine Translation, EMNLP 2019 [PDF] 摘要
242. LTRC-MT Simple & Effective Hindi-English Neural Machine Translation Systems at WAT 2019, EMNLP 2019 [PDF] 摘要
243. Supervised neural machine translation based on data augmentation and improved training & inference process, EMNLP 2019 [PDF] 摘要
244. Our Neural Machine Translation Systems for WAT 2019, EMNLP 2019 [PDF] 摘要
245. Japanese-Russian TMU Neural Machine Translation System using Multilingual Model for WAT 2019, EMNLP 2019 [PDF] 摘要
246. NLPRL at WAT2019: Transformer-based Tamil – English Indic Task Neural Machine Translation System, EMNLP 2019 [PDF] 摘要
247. Idiap NMT System for WAT 2019 Multimodal Translation Task, EMNLP 2019 [PDF] 摘要
248. WAT2019: English-Hindi Translation on Hindi Visual Genome Dataset, EMNLP 2019 [PDF] 摘要
249. UCSYNLP-Lab Machine Translation Systems for WAT 2019, EMNLP 2019 [PDF] 摘要
250. Sentiment Aware Neural Machine Translation, EMNLP 2019 [PDF] 摘要
251. Overcoming the Rare Word Problem for low-resource language pairs in Neural Machine Translation, EMNLP 2019 [PDF] 摘要
252. Neural Arabic Text Diacritization: State of the Art Results and a Novel Approach for Machine Translation, EMNLP 2019 [PDF] 摘要
253. Neural Speech Translation using Lattice Transformations and Graph Networks, EMNLP 2019 [PDF] 摘要
254. Multilingual Whispers: Generating Paraphrases with Translation, EMNLP 2019 [PDF] 摘要
255. Training on Synthetic Noise Improves Robustness to Natural Noise in Machine Translation, EMNLP 2019 [PDF] 摘要
256. Improving Neural Machine Translation Robustness via Data Augmentation: Beyond Back-Translation, EMNLP 2019 [PDF] 摘要
257. Phonetic Normalization for Machine Translation of User Generated Content, EMNLP 2019 [PDF] 摘要
258. Proceedings of the 3rd Workshop on Neural Generation and Translation, EMNLP 2019 [PDF] 摘要
259. Findings of the Third Workshop on Neural Generation and Translation, EMNLP 2019 [PDF] 摘要
260. Recycling a Pre-trained BERT Encoder for Neural Machine Translation, EMNLP 2019 [PDF] 摘要
261. Domain Differential Adaptation for Neural Machine Translation, EMNLP 2019 [PDF] 摘要
262. Zero-Resource Neural Machine Translation with Monolingual Pivot Data, EMNLP 2019 [PDF] 摘要
263. On the use of BERT for Neural Machine Translation, EMNLP 2019 [PDF] 摘要
264. Machine Translation of Restaurant Reviews: New Corpus for Domain Adaptation and Robustness, EMNLP 2019 [PDF] 摘要
265. Adaptively Scheduled Multitask Learning: The Case of Low-Resource Neural Machine Translation, EMNLP 2019 [PDF] 摘要
266. On the Importance of Word Boundaries in Character-level Neural Machine Translation, EMNLP 2019 [PDF] 摘要
267. A Margin-based Loss with Synthetic Negative Samples for Continuous-output Machine Translation, EMNLP 2019 [PDF] 摘要
268. Mixed Multi-Head Self-Attention for Neural Machine Translation, EMNLP 2019 [PDF] 摘要
269. Interrogating the Explanatory Power of Attention in Neural Machine Translation, EMNLP 2019 [PDF] 摘要
270. Auto-Sizing the Transformer Network: Improving Speed, Efficiency, and Performance for Low-Resource Machine Translation, EMNLP 2019 [PDF] 摘要
271. Learning to Generate Word- and Phrase-Embeddings for Efficient Phrase-Based Neural Machine Translation, EMNLP 2019 [PDF] 摘要
272. Monash University’s Submissions to the WNGT 2019 Document Translation Task, EMNLP 2019 [PDF] 摘要
273. University of Edinburgh’s submission to the Document-level Generation and Translation Shared Task, EMNLP 2019 [PDF] 摘要
274. Naver Labs Europe’s Systems for the Document-Level Generation and Translation Task at WNGT 2019, EMNLP 2019 [PDF] 摘要
275. From Research to Production and Back: Ludicrously Fast Neural Machine Translation, EMNLP 2019 [PDF] 摘要
276. Selecting, Planning, and Rewriting: A Modular Approach for Data-to-Document Generation and Translation, EMNLP 2019 [PDF] 摘要
277. Back-Translation as Strategy to Tackle the Lack of Corpus in Natural Language Generation from Semantic Representations, EMNLP 2019 [PDF] 摘要
278. Understanding the Effect of Textual Adversaries in Multimodal Machine Translation, EMNLP 2019 [PDF] 摘要
279. Proceedings of the Fourth Workshop on Discourse in Machine Translation (DiscoMT 2019), EMNLP 2019 [PDF] 摘要
280. Context-Aware Neural Machine Translation Decoding, EMNLP 2019 [PDF] 摘要
281. When and Why is Document-level Context Useful in Neural Machine Translation?, EMNLP 2019 [PDF] 摘要
282. Data augmentation using back-translation for context-aware neural machine translation, EMNLP 2019 [PDF] 摘要
283. Context-aware Neural Machine Translation with Coreference Information, EMNLP 2019 [PDF] 摘要
284. Learning Multimodal Graph-to-Graph Translation for Molecule Optimization, ICLR 2019 [PDF] 摘要
285. Identifying and Controlling Important Neurons in Neural Machine Translation, ICLR 2019 [PDF] 摘要
286. A Universal Music Translation Network, ICLR 2019 [PDF] 摘要
287. Harmonic Unpaired Image-to-image Translation, ICLR 2019 [PDF] 摘要
288. Multilingual Neural Machine Translation with Knowledge Distillation, ICLR 2019 [PDF] 摘要
289. Exemplar Guided Unsupervised Image-to-Image Translation with Semantic Consistency, ICLR 2019 [PDF] 摘要
290. Multilingual Neural Machine Translation With Soft Decoupled Encoding, ICLR 2019 [PDF] 摘要
291. InstaGAN: Instance-aware Image-to-Image Translation, ICLR 2019 [PDF] 摘要
292. Transfer Learning for Related Reinforcement Learning Tasks via Image-to-Image Translation, ICML 2019 [PDF] 摘要
293. Mixture Models for Diverse Machine Translation: Tricks of the Trade, ICML 2019 [PDF] 摘要
294. Dense Temporal Convolution Network for Sign Language Translation, IJCAI 2019 [PDF] 摘要
295. Connectionist Temporal Modeling of Video and Language: a Joint Model for Translation and Sign Labeling, IJCAI 2019 [PDF] 摘要
296. Deliberation Learning for Image-to-Image Translation, IJCAI 2019 [PDF] 摘要
297. Image-to-Image Translation with Multi-Path Consistency Regularization, IJCAI 2019 [PDF] 摘要
298. From Words to Sentences: A Progressive Learning Approach for Zero-resource Machine Translation with Visual Pivots, IJCAI 2019 [PDF] 摘要
299. Polygon-Net: A General Framework for Jointly Boosting Multiple Unsupervised Neural Machine Translation Models, IJCAI 2019 [PDF] 摘要
300. Pre-training on high-resource speech recognition improves low-resource speech-to-text translation, NAACL 2019 [PDF] 摘要
301. ReWE: Regressing Word Embeddings for Regularization of Neural Machine Translation Systems, NAACL 2019 [PDF] 摘要
302. Lost in Machine Translation: A Method to Reduce Meaning Loss, NAACL 2019 [PDF] 摘要
303. Bi-Directional Differentiable Input Reconstruction for Low-Resource Neural Machine Translation, NAACL 2019 [PDF] 摘要
304. Code-Switching for Enhancing NMT with Pre-Specified Translation, NAACL 2019 [PDF] 摘要
305. Understanding and Improving Hidden Representations for Neural Machine Translation, NAACL 2019 [PDF] 摘要
306. Improved Lexically Constrained Decoding for Translation and Monolingual Rewriting, NAACL 2019 [PDF] 摘要
307. Syntax-Enhanced Neural Machine Translation with Syntax-Aware Word Representations, NAACL 2019 [PDF] 摘要
308. Competence-based Curriculum Learning for Neural Machine Translation, NAACL 2019 [PDF] 摘要
309. Extract and Edit: An Alternative to Back-Translation for Unsupervised Neural Machine Translation, NAACL 2019 [PDF] 摘要
310. Consistency by Agreement in Zero-Shot Neural Machine Translation, NAACL 2019 [PDF] 摘要
311. Learning to Stop in Structured Prediction for Neural Machine Translation, NAACL 2019 [PDF] 摘要
312. Curriculum Learning for Domain Adaptation in Neural Machine Translation, NAACL 2019 [PDF] 摘要
313. Improving Robustness of Machine Translation with Synthetic Noise, NAACL 2019 [PDF] 摘要
314. Non-Parametric Adaptation for Neural Machine Translation, NAACL 2019 [PDF] 摘要
315. Online Distilling from Checkpoints for Neural Machine Translation, NAACL 2019 [PDF] 摘要
316. MuST-C: a Multilingual Speech Translation Corpus, NAACL 2019 [PDF] 摘要
317. Improving Neural Machine Translation with Neural Syntactic Distance, NAACL 2019 [PDF] 摘要
318. Measuring Immediate Adaptation Performance for Neural Machine Translation, NAACL 2019 [PDF] 摘要
319. Differentiable Sampling with Flexible Reference Word Order for Neural Machine Translation, NAACL 2019 [PDF] 摘要
320. Reinforcement Learning based Curriculum Optimization for Neural Machine Translation, NAACL 2019 [PDF] 摘要
321. Overcoming Catastrophic Forgetting During Domain Adaptation of Neural Machine Translation, NAACL 2019 [PDF] 摘要
322. Learning to Navigate Unseen Environments: Back Translation with Environmental Dropout, NAACL 2019 [PDF] 摘要
323. Fluent Translations from Disfluent Speech in End-to-End Speech Translation, NAACL 2019 [PDF] 摘要
324. Neural Machine Translation of Text from Non-Native Speakers, NAACL 2019 [PDF] 摘要
325. Improving Domain Adaptation Translation with Domain Invariant and Specific Information, NAACL 2019 [PDF] 摘要
326. Selective Attention for Context-aware Neural Machine Translation, NAACL 2019 [PDF] 摘要
327. Unsupervised Extraction of Partial Translations for Neural Machine Translation, NAACL 2019 [PDF] 摘要
328. Revisiting Adversarial Autoencoder for Unsupervised Word Translation with Cycle Consistency and Improved Training, NAACL 2019 [PDF] 摘要
329. Addressing word-order Divergence in Multilingual Neural Machine Translation for extremely Low Resource Languages, NAACL 2019 [PDF] 摘要
330. Massively Multilingual Neural Machine Translation, NAACL 2019 [PDF] 摘要
331. Probing the Need for Visual Context in Multimodal Machine Translation, NAACL 2019 [PDF] 摘要
332. Multimodal Machine Translation with Embedding Prediction, NAACL 2019 [PDF] 摘要
333. Train, Sort, Explain: Learning to Diagnose Translation Models, NAACL 2019 [PDF] 摘要
334. Neural Machine Translation between Myanmar (Burmese) and Rakhine (Arakanese), NAACL 2019 [PDF] 摘要
335. Comparing Pipelined and Integrated Approaches to Dialectal Arabic Neural Machine Translation, NAACL 2019 [PDF] 摘要
336. A Blissymbolics Translation System, NAACL 2019 [PDF] 摘要
337. Grounded Word Sense Translation, NAACL 2019 [PDF] 摘要
338. Multi-mapping Image-to-Image Translation via Learning Disentanglement, NeurIPS 2019 [PDF] 摘要
339. Flow-based Image-to-Image Translation with Feature Disentanglement, NeurIPS 2019 [PDF] 摘要
340. Comparing Unsupervised Word Translation Methods Step by Step, NeurIPS 2019 [PDF] 摘要
341. Neural Machine Translation with Soft Prototype, NeurIPS 2019 [PDF] 摘要
342. Explicitly disentangling image content from translation and rotation with spatial-VAE, NeurIPS 2019 [PDF] 摘要
343. Semantic Neural Machine Translation Using AMR, TACL 2019 [PDF] 摘要
344. Synchronous Bidirectional Neural Machine Translation, TACL 2019 [PDF] 摘要
345. Attention-Passing Models for Robust and Data-Efficient End-to-End Speech Translation, TACL 2019 [PDF] 摘要

Abstracts

1. Dynamic Layer Aggregation for Neural Machine Translation with Routing-by-Agreement [PDF] Back to Contents
  AAAI 2019. AAAI Technical Track: AI and the Web
  Zi-Yi Dou, Zhaopeng Tu, Xing Wang, Longyue Wang, Shuming Shi, Tong Zhang
With the promising progress of deep neural networks, layer aggregation has been used to fuse information across layers in various fields, such as computer vision and machine translation. However, most of the previous methods combine layers in a static fashion in that their aggregation strategy is independent of specific hidden states. Inspired by recent progress on capsule networks, in this paper we propose to use routing-by-agreement strategies to aggregate layers dynamically. Specifically, the algorithm learns the probability of a part (individual layer representations) assigned to a whole (aggregated representations) in an iterative way and combines parts accordingly. We implement our algorithm on top of the state-of-the-art neural machine translation model TRANSFORMER and conduct experiments on the widely-used WMT14 English⇒German and WMT17 Chinese⇒English translation datasets. Experimental results across language pairs show that the proposed approach consistently outperforms the strong baseline model and a representative static aggregation model.
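
To make the routing-by-agreement idea above concrete, here is a minimal sketch of dynamic aggregation over stacked Transformer layer outputs. It is my own illustration, not the authors' code: the tensor shapes, the number of routing iterations, and the agreement update are illustrative assumptions.

```python
# A minimal sketch of routing-by-agreement over layer outputs (illustrative only).
import torch

def route_by_agreement(layer_states, n_iter=3):
    """Aggregate per-layer hidden states [num_layers, seq_len, dim]
    into a single [seq_len, dim] representation via iterative routing."""
    num_layers, seq_len, dim = layer_states.shape
    # routing logits: how strongly each layer (part) votes for the whole
    logits = torch.zeros(num_layers, seq_len)
    for _ in range(n_iter):
        weights = torch.softmax(logits, dim=0)               # normalize over layers
        aggregated = (weights.unsqueeze(-1) * layer_states).sum(0)
        # agreement: dot product between each layer's state and the aggregated whole
        agreement = (layer_states * aggregated.unsqueeze(0)).sum(-1)
        logits = logits + agreement                           # reinforce agreeing parts
    return aggregated

states = torch.randn(6, 10, 512)            # 6 layers, 10 tokens, d_model = 512
print(route_by_agreement(states).shape)     # torch.Size([10, 512])
```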

2. DTMT: A Novel Deep Transition Architecture for Neural Machine Translation [PDF] Back to Contents
  AAAI 2019. AAAI Technical Track: AI and the Web
  Fandong Meng, Jinchao Zhang
Past years have witnessed rapid developments in Neural Machine Translation (NMT). Most recently, with advanced modeling and training techniques, the RNN-based NMT (RNMT) has shown its potential strength, even compared with the well-known Transformer (self-attentional) model. Although the RNMT model can possess very deep architectures through stacking layers, the transition depth between consecutive hidden states along the sequential axis is still shallow. In this paper, we further enhance the RNN-based NMT through increasing the transition depth between consecutive hidden states and build a novel Deep Transition RNN-based Architecture for Neural Machine Translation, named DTMT. This model enhances the hidden-to-hidden transition with multiple non-linear transformations, as well as maintains a linear transformation path throughout this deep transition by the well-designed linear transformation mechanism to alleviate the gradient vanishing problem. Experiments show that with the specially designed deep transition modules, our DTMT can achieve remarkable improvements on translation quality. Experimental results on Chinese⇒English translation task show that DTMT can outperform the Transformer model by +2.09 BLEU points and achieve the best results ever reported in the same dataset. On WMT14 English⇒German and English⇒French translation tasks, DTMT shows superior quality to the state-of-the-art NMT systems, including the Transformer and the RNMT+.
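
The core of DTMT is a deeper hidden-to-hidden transition within each time step. The sketch below shows one way such a deep-transition step could look, with several non-linear (GRU-style) transformations plus an added linear path; the depth, the zero-input transition cells, and the exact form of the linear path are my assumptions, not the paper's implementation.

```python
# A minimal sketch of a deep-transition recurrent step (assumptions, not DTMT's code).
import torch
import torch.nn as nn

class DeepTransitionCell(nn.Module):
    def __init__(self, input_size, hidden_size, depth=4):
        super().__init__()
        # the first transition consumes the input token embedding
        self.first = nn.GRUCell(input_size, hidden_size)
        # subsequent transitions refine the state with no external input
        self.rest = nn.ModuleList(
            [nn.GRUCell(hidden_size, hidden_size) for _ in range(depth - 1)])
        self.linear = nn.Linear(hidden_size, hidden_size, bias=False)

    def forward(self, x, h):
        h = self.first(x, h)
        for cell in self.rest:
            # non-linear transition plus a linear path to ease gradient flow
            h = cell(torch.zeros_like(h), h) + self.linear(h)
        return h

cell = DeepTransitionCell(input_size=256, hidden_size=512, depth=4)
x, h0 = torch.randn(8, 256), torch.zeros(8, 512)
print(cell(x, h0).shape)   # torch.Size([8, 512])
```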

3. Unsupervised Neural Machine Translation with SMT as Posterior Regularization [PDF] Back to Contents
  AAAI 2019. AAAI Technical Track: AI and the Web
  Shuo Ren, Zhirui Zhang, Shujie Liu, Ming Zhou, Shuai Ma
Without a real bilingual corpus available, unsupervised Neural Machine Translation (NMT) typically requires pseudo parallel data generated with the back-translation method for model training. However, due to weak supervision, the pseudo data inevitably contain noise and errors that will be accumulated and reinforced in the subsequent training process, leading to bad translation performance. To address this issue, we introduce phrase-based Statistical Machine Translation (SMT) models, which are robust to noisy data, as posterior regularizations to guide the training of unsupervised NMT models in the iterative back-translation process. Our method starts from SMT models built with pre-trained language models and word-level translation tables inferred from cross-lingual embeddings. Then SMT and NMT models are optimized jointly and boost each other incrementally in a unified EM framework. In this way, (1) the negative effect caused by errors in the iterative back-translation process can be alleviated promptly by SMT filtering noise from its phrase tables; meanwhile, (2) NMT can compensate for the deficiency of fluency inherent in SMT. Experiments conducted on en-fr and en-de translation tasks show that our method outperforms the strong baseline and achieves new state-of-the-art unsupervised machine translation performance.

4. TransNFCM: Translation-Based Neural Fashion Compatibility Modeling [PDF] Back to Contents
  AAAI 2019. AAAI Technical Track: AI and the Web
  Xun Yang, Yunshan Ma, Lizi Liao, Meng Wang, Tat-Seng Chua
Identifying mix-and-match relationships between fashion items is an urgent task in a fashion e-commerce recommender system. It will significantly enhance user experience and satisfaction. However, due to the challenges of inferring the rich yet complicated set of compatibility patterns in a large e-commerce corpus of fashion items, this task is still underexplored. Inspired by the recent advances in multi-relational knowledge representation learning and deep neural networks, this paper proposes a novel Translation-based Neural Fashion Compatibility Modeling (TransNFCM) framework, which jointly optimizes fashion item embeddings and category-specific complementary relations in a unified space in an end-to-end learning manner. TransNFCM places items in a unified embedding space where a category-specific relation (category-comp-category) is modeled as a vector translation operating on the embeddings of compatible items from the corresponding categories. In this way, we not only capture the specific notion of compatibility conditioned on a specific pair of complementary categories, but also preserve the global notion of compatibility. We also design a deep fashion item encoder which exploits the complementary characteristic of visual and textual features to represent the fashion products. To the best of our knowledge, this is the first work that uses category-specific complementary relations to model the category-aware compatibility between items in a translation-based embedding space. Extensive experiments demonstrate the effectiveness of TransNFCM over state-of-the-art methods on two real-world datasets.

5. Regularizing Neural Machine Translation by Target-Bidirectional Agreement [PDF] Back to Contents
  AAAI 2019. AAAI Technical Track: AI and the Web
  Zhirui Zhang, Shuangzhi Wu, Shujie Liu, Mu Li, Ming Zhou, Tong Xu
Although Neural Machine Translation (NMT) has achieved remarkable progress in the past several years, most NMT systems still suffer from a fundamental shortcoming as in other sequence generation tasks: errors made early in generation process are fed as inputs to the model and can be quickly amplified, harming subsequent sequence generation. To address this issue, we propose a novel model regularization method for NMT training, which aims to improve the agreement between translations generated by left-to-right (L2R) and right-to-left (R2L) NMT decoders. This goal is achieved by introducing two Kullback-Leibler divergence regularization terms into the NMT training objective to reduce the mismatch between output probabilities of L2R and R2L models. In addition, we also employ a joint training strategy to allow L2R and R2L models to improve each other in an interactive update process. Experimental results show that our proposed method significantly outperforms state-of-the-art baselines on Chinese-English and English-German translation tasks.
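
The regularization described above amounts to adding two Kullback-Leibler terms that pull the per-token output distributions of the L2R and R2L decoders toward each other. Below is a minimal sketch of such a combined objective; it is my own illustration, and the weighting, the re-alignment of R2L outputs, and the reductions are assumptions rather than the paper's exact formulation.

```python
# A minimal sketch of L2R/R2L agreement regularization (illustrative only).
import torch
import torch.nn.functional as F

def agreement_loss(logits_l2r, logits_r2l, targets, alpha=0.1):
    """logits_*: [batch, seq_len, vocab]; targets: [batch, seq_len].
    Assumes the R2L outputs have already been re-aligned to L2R order."""
    log_p = F.log_softmax(logits_l2r, dim=-1)
    log_q = F.log_softmax(logits_r2l, dim=-1)
    nll_l2r = F.nll_loss(log_p.transpose(1, 2), targets)
    nll_r2l = F.nll_loss(log_q.transpose(1, 2), targets)
    # symmetric agreement: KL(p || q) + KL(q || p)
    kl_pq = F.kl_div(log_q, log_p.exp(), reduction="batchmean")
    kl_qp = F.kl_div(log_p, log_q.exp(), reduction="batchmean")
    return nll_l2r + nll_r2l + alpha * (kl_pq + kl_qp)

b, t, v = 4, 7, 100
loss = agreement_loss(torch.randn(b, t, v), torch.randn(b, t, v),
                      torch.randint(0, v, (b, t)))
print(loss.item())
```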

6. Addressing the Under-Translation Problem from the Entropy Perspective [PDF] Back to Contents
  AAAI 2019. AAAI Technical Track: AI and the Web
  Yang Zhao, Jiajun Zhang, Chengqing Zong, Zhongjun He, Hua Wu
Neural Machine Translation (NMT) has drawn much attention due to its promising translation performance in recent years. However, the under-translation problem still remains a big challenge. In this paper, we focus on the under-translation problem and attempt to find out what kinds of source words are more likely to be ignored. Through analysis, we observe that a source word with a large translation entropy is more inclined to be dropped. To address this problem, we propose a coarse-to-fine framework. In the coarse-grained phase, we introduce a simple strategy to reduce the entropy of high-entropy words by constructing pseudo target sentences. In the fine-grained phase, we propose three methods, including a pre-training method, a multi-task method, and a two-pass method, to encourage the neural model to correctly translate these high-entropy words. Experimental results on various translation tasks show that our method can significantly improve the translation quality and substantially reduce the under-translation cases of high-entropy words.
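
The "translation entropy" of a source word can be read off a lexical translation table: the more its target-side probabilities are spread out, the higher the entropy and, per the paper's observation, the more likely the word is to be dropped. A tiny illustration follows; the words and probabilities are made-up table entries.

```python
# A minimal sketch of per-source-word translation entropy (illustrative only).
import math

def translation_entropy(translation_probs):
    """translation_probs: dict mapping target word -> p(target | source)."""
    return -sum(p * math.log(p) for p in translation_probs.values() if p > 0)

# hypothetical lexical-table entries for two source words
low_entropy_word = {"cat": 0.95, "kitten": 0.05}
high_entropy_word = {"issue": 0.3, "problem": 0.25, "question": 0.25, "matter": 0.2}
print(translation_entropy(low_entropy_word))    # ~0.20
print(translation_entropy(high_entropy_word))   # ~1.38
```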

7. Exploiting Time-Series Image-to-Image Translation to Expand the Range of Wildlife Habitat Analysis [PDF] Back to Contents
  AAAI 2019. AAAI Special Technical Track: AI for Social Impact
  Ruobing Zheng, Ze Luo, Baoping Yan
Characterizing wildlife habitat is one of the main topics in animal ecology. Locational data obtained from radio tracking and field observation are widely used in habitat analysis. However, such sampling methods are costly and laborious, and insufficient relocations often prevent scientists from conducting large-range and long-term research. In this paper, we innovatively exploit image-to-image translation technology to expand the range of wildlife habitat analysis. We propose a novel approach for implementing time-series image-to-image translation via metric embedding. A siamese neural network is used to learn the Euclidean temporal embedding from the image space. This embedding produces temporal vectors which bring time information into the adversarial network. The well-trained framework can effectively map probabilistic habitat models from remote sensing imagery, helping scientists get rid of the persistent dependence on animal relocations. We illustrate our approach in a real-world application for mapping the habitats of Bar-headed Geese at the Qinghai Lake breeding ground. We compare our model against several baselines and achieve promising results.

8. Controllable Image-to-Video Translation: A Case Study on Facial Expression Generation [PDF] Back to Contents
  AAAI 2019. AAAI Technical Track: Machine Learning
  Lijie Fan, Wenbing Huang, Chuang Gan, Junzhou Huang, Boqing Gong
The recent advances in deep learning have made it possible to generate photo-realistic images by using neural networks and even to extrapolate video frames from an input video clip. In this paper, for the sake of both furthering this exploration and our own interest in a realistic application, we study image-to-video translation and particularly focus on the videos of facial expressions. This problem challenges the deep neural networks with another temporal dimension compared to image-to-image translation. Moreover, its single input image fails most existing video generation methods that rely on recurrent models. We propose a user-controllable approach so as to generate video clips of various lengths from a single face image. The lengths and types of the expressions are controlled by users. To this end, we design a novel neural network architecture that can incorporate the user input into its skip connections and propose several improvements to the adversarial training method for the neural network. Experiments and user studies verify the effectiveness of our approach. In particular, we would like to highlight that even for face images in the wild (downloaded from the Web and the authors’ own photos), our model can generate high-quality facial expression videos of which about 50% are labeled as real by Amazon Mechanical Turk workers.

9. Non-Autoregressive Neural Machine Translation with Enhanced Decoder Input [PDF] Back to Contents
  AAAI 2019. AAAI Technical Track: Machine Learning
  Junliang Guo, Xu Tan, Di He, Tao Qin, Linli Xu, Tie-Yan Liu
Non-autoregressive translation (NAT) models, which remove the dependence on previous target tokens from the inputs of the decoder, achieve significant inference speedup but at the cost of inferior accuracy compared to autoregressive translation (AT) models. Previous work shows that the quality of the inputs of the decoder is important and largely impacts the model accuracy. In this paper, we propose two methods to enhance the decoder inputs so as to improve NAT models. The first one directly leverages a phrase table generated by conventional SMT approaches to translate source tokens to target tokens, which are then fed into the decoder as inputs. The second one transforms source-side word embeddings to target-side word embeddings through sentence-level alignment and word-level adversary learning, and then feeds the transformed word embeddings into the decoder as inputs. Experimental results show our method largely outperforms the NAT baseline (Gu et al. 2017) by 5.11 BLEU scores on the WMT14 English-German task and 4.72 BLEU scores on the WMT16 English-Romanian task.

10. Non-Autoregressive Machine Translation with Auxiliary Regularization [PDF] Back to Contents
  AAAI 2019. AAAI Technical Track: Machine Learning
  Yiren Wang, Fei Tian, Di He, Tao Qin, ChengXiang Zhai, Tie-Yan Liu
As a new neural machine translation approach, Non-Autoregressive machine Translation (NAT) has attracted attention recently due to its high efficiency in inference. However, the high efficiency has come at the cost of not capturing the sequential dependency on the target side of translation, which causes NAT to suffer from two kinds of translation errors: 1) repeated translations (due to indistinguishable adjacent decoder hidden states), and 2) incomplete translations (due to incomplete transfer of source side information via the decoder hidden states). In this paper, we propose to address these two problems by improving the quality of decoder hidden representations via two auxiliary regularization terms in the training process of an NAT model. First, to make the hidden states more distinguishable, we regularize the similarity between consecutive hidden states based on the corresponding target tokens. Second, to force the hidden states to contain all the information in the source sentence, we leverage the dual nature of translation tasks (e.g., English to German and German to English) and minimize a backward reconstruction error to ensure that the hidden states of the NAT decoder are able to recover the source side sentence. Extensive experiments conducted on several benchmark datasets show that both regularization strategies are effective and can alleviate the issues of repeated translations and incomplete translations in NAT models. The accuracy of NAT models is therefore improved significantly over the state-of-the-art NAT models with even better efficiency for inference.
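
The first auxiliary term regularizes the similarity of consecutive decoder hidden states according to the corresponding target tokens. Below is a rough sketch of what such a term could look like; the exact penalty used in the paper may differ, so treat the form here as an assumption.

```python
# A minimal sketch of a similarity regularizer over consecutive NAT decoder states.
import torch
import torch.nn.functional as F

def similarity_regularizer(hidden, targets):
    """hidden: [batch, seq_len, dim]; targets: [batch, seq_len] token ids."""
    cos = F.cosine_similarity(hidden[:, :-1], hidden[:, 1:], dim=-1)   # [b, t-1]
    same = (targets[:, :-1] == targets[:, 1:]).float()
    # penalize high similarity when tokens differ, low similarity when they repeat
    return ((1 - same) * cos.clamp(min=0) + same * (1 - cos)).mean()

h = torch.randn(4, 9, 512)
y = torch.randint(0, 1000, (4, 9))
print(similarity_regularizer(h, y).item())
```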

11. Tied Transformers: Neural Machine Translation with Shared Encoder and Decoder [PDF] Back to Contents
  AAAI 2019. AAAI Technical Track: Machine Learning
  Yingce Xia, Tianyu He, Xu Tan, Fei Tian, Di He, Tao Qin
Sharing source and target side vocabularies and word embeddings has been a popular practice in neural machine translation (briefly, NMT) for similar languages (e.g., English to French or German translation). The success of such word-level sharing motivates us to move one step further: we consider model-level sharing and tie the whole parts of the encoder and decoder of an NMT model. We share the encoder and decoder of Transformer (Vaswani et al. 2017), the state-of-the-art NMT model, and obtain a compact model named Tied Transformer. Experimental results demonstrate that such a simple method works well for both similar and dissimilar language pairs. We empirically verify our framework for both supervised NMT and unsupervised NMT: we achieve a 35.52 BLEU score on IWSLT 2014 German to English translation, 28.98/29.89 BLEU scores on WMT 2014 English to German translation without/with monolingual data, and a 22.05 BLEU score on WMT 2016 unsupervised German to English translation.

12. Recurrent Stacking of Layers for Compact Neural Machine Translation Models [PDF] Back to Contents
  AAAI 2019. AAAI Technical Track: Natural Language Processing
  Raj Dabre, Atsushi Fujita
In encoder-decoder based sequence-to-sequence modeling, the most common practice is to stack a number of recurrent, convolutional, or feed-forward layers in the encoder and decoder. While the addition of each new layer improves the sequence generation quality, this also leads to a significant increase in the number of parameters. In this paper, we propose to share parameters across all layers, thereby leading to a recurrently stacked sequence-to-sequence model. We report on an extensive case study on neural machine translation (NMT) using our proposed method, experimenting with a variety of datasets. We empirically show that the translation quality of a model that recurrently stacks a single layer 6 times, despite its significantly fewer parameters, approaches that of a model that stacks 6 different layers. We also show how our method can benefit from a prevalent way for improving NMT, i.e., extending training data with pseudo-parallel corpora generated by back-translation. We then analyze the effects of recurrently stacked layers by visualizing the attentions of models that use recurrently stacked layers and models that do not. Finally, we explore the limits of parameter sharing where we share even the parameters between the encoder and decoder in addition to recurrent stacking of layers.
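
Recurrent stacking simply re-applies one layer's parameters several times in place of a stack of distinct layers. A minimal sketch follows; the layer type, width, and number of passes are illustrative assumptions, not the authors' configuration.

```python
# A minimal sketch of recurrent layer stacking: one set of layer parameters, many passes.
import torch
import torch.nn as nn

class RecurrentlyStackedEncoder(nn.Module):
    def __init__(self, d_model=512, nhead=8, num_passes=6):
        super().__init__()
        self.layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.num_passes = num_passes

    def forward(self, x):
        for _ in range(self.num_passes):
            x = self.layer(x)          # identical parameters on every pass
        return x

enc = RecurrentlyStackedEncoder()
shared_params = sum(p.numel() for p in enc.parameters())   # one layer's worth
print(shared_params, enc(torch.randn(2, 10, 512)).shape)
```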

13. Adapting Translation Models for Transcript Disfluency Detection [PDF] Back to Contents
  AAAI 2019. AAAI Technical Track: Natural Language Processing
  Qianqian Dong, Feng Wang, Zhen Yang, Wei Chen, Shuang Xu, Bo Xu
Transcript disfluency detection (TDD) is an important component of real-time speech translation systems, and it has attracted increasing interest in recent years. This paper presents our study on adapting neural machine translation (NMT) models for TDD. We propose a general training framework for rapidly adapting NMT models to the TDD task. In this framework, the main structure of the model is implemented similarly to the NMT model. Additionally, several extended modules and training techniques which are independent of the NMT model are proposed to improve the performance, such as constrained decoding, denoising autoencoder initialization, and a TDD-specific training objective. With the proposed training framework, we achieve significant improvement. However, it is too slow in decoding to be practical. To build a feasible and production-ready solution for TDD, we propose a fast non-autoregressive TDD model following the non-autoregressive NMT models that emerged recently. Although we do not assume a specific architecture of the NMT model, we build our TDD model on the basis of the Transformer, which is the state-of-the-art NMT model. We conduct extensive experiments on the publicly available Switchboard set and an in-house Chinese set. Experimental results show that the proposed model significantly outperforms previous state-of-the-art models.

14. "Bilingual Expert" Can Find Translation Errors [PDF] 返回目录
  AAAI 2019. AAAI Technical Track: Natural Language Processing
  Kai Fan, Jiayi Wang, Bo Li, Fengming Zhou, Boxing Chen, Luo Si
The performance of machine translation (MT) systems is usually evaluated with the BLEU metric when golden references are provided. However, in the case of model inference or production deployment, golden references are usually expensive to obtain, requiring human annotation with bilingual expertise. In order to address the issue of translation quality estimation (QE) without references, we propose a general framework for automatic evaluation of the translation output for the QE task in the Conference on Statistical Machine Translation (WMT). We first build a conditional target language model with a novel bidirectional transformer, named the neural bilingual expert model, which is pre-trained on large parallel corpora for feature extraction. For QE inference, the bilingual expert model can simultaneously produce the joint latent representation between the source and the translation, and real-valued measurements of possible erroneous tokens based on the prior knowledge learned from parallel data. Subsequently, the features are further fed into a simple Bi-LSTM predictive model for quality estimation. The experimental results show that our approach achieves state-of-the-art performance on most publicly available datasets of the WMT 2017/2018 QE task.

15. PARABANK: Monolingual Bitext Generation and Sentential Paraphrasing via Lexically-Constrained Neural Machine Translation [PDF] Back to Contents
  AAAI 2019. AAAI Technical Track: Natural Language Processing
  J. Edward Hu, Rachel Rudinger, Matt Post, Benjamin Van Durme
We present PARABANK, a large-scale English paraphrase dataset that surpasses prior work in both quantity and quality. Following the approach of PARANMT (Wieting and Gimpel, 2018), we train a Czech-English neural machine translation (NMT) system to generate novel paraphrases of English reference sentences. By adding lexical constraints to the NMT decoding procedure, however, we are able to produce multiple high-quality sentential paraphrases per source sentence, yielding an English paraphrase resource with more than 4 billion generated tokens and exhibiting greater lexical diversity. Using human judgments, we also demonstrate that PARABANK’s paraphrases improve over PARANMT on both semantic similarity and fluency. Finally, we use PARABANK to train a monolingual NMT model with the same support for lexically-constrained decoding for sentence rewriting tasks.

16. Neural Machine Translation with Adequacy-Oriented Learning [PDF] Back to Contents
  AAAI 2019. AAAI Technical Track: Natural Language Processing
  Xiang Kong, Zhaopeng Tu, Shuming Shi, Eduard H. Hovy, Tong Zhang
Although Neural Machine Translation (NMT) models have advanced state-of-the-art performance in machine translation, they face problems such as inadequate translation. We attribute this to the fact that standard Maximum Likelihood Estimation (MLE) cannot judge the real translation quality, due to several of its limitations. In this work, we propose an adequacy-oriented learning mechanism for NMT by casting translation as a stochastic policy in Reinforcement Learning (RL), where the reward is estimated by explicitly measuring translation adequacy. Benefiting from the sequence-level training of the RL strategy and a more accurate reward designed specifically for translation, our model outperforms multiple strong baselines, including (1) standard and coverage-augmented attention models with MLE-based training, and (2) advanced reinforcement and adversarial training strategies with rewards based on both word-level BLEU and character-level CHRF3. Quantitative and qualitative analyses on different language pairs and NMT architectures demonstrate the effectiveness and universality of the proposed approach.
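
Casting translation as a stochastic policy with an adequacy reward leads to a REINFORCE-style sequence-level loss. The sketch below shows that general shape only; the adequacy reward itself and the baseline here are stand-in assumptions rather than the paper's exact estimator.

```python
# A minimal sketch of sequence-level RL training with an adequacy reward (illustrative).
import torch
import torch.nn.functional as F

def rl_loss(logits, sampled_ids, adequacy_reward, baseline=0.0):
    """logits: [batch, seq_len, vocab]; sampled_ids: [batch, seq_len];
    adequacy_reward: [batch] sequence-level rewards for the sampled translations."""
    log_probs = F.log_softmax(logits, dim=-1)
    token_logp = log_probs.gather(-1, sampled_ids.unsqueeze(-1)).squeeze(-1)
    seq_logp = token_logp.sum(dim=1)                  # log p(sampled translation)
    advantage = adequacy_reward - baseline
    return -(advantage * seq_logp).mean()             # REINFORCE-style objective

b, t, v = 4, 6, 50
loss = rl_loss(torch.randn(b, t, v), torch.randint(0, v, (b, t)),
               adequacy_reward=torch.rand(b))
print(loss.item())
```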

17. Found in Translation: Learning Robust Joint Representations by Cyclic Translations between Modalities [PDF] 返回目录
  AAAI 2019. AAAI Technical Track: Natural Language Processing
  Hai Pham, Paul Pu Liang, Thomas Manzini, Louis-Philippe Morency, Barnabás Póczos
Multimodal sentiment analysis is a core research area that studies speaker sentiment expressed from the language, visual, and acoustic modalities. The central challenge in multimodal learning involves inferring joint representations that can process and relate information from these modalities. However, existing work learns joint representations by requiring all modalities as input and, as a result, the learned representations may be sensitive to noisy or missing modalities at test time. With the recent success of sequence to sequence (Seq2Seq) models in machine translation, there is an opportunity to explore new ways of learning joint representations that may not require all input modalities at test time. In this paper, we propose a method to learn robust joint representations by translating between modalities. Our method is based on the key insight that translation from a source to a target modality provides a method of learning joint representations using only the source modality as input. We augment modality translations with a cycle consistency loss to ensure that our joint representations retain maximal information from all modalities. Once our translation model is trained with paired multimodal data, we only need data from the source modality at test time for final sentiment prediction. This ensures that our model remains robust to perturbations or missing information in the other modalities. We train our model with a coupled translation-prediction objective and it achieves new state-of-the-art results on multimodal sentiment analysis datasets: CMU-MOSI, ICT-MMMO, and YouTube. Additional experiments show that our model learns increasingly discriminative joint representations with more input modalities while maintaining robustness to missing or perturbed modalities.
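The cycle consistency idea can be illustrated with two stand-in "modality translators": translate the source modality into the target modality, translate it back, and penalize both the forward error and the reconstruction error. The linear translators, feature dimensions, and MSE losses below are assumptions; the paper uses Seq2Seq models over paired multimodal data.

```python
import torch
import torch.nn as nn

# Stand-in forward/backward "modality translators"; the paper uses Seq2Seq models.
fwd = nn.Linear(300, 74)   # language features -> acoustic features (dims are assumptions)
bwd = nn.Linear(74, 300)   # acoustic features -> language features

def cyclic_translation_loss(src, tgt):
    """Forward translation loss plus cycle-consistency reconstruction loss."""
    pred_tgt = fwd(src)
    recon_src = bwd(pred_tgt)
    forward_loss = nn.functional.mse_loss(pred_tgt, tgt)
    cycle_loss = nn.functional.mse_loss(recon_src, src)
    return forward_loss + cycle_loss

loss = cyclic_translation_loss(torch.randn(8, 300), torch.randn(8, 74))
loss.backward()
print(float(loss))
```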

18. Jointly Extracting Multiple Triplets with Multilayer Translation Constraints [PDF] 返回目录
  AAAI 2019. AAAI Technical Track: Natural Language Processing
  Zhen Tan, Xiang Zhao, Wei Wang, Weidong Xiao
Triplet extraction is an essential and pivotal step in automatic knowledge base construction, which captures structural information from unstructured text corpora. Conventional extraction models use a pipeline of named entity recognition and relation classification to extract entities and relations, respectively, which ignores the connection between the two tasks. Recently, several neural network-based models were proposed to tackle the problem and achieved state-of-the-art performance. However, most of them are unable to extract multiple triplets from a single sentence, which are yet commonly seen in real-life scenarios. To close the gap, we propose in this paper a joint neural extraction model for multiple triplets, namely TME, which is capable of adaptively discovering multiple triplets simultaneously in a sentence via ranking with a translation mechanism. In experiments, TME exhibits superior performance and achieves an improvement of 37.6% in F1 score over state-of-the-art competitors.
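In knowledge-base work, a "translation mechanism" for ranking typically refers to a TransE-style constraint, where a triplet (h, r, t) is plausible if h + r lies close to t in embedding space. The sketch below shows only this generic scoring function as an assumption about what the ranking relies on, not the TME architecture itself.

```python
import numpy as np

def translation_score(head, relation, tail):
    """TransE-style score: a triplet (h, r, t) is plausible when h + r ≈ t,
    so a smaller L2 distance means a better-ranked triplet."""
    return np.linalg.norm(head + relation - tail)

rng = np.random.default_rng(0)
h, r, _ = rng.normal(size=(3, 50))
t_good = h + r + 0.01 * rng.normal(size=50)   # near-translation of (h, r)
t_bad = rng.normal(size=50)                   # unrelated candidate tail
print(translation_score(h, r, t_good) < translation_score(h, r, t_bad))  # True
```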

19. Translating with Bilingual Topic Knowledge for Neural Machine Translation [PDF] 返回目录
  AAAI 2019. AAAI Technical Track: Natural Language Processing
  Xiangpeng Wei, Yue Hu, Luxi Xing, Yipeng Wang, Li Gao
The dominant neural machine translation (NMT) models, based on the encoder-decoder architecture, have recently achieved state-of-the-art performance. Traditionally, NMT models depend only on the representations learned during training for mapping a source sentence into the target domain. However, the learned representations often suffer from implicit and inadequately informed properties. In this paper, we propose a novel bilingual topic enhanced NMT (BLT-NMT) model to improve translation performance by incorporating bilingual topic knowledge into NMT. Specifically, the bilingual topic knowledge is incorporated into the hidden states of both the encoder and decoder, as well as the attention mechanism. With this new setting, the proposed BLT-NMT has access to the background knowledge implied in bilingual topics, which is beyond the sequential context, and enables the attention mechanism to attend at the topic level for generating accurate target words during translation. Experimental results show that the proposed model consistently outperforms the traditional RNNsearch and previous topic-informed NMT on Chinese-English and English-German translation tasks. We also introduce the bilingual topic knowledge into the newly emerged Transformer base model on English-German translation and achieve a notable improvement.

20. Graph Based Translation Memory for Neural Machine Translation [PDF] 返回目录
  AAAI 2019. AAAI Technical Track: Natural Language Processing
  Mengzhou Xia, Guoping Huang, Lemao Liu, Shuming Shi
A translation memory (TM) has proved helpful for improving neural machine translation (NMT). Existing approaches either pursue decoding efficiency by accessing only local information in a TM, or encode the global information in a TM while sacrificing efficiency due to redundancy. We propose an efficient approach to making use of the global information in a TM. The key idea is to pack a redundant TM into a compact graph and perform additional attention mechanisms over the packed graph to integrate the TM representation into the decoding network. We implement the model by extending the state-of-the-art NMT model, Transformer. Extensive experiments on three language pairs show that the proposed approach is efficient in terms of running time and space occupation, and in particular it outperforms multiple strong baselines in terms of BLEU scores.

21. Modeling Coherence for Discourse Neural Machine Translation [PDF] 返回目录
  AAAI 2019. AAAI Technical Track: Natural Language Processing
  Hao Xiong, Zhongjun He, Hua Wu, Haifeng Wang
Discourse coherence plays an important role in the translation of a text. However, most previously reported models focus on improving performance over individual sentences while ignoring cross-sentence links and dependencies, which affects the coherence of the translated text. In this paper, we propose to use discourse context and reward to refine translation quality from the discourse perspective. In particular, we first generate the translation of individual sentences. Next, we deliberate over the preliminarily produced translations and train the model, via a reward teacher, to learn a policy that produces discourse-coherent text. Practical results on multiple discourse test datasets indicate that our model significantly improves translation quality over the state-of-the-art baseline system by +1.23 BLEU score. Moreover, our model generates more discourse-coherent text and obtains a +2.2 BLEU improvement when evaluated by discourse metrics.

22. Identifying Semantics in Clinical Reports Using Neural Machine Translation [PDF] 返回目录
  AAAI 2019. IAAI Technical Track: Emerging Papers
  Srikanth Mujjiga, Vamsi Krishna, Kalyan Chakravarthi, Vijayananda J
Clinical documents are vital resources for radiologists when they have to consult or refer to similar cases. In large healthcare facilities where millions of reports are generated, searching for relevant documents is quite challenging. With abundant interchangeable words in the clinical domain, understanding the semantics of the words in clinical documents is vital to improving search results. This paper details an end-to-end semantic search application that addresses the large-scale information retrieval problem for clinical reports. The paper specifically focuses on the challenge of identifying semantics in clinical reports to facilitate search at the semantic level. Semantic search works by mapping documents into a concept space, and the search is performed in that concept space. A unique approach of framing the concept mapping problem as a language translation problem is proposed in this paper. The concept mapper is modelled using a neural machine translation (NMT) model based on an encoder-decoder-with-attention architecture. A regular-expression-based concept mapper takes approximately 3 seconds to extract UMLS concepts from a single document, whereas the trained NMT does the same in approximately 30 milliseconds. The NMT-based model further enables the incorporation of negation detection to identify whether a concept is negated or not, facilitating search for negated queries.

23. Unsupervised Pivot Translation for Distant Languages [PDF] 返回目录
  ACL 2019.
  Yichong Leng, Xu Tan, Tao Qin, Xiang-Yang Li, Tie-Yan Liu
Unsupervised neural machine translation (NMT) has attracted a lot of attention recently. While state-of-the-art methods for unsupervised translation usually perform well between similar languages (e.g., English-German translation), they perform poorly between distant languages, because unsupervised alignment does not work well for distant languages. In this work, we introduce unsupervised pivot translation for distant languages, which translates a language to a distant language through multiple hops, and the unsupervised translation on each hop is relatively easier than the original direct translation. We propose a learning to route (LTR) method to choose the translation path between the source and target languages. LTR is trained on language pairs whose best translation path is available and is applied on the unseen language pairs for path selection. Experiments on 20 languages and 294 distant language pairs demonstrate the advantages of the unsupervised pivot translation for distant languages, as well as the effectiveness of the proposed LTR for path selection. Specifically, in the best case, LTR achieves an improvement of 5.58 BLEU points over the conventional direct unsupervised method.

24. An Effective Approach to Unsupervised Machine Translation [PDF] 返回目录
  ACL 2019.
  Mikel Artetxe, Gorka Labaka, Eneko Agirre
While machine translation has traditionally relied on large amounts of parallel corpora, a recent research line has managed to train both Neural Machine Translation (NMT) and Statistical Machine Translation (SMT) systems using monolingual corpora only. In this paper, we identify and address several deficiencies of existing unsupervised SMT approaches by exploiting subword information, developing a theoretically well founded unsupervised tuning method, and incorporating a joint refinement procedure. Moreover, we use our improved SMT system to initialize a dual NMT model, which is further fine-tuned through on-the-fly back-translation. Together, we obtain large improvements over the previous state-of-the-art in unsupervised machine translation. For instance, we get 22.5 BLEU points in English-to-German WMT 2014, 5.5 points more than the previous best unsupervised system, and 0.5 points more than the (supervised) shared task winner back in 2014.

25. Effective Adversarial Regularization for Neural Machine Translation [PDF] 返回目录
  ACL 2019.
  Motoki Sato, Jun Suzuki, Shun Kiyono
A regularization technique based on adversarial perturbation, which was initially developed in the field of image processing, has been successfully applied to text classification tasks and has yielded attractive improvements. We aim to further leverage this promising methodology into more sophisticated and critical neural models in the natural language processing field, i.e., neural machine translation (NMT) models. However, it is not trivial to apply this methodology to such models. Thus, this paper investigates the effectiveness of several possible configurations of applying the adversarial perturbation and reveals that the adversarial regularization technique can significantly and consistently improve the performance of widely used NMT models, such as LSTM-based and Transformer-based models.
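A rough sketch of adversarial regularization applied at the embedding level: compute the gradient of the loss with respect to the word embeddings, add a small perturbation in that direction, and add the loss on the perturbed input as a regularizer. The toy classifier standing in for an NMT model, the L2-normalized perturbation, and the epsilon value are all assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
emb = nn.Embedding(1000, 64)
clf = nn.Linear(64, 2)                      # stand-in for the translation model
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, 1000, (8, 10))
labels = torch.randint(0, 2, (8,))

# 1) Clean forward pass, keeping the embedding output so we can take its gradient.
x = emb(tokens)
x.retain_grad()
clean_loss = loss_fn(clf(x.mean(dim=1)), labels)
clean_loss.backward(retain_graph=True)

# 2) Build an L2-normalised adversarial perturbation from that gradient (epsilon is an assumption).
eps = 1.0
g = x.grad.detach()
r_adv = eps * g / (g.norm(dim=-1, keepdim=True) + 1e-12)

# 3) Adversarial regularization term: loss on the perturbed embeddings.
adv_loss = loss_fn(clf((x.detach() + r_adv).mean(dim=1)), labels)
total_loss = clean_loss + adv_loss
print(float(total_loss))
```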

26. Revisiting Low-Resource Neural Machine Translation: A Case Study [PDF] 返回目录
  ACL 2019.
  Rico Sennrich, Biao Zhang
It has been shown that the performance of neural machine translation (NMT) drops starkly in low-resource conditions, underperforming phrase-based statistical machine translation (PBSMT) and requiring large amounts of auxiliary data to achieve competitive results. In this paper, we re-assess the validity of these results, arguing that they are the result of lack of system adaptation to low-resource settings. We discuss some pitfalls to be aware of when training low-resource NMT systems, and recent techniques that have shown to be especially helpful in low-resource settings, resulting in a set of best practices for low-resource NMT. In our experiments on German–English with different amounts of IWSLT14 training data, we show that, without the use of any auxiliary monolingual or multilingual data, an optimized NMT system can outperform PBSMT with far less data than previously claimed. We also apply these techniques to a low-resource Korean–English dataset, surpassing previously reported results by 4 BLEU.

27. Domain Adaptive Inference for Neural Machine Translation [PDF] 返回目录
  ACL 2019.
  Danielle Saunders, Felix Stahlberg, Adrià de Gispert, Bill Byrne
We investigate adaptive ensemble weighting for Neural Machine Translation, addressing the case of improving performance on a new and potentially unknown domain without sacrificing performance on the original domain. We adapt sequentially across two Spanish-English and three English-German tasks, comparing unregularized fine-tuning, L2 and Elastic Weight Consolidation. We then report a novel scheme for adaptive NMT ensemble decoding by extending Bayesian Interpolation with source information, and report strong improvements across test domains without access to the domain label.

28. When a Good Translation is Wrong in Context: Context-Aware Machine Translation Improves on Deixis, Ellipsis, and Lexical Cohesion [PDF] 返回目录
  ACL 2019.
  Elena Voita, Rico Sennrich, Ivan Titov
Though machine translation errors caused by the lack of context beyond one sentence have long been acknowledged, the development of context-aware NMT systems is hampered by several problems. Firstly, standard metrics are not sensitive to improvements in consistency in document-level translations. Secondly, previous work on context-aware NMT assumed that the sentence-aligned parallel data consisted of complete documents while in most practical scenarios such document-level data constitutes only a fraction of the available parallel data. To address the first issue, we perform a human study on an English-Russian subtitles dataset and identify deixis, ellipsis and lexical cohesion as three main sources of inconsistency. We then create test sets targeting these phenomena. To address the second shortcoming, we consider a set-up in which a much larger amount of sentence-level data is available compared to that aligned at the document level. We introduce a model that is suitable for this scenario and demonstrate major gains over a context-agnostic baseline on our new benchmarks without sacrificing performance as measured with BLEU.

29. A Compact and Language-Sensitive Multilingual Translation Method [PDF] 返回目录
  ACL 2019.
  Yining Wang, Long Zhou, Jiajun Zhang, Feifei Zhai, Jingfang Xu, Chengqing Zong
Multilingual neural machine translation (Multi-NMT) with one encoder-decoder model has made remarkable progress due to its simple deployment. However, this multilingual translation paradigm does not make full use of language commonality and parameter sharing between the encoder and decoder. Furthermore, this kind of paradigm cannot outperform individual models trained on bilingual corpora in most cases. In this paper, we propose a compact and language-sensitive method for multilingual translation. To maximize parameter sharing, we first present a universal representor to replace both the encoder and decoder models. To make the representor sensitive to specific languages, we further introduce language-sensitive embedding, attention, and a discriminator, with the ability to enhance model performance. We verify our methods on various translation scenarios, including one-to-many, many-to-many, and zero-shot. Extensive experiments demonstrate that our proposed methods remarkably outperform strong standard multilingual translation systems on WMT and IWSLT datasets. Moreover, we find that our model is especially helpful in low-resource and zero-shot translation scenarios.

30. Unsupervised Parallel Sentence Extraction with Parallel Segment Detection Helps Machine Translation [PDF] 返回目录
  ACL 2019.
  Viktor Hangya, Alexander Fraser
Mining parallel sentences from comparable corpora is important. Most previous work relies on supervised systems, which are trained on parallel data, so their applicability is problematic in low-resource scenarios. Recent developments in building unsupervised bilingual word embeddings made it possible to mine parallel sentences based on cosine similarities of source and target language words. We show that relying only on this information is not enough, since sentences often have similar words but different meanings. We detect continuous parallel segments in sentence pair candidates and rely on them when mining parallel sentences. We show better mining accuracy on three language pairs in a standard shared task on artificial data. We also provide the first experiments showing that parallel sentences mined from real-life sources improve unsupervised MT. Our code is available; we hope it will be used to support low-resource MT research.

31. Unsupervised Bilingual Word Embedding Agreement for Unsupervised Neural Machine Translation [PDF] 返回目录
  ACL 2019.
  Haipeng Sun, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita, Tiejun Zhao
Unsupervised bilingual word embedding (UBWE), together with other technologies such as back-translation and denoising, has helped unsupervised neural machine translation (UNMT) achieve remarkable results in several language pairs. In previous methods, UBWE is first trained using non-parallel monolingual corpora and then this pre-trained UBWE is used to initialize the word embedding in the encoder and decoder of UNMT. That is, the training of UBWE and UNMT are separate. In this paper, we first empirically investigate the relationship between UBWE and UNMT. The empirical findings show that the performance of UNMT is significantly affected by the performance of UBWE. Thus, we propose two methods that train UNMT with UBWE agreement. Empirical results on several language pairs show that the proposed methods significantly outperform conventional UNMT.

32. Effective Cross-lingual Transfer of Neural Machine Translation Models without Shared Vocabularies [PDF] 返回目录
  ACL 2019.
  Yunsu Kim, Yingbo Gao, Hermann Ney
Transfer learning or multilingual modeling is essential for low-resource neural machine translation (NMT), but its applicability has been limited to cognate languages that share a vocabulary. This paper shows effective techniques to transfer a pretrained NMT model to a new, unrelated language without shared vocabularies. We relieve the vocabulary mismatch by using cross-lingual word embeddings, train a more language-agnostic encoder by injecting artificial noise, and generate synthetic data easily from the pretraining data without back-translation. Our methods do not require restructuring the vocabulary or retraining the model. We improve plain NMT transfer by up to +5.1% BLEU in five low-resource translation tasks, outperforming multilingual joint training by a large margin. We also provide extensive ablation studies on pretrained embeddings, synthetic data, vocabulary size, and parameter freezing for a better understanding of NMT transfer.

33. Improved Zero-shot Neural Machine Translation via Ignoring Spurious Correlations [PDF] 返回目录
  ACL 2019.
  Jiatao Gu, Yong Wang, Kyunghyun Cho, Victor O.K. Li
Zero-shot translation, translating between language pairs on which a Neural Machine Translation (NMT) system has never been trained, is an emergent property when training the system in multilingual settings. However, naive training for zero-shot NMT easily fails, and is sensitive to hyper-parameter setting. The performance typically lags far behind the more conventional pivot-based approach which translates twice using a third language as a pivot. In this work, we address the degeneracy problem due to capturing spurious correlations by quantitatively analyzing the mutual information between language IDs of the source and decoded sentences. Inspired by this analysis, we propose to use two simple but effective approaches: (1) decoder pre-training; (2) back-translation. These methods show significant improvement (4–22 BLEU points) over the vanilla zero-shot translation on three challenging multilingual datasets, and achieve similar or better results than the pivot-based approach.

34. Syntactically Supervised Transformers for Faster Neural Machine Translation [PDF] 返回目录
  ACL 2019.
  Nader Akoury, Kalpesh Krishna, Mohit Iyyer
Standard decoders for neural machine translation autoregressively generate a single target token per timestep, which slows inference especially for long outputs. While architectural advances such as the Transformer fully parallelize the decoder computations at training time, inference still proceeds sequentially. Recent developments in non- and semi-autoregressive decoding produce multiple tokens per timestep independently of the others, which improves inference speed but deteriorates translation quality. In this work, we propose the syntactically supervised Transformer (SynST), which first autoregressively predicts a chunked parse tree before generating all of the target tokens in one shot conditioned on the predicted parse. A series of controlled experiments demonstrates that SynST decodes sentences ~5x faster than the baseline autoregressive Transformer while achieving higher BLEU scores than most competing methods on En-De and En-Fr datasets.

35. Dynamically Composing Domain-Data Selection with Clean-Data Selection by “Co-Curricular Learning” for Neural Machine Translation [PDF] 返回目录
  ACL 2019.
  Wei Wang, Isaac Caswell, Ciprian Chelba
Noise and domain are important aspects of data quality for neural machine translation. Existing research focuses separately on domain-data selection, clean-data selection, or their static combination, leaving the dynamic interaction between them not explicitly examined. This paper introduces a “co-curricular learning” method to compose dynamic domain-data selection with dynamic clean-data selection, for transfer learning across both capabilities. We apply an EM-style optimization procedure to further refine the “co-curriculum”. Experimental results and analysis with two domains demonstrate the effectiveness of the method and the properties of data scheduled by the co-curriculum.

36. On the Word Alignment from Neural Machine Translation [PDF] 返回目录
  ACL 2019.
  Xintong Li, Guanlin Li, Lemao Liu, Max Meng, Shuming Shi
Prior research suggests that neural machine translation (NMT) captures word alignment through its attention mechanism; however, this paper finds that attention may almost fail to capture word alignment for some NMT models. This paper thereby proposes two methods to induce word alignment which are general and agnostic to specific NMT models. Experiments show that both methods induce much better word alignment than attention. This paper further visualizes the translation through the word alignment induced by NMT. In particular, it analyzes the effect of alignment errors on translation errors at the word level, and its quantitative analysis over many test examples consistently demonstrates that alignment errors are likely to lead to translation errors measured by different metrics.

37. Imitation Learning for Non-Autoregressive Neural Machine Translation [PDF] 返回目录
  ACL 2019.
  Bingzhen Wei, Mingxuan Wang, Hao Zhou, Junyang Lin, Xu Sun
Non-autoregressive translation models (NAT) have achieved impressive inference speedup. A potential issue of the existing NAT algorithms, however, is that the decoding is conducted in parallel, without directly considering previous context. In this paper, we propose an imitation learning framework for non-autoregressive machine translation, which still enjoys the fast translation speed but gives comparable translation performance compared to its auto-regressive counterpart. We conduct experiments on the IWSLT16, WMT14 and WMT16 datasets. Our proposed model achieves a significant speedup over the autoregressive models, while keeping the translation quality comparable to the autoregressive models. By sampling sentence length in parallel at inference time, we achieve the performance of 31.85 BLEU on WMT16 Ro→En and 30.68 BLEU on IWSLT16 En→De.

38. Monotonic Infinite Lookback Attention for Simultaneous Machine Translation [PDF] 返回目录
  ACL 2019.
  Naveen Arivazhagan, Colin Cherry, Wolfgang Macherey, Chung-Cheng Chiu, Semih Yavuz, Ruoming Pang, Wei Li, Colin Raffel
Simultaneous machine translation begins to translate each source sentence before the source speaker is finished speaking, with applications to live and streaming scenarios. Simultaneous systems must carefully schedule their reading of the source sentence to balance quality against latency. We present the first simultaneous translation system to learn an adaptive schedule jointly with a neural machine translation (NMT) model that attends over all source tokens read thus far. We do so by introducing Monotonic Infinite Lookback (MILk) attention, which maintains both a hard, monotonic attention head to schedule the reading of the source sentence, and a soft attention head that extends from the monotonic head back to the beginning of the source. We show that MILk’s adaptive schedule allows it to arrive at latency-quality trade-offs that are favorable to those of a recently proposed wait-k strategy for many latency values.

39. Evaluating Gender Bias in Machine Translation [PDF] 返回目录
  ACL 2019.
  Gabriel Stanovsky, Noah A. Smith, Luke Zettlemoyer
We present the first challenge set and evaluation protocol for the analysis of gender bias in machine translation (MT). Our approach uses two recent coreference resolution datasets composed of English sentences which cast participants into non-stereotypical gender roles (e.g., “The doctor asked the nurse to help her in the operation”). We devise an automatic gender bias evaluation method for eight target languages with grammatical gender, based on morphological analysis (e.g., the use of female inflection for the word “doctor”). Our analyses show that four popular industrial MT systems and two recent state-of-the-art academic MT models are significantly prone to gender-biased translation errors for all tested target languages. Our data and code are publicly available at https://github.com/gabrielStanovsky/mt_gender.

40. Neural Machine Translation with Reordering Embeddings [PDF] 返回目录
  ACL 2019.
  Kehai Chen, Rui Wang, Masao Utiyama, Eiichiro Sumita
The reordering model plays an important role in phrase-based statistical machine translation. However, there are few works that exploit the reordering information in neural machine translation. In this paper, we propose a reordering mechanism to learn the reordering embedding of a word based on its contextual information. These learned reordering embeddings are stacked together with self-attention networks to learn sentence representation for machine translation. The reordering mechanism can be easily integrated into both the encoder and the decoder in the Transformer translation system. Experimental results on WMT’14 English-to-German, NIST Chinese-to-English, and WAT Japanese-to-English translation tasks demonstrate that the proposed methods can significantly improve the performance of the Transformer.

41. Neural Fuzzy Repair: Integrating Fuzzy Matches into Neural Machine Translation [PDF] 返回目录
  ACL 2019.
  Bram Bulte, Arda Tezcan
We present a simple yet powerful data augmentation method for boosting Neural Machine Translation (NMT) performance by leveraging information retrieved from a Translation Memory (TM). We propose and test two methods for augmenting NMT training data with fuzzy TM matches. Tests on the DGT-TM data set for two language pairs show consistent and substantial improvements over a range of baseline systems. The results suggest that this method is promising for any translation environment in which a sizeable TM is available and a certain amount of repetition across translations is to be expected, especially considering its ease of implementation.

42. Learning Deep Transformer Models for Machine Translation [PDF] 返回目录
  ACL 2019.
  Qiang Wang, Bei Li, Tong Xiao, Jingbo Zhu, Changliang Li, Derek F. Wong, Lidia S. Chao
Transformer is the state-of-the-art model in recent machine translation evaluations. Two strands of research are promising to improve models of this kind: the first uses wide networks (a.k.a. Transformer-Big) and has been the de facto standard for development of the Transformer system, and the other uses deeper language representation but faces the difficulty arising from learning deep networks. Here, we continue the line of research on the latter. We claim that a truly deep Transformer model can surpass the Transformer-Big counterpart by 1) proper use of layer normalization and 2) a novel way of passing the combination of previous layers to the next. On WMT’16 English-German and NIST OpenMT’12 Chinese-English tasks, our deep system (30/25-layer encoder) outperforms the shallow Transformer-Big/Base baseline (6-layer encoder) by 0.4-2.4 BLEU points. As another bonus, the deep model is 1.6X smaller in size and 3X faster in training than Transformer-Big.
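The second ingredient mentioned in the abstract, passing a combination of previous layers to the next, can be sketched as feeding layer k a learned linear combination of the outputs of all earlier layers. The stand-in linear layers and the simplified weighting below are assumptions; in the paper the layers are pre-norm Transformer blocks and the combination weights are normalized differently.

```python
import torch
import torch.nn as nn

class LinearLayerCombination(nn.Module):
    """Feeds layer k a learned linear combination of the outputs of layers 0..k-1.

    The per-layer sub-module is a stand-in; in the paper it would be a
    pre-norm Transformer encoder layer.
    """
    def __init__(self, num_layers=6, dim=64):
        super().__init__()
        self.layers = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_layers))
        # One weight vector per layer over all earlier outputs (simplified scheme).
        self.weights = nn.ParameterList(
            nn.Parameter(torch.ones(k + 1) / (k + 1)) for k in range(num_layers)
        )

    def forward(self, x):
        outputs = [x]                               # "layer 0" output is the input embedding
        for k, layer in enumerate(self.layers):
            stacked = torch.stack(outputs, dim=0)   # (k+1, batch, len, dim)
            combined = (self.weights[k].view(-1, 1, 1, 1) * stacked).sum(dim=0)
            outputs.append(layer(combined))
        return outputs[-1]

model = LinearLayerCombination()
print(model(torch.randn(2, 5, 64)).shape)   # torch.Size([2, 5, 64])
```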

43. Generating Diverse Translations with Sentence Codes [PDF] 返回目录
  ACL 2019.
  Raphael Shu, Hideki Nakayama, Kyunghyun Cho
Users of machine translation systems may desire to obtain multiple candidates translated in different ways. In this work, we attempt to obtain diverse translations by using sentence codes to condition the sentence generation. We describe two methods to extract the codes, either with or without the help of syntax information. For diverse generation, we sample multiple candidates, each of which is conditioned on a unique code. Experiments show that the sampled translations have much higher diversity scores when using reasonable sentence codes, while translation quality remains on par with the baselines even under the strong constraints imposed by the codes. In qualitative analysis, we show that our method is able to generate paraphrase translations with drastically different structures. The proposed approach can be easily adopted in existing translation systems as no modification to the model is required.

44. Self-Supervised Neural Machine Translation [PDF] 返回目录
  ACL 2019.
  Dana Ruiter, Cristina España-Bonet, Josef van Genabith
We present a simple new method where an emergent NMT system is used for simultaneously selecting training data and learning internal NMT representations. This is done in a self-supervised way without parallel data, in such a way that both tasks enhance each other during training. The method is language independent, introduces no additional hyper-parameters, and achieves BLEU scores of 29.21 (en2fr) and 27.36 (fr2en) on newstest2014 using English and French Wikipedia data for training.

45. Exploring Phoneme-Level Speech Representations for End-to-End Speech Translation [PDF] 返回目录
  ACL 2019.
  Elizabeth Salesky, Matthias Sperber, Alan W Black
Previous work on end-to-end translation from speech has primarily used frame-level features as speech representations, which creates longer, sparser sequences than text. We show that a naive method to create compressed phoneme-like speech representations is far more effective and efficient for translation than traditional frame-level speech features. Specifically, we generate phoneme labels for speech frames and average consecutive frames with the same label to create shorter, higher-level source sequences for translation. We see improvements of up to 5 BLEU on both our high and low resource language pairs, with a reduction in training time of 60%. Our improvements hold across multiple data sizes and two language pairs.
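The compression step is simple enough to sketch directly: given a phoneme label per frame, average runs of consecutive frames that share the same label to obtain a shorter, phoneme-level source sequence. The frame features and labels below are synthetic placeholders.

```python
import numpy as np

def average_same_label_frames(frames, labels):
    """Average consecutive frames that carry the same phoneme label,
    producing a shorter, higher-level source sequence."""
    reduced, group = [], [frames[0]]
    for frame, prev, cur in zip(frames[1:], labels, labels[1:]):
        if cur == prev:
            group.append(frame)
        else:
            reduced.append(np.mean(group, axis=0))
            group = [frame]
    reduced.append(np.mean(group, axis=0))
    return np.stack(reduced)

frames = np.random.rand(7, 40)                       # 7 frames of 40-dim features (toy)
labels = ["s", "s", "i", "i", "i", "k", "k"]         # toy per-frame phoneme labels
print(average_same_label_frames(frames, labels).shape)  # (3, 40)
```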

46. Putting Evaluation in Context: Contextual Embeddings Improve Machine Translation Evaluation [PDF] 返回目录
  ACL 2019.
  Nitika Mathur, Timothy Baldwin, Trevor Cohn
Accurate, automatic evaluation of machine translation is critical for system tuning and for evaluating progress in the field. We propose a simple unsupervised metric, and additional supervised metrics, which rely on contextual word embeddings to encode the translation and reference sentences. We find that these models rival or surpass all existing metrics in the WMT 2017 sentence-level and system-level tracks, and that our trained model has a substantially higher correlation with human judgements than all existing metrics on the WMT 2017 to-English sentence-level dataset.

47. Domain Adaptation of Neural Machine Translation by Lexicon Induction [PDF] 返回目录
  ACL 2019.
  Junjie Hu, Mengzhou Xia, Graham Neubig, Jaime Carbonell
It has been previously noted that neural machine translation (NMT) is very sensitive to domain shift. In this paper, we argue that this is a dual effect of the highly lexicalized nature of NMT, resulting in failure for sentences with large numbers of unknown words, and lack of supervision for domain-specific words. To remedy this problem, we propose an unsupervised adaptation method which fine-tunes a pre-trained out-of-domain NMT model using a pseudo-in-domain corpus. Specifically, we perform lexicon induction to extract an in-domain lexicon, and construct a pseudo-parallel in-domain corpus by performing word-for-word back-translation of monolingual in-domain target sentences. In five domains over twenty pairwise adaptation settings and two model architectures, our method achieves consistent improvements without using any in-domain parallel sentences, improving up to 14 BLEU over unadapted models, and up to 2 BLEU over strong back-translation baselines.

48. Reference Network for Neural Machine Translation [PDF] 返回目录
  ACL 2019.
  Han Fu, Chenghao Liu, Jianling Sun
Neural Machine Translation (NMT) has achieved notable success in recent years. Such a framework usually generates translations in isolation. In contrast, human translators often refer to reference data, either rephrasing intricate sentence fragments with common terms in the source language, or accessing the golden translation directly. In this paper, we propose a Reference Network to incorporate the referring process into the translation decoding of NMT. To construct a reference book, an intuitive way is to store the detailed translation history with extra memory, which is computationally expensive. Instead, we employ Local Coordinates Coding (LCC) to obtain global context vectors containing monolingual and bilingual contextual information for NMT decoding. Experimental results on Chinese-English and English-German tasks demonstrate that our proposed model is effective in improving translation quality with a lightweight computation cost.

49. Retrieving Sequential Information for Non-Autoregressive Neural Machine Translation [PDF] 返回目录
  ACL 2019.
  Chenze Shao, Yang Feng, Jinchao Zhang, Fandong Meng, Xilin Chen, Jie Zhou
Non-Autoregressive Transformer (NAT) aims to accelerate the Transformer model through discarding the autoregressive mechanism and generating target words independently, which fails to exploit the target sequential information. Over-translation and under-translation errors often occur for the above reason, especially in the long sentence translation scenario. In this paper, we propose two approaches to retrieve the target sequential information for NAT to enhance its translation ability while preserving the fast-decoding property. Firstly, we propose a sequence-level training method based on a novel reinforcement algorithm for NAT (Reinforce-NAT) to reduce the variance and stabilize the training procedure. Secondly, we propose an innovative Transformer decoder named FS-decoder to fuse the target sequential information into the top layer of the decoder. Experimental results on three translation tasks show that the Reinforce-NAT surpasses the baseline NAT system by a significant margin on BLEU without decelerating the decoding speed and the FS-decoder achieves comparable translation performance to the autoregressive Transformer with considerable speedup.

50. STACL: Simultaneous Translation with Implicit Anticipation and Controllable Latency using Prefix-to-Prefix Framework [PDF] 返回目录
  ACL 2019.
  Mingbo Ma, Liang Huang, Hao Xiong, Renjie Zheng, Kaibo Liu, Baigong Zheng, Chuanqiang Zhang, Zhongjun He, Hairong Liu, Xing Li, Hua Wu, Haifeng Wang
Simultaneous translation, which translates sentences before they are finished, is useful in many scenarios but is notoriously difficult due to word-order differences. While the conventional seq-to-seq framework is only suitable for full-sentence translation, we propose a novel prefix-to-prefix framework for simultaneous translation that implicitly learns to anticipate in a single translation model. Within this framework, we present a very simple yet surprisingly effective “wait-k” policy trained to generate the target sentence concurrently with the source sentence, but always k words behind. Experiments show our strategy achieves low latency and reasonable quality (compared to full-sentence translation) on 4 directions: zh↔en and de↔en.
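A sketch of the wait-k schedule at inference time: after reading the first k source words, the system alternates between writing one target word and reading one more source word, then flushes the remaining target words once the source is finished. The greedy, word-copying "model" below is a toy stand-in for a prefix-to-prefix NMT model.

```python
def wait_k_decode(source_stream, k, translate_prefix, max_len=100):
    """Generic wait-k schedule: always stay exactly k source words behind.

    translate_prefix(src_prefix, tgt_prefix) is a stand-in for a prefix-to-prefix
    NMT model; it returns the next target word (or None to stop).
    """
    src, tgt = [], []
    for word in source_stream:            # READ one source word at a time
        src.append(word)
        if len(src) >= k:                  # after the first k reads, WRITE one word per read
            nxt = translate_prefix(src, tgt)
            if nxt is None:
                return tgt
            tgt.append(nxt)
    while len(tgt) < max_len:              # source finished: flush the remaining target words
        nxt = translate_prefix(src, tgt)
        if nxt is None:
            break
        tgt.append(nxt)
    return tgt

# Toy "model": copy the source word at the current target position (always k words behind).
demo = wait_k_decode(
    "wo xihuan jiqi fanyi".split(), k=2,
    translate_prefix=lambda s, t: s[len(t)] if len(t) < len(s) else None,
)
print(demo)
```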

51. Look Harder: A Neural Machine Translation Model with Hard Attention [PDF] 返回目录
  ACL 2019.
  Sathish Reddy Indurthi, Insoo Chung, Sangha Kim
Soft-attention based Neural Machine Translation (NMT) models have achieved promising results on several translation tasks. These models attend to all the words in the source sequence for each target token, which makes them ineffective for long sequence translation. In this work, we propose a hard-attention based NMT model which selects a subset of source tokens for each target token to effectively handle long sequence translation. Due to the discrete nature of the hard-attention mechanism, we design a reinforcement learning algorithm coupled with a reward shaping strategy to efficiently train it. Experimental results show that the proposed model performs better on long sequences and thereby achieves significant BLEU score improvements on English-German (EN-DE) and English-French (EN-FR) translation tasks compared to soft-attention based NMT.

52. Robust Neural Machine Translation with Joint Textual and Phonetic Embedding [PDF] 返回目录
  ACL 2019.
  Hairong Liu, Mingbo Ma, Liang Huang, Hao Xiong, Zhongjun He
Neural machine translation (NMT) is notoriously sensitive to noises, but noises are almost inevitable in practice. One special kind of noise is the homophone noise, where words are replaced by other words with similar pronunciations. We propose to improve the robustness of NMT to homophone noises by 1) jointly embedding both textual and phonetic information of source sentences, and 2) augmenting the training dataset with homophone noises. Interestingly, to achieve better translation quality and more robustness, we found that most (though not all) weights should be put on the phonetic rather than textual information. Experiments show that our method not only significantly improves the robustness of NMT to homophone noises, but also surprisingly improves the translation quality on some clean test sets.
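The joint source embedding can be sketched as a weighted mixture of a textual lookup table and a phonetic (e.g., pinyin) lookup table, with most of the weight on the phonetic side as the abstract suggests. The vocabulary sizes and the 0.9 mixing weight below are illustrative assumptions, not the paper's reported setting.

```python
import torch
import torch.nn as nn

class JointTextPhoneEmbedding(nn.Module):
    """Source embedding that mixes a textual and a phonetic lookup table.

    The mixing weight (mostly on the phonetic side) and the vocabulary sizes
    are illustrative assumptions.
    """
    def __init__(self, n_words=30000, n_phones=2000, dim=512, phonetic_weight=0.9):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, dim)
        self.phone_emb = nn.Embedding(n_phones, dim)
        self.w = phonetic_weight

    def forward(self, word_ids, phone_ids):
        return (1.0 - self.w) * self.word_emb(word_ids) + self.w * self.phone_emb(phone_ids)

emb = JointTextPhoneEmbedding()
out = emb(torch.randint(0, 30000, (2, 6)), torch.randint(0, 2000, (2, 6)))
print(out.shape)   # torch.Size([2, 6, 512])
```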

53. Translating Translationese: A Two-Step Approach to Unsupervised Machine Translation [PDF] 返回目录
  ACL 2019.
  Nima Pourdamghani, Nada Aldarrab, Marjan Ghazvininejad, Kevin Knight, Jonathan May
Given a rough, word-by-word gloss of a source language sentence, target language natives can uncover the latent, fully-fluent rendering of the translation. In this work we explore this intuition by breaking translation into a two step process: generating a rough gloss by means of a dictionary and then ‘translating’ the resulting pseudo-translation, or ‘Translationese’ into a fully fluent translation. We build our Translationese decoder once from a mish-mash of parallel data that has the target language in common and then can build dictionaries on demand using unsupervised techniques, resulting in rapidly generated unsupervised neural MT systems for many source languages. We apply this process to 14 test languages, obtaining better or comparable translation results on high-resource languages than previously published unsupervised MT studies, and obtaining good quality results for low-resource languages that have never been used in an unsupervised MT scenario.

54. Training Neural Machine Translation to Apply Terminology Constraints [PDF] 返回目录
  ACL 2019.
  Georgiana Dinu, Prashant Mathur, Marcello Federico, Yaser Al-Onaizan
This paper proposes a novel method to inject custom terminology into neural machine translation at run time. Previous works have mainly proposed modifications to the decoding algorithm in order to constrain the output to include run-time-provided target terms. While being effective, these constrained decoding methods add, however, significant computational overhead to the inference step, and, as we show in this paper, can be brittle when tested in realistic conditions. In this paper we approach the problem by training a neural MT system to learn how to use custom terminology when provided with the input. Comparative experiments show that our method is not only more effective than a state-of-the-art implementation of constrained decoding, but is also as fast as constraint-free decoding.

55. Sentence-Level Agreement for Neural Machine Translation [PDF] 返回目录
  ACL 2019.
  Mingming Yang, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita, Min Zhang, Tiejun Zhao
The training objective of neural machine translation (NMT) is to minimize the loss between the words in the translated sentences and those in the references. In NMT, there is a natural correspondence between the source sentence and the target sentence. However, this relationship has only been represented using the entire neural network, and the training objective is computed at the word level. In this paper, we propose a sentence-level agreement module to directly minimize the difference between the representations of the source and target sentences. The proposed agreement module can be integrated into NMT as an additional training objective function and can also be used to enhance the representation of the source sentences. Empirical results on the NIST Chinese-to-English and WMT English-to-German tasks show the proposed agreement module can significantly improve NMT performance.
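One plausible reading of the agreement module is a loss on pooled sentence vectors: mean-pool the source-side and target-side representations and penalize their distance alongside the usual word-level loss. The mean pooling, cosine distance, and weighting factor below are assumptions, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def sentence_agreement_loss(src_states, tgt_states):
    """Distance between mean-pooled source and target sentence representations.

    src_states: (batch, src_len, dim) encoder states
    tgt_states: (batch, tgt_len, dim) target-side states or embeddings
    """
    src_vec = src_states.mean(dim=1)
    tgt_vec = tgt_states.mean(dim=1)
    return (1.0 - F.cosine_similarity(src_vec, tgt_vec, dim=-1)).mean()

nmt_loss = torch.tensor(2.3)                      # stand-in for the usual word-level loss
agree = sentence_agreement_loss(torch.randn(4, 9, 512), torch.randn(4, 7, 512))
total = nmt_loss + 1.0 * agree                    # weighting factor is an assumption
print(float(total))
```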

56. Lattice-Based Transformer Encoder for Neural Machine Translation [PDF] 返回目录
  ACL 2019.
  Fengshun Xiao, Jiangtong Li, Hai Zhao, Rui Wang, Kehai Chen
Neural machine translation (NMT) takes deterministic sequences for source representations. However, both word-level and subword-level segmentations offer multiple ways to split a source sequence, depending on the word segmentor or the subword vocabulary size. We hypothesize that this diversity in segmentations may affect NMT performance. To integrate different segmentations with the state-of-the-art NMT model, Transformer, we propose lattice-based encoders to explore effective word or subword representations in an automatic way during training. We propose two methods: 1) lattice positional encoding and 2) lattice-aware self-attention. These two methods can be used together and are complementary to each other, further improving translation performance. Experimental results show the superiority of lattice-based encoders over the conventional Transformer encoder in both word-level and subword-level representations.

57. Shared-Private Bilingual Word Embeddings for Neural Machine Translation [PDF] 返回目录
  ACL 2019.
  Xuebo Liu, Derek F. Wong, Yang Liu, Lidia S. Chao, Tong Xiao, Jingbo Zhu
Word embedding is central to neural machine translation (NMT), which has attracted intensive research interest in recent years. In NMT, the source embedding plays the role of the entrance while the target embedding acts as the terminal. These layers occupy most of the model parameters for representation learning. Furthermore, they indirectly interface via a soft-attention mechanism, which makes them comparatively isolated. In this paper, we propose shared-private bilingual word embeddings, which give a closer relationship between the source and target embeddings, and which also reduce the number of model parameters. For similar source and target words, their embeddings tend to share a part of the features and they cooperatively learn these common representation units. Experiments on 5 language pairs belonging to 6 different language families and written in 5 different alphabets demonstrate that the proposed model provides a significant performance boost over the strong baselines with dramatically fewer model parameters.

58. Robust Neural Machine Translation with Doubly Adversarial Inputs [PDF] 返回目录
  ACL 2019.
  Yong Cheng, Lu Jiang, Wolfgang Macherey
Neural machine translation (NMT) often suffers from the vulnerability to noisy perturbations in the input. We propose an approach to improving the robustness of NMT models, which consists of two parts: (1) attack the translation model with adversarial source examples; (2) defend the translation model with adversarial target inputs to improve its robustness against the adversarial source inputs. For the generation of adversarial inputs, we propose a gradient-based method to craft adversarial examples informed by the translation loss over the clean inputs. Experimental results on Chinese-English and English-German translation tasks demonstrate that our approach achieves significant improvements (2.8 and 1.6 BLEU points) over Transformer on standard clean benchmarks as well as exhibiting higher robustness on noisy data.

59. Bridging the Gap between Training and Inference for Neural Machine Translation [PDF] 返回目录
  ACL 2019.
  Wen Zhang, Yang Feng, Fandong Meng, Di You, Qun Liu
Neural Machine Translation (NMT) generates target words sequentially by predicting the next word conditioned on the context words. At training time, it predicts with the ground-truth words as context, while at inference it has to generate the entire sequence from scratch. This discrepancy between the contexts fed at training and inference time leads to error accumulation along the way. Furthermore, word-level training requires strict matching between the generated sequence and the ground-truth sequence, which leads to over-correction of different but reasonable translations. In this paper, we address these issues by sampling context words not only from the ground-truth sequence but also from the model's predicted sequence during training, where the predicted sequence is selected with a sentence-level optimum. Experimental results on Chinese->English and WMT’14 English->German translation tasks demonstrate that our approach can achieve significant improvements on multiple datasets.
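The context-sampling idea can be sketched per decoding step: with some probability, feed the model's own previous prediction instead of the ground-truth previous word. The fixed sampling probability and the toy decoder step below are assumptions; the paper additionally selects the predicted sequence with a sentence-level oracle and decays the probability over training.

```python
import torch
import torch.nn as nn

def mixed_context_step(decoder_step, prev_gold, prev_pred, sampling_prob):
    """Pick each previous-word input from the model prediction with probability
    sampling_prob, otherwise from the ground truth, then run one decoder step."""
    use_pred = torch.rand(prev_gold.shape) < sampling_prob      # (batch,)
    prev_input = torch.where(use_pred, prev_pred, prev_gold)
    return decoder_step(prev_input)

# Toy decoder step: embed the previous word and predict a distribution over the vocab.
vocab, dim = 50, 16
emb, proj = nn.Embedding(vocab, dim), nn.Linear(dim, vocab)
step = lambda prev: proj(emb(prev))

logits = mixed_context_step(step,
                            prev_gold=torch.randint(0, vocab, (4,)),
                            prev_pred=torch.randint(0, vocab, (4,)),
                            sampling_prob=0.25)
print(logits.shape)   # torch.Size([4, 50])
```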

60. Beyond BLEU: Training Neural Machine Translation with Semantic Similarity [PDF] 返回目录
  ACL 2019.
  John Wieting, Taylor Berg-Kirkpatrick, Kevin Gimpel, Graham Neubig
While most neural machine translation (NMT) systems are still trained using maximum likelihood estimation, recent work has demonstrated that optimizing systems to directly improve evaluation metrics such as BLEU can significantly improve final translation accuracy. However, training with BLEU has some limitations: it doesn’t assign partial credit, it has a limited range of output values, and it can penalize semantically correct hypotheses if they differ lexically from the reference. In this paper, we introduce an alternative reward function for optimizing NMT systems that is based on recent work in semantic similarity. We evaluate on four disparate languages translated to English, and find that training with our proposed metric results in better translations as evaluated by BLEU, semantic similarity, and human evaluation, and also that the optimization procedure converges faster. Analysis suggests that this is because the proposed metric is more conducive to optimization, assigning partial credit and providing more diversity in scores than BLEU.

61. Simple and Effective Paraphrastic Similarity from Parallel Translations [PDF] 返回目录
  ACL 2019.
  John Wieting, Kevin Gimpel, Graham Neubig, Taylor Berg-Kirkpatrick
We present a model and methodology for learning paraphrastic sentence embeddings directly from bitext, removing the time-consuming intermediate step of creating paraphrase corpora. Further, we show that the resulting model can be applied to cross-lingual tasks where it both outperforms and is orders of magnitude faster than more complex state-of-the-art baselines.

62. Unsupervised Question Answering by Cloze Translation [PDF] 返回目录
  ACL 2019.
  Patrick Lewis, Ludovic Denoyer, Sebastian Riedel
Obtaining training data for Question Answering (QA) is time-consuming and resource-intensive, and existing QA datasets are only available for limited domains and languages. In this work, we explore to what extent high quality training data is actually required for Extractive QA, and investigate the possibility of unsupervised Extractive QA. We approach this problem by first learning to generate context, question and answer triples in an unsupervised manner, which we then use to synthesize Extractive QA training data automatically. To generate such triples, we first sample random context paragraphs from a large corpus of documents and then random noun phrases or Named Entity mentions from these paragraphs as answers. Next we convert answers in context to “fill-in-the-blank” cloze questions and finally translate them into natural questions. We propose and compare various unsupervised ways to perform cloze-to-natural question translation, including training an unsupervised NMT model using non-aligned corpora of natural questions and cloze questions as well as a rule-based approach. We find that modern QA models can learn to answer human questions surprisingly well using only synthetic training data. We demonstrate that, without using the SQuAD training data at all, our approach achieves 56.4 F1 on SQuAD v1 (64.5 F1 when the answer is a Named Entity mention), outperforming early supervised models.

63. Bilingual Lexicon Induction through Unsupervised Machine Translation [PDF] 返回目录
  ACL 2019.
  Mikel Artetxe, Gorka Labaka, Eneko Agirre
A recent research line has obtained strong results on bilingual lexicon induction by aligning independently trained word embeddings in two languages and using the resulting cross-lingual embeddings to induce word translation pairs through nearest neighbor or related retrieval methods. In this paper, we propose an alternative approach to this problem that builds on the recent work on unsupervised machine translation. This way, instead of directly inducing a bilingual lexicon from cross-lingual embeddings, we use them to build a phrase-table, combine it with a language model, and use the resulting machine translation system to generate a synthetic parallel corpus, from which we extract the bilingual lexicon using statistical word alignment techniques. As such, our method can work with any word embedding and cross-lingual mapping technique, and it does not require any additional resource besides the monolingual corpus used to train the embeddings. When evaluated on the exact same cross-lingual embeddings, our proposed method obtains an average improvement of 6 accuracy points over nearest neighbor and 4 points over CSLS retrieval, establishing a new state-of-the-art in the standard MUSE dataset.

64. Soft Contextual Data Augmentation for Neural Machine Translation [PDF] 返回目录
  ACL 2019.
  Fei Gao, Jinhua Zhu, Lijun Wu, Yingce Xia, Tao Qin, Xueqi Cheng, Wengang Zhou, Tie-Yan Liu
While data augmentation is an important trick to boost the accuracy of deep learning methods in computer vision tasks, its study in natural language tasks is still very limited. In this paper, we present a novel data augmentation method for neural machine translation. Different from previous augmentation methods that randomly drop, swap or replace words with other words in a sentence, we softly augment a randomly chosen word in a sentence by its contextual mixture of multiple related words. More accurately, we replace the one-hot representation of a word by a distribution (provided by a language model) over the vocabulary, i.e., replacing the embedding of this word by a weighted combination of multiple semantically similar words. Since the weights of those words depend on the contextual information of the word to be replaced, the newly generated sentences capture much richer information than previous augmentation methods. Experimental results on both small scale and large scale machine translation data sets demonstrate the superiority of our method over strong baselines.
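The soft replacement can be sketched as replacing one word's embedding with the expectation of the embedding table under a language-model distribution over the vocabulary. The random "LM" distribution below is a stand-in for a trained language model, and the single augmented position is chosen arbitrarily.

```python
import torch
import torch.nn as nn

vocab, dim = 1000, 64
embedding = nn.Embedding(vocab, dim)

def soft_word_embedding(lm_probs):
    """Embedding of a 'soft word': a probability distribution over the vocabulary
    (from a language model) times the embedding table, i.e. a weighted mixture
    of the embeddings of related words."""
    return lm_probs @ embedding.weight          # (dim,)

sentence = torch.randint(0, vocab, (12,))
hard = embedding(sentence)                      # usual one-hot lookup, (12, dim)

pos = 5                                         # randomly chosen position to augment
lm_probs = torch.softmax(torch.randn(vocab), dim=-1)   # stand-in for P_LM(word | context)
augmented = hard.clone()
augmented[pos] = soft_word_embedding(lm_probs)
print(augmented.shape)                          # torch.Size([12, 64])
```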

65. Depth Growing for Neural Machine Translation [PDF] 返回目录
  ACL 2019.
  Lijun Wu, Yiren Wang, Yingce Xia, Fei Tian, Fei Gao, Tao Qin, Jianhuang Lai, Tie-Yan Liu
While very deep neural networks have shown effectiveness for computer vision and text classification applications, how to increase the network depth of neural machine translation (NMT) models for better translation quality remains a challenging problem. Directly stacking more blocks onto the NMT model results in no improvement and even a drop in performance. In this work, we propose an effective two-stage approach with three specially designed components to construct deeper NMT models, which yields significant improvements over strong Transformer baselines on WMT14 English→German and English→French translation tasks.

66. Generalized Data Augmentation for Low-Resource Translation [PDF] 返回目录
  ACL 2019.
  Mengzhou Xia, Xiang Kong, Antonios Anastasopoulos, Graham Neubig
Low-resource language pairs with a paucity of parallel data pose challenges for machine translation in terms of both adequacy and fluency. Data augmentation utilizing a large amount of monolingual data is regarded as an effective way to alleviate the problem. In this paper, we propose a general framework of data augmentation for low-resource machine translation not only using target-side monolingual data, but also by pivoting through a related high-resource language. Specifically, we experiment with a two-step pivoting method to convert high-resource data to the low-resource language, making best use of available resources to better approximate the true distribution of the low-resource language. First, we inject low-resource words into high-resource sentences through an induced bilingual dictionary. Second, we further edit the high-resource data injected with low-resource words using a modified unsupervised machine translation framework. Extensive experiments on four low-resource datasets show that under extreme low-resource settings, our data augmentation techniques improve translation quality by up to 1.5 to 8 BLEU points compared to supervised back-translation baselines.

67. Better OOV Translation with Bilingual Terminology Mining [PDF] 返回目录
  ACL 2019.
  Matthias Huck, Viktor Hangya, Alexander Fraser
Unseen words, also called out-of-vocabulary words (OOVs), are difficult for machine translation. In neural machine translation, byte-pair encoding can be used to represent OOVs, but they are still often incorrectly translated. We improve the translation of OOVs in NMT using easy-to-obtain monolingual data. We look for OOVs in the text to be translated and translate them using simple-to-construct bilingual word embeddings (BWEs). In our MT experiments we take the 5-best candidates, which is motivated by intrinsic mining experiments. Using all five of the proposed target language words as queries we mine target-language sentences. We then back-translate, forcing the back-translation of each of the five proposed target-language OOV-translation-candidates to be the original source-language OOV. We show that by using this synthetic data to fine-tune our system the translation of OOVs can be dramatically improved. In our experiments we use a system trained on Europarl and mine sentences containing medical terms from monolingual data.

68. Simultaneous Translation with Flexible Policy via Restricted Imitation Learning [PDF] 返回目录
  ACL 2019.
  Baigong Zheng, Renjie Zheng, Mingbo Ma, Liang Huang
Simultaneous translation is widely useful but remains one of the most difficult tasks in NLP. Previous work either uses fixed-latency policies or trains a complicated two-stage model using reinforcement learning. We propose a much simpler single model that adds a “delay” token to the target vocabulary, and design a restricted dynamic oracle to greatly simplify training. Experiments on Chinese <-> English simultaneous translation show that our work leads to flexible policies that achieve better BLEU scores and lower latencies compared to both fixed and RL-learned policies.

69. Target Conditioned Sampling: Optimizing Data Selection for Multilingual Neural Machine Translation [PDF] 返回目录
  ACL 2019.
  Xinyi Wang, Graham Neubig
To improve low-resource Neural Machine Translation (NMT) with a multilingual corpus, training on the most related high-resource language only is generally more effective than using all data available (Neubig and Hu, 2018). However, it remains a question whether a smart data selection strategy can further improve low-resource NMT with data from other auxiliary languages. In this paper, we seek to construct a sampling distribution over all multilingual data, so that it minimizes the training loss of the low-resource language. Based on this formulation, we propose an efficient algorithm, Target Conditioned Sampling (TCS), which first samples a target sentence, and then conditionally samples its source sentence. Experiments show TCS brings significant gains of up to 2 BLEU points on three of the four languages we test, with minimal training overhead.

70. Unsupervised Paraphrasing without Translation [PDF] 返回目录
  ACL 2019.
  Aurko Roy, David Grangier
Paraphrasing is an important task demonstrating the ability to abstract semantic content from its surface form. Recent literature on automatic paraphrasing is dominated by methods leveraging machine translation as an intermediate step. This contrasts with humans, who can paraphrase without necessarily being bilingual. This work proposes to learn paraphrasing models only from a monolingual corpus. To that end, we propose a residual variant of vector-quantized variational auto-encoder. Our experiments consider paraphrase identification, and paraphrasing for training set augmentation, comparing to supervised and unsupervised translation-based approaches. Monolingual paraphrasing is shown to outperform unsupervised translation in all contexts. The comparison with supervised MT is more mixed: monolingual paraphrasing is interesting for identification and augmentation but supervised MT is superior for generation.

71. Reducing Word Omission Errors in Neural Machine Translation: A Contrastive Learning Approach [PDF] 返回目录
  ACL 2019.
  Zonghan Yang, Yong Cheng, Yang Liu, Maosong Sun
While neural machine translation (NMT) has achieved remarkable success, NMT systems are prone to make word omission errors. In this work, we propose a contrastive learning approach to reducing word omission errors in NMT. The basic idea is to enable the NMT model to assign a higher probability to a ground-truth translation and a lower probability to an erroneous translation, which is automatically constructed from the ground-truth translation by omitting words. We design different types of negative examples depending on the number of omitted words, word frequency, and part of speech. Experiments on Chinese-to-English, German-to-English, and Russian-to-English translation tasks show that our approach is effective in reducing word omission errors and achieves better translation performance than three baseline methods.
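The core training signal is a contrast between the likelihood of the ground-truth translation and that of a corrupted version with words omitted. Below is a minimal max-margin sketch of such a loss in PyTorch, assuming a hypothetical `log_prob(src, tgt)` scorer; it is one plausible instantiation of the idea, not the authors' released code:

```python
import torch

def contrastive_omission_loss(log_prob, src, gold_tgt, omitted_tgt, margin=1.0):
    """Hinge-style contrastive loss: the gold translation should out-score an
    omission-corrupted one by at least `margin` in log-probability.
    `log_prob(src, tgt)` is a hypothetical scorer returning a scalar tensor."""
    pos = log_prob(src, gold_tgt)         # log P(gold translation | source)
    neg = log_prob(src, omitted_tgt)      # log P(corrupted translation | source)
    return torch.clamp(margin - (pos - neg), min=0.0)

# Toy negative example: omit one word from the reference translation.
gold = ["the", "doctor", "examined", "the", "patient"]
negative = gold[:2] + gold[3:]            # drops "examined"

# Toy scorer standing in for a real NMT model's sentence log-probability.
toy_scorer = lambda src, tgt: torch.tensor(-0.5 * len(tgt), dtype=torch.float)
loss = contrastive_omission_loss(toy_scorer, ["src"], gold, negative)
```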

72. Exploiting Sentential Context for Neural Machine Translation [PDF] 返回目录
  ACL 2019.
  Xing Wang, Zhaopeng Tu, Longyue Wang, Shuming Shi
In this work, we present novel approaches to exploit sentential context for neural machine translation (NMT). Specifically, we show that a shallow sentential context extracted from the top encoder layer only, can improve translation performance via contextualizing the encoding representations of individual words. Next, we introduce a deep sentential context, which aggregates the sentential context representations from all of the internal layers of the encoder to form a more comprehensive context representation. Experimental results on the WMT14 English-German and English-French benchmarks show that our model consistently improves performance over the strong Transformer model, demonstrating the necessity and effectiveness of exploiting sentential context for NMT.

73. A Multi-Task Architecture on Relevance-based Neural Query Translation [PDF] 返回目录
  ACL 2019.
  Sheikh Muhammad Sarwar, Hamed Bonab, James Allan
We describe a multi-task learning approach to train a Neural Machine Translation (NMT) model with a Relevance-based Auxiliary Task (RAT) for search query translation. The translation process for the Cross-lingual Information Retrieval (CLIR) task is usually treated as a black box and performed as an independent step. However, an NMT model trained on sentence-level parallel data is not aware of the vocabulary distribution of the retrieval corpus. We address this problem and propose a multi-task learning architecture that achieves a 16% improvement over a strong baseline on an Italian-English query-document dataset. We show using both quantitative and qualitative analysis that our model generates balanced and precise translations with the regularization effect it achieves from the multi-task learning paradigm.

74. Latent Variable Model for Multi-modal Translation [PDF] 返回目录
  ACL 2019.
  Iacer Calixto, Miguel Rios, Wilker Aziz
In this work, we propose to model the interaction between visual and textual features for multi-modal neural machine translation (MMT) through a latent variable model. This latent variable can be seen as a multi-modal stochastic embedding of an image and its description in a foreign language. It is used in a target-language decoder and also to predict image features. Importantly, our model formulation utilises visual and textual inputs during training but does not require that images be available at test time. We show that our latent variable MMT formulation improves considerably over strong baselines, including a multi-task learning approach (Elliott and Kadar, 2017) and a conditional variational auto-encoder approach (Toyama et al., 2016). Finally, we show improvements due to (i) predicting image features in addition to only conditioning on them, (ii) imposing a constraint on the KL term to promote models with non-negligible mutual information between inputs and latent variable, and (iii) by training on additional target-language image descriptions (i.e. synthetic data).

75. Lattice Transformer for Speech Translation [PDF] 返回目录
  ACL 2019.
  Pei Zhang, Niyu Ge, Boxing Chen, Kai Fan
Recent advances in sequence modeling have highlighted the strengths of the transformer architecture, especially in achieving state-of-the-art machine translation results. However, depending on the up-stream systems, e.g., speech recognition, or word segmentation, the input to translation system can vary greatly. The goal of this work is to extend the attention mechanism of the transformer to naturally consume the lattice in addition to the traditional sequential input. We first propose a general lattice transformer for speech translation where the input is the output of the automatic speech recognition (ASR) which contains multiple paths and posterior scores. To leverage the extra information from the lattice structure, we develop a novel controllable lattice attention mechanism to obtain latent representations. On the LDC Spanish-English speech translation corpus, our experiments show that lattice transformer generalizes significantly better and outperforms both a transformer baseline and a lattice LSTM. Additionally, we validate our approach on the WMT 2017 Chinese-English translation task with lattice inputs from different BPE segmentations. In this task, we also observe the improvements over strong baselines.

76. Distilling Translations with Visual Awareness [PDF] 返回目录
  ACL 2019.
  Julia Ive, Pranava Madhyastha, Lucia Specia
Previous work on multimodal machine translation has shown that visual information is only needed in very specific cases, for example in the presence of ambiguous words where the textual context is not sufficient. As a consequence, models tend to learn to ignore this information. We propose a translate-and-refine approach to this problem where images are only used by a second stage decoder. This approach is trained jointly to generate a good first draft translation and to improve over this draft by (i) making better use of the target language textual context (both left and right-side contexts) and (ii) making use of visual context. This approach leads to the state of the art results. Additionally, we show that it has the ability to recover from erroneous or missing words in the source language.

77. Paraphrases as Foreign Languages in Multilingual Neural Machine Translation [PDF] 返回目录
  ACL 2019. Student Research Workshop
  Zhong Zhou, Matthias Sperber, Alexander Waibel
Paraphrases, rewordings of the same semantic meaning, are useful for improving generalization and translation. Unlike previous works that only explore paraphrases at the word or phrase level, we use different translations of the whole training data that are consistent in structure as paraphrases at the corpus level. We treat paraphrases as foreign languages, tag source sentences with paraphrase labels, and train on parallel paraphrases in the style of multilingual Neural Machine Translation (NMT). Our multi-paraphrase NMT that trains only on two languages outperforms the multilingual baselines. Adding paraphrases improves the rare word translation and increases entropy and diversity in lexical choice. Adding the source paraphrases boosts performance better than adding the target ones, while adding both lifts performance further. We achieve a BLEU score of 57.2 for French-to-English translation using 24 corpus-level paraphrases of the Bible, which outperforms the multilingual baselines and is +34.7 above the single-source single-target NMT baseline.

78. Improving Mongolian-Chinese Neural Machine Translation with Morphological Noise [PDF] 返回目录
  ACL 2019. Student Research Workshop
  Yatu Ji, Hongxu Hou, Chen Junjie, Nier Wu
For the translation of an agglutinative language such as Mongolian, unknown (UNK) words not only come from the quite restricted vocabulary, but also mostly from the translation model's misunderstanding of morphological changes. In this study, we introduce a new adversarial training model to alleviate the UNK problem in Mongolian-Chinese machine translation. The training process can be described as three adversarial sub-models (generator, value screener and discriminator) playing a win-win game. In this game, the added screener emphasizes that the discriminator should pay attention to the Mongolian morphological noise added in the form of pseudo-data, and it improves training efficiency. The experimental results show state-of-the-art performance on the newly emerged Mongolian-Chinese task. Under this premise, the training time is greatly shortened.

79. Unsupervised Pretraining for Neural Machine Translation Using Elastic Weight Consolidation [PDF] 返回目录
  ACL 2019. Student Research Workshop
  Dušan Variš, Ondřej Bojar
This work presents our ongoing research of unsupervised pretraining in neural machine translation (NMT). In our method, we initialize the weights of the encoder and decoder with two language models that are trained with monolingual data and then fine-tune the model on parallel data using Elastic Weight Consolidation (EWC) to avoid forgetting of the original language modeling task. We compare the regularization by EWC with the previous work that focuses on regularization by language modeling objectives. The positive result is that using EWC with the decoder achieves BLEU scores similar to the previous work. However, the model converges 2-3 times faster and does not require the original unlabeled training data during the fine-tuning stage. In contrast, the regularization using EWC is less effective if the original and new tasks are not closely related. We show that initializing the bidirectional NMT encoder with a left-to-right language model and forcing the model to remember the original left-to-right language modeling task limits the learning capacity of the encoder for the whole bidirectional context.
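Elastic Weight Consolidation adds a quadratic penalty that pulls the fine-tuned parameters back toward the pretrained language-model weights, scaled by their estimated (diagonal Fisher) importance. A minimal PyTorch sketch of that penalty, with the Fisher estimates and pretrained weights assumed to be precomputed; it is a generic EWC illustration rather than the authors' implementation:

```python
import torch
import torch.nn as nn

def ewc_penalty(model, pretrained_params, fisher, lam=0.1):
    """EWC regularizer: (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2, where
    theta* are the pretrained language-model weights and F is a diagonal
    Fisher importance estimate, both assumed to be precomputed."""
    penalty = torch.zeros(())
    for name, param in model.named_parameters():
        if name in fisher:
            penalty = penalty + (fisher[name] * (param - pretrained_params[name]) ** 2).sum()
    return 0.5 * lam * penalty

# Toy usage: pretend a 4->4 linear layer is the pretrained encoder.
model = nn.Linear(4, 4)
theta_star = {n: p.detach().clone() for n, p in model.named_parameters()}
fisher_diag = {n: torch.ones_like(p) for n, p in model.named_parameters()}
# During fine-tuning: loss = translation_loss + ewc_penalty(model, theta_star, fisher_diag)
print(ewc_penalty(model, theta_star, fisher_diag))   # zero before any parameter drifts
```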

80. Attention over Heads: A Multi-Hop Attention for Neural Machine Translation [PDF] 返回目录
  ACL 2019. Student Research Workshop
  Shohei Iida, Ryuichiro Kimura, Hongyi Cui, Po-Hsuan Hung, Takehito Utsuro, Masaaki Nagata
In this paper, we propose a multi-hop attention for the Transformer. It refines the attention for an output symbol by integrating that of each head, and consists of two hops. The first hop attention is the scaled dot-product attention which is the same attention mechanism used in the original Transformer. The second hop attention is a combination of multi-layer perceptron (MLP) attention and head gate, which efficiently increases the complexity of the model by adding dependencies between heads. We demonstrate that the translation accuracy of the proposed multi-hop attention outperforms the baseline Transformer significantly, +0.85 BLEU point for the IWSLT-2017 German-to-English task and +2.58 BLEU point for the WMT-2017 German-to-English task. We also find that the number of parameters required for a multi-hop attention is smaller than that for stacking another self-attention layer and the proposed model converges significantly faster than the original Transformer.

81. From Bilingual to Multilingual Neural Machine Translation by Incremental Training [PDF] 返回目录
  ACL 2019. Student Research Workshop
  Carlos Escolano, Marta R. Costa-jussà, José A. R. Fonollosa
Multilingual Neural Machine Translation approaches are based on the use of task-specific models, and the addition of one more language can only be done by retraining the whole system. In this work, we propose a new training schedule, based on joint training and language-independent encoder/decoder modules, that allows the system to scale to more languages without modification of the previous components and allows for zero-shot translation. This work in progress shows results close to the state of the art on the WMT task.

82. Normalizing Non-canonical Turkish Texts Using Machine Translation Approaches [PDF] 返回目录
  ACL 2019. Student Research Workshop
  Talha Çolakoğlu, Umut Sulubacak, Ahmet Cüneyd Tantuğ
With the growth of the social web, user-generated text data has reached unprecedented sizes. Non-canonical text normalization provides a way to exploit this as a practical source of training data for language processing systems. The state of the art in Turkish text normalization is composed of a token level pipeline of modules, heavily dependent on external linguistic resources and manually defined rules. Instead, we propose a fully automated, context-aware machine translation approach with fewer stages of processing. Experiments with various implementations of our approach show that we are able to surpass the current best-performing system by a large margin.

83. English-Indonesian Neural Machine Translation for Spoken Language Domains [PDF] 返回目录
  ACL 2019. Student Research Workshop
  Meisyarah Dwiastuti
In this work, we conduct a study on Neural Machine Translation (NMT) for English-Indonesian (EN-ID) and Indonesian-English (ID-EN). We focus on spoken language domains, namely colloquial and speech languages. We build NMT systems using the Transformer model for both translation directions and implement domain adaptation, in which we train our pre-trained NMT systems on speech language (in-domain) data. Moreover, we conduct an evaluation on how the domain-adaptation method in our EN-ID system can result in more formal translation outputs.

84. Demonstration of a Neural Machine Translation System with Online Learning for Translators [PDF] 返回目录
  ACL 2019. System Demonstrations
  Miguel Domingo, Mercedes García-Martínez, Amando Estela Pastor, Laurent Bié, Alexander Helle, Álvaro Peris, Francisco Casacuberta, Manuel Herranz Pérez
We present a demonstration of our system, which implements online learning for neural machine translation in a production environment. These techniques allow the system to continuously learn from the corrections provided by the translators. We implemented an end-to-end platform integrating our machine translation servers with one of the most common user interfaces for professional translators: SDL Trados Studio. We aim to save post-editing effort as the machine continuously learns from its mistakes and adapts the models to a specific domain or user style.

85. English-Ethiopian Languages Statistical Machine Translation [PDF] 返回目录
  ACL 2019.
  Solomon Teferra Abate, Michael Melese, Martha Yifiru Tachbelie, Million Meshesha, Solomon Atinafu, Wondwossen Mulugeta, Yaregal Assabie, Hafte Abera, Biniyam Ephrem, Tewodros Gebreselassie, Wondimagegnhue Tsegaye Tufa, Amanuel Lemma, Tsegaye Andargie, Seifedin Shifaw
In this paper, we describe an attempt towards the development of parallel corpora for English and Ethiopian Languages, such as Amharic, Tigrigna, Afan-Oromo, Wolaytta and Ge’ez. The corpora are used for conducting bi-directional SMT experiments. The BLEU scores of the bi-directional SMT systems show a promising result. The morphological richness of the Ethiopian languages has a great impact on the performance of SMT especially when the targets are Ethiopian languages.

86. Benchmarking Neural Machine Translation for Southern African Languages [PDF] 返回目录
  ACL 2019.
  Jade Abbott, Laura Martinus
Unlike major Western languages, most African languages are very low-resourced. Furthermore, the resources that do exist are often scattered and difficult to obtain and discover. As a result, the data and code for existing research have rarely been shared, meaning researchers struggle to reproduce reported results, and almost no publicly available benchmarks or leaderboards for African machine translation models exist. To start to address these problems, we trained neural machine translation models for a subset of Southern African languages on publicly available datasets. We provide the code for training the models and evaluate the models on a newly released evaluation set, with the aim of starting a leaderboard for Southern African languages and spurring future research in the field.

87. Assessing the Ability of Neural Machine Translation Models to Perform Syntactic Rewriting [PDF] 返回目录
  ACL 2019.
  Jahkel Robin, Alvin Grissom II, Matthew Roselli
We describe work in progress for evaluating performance of sequence-to-sequence neural networks on the task of syntax-based reordering for rules applicable to simultaneous machine translation. We train models that attempt to rewrite English sentences using rules that are commonly used by human interpreters. We examine the performance of these models to determine which forms of rewriting are more difficult for them to learn and which architectures are the best at learning them.

88. Creating a Corpus for Russian Data-to-Text Generation Using Neural Machine Translation and Post-Editing [PDF] 返回目录
  ACL 2019. the 7th Workshop on Balto-Slavic Natural Language Processing
  Anastasia Shimorina, Elena Khasanova, Claire Gardent
In this paper, we propose an approach for semi-automatically creating a data-to-text (D2T) corpus for Russian that can be used to learn a D2T natural language generation model. An error analysis of the output of an English-to-Russian neural machine translation system shows that 80% of the automatically translated sentences contain an error and that 53% of all translation errors bear on named entities (NE). We therefore focus on named entities and introduce two post-editing techniques for correcting wrongly translated NEs.

89. Building English-to-Serbian Machine Translation System for IMDb Movie Reviews [PDF] 返回目录
  ACL 2019. the 7th Workshop on Balto-Slavic Natural Language Processing
  Pintu Lohar, Maja Popović, Andy Way
This paper reports the results of the first experiment dealing with the challenges of building a machine translation system for user-generated content involving a complex South Slavic language. We focus on translation of English IMDb user movie reviews into Serbian, in a low-resource scenario. We explore potentials and limits of (i) phrase-based and neural machine translation systems trained on out-of-domain clean parallel data from news articles (ii) creating additional synthetic in-domain parallel corpus by machine-translating the English IMDb corpus into Serbian. Our main findings are that morphology and syntax are better handled by the neural approach than by the phrase-based approach even in this low-resource mismatched domain scenario, however the situation is different for the lexical aspect, especially for person names. This finding also indicates that in general, machine translation of person names into Slavic languages (especially those which require/allow transcription) should be investigated more systematically.

90. Filling Gender & Number Gaps in Neural Machine Translation with Black-box Context Injection [PDF] 返回目录
  ACL 2019. the First Workshop on Gender Bias in Natural Language Processing
  Amit Moryossef, Roee Aharoni, Yoav Goldberg
When translating from a language that does not morphologically mark information such as gender and number into a language that does, translation systems must “guess” this missing information, often leading to incorrect translations in the given context. We propose a black-box approach for injecting the missing information to a pre-trained neural machine translation system, allowing to control the morphological variations in the generated translations without changing the underlying model or training data. We evaluate our method on an English to Hebrew translation task, and show that it is effective in injecting the gender and number information and that supplying the correct information improves the translation accuracy in up to 2.3 BLEU on a female-speaker test set for a state-of-the-art online black-box system. Finally, we perform a fine-grained syntactic analysis of the generated translations that shows the effectiveness of our method.
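The black-box trick amounts to prepending a short gendered context prefix to the source sentence before sending it to the translation system and removing that prefix's translation afterwards. A toy sketch assuming a hypothetical `translate` callable and a simplified prefix-stripping heuristic; the exact prefixes and post-processing in the paper may differ:

```python
def translate_with_context(translate, sentence, speaker_gender="female"):
    """Inject speaker gender into a black-box MT system by prepending a short
    context prefix, then drop the prefix's translation from the output.
    `translate(text)` stands in for any black-box MT call."""
    prefix = "She said to them:" if speaker_gender == "female" else "He said to them:"
    output = translate(prefix + " " + sentence)
    # Simplification: assume the translated prefix also ends with a colon.
    head, sep, tail = output.partition(":")
    return tail.strip() if sep else output

# Toy "translator" that just echoes its input, to show the wiring.
print(translate_with_context(lambda s: s, "I am happy"))   # -> "I am happy"
```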

91. Equalizing Gender Bias in Neural Machine Translation with Word Embeddings Techniques [PDF] 返回目录
  ACL 2019. the First Workshop on Gender Bias in Natural Language Processing
  Joel Escudé Font, Marta R. Costa-jussà
Neural machine translation has significantly pushed forward the quality of the field. However, there are remaining big issues with the output translations and one of them is fairness. Neural models are trained on large text corpora which contain biases and stereotypes. As a consequence, models inherit these social biases. Recent methods have shown results in reducing gender bias in other natural language processing tools such as word embeddings. We take advantage of the fact that word embeddings are used in neural machine translation to propose a method to equalize gender biases in neural machine translation using these representations. Specifically, we propose, experiment and analyze the integration of two debiasing techniques over GloVe embeddings in the Transformer translation architecture. We evaluate our proposed system on the WMT English-Spanish benchmark task, showing gains up to one BLEU point. As for the gender bias evaluation, we generate a test set of occupations and we show that our proposed system learns to equalize existing biases from the baseline system.

92. On Measuring Gender Bias in Translation of Gender-neutral Pronouns [PDF] 返回目录
  ACL 2019. the First Workshop on Gender Bias in Natural Language Processing
  Won Ik Cho, Ji Won Kim, Seok Min Kim, Nam Soo Kim
Ethics regarding social bias has recently thrown striking issues in natural language processing. Especially for gender-related topics, the need for a system that reduces the model bias has grown in areas such as image captioning, content recommendation, and automated employment. However, detection and evaluation of gender bias in the machine translation systems are not yet thoroughly investigated, for the task being cross-lingual and challenging to define. In this paper, we propose a scheme for making up a test set that evaluates the gender bias in a machine translation system, with Korean, a language with gender-neutral pronouns. Three word/phrase sets are primarily constructed, each incorporating positive/negative expressions or occupations; all the terms are gender-independent or at least not biased to one side severely. Then, additional sentence lists are constructed concerning formality of the pronouns and politeness of the sentences. With the generated sentence set of size 4,236 in total, we evaluate gender bias in conventional machine translation systems utilizing the proposed measure, which is termed here as translation gender bias index (TGBI). The corpus and the code for evaluation is available on-line.

93. Assessing Back-Translation as a Corpus Generation Strategy for non-English Tasks: A Study in Reading Comprehension and Word Sense Disambiguation [PDF] 返回目录
  ACL 2019. the 13th Linguistic Annotation Workshop
  Fabricio Monsalve, Kervy Rivas Rojas, Marco Antonio Sobrevilla Cabezudo, Arturo Oncevay
Corpora curated by experts have sustained Natural Language Processing mainly in English, but the expensiveness of corpora creation is a barrier for the development in further languages. Thus, we propose a corpus generation strategy that only requires a machine translation system between English and the target language in both directions, where we filter the best translations by computing automatic translation metrics and the task performance score. By studying Reading Comprehension in Spanish and Word Sense Disambiguation in Portuguese, we identified that a more quality-oriented metric has high potential in the corpora selection without degrading the task performance. We conclude that it is possible to systematise the building of quality corpora using machine translation and automatic metrics, besides some prior effort to clean and process the data.

94. An Evaluation of Language-Agnostic Inner-Attention-Based Representations in Machine Translation [PDF] 返回目录
  ACL 2019. the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019)
  Alessandro Raganato, Raúl Vázquez, Mathias Creutz, Jörg Tiedemann
In this paper, we explore a multilingual translation model with a cross-lingually shared layer that can be used as fixed-size sentence representation in different downstream tasks. We systematically study the impact of the size of the shared layer and the effect of including additional languages in the model. In contrast to related previous work, we demonstrate that the performance in translation does correlate with trainable downstream tasks. In particular, we show that larger intermediate layers not only improve translation quality, especially for long sentences, but also push the accuracy of trainable classification tasks. On the other hand, shorter representations lead to increased compression that is beneficial in non-trainable similarity tasks. We hypothesize that the training procedure on the downstream task enables the model to identify the encoded information that is useful for the specific task whereas non-trainable benchmarks can be confused by other types of information also encoded in the representation of a sentence.

95. Auto-Encoding Variational Neural Machine Translation [PDF] 返回目录
  ACL 2019. the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019)
  Bryan Eikema, Wilker Aziz
We present a deep generative model of bilingual sentence pairs for machine translation. The model generates source and target sentences jointly from a shared latent representation and is parameterised by neural networks. We perform efficient training using amortised variational inference and reparameterised gradients. Additionally, we discuss the statistical implications of joint modelling and propose an efficient approximation to maximum a posteriori decoding for fast test-time predictions. We demonstrate the effectiveness of our model in three machine translation scenarios: in-domain training, mixed-domain training, and learning from a mix of gold-standard and synthetic data. Our experiments show consistently that our joint formulation outperforms conditional modelling (i.e. standard neural machine translation) in all such scenarios.

96. Incremental Domain Adaptation for Neural Machine Translation in Low-Resource Settings [PDF] 返回目录
  ACL 2019. the Fourth Arabic Natural Language Processing Workshop
  Marimuthu Kalimuthu, Michael Barz, Daniel Sonntag
We study the problem of incremental domain adaptation of a generic neural machine translation model with limited resources (e.g., budget and time) for human translations or model training. In this paper, we propose a novel query strategy for selecting “unlabeled” samples from a new domain based on sentence embeddings for Arabic. We accelerate the fine-tuning process of the generic model to the target domain. Specifically, our approach estimates the informativeness of instances from the target domain by comparing the distance of their sentence embeddings to embeddings from the generic domain. We perform machine translation experiments (Ar-to-En direction) for comparing a random sampling baseline with our new approach, similar to active learning, using two small update sets for simulating the work of human translators. For the prescribed setting we can save more than 50% of the annotation costs without loss in quality, demonstrating the effectiveness of our approach.
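The query strategy boils down to ranking candidate target-domain sentences by how far their embeddings fall from the generic-domain distribution. A minimal numpy sketch of one simple instantiation, scoring candidates by cosine distance to the generic-domain centroid; the sentence-embedding model itself is assumed and not shown, and the paper's exact distance measure may differ:

```python
import numpy as np

def select_informative(candidate_embs, generic_embs, budget=100):
    """Rank candidate sentences (rows of `candidate_embs`) by cosine distance
    from the centroid of generic-domain embeddings and return the indices of
    the `budget` most distant (most informative) candidates."""
    centroid = generic_embs.mean(axis=0)
    centroid /= np.linalg.norm(centroid)
    normed = candidate_embs / np.linalg.norm(candidate_embs, axis=1, keepdims=True)
    distances = 1.0 - normed @ centroid          # cosine distance to the centroid
    return np.argsort(-distances)[:budget]

# Usage with toy 4-dimensional embeddings:
rng = np.random.default_rng(0)
idx = select_informative(rng.normal(size=(1000, 4)), rng.normal(size=(500, 4)), budget=10)
print(idx)
```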

97. Morphology-aware Word-Segmentation in Dialectal Arabic Adaptation of Neural Machine Translation [PDF] 返回目录
  ACL 2019. the Fourth Arabic Natural Language Processing Workshop
  Ahmed Tawfik, Mahitab Emam, Khaled Essam, Robert Nabil, Hany Hassan
Parallel corpora available for building machine translation (MT) models for dialectal Arabic (DA) are rather limited. The scarcity of resources has prompted the use of Modern Standard Arabic (MSA) abundant resources to complement the limited dialectal resource. However, dialectal clitics often differ between MSA and DA. This paper compares morphology-aware DA word segmentation to other word segmentation approaches like Byte Pair Encoding (BPE) and Sub-word Regularization (SR). A set of experiments conducted on Egyptian Arabic (EA), Levantine Arabic (LA), and Gulf Arabic (GA) show that a sufficiently accurate morphology-aware segmentation used in conjunction with BPE outperforms the other word segmentation approaches.

98. Translating Between Morphologically Rich Languages: An Arabic-to-Turkish Machine Translation System [PDF] 返回目录
  ACL 2019. the Fourth Arabic Natural Language Processing Workshop
  İlknur Durgar El-Kahlout, Emre Bektaş, Naime Şeyma Erdem, Hamza Kaya
This paper introduces the work on building a machine translation system for Arabic-to-Turkish in the news domain. Our work includes collecting parallel datasets in several ways for a new and low-resourced language pair, building baseline systems with state-of-the-art architectures, and developing language-specific algorithms for better translation. Parallel datasets are mainly collected in three different ways: i) translating Arabic texts into Turkish by professional translators, ii) exploiting the web for open-source Arabic-Turkish parallel texts, iii) using back-translation. We performed preliminary experiments for Arabic-to-Turkish machine translation with neural (Marian) machine translation tools, with a novel morphologically motivated vocabulary reduction method.

99. Unsupervised Compositional Translation of Multiword Expressions [PDF] 返回目录
  ACL 2019. the Joint Workshop on Multiword Expressions and WordNet (MWE-WN 2019)
  Pablo Gamallo, Marcos Garcia
This article describes a dependency-based strategy that uses compositional distributional semantics and cross-lingual word embeddings to translate multiword expressions (MWEs). Our unsupervised approach performs translation as a process of word contextualization by taking into account lexico-syntactic contexts and selectional preferences. This strategy is suited to translate phraseological combinations and phrases whose constituent words are lexically restricted by each other. Several experiments in adjective-noun and verb-object compounds show that mutual contextualization (co-compositionality) clearly outperforms other compositional methods. The paper also contributes with a new freely available dataset of English-Spanish MWEs used to validate the proposed compositional strategy.

100. Proceedings of the Fourth Conference on Machine Translation (Volume 1: Research Papers) [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 1: Research Papers)
  Ondřej Bojar, Rajen Chatterjee, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, André Martins, Christof Monz, Matteo Negri, Aurélie Névéol, Mariana Neves, Matt Post, Marco Turchi, Karin Verspoor


101. Saliency-driven Word Alignment Interpretation for Neural Machine Translation [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 1: Research Papers)
  Shuoyang Ding, Hainan Xu, Philipp Koehn
Despite their original goal to jointly learn to align and translate, Neural Machine Translation (NMT) models, especially Transformer, are often perceived as not learning interpretable word alignments. In this paper, we show that NMT models do learn interpretable word alignments, which could only be revealed with proper interpretation methods. We propose a series of such methods that are model-agnostic, are able to be applied either offline or online, and do not require parameter update or architectural change. We show that under the force decoding setup, the alignments induced by our interpretation method are of better quality than fast-align for some systems, and when performing free decoding, they agree well with the alignments induced by automatic alignment tools.

102. Improving Zero-shot Translation with Language-Independent Constraints [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 1: Research Papers)
  Ngoc-Quan Pham, Jan Niehues, Thanh-Le Ha, Alexander Waibel
An important concern in training multilingual neural machine translation (NMT) is to translate between language pairs unseen during training, i.e zero-shot translation. Improving this ability kills two birds with one stone by providing an alternative to pivot translation which also allows us to better understand how the model captures information between languages. In this work, we carried out an investigation on this capability of the multilingual NMT models. First, we intentionally create an encoder architecture which is independent with respect to the source language. Such experiments shed light on the ability of NMT encoders to learn multilingual representations, in general. Based on such proof of concept, we were able to design regularization methods into the standard Transformer model, so that the whole architecture becomes more robust in zero-shot conditions. We investigated the behaviour of such models on the standard IWSLT 2017 multilingual dataset. We achieved an average improvement of 2.23 BLEU points across 12 language pairs compared to the zero-shot performance of a state-of-the-art multilingual system. Additionally, we carry out further experiments in which the effect is confirmed even for language pairs with multiple intermediate pivots.

103. Incorporating Source Syntax into Transformer-Based Neural Machine Translation [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 1: Research Papers)
  Anna Currey, Kenneth Heafield
Transformer-based neural machine translation (NMT) has recently achieved state-of-the-art performance on many machine translation tasks. However, recent work (Raganato and Tiedemann, 2018; Tang et al., 2018; Tran et al., 2018) has indicated that Transformer models may not learn syntactic structures as well as their recurrent neural network-based counterparts, particularly in low-resource cases. In this paper, we incorporate constituency parse information into a Transformer NMT model. We leverage linearized parses of the source training sentences in order to inject syntax into the Transformer architecture without modifying it. We introduce two methods: a multi-task machine translation and parsing model with a single encoder and decoder, and a mixed encoder model that learns to translate directly from parsed and unparsed source sentences. We evaluate our methods on low-resource translation from English into twenty target languages, showing consistent improvements of 1.3 BLEU on average across diverse target languages for the multi-task technique. We further evaluate the models on full-scale WMT tasks, finding that the multi-task model aids low- and medium-resource NMT but degrades high-resource English-German translation.

104. Generalizing Back-Translation in Neural Machine Translation [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 1: Research Papers)
  Miguel Graça, Yunsu Kim, Julian Schamper, Shahram Khadivi, Hermann Ney
Back-translation — data augmentation by translating target monolingual data — is a crucial component in modern neural machine translation (NMT). In this work, we reformulate back-translation in the scope of cross-entropy optimization of an NMT model, clarifying its underlying mathematical assumptions and approximations beyond its heuristic usage. Our formulation covers broader synthetic data generation schemes, including sampling from a target-to-source NMT model. With this formulation, we point out fundamental problems of the sampling-based approaches and propose to remedy them by (i) disabling label smoothing for the target-to-source model and (ii) sampling from a restricted search space. Our statements are investigated on the WMT 2018 German <-> English news translation task.
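One of the proposed remedies, sampling from a restricted search space, can be illustrated as top-k sampling over the target-to-source model's next-token distribution (one possible restriction; the paper's exact search-space constraint may differ). A small numpy sketch under the assumption that the model exposes a per-step probability vector:

```python
import numpy as np

def topk_sample(probs, k=10, rng=None):
    """Sample a token id from the k most probable entries of `probs`
    (a 1-D array of next-token probabilities), renormalizing their mass.
    This restricts sampling to a smaller search space than full sampling."""
    rng = rng or np.random.default_rng()
    top = np.argsort(probs)[-k:]                 # indices of the k best tokens
    p = probs[top] / probs[top].sum()            # renormalize over the top-k
    return int(rng.choice(top, p=p))

# Toy next-token distribution over a 6-word vocabulary:
print(topk_sample(np.array([0.02, 0.5, 0.3, 0.1, 0.05, 0.03]), k=3))
```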

105. Tagged Back-Translation [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 1: Research Papers)
  Isaac Caswell, Ciprian Chelba, David Grangier
Recent work in Neural Machine Translation (NMT) has shown significant quality gains from noised-beam decoding during back-translation, a method to generate synthetic parallel data. We show that the main role of such synthetic noise is not to diversify the source side, as previously suggested, but simply to indicate to the model that the given source is synthetic. We propose a simpler alternative to noising techniques, consisting of tagging back-translated source sentences with an extra token. Our results on WMT outperform noised back-translation in English-Romanian and match performance on English-German, redefining the state-of-the-art on the former.
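Tagged back-translation is essentially a preprocessing step: every synthetic source sentence produced by back-translation gets an extra reserved token so the model can tell it apart from genuine bitext. A one-line sketch of that step; the tag string itself is arbitrary:

```python
BT_TAG = "<BT>"  # reserved token added to the source-side vocabulary

def tag_back_translated(source_sentences):
    """Prepend the back-translation tag to each synthetic source sentence."""
    return [f"{BT_TAG} {s}" for s in source_sentences]

synthetic = ["das ist ein test", "noch ein satz"]
print(tag_back_translated(synthetic))  # ['<BT> das ist ein test', '<BT> noch ein satz']
```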

106. The Effect of Translationese in Machine Translation Test Sets [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 1: Research Papers)
  Mike Zhang, Antonio Toral
The effect of translationese has been studied in the field of machine translation (MT), mostly with respect to training data. We study in depth the effect of translationese on test data, using the test sets from the last three editions of WMT’s news shared task, containing 17 translation directions. We show evidence that (i) the use of translationese in test sets results in inflated human evaluation scores for MT systems; (ii) in some cases system rankings do change and (iii) the impact translationese has on a translation direction is inversely correlated to the translation quality attainable by state-of-the-art MT systems for that direction.

107. Customizing Neural Machine Translation for Subtitling [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 1: Research Papers)
  Evgeny Matusov, Patrick Wilken, Yota Georgakopoulou
In this work, we customized a neural machine translation system for translation of subtitles in the domain of entertainment. The neural translation model was adapted to the subtitling content and style and extended by a simple, yet effective technique for utilizing inter-sentence context for short sentences such as dialog turns. The main contribution of the paper is a novel subtitle segmentation algorithm that predicts the end of a subtitle line given the previous word-level context using a recurrent neural network learned from human segmentation decisions. This model is combined with subtitle length and duration constraints established in the subtitling industry. We conducted a thorough human evaluation with two post-editors (English-to-Spanish translation of a documentary and a sitcom). It showed a notable productivity increase of up to 37% as compared to translating from scratch and significant reductions in human translation edit rate in comparison with the post-editing of the baseline non-adapted system without a learned segmentation model.

108. Integration of Dubbing Constraints into Machine Translation [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 1: Research Papers)
  Ashutosh Saboo, Timo Baumann
Translation systems aim to perform a meaning-preserving conversion of linguistic material (typically text but also speech) from a source to a target language (and, to a lesser degree, the corresponding socio-cultural contexts). Dubbing, i.e., the lip-synchronous translation and revoicing of speech adds to this constraints about the close matching of phonetic and resulting visemic synchrony characteristics of source and target material. There is an inherent conflict between a translation’s meaning preservation and ‘dubbability’ and the resulting trade-off can be controlled by weighing the synchrony constraints. We introduce our work, which to the best of our knowledge is the first of its kind, on integrating synchrony constraints into the machine translation paradigm. We present first results for the integration of synchrony constraints into encoder decoder-based neural machine translation and show that considerably more ‘dubbable’ translations can be achieved with only a small impact on BLEU score, and dubbability improves more steeply than BLEU degrades.

109. Widening the Representation Bottleneck in Neural Machine Translation with Lexical Shortcuts [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 1: Research Papers)
  Denis Emelin, Ivan Titov, Rico Sennrich
The transformer is a state-of-the-art neural translation model that uses attention to iteratively refine lexical representations with information drawn from the surrounding context. Lexical features are fed into the first layer and propagated through a deep network of hidden layers. We argue that the need to represent and propagate lexical features in each layer limits the model’s capacity for learning and representing other information relevant to the task. To alleviate this bottleneck, we introduce gated shortcut connections between the embedding layer and each subsequent layer within the encoder and decoder. This enables the model to access relevant lexical content dynamically, without expending limited resources on storing it within intermediate states. We show that the proposed modification yields consistent improvements over a baseline transformer on standard WMT translation tasks in 5 translation directions (0.9 BLEU on average) and reduces the amount of lexical information passed along the hidden layers. We furthermore evaluate different ways to integrate lexical connections into the transformer architecture and present ablation experiments exploring the effect of proposed shortcuts on model behavior.
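The gated shortcut can be read as a per-dimension gate that mixes the token embedding back into each hidden layer. A minimal PyTorch module sketch of that idea; the exact parameterization in the paper (e.g. separate gates for keys and values) may differ:

```python
import torch
import torch.nn as nn

class LexicalShortcut(nn.Module):
    """Gated connection from the embedding layer to a hidden layer:
    out = g * embedding + (1 - g) * hidden, with g = sigmoid(W [hidden; embedding])."""
    def __init__(self, d_model):
        super().__init__()
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, hidden, embedding):
        g = torch.sigmoid(self.gate(torch.cat([hidden, embedding], dim=-1)))
        return g * embedding + (1.0 - g) * hidden

# Toy usage: batch of 2 sentences, 5 tokens, model dimension 8.
layer = LexicalShortcut(8)
h, e = torch.randn(2, 5, 8), torch.randn(2, 5, 8)
out = layer(h, e)   # same shape as the inputs
```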

110. A High-Quality Multilingual Dataset for Structured Documentation Translation [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 1: Research Papers)
  Kazuma Hashimoto, Raffaella Buschiazzo, James Bradbury, Teresa Marshall, Richard Socher, Caiming Xiong
This paper presents a high-quality multilingual dataset for the documentation domain to advance research on localization of structured text. Unlike widely-used datasets for translation of plain text, we collect XML-structured parallel text segments from the online documentation for an enterprise software platform. These Web pages have been professionally translated from English into 16 languages and maintained by domain experts, and around 100,000 text segments are available for each language pair. We build and evaluate translation models for seven target languages from English, with several different copy mechanisms and an XML-constrained beam search. We also experiment with a non-English pair to show that our dataset has the potential to explicitly enable 17 × 16 translation settings. Our experiments show that learning to translate with the XML tags improves translation accuracy, and the beam search accurately generates XML structures. We also discuss trade-offs of using the copy mechanisms by focusing on translation of numerical words and named entities. We further provide a detailed human analysis of gaps between the model output and human translations for real-world applications, including suitability for post-editing.

111. Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1) [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Ondřej Bojar, Rajen Chatterjee, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, André Martins, Christof Monz, Matteo Negri, Aurélie Névéol, Mariana Neves, Matt Post, Marco Turchi, Karin Verspoor


112. Findings of the 2019 Conference on Machine Translation (WMT19) [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Loïc Barrault, Ondřej Bojar, Marta R. Costa-jussà, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, Christof Monz, Mathias Müller, Santanu Pal, Matt Post, Marcos Zampieri
This paper presents the results of the premier shared task organized alongside the Conference on Machine Translation (WMT) 2019. Participants were asked to build machine translation systems for any of 18 language pairs, to be evaluated on a test set of news stories. The main metric for this task is human judgment of translation quality. The task was also opened up to additional test suites to probe specific aspects of translation.

113. Findings of the First Shared Task on Machine Translation Robustness [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Xian Li, Paul Michel, Antonios Anastasopoulos, Yonatan Belinkov, Nadir Durrani, Orhan Firat, Philipp Koehn, Graham Neubig, Juan Pino, Hassan Sajjad
We share the findings of the first shared task on improving robustness of Machine Translation (MT). The task provides a testbed representing challenges facing MT models deployed in the real world, and facilitates new approaches to improve models’ robustness to noisy input and domain mismatch. We focus on two language pairs (English-French and English-Japanese), and the submitted systems are evaluated on a blind test set consisting of noisy comments on Reddit and professionally sourced translations. As a new task, we received 23 submissions by 11 participating teams from universities, companies, national labs, etc. All submitted systems achieved large improvements over baselines, with the best improvement having +22.33 BLEU. We evaluated submissions by both human judgment and automatic evaluation (BLEU), which shows high correlations (Pearson’s r = 0.94 and 0.95). Furthermore, we conducted a qualitative analysis of the submitted systems using compare-mt, which revealed their salient differences in handling challenges in this task. Such analysis provides additional insights when there is occasional disagreement between human judgment and BLEU, e.g. systems better at producing colloquial expressions received higher score from human judgment.

114. The University of Edinburgh’s Submissions to the WMT19 News Translation Task [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Rachel Bawden, Nikolay Bogoychev, Ulrich Germann, Roman Grundkiewicz, Faheem Kirefu, Antonio Valerio Miceli Barone, Alexandra Birch
The University of Edinburgh participated in the WMT19 Shared Task on News Translation in six language directions: English↔Gujarati, English↔Chinese, German→English, and English→Czech. For all translation directions, we created or used back-translations of monolingual data in the target language as additional synthetic training data. For English↔Gujarati, we also explored semi-supervised MT with cross-lingual language model pre-training, and translation pivoting through Hindi. For translation to and from Chinese, we investigated character-based tokenisation vs. sub-word segmentation of Chinese text. For German→English, we studied the impact of vast amounts of back-translated training data on translation quality, gaining a few additional insights over Edunov et al. (2018). For English→Czech, we compared different preprocessing and tokenisation regimes.

115. GTCOM Neural Machine Translation Systems for WMT19 [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Chao Bei, Hao Zong, Conghu Yuan, Qingming Liu, Baoyong Fan
This paper describes the Global Tone Communication Co., Ltd.’s submission to the WMT19 shared news translation task. We participate in six directions: English to (Gujarati, Lithuanian and Finnish) and (Gujarati, Lithuanian and Finnish) to English. Further, we get the best BLEU scores in the directions of English to Gujarati and Lithuanian to English (28.2 and 36.3 respectively) among all the participants. The submitted systems mainly focus on back-translation, knowledge distillation and reranking to build a competitive model for this task. Also, we apply a language model to filter monolingual data, back-translated data and parallel data. The data filtering techniques we apply include rule-based filtering and language models. Besides, we conduct several experiments to validate different knowledge distillation techniques and right-to-left (R2L) reranking.

116. Machine Translation with parfda, Moses, kenlm, nplm, and PRO [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Ergun Biçici
We build parfda Moses statistical machine translation (SMT) models for most language pairs in the news translation task. We experiment with a hybrid approach using neural language models integrated into Moses. We obtain the constrained data statistics on the machine translation task, the coverage of the test sets, and the upper bounds on the translation results. We also contribute a new testsuite for the German-English language pair and a new automated key phrase extraction technique for the evaluation of the testsuite translations.

117. LIUM’s Contributions to the WMT2019 News Translation Task: Data and Systems for German-French Language Pairs [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Fethi Bougares, Jane Wottawa, Anne Baillot, Loïc Barrault, Adrien Bardet
This paper describes the neural machine translation (NMT) systems of the LIUM Laboratory developed for the French↔German news translation task of the Fourth Conference on Machine Translation (WMT 2019). The chosen language pair is included for the first time in the WMT news translation task. We describe how the training and evaluation data were created. We also present our participation in the French↔German translation directions using self-attentional Transformer networks with small and big architectures.

118. The University of Maryland’s Kazakh-English Neural Machine Translation System at WMT19 [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Eleftheria Briakou, Marine Carpuat
This paper describes the University of Maryland’s submission to the WMT 2019 Kazakh-English news translation task. We study the impact of transfer learning from another low-resource but related language. We experiment with different ways of encoding lexical units to maximize lexical overlap between the two language pairs, as well as back-translation and ensembling. The submitted system improves over a Kazakh-only baseline by +5.45 BLEU on newstest2019.

119. DBMS-KU Interpolation for WMT19 News Translation Task [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Sari Dewi Budiwati, Al Hafiz Akbar Maulana Siagian, Tirana Noor Fatyanosa, Masayoshi Aritsugi
This paper presents the participation of the DBMS-KU Interpolation system in the WMT19 shared task, namely, the Kazakh-English language pair. We examine the use of an interpolation method using different language model orders. Our Interpolation system combines a direct translation with Russian as a pivot language. We use 3-gram and 5-gram language model orders to perform the language translation in this work. To reduce noise in the pivot translation process, we prune the phrase tables of source-pivot and pivot-target. Our experimental results show that our Interpolation system outperforms the Baseline in terms of BLEU-cased score by +0.5 and +0.1 points in Kazakh-English and English-Kazakh, respectively. In particular, using the 5-gram language model order in our system could obtain a better BLEU-cased score than utilizing the 3-gram one. Interestingly, we found that employing the Interpolation system could reduce the perplexity score of English-Kazakh when using the 3-gram language model order.

120. The TALP-UPC Machine Translation Systems for WMT19 News Translation Task: Pivoting Techniques for Low Resource MT [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Noe Casas, José A. R. Fonollosa, Carlos Escolano, Christine Basta, Marta R. Costa-jussà
In this article, we describe the TALP-UPC research group participation in the WMT19 news translation shared task for Kazakh-English. Given the low amount of parallel training data, we resort to using Russian as pivot language, training subword-based statistical translation systems for Russian-Kazakh and Russian-English that were then used to create two synthetic pseudo-parallel corpora for Kazakh-English and English-Kazakh respectively. Finally, a self-attention model based on the decoder part of the Transformer architecture was trained on the two pseudo-parallel corpora.

121. NICT’s Supervised Neural Machine Translation Systems for the WMT19 News Translation Task [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Raj Dabre, Kehai Chen, Benjamin Marie, Rui Wang, Atsushi Fujita, Masao Utiyama, Eiichiro Sumita
In this paper, we describe our supervised neural machine translation (NMT) systems that we developed for the news translation task for Kazakh↔English, Gujarati↔English, Chinese↔English, and English→Finnish translation directions. We focused on leveraging multilingual transfer learning and back-translation for the extremely low-resource language pairs: Kazakh↔English and Gujarati↔English translation. For the Chinese↔English translation, we used the provided parallel data augmented with a large quantity of back-translated monolingual data to train state-of-the-art NMT systems. We then employed techniques that have been proven to be most effective, such as back-translation, fine-tuning, and model ensembling, to generate the primary submissions of Chinese↔English. For English→Finnish, our submission from WMT18 remains a strong baseline despite the increase in parallel corpora for this year’s task.

122. The University of Sydney’s Machine Translation System for WMT19 [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Liang Ding, Dacheng Tao
This paper describes the University of Sydney’s submission to the WMT 2019 shared news translation task. We participated in the Finnish→English direction and got the best BLEU score (33.0) among all the participants. Our system is based on the self-attentional Transformer networks, into which we integrated the most recent effective strategies from academic research (e.g., BPE, back translation, multi-features data selection, data augmentation, greedy model ensemble, reranking, ConMBR system combination, and postprocessing). Furthermore, we propose a novel augmentation method Cycle Translation and a data mixture strategy Big/Small parallel construction to fully exploit the synthetic corpus. Extensive experiments show that adding the above techniques yields continuous improvements in BLEU scores, and the best result outperforms the baseline (Transformer ensemble model trained with the original parallel corpus) by approximately 5.3 BLEU points, achieving state-of-the-art performance.

123. The IIIT-H Gujarati-English Machine Translation System for WMT19 [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Vikrant Goyal, Dipti Misra Sharma
This paper describes the neural machine translation system of IIIT-Hyderabad for the Gujarati→English news translation shared task of WMT19. Our system is based on an encoder-decoder framework with an attention mechanism. We experimented with multilingual neural MT models. Our experiments show that multilingual neural machine translation leveraging parallel data from related language pairs yields significant BLEU improvements of up to 11.5 for low-resource language pairs like Gujarati-English.

124. Kingsoft’s Neural Machine Translation System for WMT19 [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Xinze Guo, Chang Liu, Xiaolong Li, Yiran Wang, Guoliang Li, Feng Wang, Zhitao Xu, Liuyi Yang, Li Ma, Changliang Li
This paper describes the Kingsoft AI Lab’s submission to the WMT2019 news translation shared task. We participated in two language directions: English-Chinese and Chinese-English. For both language directions, we trained several variants of Transformer models using the provided parallel data enlarged with a large quantity of back-translated monolingual data. The best translation result was obtained with ensemble and reranking techniques. According to automatic metrics (BLEU) our Chinese-English system reached the second highest score, and our English-Chinese system reached the second highest score for this subtask.

125. Evaluating the Supervised and Zero-shot Performance of Multi-lingual Translation Models [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Chris Hokamp, John Glover, Demian Gholipour Ghalandari
We study several methods for full or partial sharing of the decoder parameters of multi-lingual NMT models. Using only the WMT 2019 shared task parallel datasets for training, we evaluate both fully supervised and zero-shot translation performance in 110 unique translation directions. We use additional test sets and re-purpose evaluation methods recently used for unsupervised MT in order to evaluate zero-shot translation performance for language pairs where no gold-standard parallel data is available. To our knowledge, this is the largest evaluation of multi-lingual translation yet conducted in terms of the total size of the training data we use, and in terms of the number of zero-shot translation pairs we evaluate. We conduct an in-depth evaluation of the translation performance of different models, highlighting the trade-offs between methods of sharing decoder parameters. We find that models which have task-specific decoder parameters outperform models where decoder parameters are fully shared across all tasks.

126. The MLLP-UPV Supervised Machine Translation Systems for WMT19 News Translation Task [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Javier Iranzo-Sánchez, Gonçal Garcés Díaz-Munío, Jorge Civera, Alfons Juan
This paper describes the participation of the MLLP research group of the Universitat Politècnica de València in the WMT 2019 News Translation Shared Task. In this edition, we have submitted systems for the German ↔ English and German ↔ French language pairs, participating in both directions of each pair. Our submitted systems, based on the Transformer architecture, make ample use of data filtering, synthetic data and domain adaptation through fine-tuning.

127. Microsoft Translator at WMT 2019: Towards Large-Scale Document-Level Neural Machine Translation [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Marcin Junczys-Dowmunt
This paper describes the Microsoft Translator submissions to the WMT19 news translation shared task for English-German. Our main focus is document-level neural machine translation with deep transformer models. We start with strong sentence-level baselines, trained on large-scale data created via data-filtering and noisy back-translation and find that back-translation seems to mainly help with translationese input. We explore fine-tuning techniques, deeper models and different ensembling strategies to counter these effects. Using document boundaries present in the authentic and synthetic parallel data, we create sequences of up to 1000 subword segments and train transformer translation models. We experiment with data augmentation techniques for the smaller authentic data with document-boundaries and for larger authentic data without boundaries. We further explore multi-task training for the incorporation of document-level source language monolingual data via the BERT-objective on the encoder and two-pass decoding for combinations of sentence-level and document-level systems. Based on preliminary human evaluation results, evaluators strongly prefer the document-level systems over our comparable sentence-level system. The document-level systems also seem to score higher than the human references in source-based direct assessment.

128. CUNI Systems for the Unsupervised News Translation Task in WMT 2019 [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Ivana Kvapilíková, Dominik Macháček, Ondřej Bojar
In this paper we describe the CUNI translation system used for the unsupervised news shared task of the ACL 2019 Fourth Conference on Machine Translation (WMT19). We follow the strategy of Artetxe et al. (2018b), creating a seed phrase-based system where the phrase table is initialized from cross-lingual embedding mappings trained on monolingual data, followed by a neural machine translation system trained on synthetic parallel data. The synthetic corpus was produced from a monolingual corpus by a tuned PBMT model refined through iterative back-translation. We further focus on the handling of named entities, i.e. the part of vocabulary where the cross-lingual embedding mapping suffers most. Our system reaches a BLEU score of 15.3 on the German-Czech WMT19 shared task.

129. A Comparison on Fine-grained Pre-trained Embeddings for the WMT19 Chinese-English News Translation Task [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Zhenhao Li, Lucia Specia
This paper describes our submission to the WMT 2019 Chinese-English (zh-en) news translation shared task. Our systems are based on RNN architectures with pre-trained embeddings which utilize character and sub-character information. We compare models with these different granularity levels using different evaluation metrics. We find that finer-granularity embeddings can help the model according to character-level evaluation, and that the pre-trained embeddings can also be marginally beneficial to model performance when the training data is limited.

130. The NiuTrans Machine Translation Systems for WMT19 [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Bei Li, Yinqiao Li, Chen Xu, Ye Lin, Jiqiang Liu, Hui Liu, Ziyang Wang, Yuhao Zhang, Nuo Xu, Zeyang Wang, Kai Feng, Hexuan Chen, Tengbo Liu, Yanyang Li, Qiang Wang, Tong Xiao, Jingbo Zhu
This paper described NiuTrans neural machine translation systems for the WMT 2019 news translation tasks. We participated in 13 translation directions, including 11 supervised tasks, namely EN↔{ZH, DE, RU, KK, LT}, GU→EN and the unsupervised DE↔CS sub-track. Our systems were built on Deep Transformer and several back-translation methods. Iterative knowledge distillation and ensemble+reranking were also employed to obtain stronger models. Our unsupervised submissions were based on NMT enhanced by SMT. As a result, we achieved the highest BLEU scores in {KK↔EN, GU→EN} directions, ranking 2nd in {RU→EN, DE↔CS} and 3rd in {ZH→EN, LT→EN, EN→RU, EN↔DE} among all constrained submissions.

131. Multi-Source Transformer for Kazakh-Russian-English Neural Machine Translation [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Patrick Littell, Chi-kiu Lo, Samuel Larkin, Darlene Stewart
We describe the neural machine translation (NMT) system developed at the National Research Council of Canada (NRC) for the Kazakh-English news translation task of the Fourth Conference on Machine Translation (WMT19). Our submission is a multi-source NMT taking both the original Kazakh sentence and its Russian translation as input for translating into English.

132. Incorporating Word and Subword Units in Unsupervised Machine Translation Using Language Model Rescoring [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Zihan Liu, Yan Xu, Genta Indra Winata, Pascale Fung
This paper describes CAiRE’s submission to the unsupervised machine translation track of the WMT’19 news shared task from German to Czech. We leverage a phrase-based statistical machine translation (PBSMT) model and a pre-trained language model to combine word-level neural machine translation (NMT) and subword-level NMT models without using any parallel data. We propose to solve the morphological richness problem of languages by training byte-pair encoding (BPE) embeddings for German and Czech separately, which are then aligned using MUSE (Conneau et al., 2018). To ensure the fluency and consistency of translations, a rescoring mechanism is proposed that reuses the pre-trained language model to select the translation candidates generated through beam search. Moreover, a series of pre-processing and post-processing approaches are applied to improve the quality of the final translations.
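The rescoring mechanism described above, which reuses a pre-trained language model to pick among beam-search candidates, can be sketched as a weighted combination of the translation model score and a language-model score. The scoring functions and the weight alpha below are illustrative assumptions, not CAiRE's exact formulation.

```python
# Minimal sketch of language-model rescoring of beam-search candidates.
# The toy LM and the weight alpha are illustrative assumptions; the
# actual system may combine scores differently.

def rescore(candidates, lm_score, alpha=0.3):
    """candidates: list of (translation, mt_log_prob) pairs.
    lm_score: callable returning a language-model log-probability."""
    best, best_score = None, float("-inf")
    for translation, mt_lp in candidates:
        score = (1 - alpha) * mt_lp + alpha * lm_score(translation)
        if score > best_score:
            best, best_score = translation, score
    return best

if __name__ == "__main__":
    # Toy LM: prefers shorter outputs (stand-in for a pre-trained LM).
    toy_lm = lambda s: -0.5 * len(s.split())
    cands = [("der Hund schläft", -1.2), ("der Hund er schläft", -1.1)]
    print(rescore(cands, toy_lm))
```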

133. JUMT at WMT2019 News Translation Task: A Hybrid Approach to Machine Translation for Lithuanian to English [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Sainik Kumar Mahata, Avishek Garain, Adityar Rayala, Dipankar Das, Sivaji Bandyopadhyay
In the current work, we present a description of the system submitted to WMT 2019 News Translation Shared task. The system was created to translate news text from Lithuanian to English. To accomplish the given task, our system used a Word Embedding based Neural Machine Translation model to post edit the outputs generated by a Statistical Machine Translation model. The current paper documents the architecture of our model, descriptions of the various modules and the results produced using the same. Our system garnered a BLEU score of 17.6.

134. Johns Hopkins University Submission for WMT News Translation Task [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Kelly Marchisio, Yash Kumar Lal, Philipp Koehn
We describe the work of Johns Hopkins University for the shared task of news translation organized by the Fourth Conference on Machine Translation (2019). We submitted systems for both directions of the English-German language pair. The systems combine multiple techniques – sampling, filtering, iterative backtranslation, and continued training – previously used to improve performance of neural machine translation models. At submission time, we achieve a BLEU score of 38.1 for De-En and 42.5 for En-De translation directions on newstest2019. Post-submission, the score is 38.4 for De-En and 42.8 for En-De. Various experiments conducted in the process are also described.

135. NICT’s Unsupervised Neural and Statistical Machine Translation Systems for the WMT19 News Translation Task [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Benjamin Marie, Haipeng Sun, Rui Wang, Kehai Chen, Atsushi Fujita, Masao Utiyama, Eiichiro Sumita
This paper presents the NICT’s participation in the WMT19 unsupervised news translation task. We participated in the unsupervised translation direction: German-Czech. Our primary submission to the task is the result of a simple combination of our unsupervised neural and statistical machine translation systems. Our system is ranked first for the German-to-Czech translation task, using only the data provided by the organizers (“constraint”), according to both BLEU-cased and human evaluation. We also performed contrastive experiments with other language pairs, namely, English-Gujarati and English-Kazakh, to better assess the effectiveness of unsupervised machine translation for distant language pairs and in truly low-resource conditions.

136. PROMT Systems for WMT 2019 Shared Translation Task [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Alexander Molchanov
This paper describes the PROMT submissions for the WMT 2019 Shared News Translation Task. This year we participated in two language pairs and in three directions: English-Russian, English-German and German-English. All our submissions are Marian-based neural systems. We use significantly more data compared to the last year. We also present our improved data filtering pipeline.

137. JU-Saarland Submission to the WMT2019 English–Gujarati Translation Shared Task [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Riktim Mondal, Shankha Raj Nayek, Aditya Chowdhury, Santanu Pal, Sudip Kumar Naskar, Josef van Genabith
In this paper we describe our joint submission (JU-Saarland) from Jadavpur University and Saarland University to the WMT 2019 news translation shared task for the English–Gujarati language pair within the translation task sub-track. Our baseline and primary submissions are built using a recurrent neural network (RNN) based neural machine translation (NMT) system with an attention mechanism. Given the fact that the two languages belong to different language families and there is not enough parallel data for this language pair, building a high-quality NMT system for this language pair is a difficult task. We produced synthetic data through back-translation from available monolingual data. We report the translation quality of our English–Gujarati and Gujarati–English NMT systems trained at word, byte-pair and character encoding levels, where the word-level RNN is considered as the baseline and used for comparison purposes. Our English–Gujarati system ranked second in the shared task.

138. Facebook FAIR’s WMT19 News Translation Task Submission [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, Sergey Edunov
This paper describes Facebook FAIR’s submission to the WMT19 shared news translation task. We participate in four language directions, English <-> German and English <-> Russian in both directions. Following our submission from last year, our baseline systems are large BPE-based transformer models trained with the FAIRSEQ sequence modeling toolkit. This year we experiment with different bitext data filtering schemes, as well as with adding filtered back-translated data. We also ensemble and fine-tune our models on domain-specific data, then decode using noisy channel model reranking. Our system improves on our previous system’s performance by 4.5 BLEU points and achieves the best case-sensitive BLEU score for the translation direction English→Russian.
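Noisy channel model reranking, mentioned above, typically scores each n-best candidate by combining the direct model log P(y|x) with a channel model log P(x|y) and a language model log P(y). The sketch below illustrates that scoring with assumed weights; the actual submission tunes the interpolation weights and model choices on held-out data.

```python
# Minimal sketch of noisy-channel reranking of n-best candidates:
# combine the direct model log P(y|x), the channel model log P(x|y),
# and a language model log P(y). The weights below are illustrative.

def noisy_channel_rerank(nbest, w_channel=1.0, w_lm=0.3):
    """nbest: list of dicts with keys 'y', 'direct', 'channel', 'lm'
    holding the candidate string and the three log-probabilities."""
    def score(c):
        return c["direct"] + w_channel * c["channel"] + w_lm * c["lm"]
    return max(nbest, key=score)["y"]

if __name__ == "__main__":
    nbest = [
        {"y": "hypothesis A", "direct": -2.1, "channel": -3.0, "lm": -4.0},
        {"y": "hypothesis B", "direct": -2.3, "channel": -2.2, "lm": -3.5},
    ]
    print(noisy_channel_rerank(nbest))
```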

139. eTranslation’s Submissions to the WMT 2019 News Translation Task [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Csaba Oravecz, Katina Bontcheva, Adrien Lardilleux, László Tihanyi, Andreas Eisele
This paper describes the submissions of the eTranslation team to the WMT 2019 news translation shared task. The systems have been developed with the aim of identifying and following rather than establishing best practices, under the constraints imposed by a low resource training and decoding environment normally used for our production systems. Thus most of the findings and results are transferable to systems used in the eTranslation service. Evaluations suggest that this approach is able to produce decent models with good performance and speed without the overhead of using prohibitively deep and complex architectures.

140. Tilde’s Machine Translation Systems for WMT 2019 [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Marcis Pinnis, Rihards Krišlauks, Matīss Rikters
The paper describes the development process of Tilde’s NMT systems for the WMT 2019 shared task on news translation. We trained systems for the English-Lithuanian and Lithuanian-English translation directions in constrained and unconstrained tracks. We build upon the best methods of the previous year’s competition and combine them with recent advancements in the field. We also present a new method to ensure source domain adherence in back-translated data. Our systems achieved a shared first place in human evaluation.

141. Apertium-fin-eng–Rule-based Shallow Machine Translation for WMT 2019 Shared Task [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Tommi Pirinen
In this paper we describe a rule-based, bi-directional machine translation system for the Finnish-English language pair. The baseline system was based on the existing data of FinnWordNet, omorfi and apertium-eng. We have built the disambiguation, lexical selection and translation rules by hand. The dictionaries and rules have been developed based on the shared task data. In this article we describe the use of the shared task data as a kind of test-driven development workflow in RBMT development, and show that it fits well into a modern software-engineering continuous-integration workflow for RBMT and yields large BLEU score increases with minimal effort.

142. The RWTH Aachen University Machine Translation Systems for WMT 2019 [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Jan Rosendahl, Christian Herold, Yunsu Kim, Miguel Graça, Weiyue Wang, Parnia Bahar, Yingbo Gao, Hermann Ney
This paper describes the neural machine translation systems developed at the RWTH Aachen University for the German-English, Chinese-English and Kazakh-English news translation tasks of the Fourth Conference on Machine Translation (WMT19). For all tasks, the final submitted system is based on the Transformer architecture. We focus on improving data filtering and fine-tuning as well as systematically evaluating interesting approaches like unigram language model segmentation and transfer learning. For the De-En task, none of the tested methods gave a significant improvement over last year’s winning system and we end up with the same performance, resulting in 39.6% BLEU on newstest2019. In the Zh-En task, we show 1.3% BLEU improvement over our last year’s submission, which we mostly attribute to the splitting of long sentences during translation. We further report results on the Kazakh-English task where we gain improvements of 11.1% BLEU over our baseline system. On the same task we present a recent transfer learning approach, which uses half of the free parameters of our submission system and performs on par with it.
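Unigram language model segmentation, one of the approaches evaluated above, is commonly implemented with the sentencepiece library. A minimal sketch follows; the corpus file name and vocabulary size are placeholders rather than the RWTH configuration.

```python
# Minimal sketch of unigram-LM subword segmentation with sentencepiece.
# The corpus path and vocabulary size are placeholders.
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="train.en",          # placeholder corpus file
    model_prefix="unigram_en",
    vocab_size=32000,
    model_type="unigram",      # unigram LM segmentation (vs. 'bpe')
)

sp = spm.SentencePieceProcessor(model_file="unigram_en.model")
print(sp.encode("Machine translation is fun.", out_type=str))
```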

143. The Universitat d’Alacant Submissions to the English-to-Kazakh News Translation Task at WMT 2019 [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Víctor M. Sánchez-Cartagena, Juan Antonio Pérez-Ortiz, Felipe Sánchez-Martínez
This paper describes the two submissions of Universitat d’Alacant to the English-to-Kazakh news translation task at WMT 2019. Our submissions take advantage of monolingual data and parallel data from other language pairs by means of iterative backtranslation, pivot backtranslation and transfer learning. They also use linguistic information in two ways: morphological segmentation of Kazakh text, and integration of the output of a rule-based machine translation system. Our systems were ranked second in terms of chrF++ despite being built from an ensemble of only 2 independent training runs.

144. Baidu Neural Machine Translation Systems for WMT19 [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Meng Sun, Bojian Jiang, Hao Xiong, Zhongjun He, Hua Wu, Haifeng Wang
In this paper we introduce the systems Baidu submitted for the WMT19 shared task on Chinese<->English news translation. Our systems are based on the Transformer architecture with some effective improvements. Data selection, back translation, data augmentation, knowledge distillation, domain adaptation, model ensemble and re-ranking are employed and proven effective in our experiments. Our Chinese->English system achieved the highest case-sensitive BLEU score among all constrained submissions, and our English->Chinese system ranked the second in all submissions.

145. University of Tartu’s Multilingual Multi-domain WMT19 News Translation Shared Task Submission [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Andre Tättar, Elizaveta Korotkova, Mark Fishel
This paper describes the University of Tartu’s submission to the news translation shared task of WMT19, where the core idea was to train a single multilingual system to cover several language pairs of the shared task and submit its results. We only used the constrained data from the shared task. We describe our approach and its results and discuss the technical issues we faced.

146. Neural Machine Translation for English–Kazakh with Morphological Segmentation and Synthetic Data [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Antonio Toral, Lukas Edman, Galiya Yeshmagambetova, Jennifer Spenader
This paper presents the systems submitted by the University of Groningen to the English–Kazakh language pair (both translation directions) for the WMT 2019 news translation task. We explore the potential benefits of (i) morphological segmentation (both unsupervised and rule-based), given the agglutinative nature of Kazakh, (ii) data from two additional languages (Turkish and Russian), given the scarcity of English–Kazakh data and (iii) synthetic data, both for the source and for the target language. Our best submissions ranked second for Kazakh→English and third for English→Kazakh in terms of the BLEU automatic evaluation metric.

147. The LMU Munich Unsupervised Machine Translation System for WMT19 [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Dario Stojanovski, Viktor Hangya, Matthias Huck, Alexander Fraser
We describe LMU Munich’s machine translation system for German→Czech translation which was used to participate in the WMT19 shared task on unsupervised news translation. We train our model using monolingual data only from both languages. The final model is an unsupervised neural model using established techniques for unsupervised translation such as denoising autoencoding and online back-translation. We bootstrap the model with masked language model pretraining and enhance it with back-translations from an unsupervised phrase-based system which is itself bootstrapped using unsupervised bilingual word embeddings.

148. Combining Local and Document-Level Context: The LMU Munich Neural Machine Translation System at WMT19 [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Dario Stojanovski, Alexander Fraser
We describe LMU Munich’s machine translation system for English→German translation which was used to participate in the WMT19 shared task on supervised news translation. We specifically participated in the document-level MT track. The system used as a primary submission is a context-aware Transformer capable of both rich modeling of limited contextual information and integration of large-scale document-level context with a less rich representation. We train this model by fine-tuning a big Transformer baseline. Our experimental results show that document-level context provides for large improvements in translation quality, and adding a rich representation of the previous sentence provides a small additional gain.

149. IITP-MT System for Gujarati-English News Translation Task at WMT 2019 [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Sukanta Sen, Kamal Kumar Gupta, Asif Ekbal, Pushpak Bhattacharyya
We describe our submission to the WMT 2019 News translation shared task for the Gujarati-English language pair. We submit constrained systems, i.e., we rely on the data provided for this language pair and do not use any external data. We train a Transformer-based subword-level neural machine translation (NMT) system using the original parallel corpus along with a synthetic parallel corpus obtained through back-translation of monolingual data. Our primary systems achieve BLEU scores of 10.4 and 8.1 for Gujarati→English and English→Gujarati, respectively. We observe that incorporating monolingual data through back-translation improves the BLEU score significantly over baseline NMT and SMT systems for this language pair.
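Back-translation, as used above to build the synthetic parallel corpus, pairs target-language monolingual sentences with machine translations of them into the source language. A minimal sketch, where reverse_translate is a hypothetical stand-in for a reverse-direction (e.g. English→Gujarati) model:

```python
# Minimal sketch of back-translation for synthetic parallel data:
# target-language monolingual sentences are translated into the source
# language with a reverse-direction model, and the (synthetic source,
# original target) pairs are added to the training bitext.

def reverse_translate(sentence: str) -> str:
    """Hypothetical reverse-direction MT model; replace with a real system."""
    return "<synthetic source for: " + sentence + ">"

def back_translate(monolingual_target, bitext):
    synthetic = [(reverse_translate(t), t) for t in monolingual_target]
    return bitext + synthetic

if __name__ == "__main__":
    bitext = [("gujarati sentence", "english sentence")]
    mono_en = ["another english sentence", "yet another english sentence"]
    for pair in back_translate(mono_en, bitext):
        print(pair)
```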

150. The University of Helsinki Submissions to the WMT19 News Translation Task [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Aarne Talman, Umut Sulubacak, Raúl Vázquez, Yves Scherrer, Sami Virpioja, Alessandro Raganato, Arvi Hurskainen, Jörg Tiedemann
In this paper we present the University of Helsinki submissions to the WMT 2019 shared news translation task in three language pairs: English-German, English-Finnish and Finnish-English. This year we focused first on cleaning and filtering the training data using multiple data-filtering approaches, resulting in much smaller and cleaner training sets. For English-German we trained sentence-level transformer models and also compared different document-level translation approaches. For Finnish-English and English-Finnish we focused on different segmentation approaches, and we also included a rule-based system for English-Finnish.

151. The En-Ru Two-way Integrated Machine Translation System Based on Transformer [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Doron Yu
Machine translation is one of the most popular areas in natural language processing. WMT is a conference that assesses the machine translation capabilities of organizations around the world, and it is the evaluation activity we participated in. We took part in the two-way translation track between Russian and English (Russian to English and English to Russian). We used the official training data: 38 million parallel sentence pairs and 10 million monolingual sentences. The overall framework we use is the Transformer neural machine translation model, supplemented by data filtering, post-processing, reordering and other related processing methods. The BLEU score of our final result from Russian to English is 38.7, ranking 5th, while from English to Russian it is 27.8, ranking 10th.

152. DFKI-NMT Submission to the WMT19 News Translation Task [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Jingyi Zhang, Josef van Genabith
This paper describes the DFKI-NMT submission to the WMT19 News translation task. We participated in both English-to-German and German-to-English directions. We trained Transformer models and adopted various techniques for effectively training our models, including data selection, back-translation and in-domain fine-tuning. We give a detailed analysis of the performance of our system.

153. Linguistic Evaluation of German-English Machine Translation Using a Test Suite [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Eleftherios Avramidis, Vivien Macketanz, Ursula Strohriegel, Hans Uszkoreit
We present the results of the application of a grammatical test suite for German-to-English MT on the systems submitted at WMT19, with a detailed analysis for 107 phenomena organized in 14 categories. On average, the systems still translate one out of four test items incorrectly. Low performance is indicated for idioms, modals, pseudo-clefts, multi-word expressions and verb valency. When compared to last year, there has been an improvement in function words, non-verbal agreement and punctuation. More detailed conclusions about particular systems and phenomena are also presented.

154. Evaluating Conjunction Disambiguation on English-to-German and French-to-German WMT 2019 Translation Hypotheses [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Maja Popović
We present a test set for evaluating an MT system’s capability to translate ambiguous conjunctions depending on the sentence structure. We concentrate on the English conjunction “but” and its French equivalent “mais”, which can be translated into two different German conjunctions. We evaluate all English-to-German and French-to-German submissions to the WMT 2019 shared translation task. The evaluation is done mainly automatically, with additional fast manual inspection of unclear cases. All systems almost perfectly recognise the target conjunction “aber”, whereas accuracies for the other target conjunction “sondern” range from 78% to 97%, and the errors are mostly caused by replacing it with the alternative conjunction “aber”. The best performing system for both language pairs is a multilingual Transformer “TartuNLP” system trained on all WMT 2019 language pairs which use the Latin script, indicating that the multilingual approach is beneficial for conjunction disambiguation. As for other system features, such as using synthetic back-translated data, context-aware, hybrid, etc., no particular (dis)advantages can be observed. Qualitative manual inspection of translation hypotheses showed that highly ranked systems generally produce translations with high adequacy and fluency, meaning that these systems do not merely capture the right conjunction while the rest of the translation hypothesis is poor. On the other hand, the low-ranked systems generally exhibit lower fluency and poor adequacy.

155. The MuCoW Test Suite at WMT 2019: Automatically Harvested Multilingual Contrastive Word Sense Disambiguation Test Sets for Machine Translation [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Alessandro Raganato, Yves Scherrer, Jörg Tiedemann
Supervised Neural Machine Translation (NMT) systems currently achieve impressive translation quality for many language pairs. One of the key features of a correct translation is the ability to perform word sense disambiguation (WSD), i.e., to translate an ambiguous word with its correct sense. Existing evaluation benchmarks on WSD capabilities of translation systems rely heavily on manual work and cover only a few language pairs and a few word types. We present MuCoW, a multilingual contrastive test suite that covers 16 language pairs with more than 200 thousand contrastive sentence pairs, automatically built from word-aligned parallel corpora and the wide-coverage multilingual sense inventory of BabelNet. We evaluate the quality of the ambiguity lexicons and of the resulting test suite on all submissions from 9 language pairs presented in the WMT19 news shared translation task, plus on 5 other language pairs using NMT pretrained models. The MuCoW test suite is available at http://github.com/Helsinki-NLP/MuCoW.

156. SAO WMT19 Test Suite: Machine Translation of Audit Reports [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Tereza Vojtěchová, Michal Novák, Miloš Klouček, Ondřej Bojar
This paper describes a machine translation test set of documents from the auditing domain and its use as one of the “test suites” in the WMT19 News Translation Task for translation directions involving Czech, English and German. Our evaluation suggests that current MT systems optimized for the general news domain can perform quite well even in the particular domain of audit reports. The detailed manual evaluation however indicates that deep factual knowledge of the domain is necessary. For the naked eye of a non-expert, translations by many systems seem almost perfect and automatic MT evaluation with one reference is practically useless for considering these details. Furthermore, we show on a sample document from the domain of agreements that even the best systems completely fail in preserving the semantics of the agreement, namely the identity of the parties.

157. WMDO: Fluency-based Word Mover’s Distance for Machine Translation Evaluation [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Julian Chow, Lucia Specia, Pranava Madhyastha
We propose WMDO, a metric based on distance between distributions in the semantic vector space. Matching in the semantic space has been investigated for translation evaluation, but the constraints of a translation’s word order have not been fully explored. Building on the Word Mover’s Distance metric and various word embeddings, we introduce a fragmentation penalty to account for fluency of a translation. This word order extension is shown to perform better than standard WMD, with promising results against other types of metrics.
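The core idea of a Word Mover's Distance style metric is to measure how far the hypothesis words must "travel" in embedding space to match the reference. The sketch below is a deliberately simplified one-to-one assignment approximation with toy 2-d vectors; true WMD solves a more general transport problem, and WMDO's fragmentation penalty is not reproduced here.

```python
# Simplified sketch of the Word Mover's Distance idea: align hypothesis
# and reference words via a minimum-cost one-to-one assignment over
# embedding distances. The toy 2-d "embeddings" are placeholders.
import numpy as np
from scipy.optimize import linear_sum_assignment

def approx_wmd(hyp_vecs, ref_vecs):
    """hyp_vecs, ref_vecs: arrays of shape (n_words, dim)."""
    # pairwise Euclidean distances between hypothesis and reference words
    cost = np.linalg.norm(hyp_vecs[:, None, :] - ref_vecs[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].mean()

if __name__ == "__main__":
    toy = {"cat": [1.0, 0.0], "feline": [0.9, 0.1], "sat": [0.0, 1.0], "sits": [0.1, 0.9]}
    hyp = np.array([toy["cat"], toy["sat"]])
    ref = np.array([toy["feline"], toy["sits"]])
    print(approx_wmd(hyp, ref))
```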

158. Meteor++ 2.0: Adopt Syntactic Level Paraphrase Knowledge into Machine Translation Evaluation [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Yinuo Guo, Junfeng Hu
This paper describes Meteor++ 2.0, our submission to the WMT19 Metric Shared Task. The well-known Meteor metric improves machine translation evaluation by introducing paraphrase knowledge. However, it only focuses on the lexical level and utilizes consecutive n-gram paraphrases. In this work, we take into consideration syntactic-level paraphrase knowledge, which sometimes may be skip-grams. We describe how such knowledge can be extracted from the Paraphrase Database (PPDB) and integrated into Meteor-based metrics. Experiments on the WMT15 and WMT17 evaluation datasets show that the newly proposed metric outperforms all previous versions of Meteor.

159. EED: Extended Edit Distance Measure for Machine Translation [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Peter Stanchev, Weiyue Wang, Hermann Ney
Over the years a number of machine translation metrics have been developed in order to evaluate the accuracy and quality of machine-generated translations. Metrics such as BLEU and TER have been used for decades. However, with the rapid progress of machine translation systems, the need for better metrics is growing. This paper proposes an extension of the edit distance, which achieves better human correlation, whilst remaining fast, flexible and easy to understand.
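EED builds on the classic edit distance. As a point of reference, a standard character-level Levenshtein distance can be computed as below; the paper's extensions (the additional operations that improve human correlation) are not reproduced here.

```python
# Standard character-level edit (Levenshtein) distance, the starting
# point that an extended edit distance metric builds on.

def edit_distance(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

if __name__ == "__main__":
    print(edit_distance("translation", "translated"))  # -> 3
```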

160. Filtering Pseudo-References by Paraphrasing for Automatic Evaluation of Machine Translation [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Ryoma Yoshimura, Hiroki Shimanaka, Yukio Matsumura, Hayahide Yamagishi, Mamoru Komachi
In this paper, we introduce our participation in the WMT 2019 Metric Shared Task. We propose an improved version of sentence BLEU using filtered pseudo-references. We propose a method to filter pseudo-references by paraphrasing for automatic evaluation of machine translation (MT). We use the outputs of off-the-shelf MT systems as pseudo-references filtered by paraphrasing in addition to a single human reference (gold reference). We use BERT fine-tuned with paraphrase corpus to filter pseudo-references by checking the paraphrasability with the gold reference. Our experimental results of the WMT 2016 and 2017 datasets show that our method achieved higher correlation with human evaluation than the sentence BLEU (SentBLEU) baselines with a single reference and with unfiltered pseudo-references.
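The filtering idea above can be sketched as: keep only pseudo-references that a paraphrase classifier judges close enough to the gold reference, then compute sentence BLEU against the enlarged reference set. In the sketch below, paraphrase_probability is a hypothetical stand-in for the fine-tuned BERT scorer, the threshold is illustrative, and sacrebleu is an assumed tool for sentence BLEU.

```python
# Minimal sketch of pseudo-reference filtering for sentence-level BLEU.
# paraphrase_probability is a hypothetical stand-in for a fine-tuned
# BERT paraphrase classifier; the threshold is illustrative.
import sacrebleu

def paraphrase_probability(sent_a: str, sent_b: str) -> float:
    """Hypothetical paraphrase scorer; replace with a fine-tuned BERT model."""
    overlap = len(set(sent_a.split()) & set(sent_b.split()))
    return overlap / max(len(set(sent_b.split())), 1)

def filtered_sentence_bleu(hypothesis, gold_ref, pseudo_refs, threshold=0.5):
    kept = [r for r in pseudo_refs
            if paraphrase_probability(r, gold_ref) >= threshold]
    refs = [gold_ref] + kept
    return sacrebleu.sentence_bleu(hypothesis, refs).score

if __name__ == "__main__":
    print(filtered_sentence_bleu(
        "the cat is on the mat",
        "a cat sits on the mat",
        ["the cat sits on a mat", "completely unrelated sentence"]))
```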

161. Naver Labs Europe’s Systems for the WMT19 Machine Translation Robustness Task [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Alexandre Berard, Ioan Calapodescu, Claude Roux
This paper describes the systems that we submitted to the WMT19 Machine Translation robustness task. This task aims to improve MT’s robustness to noise found on social media, like informal language, spelling mistakes and other orthographic variations. The organizers provide parallel data extracted from a social media website in two language pairs: French-English and Japanese-English (one for each language direction). The goal is to obtain the best scores on unseen test sets from the same source, according to automatic metrics (BLEU) and human evaluation. We propose one single and one ensemble system for each translation direction. Our ensemble models ranked first in all language pairs, according to BLEU evaluation. We discuss the pre-processing choices that we made, and present our solutions for robustness to noise and domain adaptation.

162. NICT’s Supervised Neural Machine Translation Systems for the WMT19 Translation Robustness Task [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Raj Dabre, Eiichiro Sumita
In this paper we describe our neural machine translation (NMT) systems for Japanese↔English translation which we submitted to the translation robustness task. We focused on leveraging transfer learning via fine-tuning to improve translation quality. We used a fairly well established domain adaptation technique called Mixed Fine Tuning (MFT) (Chu et al., 2017) to improve translation quality for Japanese↔English. We also trained bi-directional NMT models instead of uni-directional ones as the former are known to be quite robust, especially in low-resource scenarios. However, given the noisy nature of the in-domain training data, the improvements we obtained are rather modest.

163. NTT’s Machine Translation Systems for WMT19 Robustness Task [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Soichiro Murakami, Makoto Morishita, Tsutomu Hirao, Masaaki Nagata
This paper describes NTT’s submission to the WMT19 robustness task. This task mainly focuses on translating noisy text (e.g., posts on Twitter), which presents different difficulties from typical translation tasks such as news. Our submission combined techniques including utilization of a synthetic corpus, domain adaptation, and a placeholder mechanism, which significantly improved over the previous baseline. Experimental results revealed the placeholder mechanism, which temporarily replaces the non-standard tokens including emojis and emoticons with special placeholder tokens during translation, improves translation accuracy even with noisy texts.

164. Robust Machine Translation with Domain Sensitive Pseudo-Sources: Baidu-OSU WMT19 MT Robustness Shared Task System Report [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Renjie Zheng, Hairong Liu, Mingbo Ma, Baigong Zheng, Liang Huang
This paper describes the machine translation system developed jointly by Baidu Research and Oregon State University for the WMT 2019 Machine Translation Robustness Shared Task. Translation of social media is a very challenging problem, since its style is very different from normal parallel corpora (e.g. news) and it also includes various types of noise. To make matters worse, the amount of social media parallel corpora is extremely limited. In this paper, we use a domain-sensitive training method which leverages a large amount of parallel data from popular domains together with a small amount of parallel data from social media. Furthermore, we generate a parallel dataset with pseudo noisy source sentences which are back-translated from monolingual data using a model trained in a similar domain-sensitive way. In this way, we achieve more than 10 BLEU improvement in both En-Fr and Fr-En translation compared with the baseline methods.

165. Improving Robustness of Neural Machine Translation with Multi-task Learning [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)
  Shuyan Zhou, Xiangkai Zeng, Yingqi Zhou, Antonios Anastasopoulos, Graham Neubig
While neural machine translation (NMT) achieves remarkable performance on clean, in-domain text, performance is known to degrade drastically when facing text which is full of typos, grammatical errors and other varieties of noise. In this work, we propose a multi-task learning algorithm for transformer-based MT systems that is more resilient to this noise. We describe our submission to the WMT 2019 Robustness shared task based on this method. Our model achieves a BLEU score of 32.8 on the shared task French to English dataset, which is 7.1 BLEU points higher than the baseline vanilla transformer trained with clean text.

166. Proceedings of the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2) [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2)
  Ondřej Bojar, Rajen Chatterjee, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, André Martins, Christof Monz, Matteo Negri, Aurélie Névéol, Mariana Neves, Matt Post, Marco Turchi, Karin Verspoor


167. Findings of the WMT 2019 Biomedical Translation Shared Task: Evaluation for MEDLINE Abstracts and Biomedical Terminologies [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2)
  Rachel Bawden, Kevin Bretonnel Cohen, Cristian Grozea, Antonio Jimeno Yepes, Madeleine Kittner, Martin Krallinger, Nancy Mah, Aurelie Neveol, Mariana Neves, Felipe Soares, Amy Siu, Karin Verspoor, Maika Vicente Navarro
In the fourth edition of the WMT Biomedical Translation task, we considered a total of six languages, namely Chinese (zh), English (en), French (fr), German (de), Portuguese (pt), and Spanish (es). We performed an evaluation of automatic translations for a total of 10 language directions, namely, zh/en, en/zh, fr/en, en/fr, de/en, en/de, pt/en, en/pt, es/en, and en/es. We provided training data based on MEDLINE abstracts for eight of the 10 language pairs and test sets for all of them. In addition to that, we offered a new sub-task for the translation of terms in biomedical terminologies for the en/es language direction. Higher BLEU scores (close to 0.5) were obtained for the es/en, en/es and en/pt test sets, as well as for the terminology sub-task. After manual validation of the primary runs, some submissions were judged to be better than the reference translations, for instance, for de/en, en/es and es/en.

168. RTM Stacking Results for Machine Translation Performance Prediction [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2)
  Ergun Biçici
We obtain new results using referential translation machines with an increased number of learning models in the set of results that are stacked to obtain a better mixture-of-experts prediction. We combine features extracted from the word-level predictions with the sentence- or document-level features, which significantly improves the results on the training sets but decreases the test set results.
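Stacking the predictions of several learners into a mixture-of-experts style model, as described above, can be illustrated with scikit-learn's StackingRegressor. The features, base models, and targets below are placeholders, not the RTM feature set or learner choices.

```python
# Minimal sketch of stacking several learners behind a meta-learner.
# Features, base models, and targets are illustrative placeholders.
import numpy as np
from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))      # placeholder sentence-level features
y = rng.uniform(0, 1, size=300)     # placeholder quality scores

stack = StackingRegressor(
    estimators=[("ridge", Ridge()), ("svr", SVR()), ("tree", DecisionTreeRegressor())],
    final_estimator=Ridge(),        # meta-learner combining the base predictions
)
stack.fit(X[:250], y[:250])
print(stack.predict(X[250:253]))
```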

169. Unbabel’s Participation in the WMT19 Translation Quality Estimation Shared Task [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2)
  Fabio Kepler, Jonay Trénous, Marcos Treviso, Miguel Vera, António Góis, M. Amin Farajian, António V. Lopes, André F. T. Martins
We present the contribution of the Unbabel team to the WMT 2019 Shared Task on Quality Estimation. We participated on the word, sentence, and document-level tracks, encompassing 3 language pairs: English-German, English-Russian, and English-French. Our submissions build upon the recent OpenKiwi framework: We combine linear, neural, and predictor-estimator systems with new transfer learning approaches using BERT and XLM pre-trained models. We compare systems individually and propose new ensemble techniques for word and sentence-level predictions. We also propose a simple technique for converting word labels into document-level predictions. Overall, our submitted systems achieve the best results on all tracks and language pairs by a considerable margin.

170. Quality Estimation and Translation Metrics via Pre-trained Word and Sentence Embeddings [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2)
  Elizaveta Yankovskaya, Andre Tättar, Mark Fishel
We propose the use of pre-trained embeddings as features of a regression model for sentence-level quality estimation of machine translation. In our work we combine freely available BERT and LASER multilingual embeddings to train a neural-based regression model. In the second proposed method we use as input features not only pre-trained embeddings but also the log probability of any machine translation (MT) system. Both methods are applied to several language pairs and are evaluated both as a classical quality estimation system (predicting the HTER score) and as an MT metric (predicting human judgements of translation quality).
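A sentence-level quality estimation regressor over pre-trained embeddings can be sketched as follows. The random vectors stand in for BERT/LASER sentence embeddings, the log-probability column mirrors the second method described above, and the Ridge regressor is an illustrative choice rather than the authors' model.

```python
# Minimal sketch of sentence-level QE as regression over sentence
# embeddings. Random vectors are placeholders for BERT/LASER embeddings.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_sents, dim = 200, 64

src_emb = rng.normal(size=(n_sents, dim))   # placeholder source embeddings
mt_emb = rng.normal(size=(n_sents, dim))    # placeholder MT-output embeddings
mt_logprob = rng.normal(size=(n_sents, 1))  # optional MT system log-probability
hter = rng.uniform(0, 1, size=n_sents)      # target quality labels (HTER)

features = np.hstack([src_emb, mt_emb, mt_logprob])
model = Ridge(alpha=1.0).fit(features[:150], hter[:150])
print("predicted HTER for one held-out sentence:", model.predict(features[150:151]))
```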

171. SOURCE: SOURce-Conditional Elmo-style Model for Machine Translation Quality Estimation [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2)
  Junpei Zhou, Zhisong Zhang, Zecong Hu
Quality estimation (QE) of machine translation (MT) systems is a task of growing importance. It reduces the cost of post-editing, allowing machine-translated text to be used on formal occasions. In this work, we describe our submission system for the WMT 2019 sentence-level QE task. We mainly explore the utilization of pre-trained translation models in QE and adopt a bi-directional translation-like strategy. The strategy is similar to ELMo, but additionally conditions on source sentences. Experiments on the WMT QE dataset show that our strategy, which makes the pre-training slightly harder, can bring improvements for QE. In the WMT 2019 QE task, our system ranked second on the En-De NMT dataset and third on the En-Ru NMT dataset.

172. Terminology-Aware Segmentation and Domain Feature for the WMT19 Biomedical Translation Task [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2)
  Casimiro Pio Carrino, Bardia Rafieian, Marta R. Costa-jussà, José A. R. Fonollosa
In this work, we give a description of the TALP-UPC systems submitted for the WMT19 Biomedical Translation Task. Our proposed strategy is NMT model-independent and relies only on one ingredient, a biomedical terminology list. We first extracted such a terminology list by labelling biomedical words in our training dataset using the BabelNet API. Then, we designed a data preparation strategy to insert the terms information at a token level. Finally, we trained the Transformer model with this terms-informed data. Our best-submitted system ranked 2nd and 3rd for Spanish-English and English-Spanish translation directions, respectively.

173. Exploring Transfer Learning and Domain Data Selection for the Biomedical Translation [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2)
  Noor-e- Hira, Sadaf Abdul Rauf, Kiran Kiani, Ammara Zafar, Raheel Nawaz
Transfer Learning and Selective data training are two of the many approaches being extensively investigated to improve the quality of Neural Machine Translation systems. This paper presents a series of experiments by applying transfer learning and selective data training for participation in the Bio-medical shared task of WMT19. We have used Information Retrieval to selectively choose related sentences from out-of-domain data and used them as additional training data using transfer learning. We also report the effect of tokenization on translation model performance.

174. Huawei’s NMT Systems for the WMT 2019 Biomedical Translation Task [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2)
  Wei Peng, Jianfeng Liu, Liangyou Li, Qun Liu
This paper describes Huawei’s neural machine translation systems for the WMT 2019 biomedical translation shared task. We trained and fine-tuned our systems on a combination of out-of-domain and in-domain parallel corpora for six translation directions covering English–Chinese, English–French and English–German language pairs. Our submitted systems achieve the best BLEU scores on the English–French and English–German language pairs according to the official evaluation results. In the English–Chinese translation task, our systems are in second place. The enhanced performance is attributed to more in-domain training data and to the more sophisticated models we developed. The development of translation models and transfer learning (or domain adaptation) methods has contributed significantly to progress on the task.

175. UCAM Biomedical Translation at WMT19: Transfer Learning Multi-domain Ensembles [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2)
  Danielle Saunders, Felix Stahlberg, Bill Byrne
The 2019 WMT Biomedical translation task involved translating Medline abstracts. We approached this using transfer learning to obtain a series of strong neural models on distinct domains, and combining them into multi-domain ensembles. We further experimented with an adaptive language-model ensemble weighting scheme. Our submission achieved the best submitted results on both directions of English-Spanish.

176. BSC Participation in the WMT Translation of Biomedical Abstracts [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2)
  Felipe Soares, Martin Krallinger
This paper describes the machine translation systems developed by the Barcelona Supercomputing Center (BSC) team for the biomedical translation shared task of WMT19. Our system is based on neural machine translation using the OpenNMT-py toolkit and the Transformer architecture. We participated in four translation directions for the English/Spanish and English/Portuguese language pairs. To create our training data, we concatenated several parallel corpora, both from in-domain and out-of-domain sources, as well as terminological resources from UMLS.

177. The MLLP-UPV Spanish-Portuguese and Portuguese-Spanish Machine Translation Systems for WMT19 Similar Language Translation Task [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2)
  Pau Baquero-Arnal, Javier Iranzo-Sánchez, Jorge Civera, Alfons Juan
This paper describes the participation of the MLLP research group of the Universitat Politècnica de València in the WMT 2019 Similar Language Translation Shared Task. We have submitted systems for the Portuguese ↔ Spanish language pair, in both directions. We have submitted systems based on the Transformer architecture as well as a novel architecture still under development, which we call the 2D alternating RNN. We have carried out domain adaptation through fine-tuning.

178. The TALP-UPC System for the WMT Similar Language Task: Statistical vs Neural Machine Translation [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2)
  Magdalena Biesialska, Lluis Guardia, Marta R. Costa-jussà
Although the problem of similar language translation has been an area of research interest for many years, it is still far from solved. In this paper, we study the performance of two popular approaches: statistical and neural. We conclude that both methods yield similar results; however, the performance varies depending on the language pair. While the statistical approach outperforms the neural one by a difference of 6 BLEU points for the Spanish-Portuguese language pair, the proposed neural model surpasses the statistical one by a difference of 2 BLEU points for Czech-Polish. In the former case, the language similarity (based on perplexity) is much higher than in the latter case. Additionally, we report negative results for the system combination with back-translation. Our TALP-UPC system submission won 1st place for Czech->Polish and 2nd place for Spanish->Portuguese in the official evaluation of the 1st WMT Similar Language Translation task.

179. Machine Translation from an Intercomprehension Perspective [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2)
  Yu Chen, Tania Avgustinova
Within the first shared task on machine translation between similar languages, we present our first attempts on Czech to Polish machine translation from an intercomprehension perspective. We propose methods based on the mutual intelligibility of the two languages, taking advantage of their orthographic and phonological similarity, in the hope to improve over our baselines. The translation results are evaluated using BLEU. On this metric, none of our proposals could outperform the baselines on the final test set. The current setups are rather preliminary, and there are several potential improvements we can try in the future.

180. Utilizing Monolingual Data in NMT for Similar Languages: Submission to Similar Language Translation Task [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2)
  Jyotsana Khatri, Pushpak Bhattacharyya
This paper describes our submission to the Shared Task on Similar Language Translation at the Fourth Conference on Machine Translation (WMT 2019). We submitted three systems for the Hindi -> Nepali direction, in which we examined the performance of an RNN-based NMT system, a semi-supervised NMT system in which monolingual data of both languages is utilized following a previously proposed architecture, and a system trained with extra synthetic sentences generated by copying source and target sentences, without using any additional monolingual data.
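The copy-based synthetic data mentioned above can be produced by pairing each side of the existing bitext with itself. A minimal sketch follows; the exact proportion and which sides are copied are assumptions rather than the paper's recipe.

```python
# Minimal sketch of copy-based data augmentation: each side of the
# existing bitext is also paired with itself as an extra "synthetic"
# training pair.

def add_copied_pairs(bitext):
    """bitext: list of (source, target) sentence pairs."""
    augmented = list(bitext)
    for src, tgt in bitext:
        augmented.append((src, src))  # copy of the source side
        augmented.append((tgt, tgt))  # copy of the target side
    return augmented

if __name__ == "__main__":
    pairs = [("hindi sentence 1", "nepali sentence 1")]
    for p in add_copied_pairs(pairs):
        print(p)
```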

181. Neural Machine Translation: Hindi-Nepali [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2)
  Sahinur Rahman Laskar, Partha Pakray, Sivaji Bandyopadhyay
With the extensive use of machine translation (MT) technology, there is growing interest in directly translating between pairs of similar languages, where the main challenge is to overcome the limited availability of parallel data and still produce a precise MT output. The current work relies on neural machine translation (NMT) with an attention mechanism for the similar language translation track of the WMT19 shared task, in the context of the Hindi-Nepali pair. The NMT systems were trained on the Hindi-Nepali parallel corpus and then tested and analyzed for Hindi ⇔ Nepali translation. The official results declared at the WMT19 shared task show that our NMT system obtained a Bilingual Evaluation Understudy (BLEU) score of 24.6 for the primary configuration in Nepali to Hindi translation. We also achieved BLEU scores of 53.7 (Hindi to Nepali) and 49.1 (Nepali to Hindi) with the contrastive system type.

182. NICT’s Machine Translation Systems for the WMT19 Similar Language Translation Task [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2)
  Benjamin Marie, Raj Dabre, Atsushi Fujita
This paper presents the NICT’s participation in the WMT19 shared Similar Language Translation Task. We participated in the Spanish-Portuguese task. For both translation directions, we prepared state-of-the-art statistical (SMT) and neural (NMT) machine translation systems. Our NMT systems with the Transformer architecture were trained on the provided parallel data enlarged with a large quantity of back-translated monolingual data. Our primary submission to the task is the result of a simple combination of our SMT and NMT systems. According to BLEU, our systems were ranked second and third respectively for the Portuguese-to-Spanish and Spanish-to-Portuguese translation directions. For contrastive experiments, we also submitted outputs generated with an unsupervised SMT system.

183. Panlingua-KMI MT System for Similar Language Translation Task at WMT 2019 [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2)
  Atul Kr. Ojha, Ritesh Kumar, Akanksha Bansal, Priya Rani
The present paper describes the development of the Panlingua-KMI Machine Translation (MT) systems for the Hindi ↔ Nepali language pair, designed as part of the Similar Language Translation Task at the WMT 2019 Shared Task. The Panlingua-KMI team conducted a series of experiments to explore both phrase-based statistical (PBSMT) and neural (NMT) methods. Among the 11 MT systems prepared under this task, 6 PBSMT systems were prepared for Nepali-Hindi, 1 PBSMT system for Hindi-Nepali, and 2 NMT systems were developed for Nepali↔Hindi. The results show that PBSMT could be an effective method for developing MT systems for closely-related languages. Our Hindi-Nepali PBSMT system was ranked 2nd among the 13 systems submitted for the pair, and our Nepali-Hindi PBSMT system was ranked 4th among the 12 systems submitted for the task.

184. UDS–DFKI Submission to the WMT2019 Czech–Polish Similar Language Translation Shared Task [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2)
  Santanu Pal, Marcos Zampieri, Josef van Genabith
In this paper we present the UDS-DFKI system submitted to the Similar Language Translation shared task at WMT 2019. The first edition of this shared task featured data from three pairs of similar languages: Czech and Polish, Hindi and Nepali, and Portuguese and Spanish. Participants could choose to participate in any of these three tracks and submit system outputs in any translation direction. We report the results obtained by our system in translating from Czech to Polish and comment on the impact of out-of-domain test data on the performance of our system. UDS-DFKI achieved competitive performance, ranking second among ten teams in Czech to Polish translation.

185. Neural Machine Translation of Low-Resource and Similar Languages with Backtranslation [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2)
  Michael Przystupa, Muhammad Abdul-Mageed
We present our contribution to the WMT19 Similar Language Translation shared task. We investigate the utility of neural machine translation on three low-resource, similar language pairs: Spanish – Portuguese, Czech – Polish, and Hindi – Nepali. Since state-of-the-art neural machine translation systems still require large amounts of bitext, which we do not have for the pairs we consider, we focus primarily on incorporating monolingual data into our models with backtranslation. In our analysis, we found Transformer models to work best on Spanish – Portuguese and Czech – Polish translation, whereas LSTMs with global attention worked best on Hindi – Nepali translation.

186. The University of Helsinki Submissions to the WMT19 Similar Language Translation Task [PDF] 返回目录
  ACL 2019. the Fourth Conference on Machine Translation (Volume 3: Shared Task Papers, Day 2)
  Yves Scherrer, Raúl Vázquez, Sami Virpioja
This paper describes the University of Helsinki Language Technology group’s participation in the WMT 2019 similar language translation task. We trained neural machine translation models for the language pairs Czech <-> Polish and Spanish <-> Portuguese. Our experiments focused on different subword segmentation methods, and in particular on the comparison of a cognate-aware segmentation method, Cognate Morfessor, with character segmentation and unsupervised segmentation methods for which the data from different languages were simply concatenated. We did not observe major benefits from cognate-aware segmentation methods, but further research may be needed to explore larger parts of the parameter space. Character-level models proved to be competitive for translation between Spanish and Portuguese, but they are slower in training and decoding.

187. Incorporating Source-Side Phrase Structures into Neural Machine Translation [PDF] 返回目录
  CL 2019.
  Akiko Eriguchi, Kazuma Hashimoto, Yoshimasa Tsuruoka
Neural machine translation (NMT) has shown great success as a new alternative to the traditional Statistical Machine Translation model in multiple languages. Early NMT models are based on sequence-to-sequence learning that encodes a sequence of source words into a vector space and generates another sequence of target words from the vector. In those NMT models, sentences are simply treated as sequences of words without any internal structure. In this article, we focus on the role of the syntactic structure of source sentences and propose a novel end-to-end syntactic NMT model, which we call a tree-to-sequence NMT model, extending a sequence-to-sequence model with the source-side phrase structure. Our proposed model has an attention mechanism that enables the decoder to generate a translated word while softly aligning it with phrases as well as words of the source sentence. We have empirically compared the proposed model with sequence-to-sequence models in various settings on Chinese-to-Japanese and English-to-Japanese translation tasks. Our experimental results suggest that the use of syntactic structure can be beneficial when the training data set is small, but is not as effective as using a bi-directional encoder. As the size of the training data set increases, the benefits of using a syntactic tree tend to diminish.

188. Contextualized Translations of Phrasal Verbs with Distributional Compositional Semantics and Monolingual Corpora [PDF] 返回目录
  CL 2019.
  Pablo Gamallo, Susana Sotelo, José Ramom Pichel, Mikel Artetxe
This article describes a compositional distributional method to generate contextualized senses of words and identify their appropriate translations in the target language using monolingual corpora. Word translation is modeled in the same way as contextualization of word meaning, but in a bilingual vector space. The contextualization of meaning is carried out by means of distributional composition within a structured vector space with syntactic dependencies, and the bilingual space is created by means of transfer rules and a bilingual dictionary. A phrase in the source language, consisting of a head and a dependent, is translated into the target language by selecting both the nearest neighbor of the head given the dependent, and the nearest neighbor of the dependent given the head. This process is expanded to larger phrases by means of incremental composition. Experiments were performed on English and Spanish monolingual corpora in order to translate phrasal verbs in context. A new bilingual data set to evaluate strategies aimed at translating phrasal verbs in restricted syntactic domains has been created and released.

189. Explicit Cross-lingual Pre-training for Unsupervised Machine Translation [PDF] 返回目录
  EMNLP 2019.
  Shuo Ren, Yu Wu, Shujie Liu, Ming Zhou, Shuai Ma
Pre-training has proven to be effective in unsupervised machine translation due to its ability to model deep context information in cross-lingual scenarios. However, the cross-lingual information obtained from shared BPE spaces is inexplicit and limited. In this paper, we propose a novel cross-lingual pre-training method for unsupervised machine translation by incorporating explicit cross-lingual training signals. Specifically, we first calculate cross-lingual n-gram embeddings and infer an n-gram translation table from them. With those n-gram translation pairs, we propose a new pre-training model called Cross-lingual Masked Language Model (CMLM), which randomly chooses source n-grams in the input text stream and predicts their translation candidates at each time step. Experiments show that our method can incorporate beneficial cross-lingual information into pre-trained models. Taking pre-trained CMLM models as the encoder and decoder, we significantly improve the performance of unsupervised machine translation.

190. Latent Part-of-Speech Sequences for Neural Machine Translation [PDF] 返回目录
  EMNLP 2019.
  Xuewen Yang, Yingru Liu, Dongliang Xie, Xin Wang, Niranjan Balasubramanian
Learning target side syntactic structure has been shown to improve Neural Machine Translation (NMT). However, incorporating syntax through latent variables introduces additional complexity in inference, as the models need to marginalize over the latent syntactic structures. To avoid this, models often resort to greedy search which only allows them to explore a limited portion of the latent space. In this work, we introduce a new latent variable model, LaSyn, that captures the co-dependence between syntax and semantics, while allowing for effective and efficient inference over the latent space. LaSyn decouples direct dependence between successive latent variables, which allows its decoder to exhaustively search through the latent syntactic choices, while keeping decoding speed proportional to the size of the latent variable vocabulary. We implement LaSyn by modifying a transformer-based NMT system and design a neural expectation maximization algorithm that we regularize with part-of-speech information as the latent sequences. Evaluations on four different MT tasks show that incorporating target side syntax with LaSyn improves both translation quality, and also provides an opportunity to improve diversity.

191. Improving Back-Translation with Uncertainty-based Confidence Estimation [PDF] 返回目录
  EMNLP 2019.
  Shuo Wang, Yang Liu, Chao Wang, Huanbo Luan, Maosong Sun
While back-translation is simple and effective in exploiting abundant monolingual corpora to improve low-resource neural machine translation (NMT), the synthetic bilingual corpora generated by NMT models trained on limited authentic bilingual data are inevitably noisy. In this work, we propose to quantify the confidence of NMT model predictions based on model uncertainty. With word- and sentence-level confidence measures based on uncertainty, it is possible for back-translation to better cope with noise in synthetic bilingual corpora. Experiments on Chinese-English and English-German translation tasks show that uncertainty-based confidence estimation significantly improves the performance of back-translation.
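
The abstract does not spell out how uncertainty is turned into confidence weights; a common proxy in this line of work is Monte-Carlo dropout, where the variance of per-word probabilities across stochastic forward passes down-weights noisy synthetic pairs. The sketch below illustrates that general idea, not the authors' exact formulation; the `model` object and its `word_log_probs` method are hypothetical.

import torch

def mc_dropout_confidence(model, src, tgt, n_samples=8):
    """Estimate word-level confidence for a synthetic (src, tgt) pair by
    running the NMT model several times with dropout enabled and measuring
    how stable the per-word probabilities are.
    model.word_log_probs(src, tgt) is assumed to return a 1-D tensor with
    one log-probability per target word (hypothetical API)."""
    model.train()                      # keep dropout active
    with torch.no_grad():
        samples = torch.stack(
            [model.word_log_probs(src, tgt).exp() for _ in range(n_samples)]
        )                              # shape: (n_samples, tgt_len)
    mean = samples.mean(dim=0)
    var = samples.var(dim=0)
    word_conf = mean / (1.0 + var)     # high probability, low variance -> high confidence
    sent_conf = word_conf.mean()       # sentence-level confidence
    return word_conf, sent_conf

When training on back-translated data, the per-word cross-entropy loss could then be multiplied by the word-level confidence so that uncertain words contribute less to the gradient.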

192. Towards Linear Time Neural Machine Translation with Capsule Networks [PDF] 返回目录
  EMNLP 2019.
  Mingxuan Wang
In this study, we first investigate a novel capsule network with dynamic routing for linear time Neural Machine Translation (NMT), referred to as CapsNMT. CapsNMT uses an aggregation mechanism to map the source sentence into a matrix with a pre-determined size, and then applies a deep LSTM network to decode the target sequence from the source representation. Unlike the previous work (CITATION), which stores the source sentence in a passive and bottom-up way, the dynamic routing policy encodes the source sentence with an iterative process to decide the credit attribution between nodes from lower and higher layers. CapsNMT has two core properties: it runs in time that is linear in the length of the sequences and provides a more flexible way to aggregate the part-whole information of the source sentence. On the WMT14 English-German task and the larger WMT14 English-French task, CapsNMT achieves comparable results with the Transformer system. To the best of our knowledge, this is the first work in which capsule networks have been empirically investigated for sequence-to-sequence problems.

193. Iterative Dual Domain Adaptation for Neural Machine Translation [PDF] 返回目录
  EMNLP 2019.
  Jiali Zeng, Yang Liu, Jinsong Su, Yubing Ge, Yaojie Lu, Yongjing Yin, Jiebo Luo
Previous studies on domain adaptation for neural machine translation (NMT) mainly focus on one-pass transfer of out-of-domain translation knowledge to an in-domain NMT model. In this paper, we argue that such a strategy fails to fully extract the domain-shared translation knowledge, and that repeatedly utilizing corpora of different domains can lead to better distillation of domain-shared translation knowledge. To this end, we propose an iterative dual domain adaptation framework for NMT. Specifically, we first pretrain in-domain and out-of-domain NMT models using their own training corpora respectively, and then iteratively perform bidirectional translation knowledge transfer (from in-domain to out-of-domain and then vice versa) based on knowledge distillation until the in-domain NMT model converges. Furthermore, we extend the proposed framework to the scenario of multiple out-of-domain training corpora, where the above-mentioned transfer is performed sequentially between the in-domain model and each out-of-domain NMT model in the ascending order of their domain similarities. Empirical results on Chinese-English and English-German translation tasks demonstrate the effectiveness of our framework.

194. Multi-agent Learning for Neural Machine Translation [PDF] 返回目录
  EMNLP 2019.
  Tianchi Bi, Hao Xiong, Zhongjun He, Hua Wu, Haifeng Wang
Conventional Neural Machine Translation (NMT) models benefit from the training with an additional agent, e.g., dual learning, and bidirectional decoding with one agent decoding from left to right and the other decoding in the opposite direction. In this paper, we extend the training framework to the multi-agent scenario by introducing diverse agents in an interactive updating process. At training time, each agent learns advanced knowledge from others, and they work together to improve translation quality. Experimental results on NIST Chinese-English, IWSLT 2014 German-English, WMT 2014 English-German and large-scale Chinese-English translation tasks indicate that our approach achieves absolute improvements over the strong baseline systems and shows competitive performance on all tasks.

195. Pivot-based Transfer Learning for Neural Machine Translation between Non-English Languages [PDF] 返回目录
  EMNLP 2019.
  Yunsu Kim, Petre Petrov, Pavel Petrushkov, Shahram Khadivi, Hermann Ney
We present effective pre-training strategies for neural machine translation (NMT) using parallel corpora involving a pivot language, i.e., source-pivot and pivot-target, leading to a significant improvement in source-target translation. We propose three methods to increase the relation among source, pivot, and target languages in the pre-training: 1) step-wise training of a single model for different language pairs, 2) an additional adapter component to smoothly connect the pre-trained encoder and decoder, and 3) cross-lingual encoder training via autoencoding of the pivot language. Our methods greatly outperform multilingual models by up to +2.6% BLEU in the WMT 2019 French-German and German-Czech tasks. We show that our improvements are also valid in zero-shot/zero-resource scenarios.

196. Context-Aware Monolingual Repair for Neural Machine Translation [PDF] 返回目录
  EMNLP 2019.
  Elena Voita, Rico Sennrich, Ivan Titov
Modern sentence-level NMT systems often produce plausible translations of isolated sentences. However, when put in context, these translations may end up being inconsistent with each other. We propose a monolingual DocRepair model to correct inconsistencies between sentence-level translations. DocRepair performs automatic post-editing on a sequence of sentence-level translations, refining translations of sentences in context of each other. For training, the DocRepair model requires only monolingual document-level data in the target language. It is trained as a monolingual sequence-to-sequence model that maps inconsistent groups of sentences into consistent ones. The consistent groups come from the original training data; the inconsistent groups are obtained by sampling round-trip translations for each isolated sentence. We show that this approach successfully imitates inconsistencies we aim to fix: using contrastive evaluation, we show large improvements in the translation of several contextual phenomena in an English-Russian translation task, as well as improvements in the BLEU score. We also conduct a human evaluation and show a strong preference of the annotators to corrected translations over the baseline ones. Moreover, we analyze which discourse phenomena are hard to capture using monolingual data only.
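
To make the data-construction step concrete, the snippet below shows one plausible way to assemble the (inconsistent, consistent) training pairs: round-trip-translate each sentence in isolation and pair the result with the original document. `translate_to_source` and `translate_to_target` stand in for sentence-level NMT samplers and are assumptions, not the authors' code.

import random

def make_docrepair_example(doc_sentences, translate_to_source, translate_to_target,
                           n_samples=4):
    """Create one (inconsistent, consistent) training pair for a
    DocRepair-style monolingual repair model.

    doc_sentences: list of target-language sentences from one document.
    translate_to_source / translate_to_target: callables returning a list
    of sampled translations for one sentence (hypothetical APIs)."""
    inconsistent = []
    for sent in doc_sentences:
        # round-trip each sentence in isolation: target -> source -> target
        src = random.choice(translate_to_source(sent, n=n_samples))
        back = random.choice(translate_to_target(src, n=n_samples))
        inconsistent.append(back)
    # the original document is already consistent and serves as the label
    return inconsistent, doc_sentences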

197. Multi-Granularity Self-Attention for Neural Machine Translation [PDF] 返回目录
  EMNLP 2019.
  Jie Hao, Xing Wang, Shuming Shi, Jinfeng Zhang, Zhaopeng Tu
Current state-of-the-art neural machine translation (NMT) uses a deep multi-head self-attention network with no explicit phrase information. However, prior work on statistical machine translation has shown that extending the basic translation unit from words to phrases has produced substantial improvements, suggesting the possibility of improving NMT performance from explicit modeling of phrases. In this work, we present multi-granularity self-attention (Mg-Sa): a neural network that combines multi-head self-attention and phrase modeling. Specifically, we train several attention heads to attend to phrases in either n-gram or syntactic formalisms. Moreover, we exploit interactions among phrases to enhance the strength of structure modeling – a commonly-cited weakness of self-attention. Experimental results on WMT14 English-to-German and NIST Chinese-to-English translation tasks show the proposed approach consistently improves performance. Targeted linguistic analysis reveals that Mg-Sa indeed captures useful phrase information at various levels of granularity.

198. One Model to Learn Both: Zero Pronoun Prediction and Translation [PDF] 返回目录
  EMNLP 2019.
  Longyue Wang, Zhaopeng Tu, Xing Wang, Shuming Shi
Zero pronouns (ZPs) are frequently omitted in pro-drop languages, but should be recalled in non-pro-drop languages. This discourse phenomenon poses a significant challenge for machine translation (MT) when translating texts from pro-drop to non-pro-drop languages. In this paper, we propose a unified and discourse-aware ZP translation approach for neural MT models. Specifically, we jointly learn to predict and translate ZPs in an end-to-end manner, allowing both components to interact with each other. In addition, we employ hierarchical neural networks to exploit discourse-level context, which is beneficial for ZP prediction and thus translation. Experimental results on both Chinese-English and Japanese-English data show that our approach significantly and accumulatively improves both translation performance and ZP prediction accuracy over not only the baseline but also previous work using external ZP prediction models. Extensive analyses confirm that the performance improvement comes from the alleviation of different kinds of errors, especially those caused by subjective ZPs.

199. Dynamic Past and Future for Neural Machine Translation [PDF] 返回目录
  EMNLP 2019.
  Zaixiang Zheng, Shujian Huang, Zhaopeng Tu, Xin-Yu Dai, Jiajun Chen
Previous studies have shown that neural machine translation (NMT) models can benefit from explicitly modeling translated (Past) and untranslated (Future) source contents as recurrent states (CITATION). However, this less interpretable recurrent process hinders its power to model the dynamic updating of Past and Future contents during decoding. In this paper, we propose to model the dynamic principles by explicitly separating source words into groups of translated and untranslated contents through parts-to-wholes assignment. The assignment is learned through a novel variant of the routing-by-agreement mechanism (CITATION), namely Guided Dynamic Routing, where the translating status at each decoding step guides the routing process to assign each source word to its associated group (i.e., translated or untranslated content) represented by a capsule, enabling translation to be made from holistic context. Experiments show that our approach achieves substantial improvements over both Rnmt and Transformer by producing more adequate translations. Extensive analysis demonstrates that our method is highly interpretable and is able to recognize the translated and untranslated contents as expected.

200. Revisit Automatic Error Detection for Wrong and Missing Translation – A Supervised Approach [PDF] 返回目录
  EMNLP 2019.
  Wenqiang Lei, Weiwen Xu, Ai Ti Aw, Yuanxin Xiang, Tat Seng Chua
While achieving great fluency, current machine translation (MT) techniques are bottlenecked by adequacy issues. To have a closer study of these issues and accelerate model development, we propose automatically detecting adequacy errors in MT hypotheses for MT model evaluation. To do that, we annotate missing and wrong translations, the two most prevalent issues for current neural machine translation models, in 15,000 Chinese-English translation pairs. We build a supervised alignment model for translation error detection (AlignDet) based on a simple Alignment Triangle strategy to set the benchmark for the automatic error detection task. We also discuss the difficulties of this task and the benefits of this task for existing evaluation metrics.

201. Towards Understanding Neural Machine Translation with Word Importance [PDF] 返回目录
  EMNLP 2019.
  Shilin He, Zhaopeng Tu, Xing Wang, Longyue Wang, Michael Lyu, Shuming Shi
Although neural machine translation (NMT) has advanced the state-of-the-art on various language pairs, the interpretability of NMT remains unsatisfactory. In this work, we propose to address this gap by focusing on understanding the input-output behavior of NMT models. Specifically, we measure the word importance by attributing the NMT output to every input word through a gradient-based method. We validate the approach on a couple of perturbation operations, language pairs, and model architectures, demonstrating its superiority on identifying input words with higher influence on translation performance. Encouragingly, the calculated importance can serve as indicators of input words that are under-translated by NMT models. Furthermore, our analysis reveals that words of certain syntactic categories have higher importance while the categories vary across language pairs, which can inspire better design principles of NMT architectures for multi-lingual translation.
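
A common gradient-based attribution of this kind is the norm of gradient-times-embedding for each source word. The sketch below shows that generic recipe in PyTorch, assuming an encoder-decoder whose loss can be differentiated with respect to the source embeddings; it illustrates the general technique rather than the paper's exact formulation.

import torch

def word_importance(loss, src_embeddings):
    """Attribute an NMT loss to each source word.

    loss: scalar tensor, e.g. the negative log-likelihood of the output.
    src_embeddings: tensor of shape (src_len, dim) with requires_grad=True.
    Returns a (src_len,) tensor; larger values mean the word had more
    influence on the model output."""
    grads, = torch.autograd.grad(loss, src_embeddings, retain_graph=True)
    # gradient x input, reduced to one score per source word
    return (grads * src_embeddings).norm(dim=-1)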

202. Multilingual Neural Machine Translation with Language Clustering [PDF] 返回目录
  EMNLP 2019.
  Xu Tan, Jiale Chen, Di He, Yingce Xia, Tao Qin, Tie-Yan Liu
Multilingual neural machine translation (NMT), which translates multiple languages using a single model, is of great practical importance due to its advantages in simplifying the training process, reducing online maintenance costs, and enhancing low-resource and zero-shot translation. Given there are thousands of languages in the world and some of them are very different, it is extremely burdensome to handle them all in a single model or use a separate model for each language pair. Therefore, given a fixed resource budget, e.g., the number of models, how to determine which languages should be supported by one model is critical to multilingual NMT, which, unfortunately, has been ignored by previous work. In this work, we develop a framework that clusters languages into different groups and trains one multilingual model for each cluster. We study two methods for language clustering: (1) using prior knowledge, where we cluster languages according to language family, and (2) using language embedding, in which we represent each language by an embedding vector and cluster them in the embedding space. In particular, we obtain the embedding vectors of all the languages by training a universal neural machine translation model. Our experiments on 23 languages show that the first clustering method is simple and easy to understand but leads to suboptimal translation accuracy, while the second method captures the relationship among languages well and improves the translation accuracy for almost all the languages over baseline methods.

203. Entity Projection via Machine Translation for Cross-Lingual NER [PDF] 返回目录
  EMNLP 2019.
  Alankar Jain, Bhargavi Paranjape, Zachary C. Lipton
Although over 100 languages are supported by strong off-the-shelf machine translation systems, only a subset of them possess large annotated corpora for named entity recognition. Motivated by this fact, we leverage machine translation to improve annotation-projection approaches to cross-lingual named entity recognition. We propose a system that improves over prior entity-projection methods by: (a) leveraging machine translation systems twice: first for translating sentences and subsequently for translating entities; (b) matching entities based on orthographic and phonetic similarity; and (c) identifying matches based on distributional statistics derived from the dataset. Our approach improves upon current state-of-the-art methods for cross-lingual named entity recognition on 5 diverse languages by an average of 4.1 points. Further, our method achieves state-of-the-art F_1 scores for Armenian, outperforming even a monolingual model trained on Armenian source data.

204. Multilingual word translation using auxiliary languages [PDF] 返回目录
  EMNLP 2019.
  Hagai Taitelbaum, Gal Chechik, Jacob Goldberger
Current multilingual word translation methods are focused on jointly learning mappings from each language to a shared space. The actual translation, however, is still performed as an isolated bilingual task. In this study we propose a multilingual translation procedure that uses all the learned mappings to translate a word from one language to another. For each source word, we first search for the most relevant auxiliary languages. We then use the translations to these languages to form an improved representation of the source word. Finally, this representation is used for the actual translation to the target language. Experiments on a standard multilingual word translation benchmark demonstrate that our model outperforms state-of-the-art results.

205. Simpler and Faster Learning of Adaptive Policies for Simultaneous Translation [PDF] 返回目录
  EMNLP 2019.
  Baigong Zheng, Renjie Zheng, Mingbo Ma, Liang Huang
Simultaneous translation is widely useful but remains challenging. Previous work falls into two main categories: (a) fixed-latency policies such as Ma et al. (2019) and (b) adaptive policies such as Gu et al. (2017). The former are simple and effective, but have to aggressively predict future content due to diverging source-target word order; the latter do not anticipate, but suffer from unstable and inefficient training. To combine the merits of both approaches, we propose a simple supervised-learning framework to learn an adaptive policy from oracle READ/WRITE sequences generated from parallel text. At each step, such an oracle sequence chooses to WRITE the next target word if the available source sentence context provides enough information to do so, otherwise READ the next source word. Experiments on German<=>English show that our method, without retraining the underlying NMT model, can learn flexible policies with better BLEU scores and similar latencies compared to previous work.
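
One way to produce such oracle READ/WRITE sequences offline is to derive them from word alignments: a target word can only be written once its aligned source word has been read. The sketch below follows that simple alignment-based construction; the alignment input format is an assumption, and the paper's own oracle is defined differently in detail.

def oracle_actions(alignments, src_len):
    """Derive a READ/WRITE action sequence from word alignments.

    alignments: list where alignments[j] is the (1-based) source position
    that target word j is aligned to (assumed input format).
    Returns a list of 'READ'/'WRITE' actions usable as supervision for an
    adaptive simultaneous-translation policy."""
    actions, read_so_far, cut = [], 0, 0
    for a_j in alignments:
        cut = max(cut, a_j)          # never write before the aligned source word is read
        while read_so_far < cut:
            actions.append('READ')
            read_so_far += 1
        actions.append('WRITE')
    # read any remaining source words at the end
    actions.extend('READ' for _ in range(src_len - read_so_far))
    return actions

# Example: oracle_actions([1, 1, 3, 4], src_len=4)
# -> ['READ', 'WRITE', 'WRITE', 'READ', 'READ', 'WRITE', 'READ', 'WRITE']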

206. Recurrent Positional Embedding for Neural Machine Translation [PDF] 返回目录
  EMNLP 2019.
  Kehai Chen, Rui Wang, Masao Utiyama, Eiichiro Sumita
In the Transformer network architecture, positional embeddings are used to encode order dependencies into the input representation. However, this input representation only involves static order dependencies based on discrete numerical information, that is, it is independent of word content. To address this issue, this work proposes a recurrent positional embedding approach based on word vectors. In this approach, the recurrent positional embeddings are learned by a recurrent neural network, encoding word content-based order dependencies into the input representation. They are then integrated into the existing multi-head self-attention model as independent heads or as part of each head. The experimental results revealed that the proposed approach improved translation performance over that of the state-of-the-art Transformer baseline in the WMT’14 English-to-German and NIST Chinese-to-English translation tasks.

207. Machine Translation for Machines: the Sentiment Classification Use Case [PDF] 返回目录
  EMNLP 2019.
  Amirhossein Tebbifakhr, Luisa Bentivogli, Matteo Negri, Marco Turchi
We propose a neural machine translation (NMT) approach that, instead of pursuing adequacy and fluency (“human-oriented” quality criteria), aims to generate translations that are best suited as input to a natural language processing component designed for a specific downstream task (a “machine-oriented” criterion). Towards this objective, we present a reinforcement learning technique based on a new candidate sampling strategy, which exploits the results obtained on the downstream task as weak feedback. Experiments in sentiment classification of Twitter data in German and Italian show that feeding an English classifier with “machine-oriented” translations significantly improves its performance. Classification results outperform those obtained with translations produced by general-purpose NMT models as well as by an approach based on reinforcement learning. Moreover, our results on both languages approximate the classification accuracy computed on gold standard English tweets.

208. HABLex: Human Annotated Bilingual Lexicons for Experiments in Machine Translation [PDF] 返回目录
  EMNLP 2019.
  Brian Thompson, Rebecca Knowles, Xuan Zhang, Huda Khayrallah, Kevin Duh, Philipp Koehn
Bilingual lexicons are valuable resources used by professional human translators. While these resources can be easily incorporated in statistical machine translation, it is unclear how to best do so in the neural framework. In this work, we present the HABLex dataset, designed to test methods for bilingual lexicon integration into neural machine translation. Our data consists of human generated alignments of words and phrases in machine translation test sets in three language pairs (Russian-English, Chinese-English, and Korean-English), resulting in clean bilingual lexicons which are well matched to the reference. We also present two simple baselines - constrained decoding and continued training - and an improvement to continued training to address overfitting.

209. Handling Syntactic Divergence in Low-resource Machine Translation [PDF] 返回目录
  EMNLP 2019.
  Chunting Zhou, Xuezhe Ma, Junjie Hu, Graham Neubig
Despite impressive empirical successes of neural machine translation (NMT) on standard benchmarks, limited parallel data impedes the application of NMT models to many language pairs. Data augmentation methods such as back-translation make it possible to use monolingual data to help alleviate these issues, but back-translation itself fails in extreme low-resource scenarios, especially for syntactically divergent languages. In this paper, we propose a simple yet effective solution, whereby target-language sentences are re-ordered to match the order of the source and used as an additional source of training-time supervision. Experiments with simulated low-resource Japanese-to-English, and real low-resource Uyghur-to-English scenarios find significant improvements over other semi-supervised alternatives.

210. Speculative Beam Search for Simultaneous Translation [PDF] 返回目录
  EMNLP 2019.
  Renjie Zheng, Mingbo Ma, Baigong Zheng, Liang Huang
Beam search is universally used in (full-sentence) machine translation but its application to simultaneous translation remains highly non-trivial, where output words are committed on the fly. In particular, the recently proposed wait-k policy (Ma et al., 2018) is a simple and effective method that (after an initial wait) commits one output word on receiving each input word, making beam search seemingly inapplicable. To address this challenge, we propose a new speculative beam search algorithm that hallucinates several steps into the future in order to reach a more accurate decision by implicitly benefiting from a target language model. This idea makes beam search applicable for the first time to the generation of a single word in each step. Experiments over diverse language pairs show large improvement compared to previous work.
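
The core idea can be pictured as: at each step of a wait-k policy, decode several words ahead with beam search, commit only the first word of the best hypothesis, and discard the speculative tail. A minimal sketch of that single step is below; `beam_search_continue` is a hypothetical helper, not a real library call.

def speculative_step(model, src_prefix, tgt_prefix, beam_search_continue,
                     beam_size=5, lookahead=3):
    """Commit one target word under a wait-k style policy while
    'speculating' several words into the future.

    beam_search_continue(model, src_prefix, tgt_prefix, beam_size, max_new)
    is assumed to return the best continuation as a list of tokens."""
    continuation = beam_search_continue(model, src_prefix, tgt_prefix,
                                        beam_size=beam_size,
                                        max_new=1 + lookahead)
    # only the first speculative word is committed; the rest is thrown away
    return continuation[0] if continuation else None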

211. Exploiting Multilingualism through Multistage Fine-Tuning for Low-Resource Neural Machine Translation [PDF] 返回目录
  EMNLP 2019.
  Raj Dabre, Atsushi Fujita, Chenhui Chu
This paper highlights the impressive utility of multi-parallel corpora for transfer learning in a one-to-many low-resource neural machine translation (NMT) setting. We report on a systematic comparison of multistage fine-tuning configurations, consisting of (1) pre-training on an external large (209k–440k) parallel corpus for English and a helping target language, (2) mixed pre-training or fine-tuning on a mixture of the external and low-resource (18k) target parallel corpora, and (3) pure fine-tuning on the target parallel corpora. Our experiments confirm that multi-parallel corpora are extremely useful despite their scarcity and content-wise redundancy thus exhibiting the true power of multilingualism. Even when the helping target language is not one of the target languages of our concern, our multistage fine-tuning can give 3–9 BLEU score gains over a simple one-to-one model.

212. Unsupervised Domain Adaptation for Neural Machine Translation with Domain-Aware Feature Embeddings [PDF] 返回目录
  EMNLP 2019.
  Zi-Yi Dou, Junjie Hu, Antonios Anastasopoulos, Graham Neubig
The recent success of neural machine translation models relies on the availability of high quality, in-domain data. Domain adaptation is required when domain-specific data is scarce or nonexistent. Previous unsupervised domain adaptation strategies include training the model with in-domain copied monolingual or back-translated data. However, these methods use generic representations for text regardless of domain shift, which makes it infeasible for translation models to control outputs conditional on a specific domain. In this work, we propose an approach that adapts models with domain-aware feature embeddings, which are learned via an auxiliary language modeling task. Our approach allows the model to assign domain-specific representations to words and output sentences in the desired domain. Our empirical results demonstrate the effectiveness of the proposed strategy, achieving consistent improvements in multiple experimental settings. In addition, we show that combining our method with back translation can further improve the performance of the model.

213. Encoders Help You Disambiguate Word Senses in Neural Machine Translation [PDF] 返回目录
  EMNLP 2019.
  Gongbo Tang, Rico Sennrich, Joakim Nivre
Neural machine translation (NMT) has achieved new state-of-the-art performance in translating ambiguous words. However, it is still unclear which component dominates the process of disambiguation. In this paper, we explore the ability of NMT encoders and decoders to disambiguate word senses by evaluating hidden states and investigating the distributions of self-attention. We train a classifier to predict whether a translation is correct given the representation of an ambiguous noun. We find that encoder hidden states outperform word embeddings significantly, which indicates that encoders adequately encode relevant information for disambiguation into hidden states. In contrast to encoders, the effect of the decoder differs across models with different architectures. Moreover, the attention weights and attention entropy show that self-attention can detect ambiguous nouns and distribute more attention to the context.

214. Enhancing Context Modeling with a Query-Guided Capsule Network for Document-level Translation [PDF] 返回目录
  EMNLP 2019.
  Zhengxin Yang, Jinchao Zhang, Fandong Meng, Shuhao Gu, Yang Feng, Jie Zhou
Context modeling is essential to generate coherent and consistent translation for document-level neural machine translation. The widely used method for document-level translation usually compresses the context information into a representation via hierarchical attention networks. However, this method neither considers the relationship between context words nor distinguishes the roles of context words. To address this problem, we propose a query-guided capsule network to cluster context information into different perspectives with which the target translation may be concerned. Experimental results show that our method can significantly outperform strong baselines on multiple data sets of different domains.

215. Simple, Scalable Adaptation for Neural Machine Translation [PDF] 返回目录
  EMNLP 2019.
  Ankur Bapna, Orhan Firat
Fine-tuning pre-trained Neural Machine Translation (NMT) models is the dominant approach for adapting to new languages and domains. However, fine-tuning requires adapting and maintaining a separate model for each target task. We propose a simple yet efficient approach for adaptation in NMT. Our proposed approach consists of injecting tiny task specific adapter layers into a pre-trained model. These lightweight adapters, with just a small fraction of the original model size, adapt the model to multiple individual tasks simultaneously. We evaluate our approach on two tasks: (i) Domain Adaptation and (ii) Massively Multilingual NMT. Experiments on domain adaptation demonstrate that our proposed approach is on par with full fine-tuning on various domains, dataset sizes and model capacities. On a massively multilingual dataset of 103 languages, our adaptation approach bridges the gap between individual bilingual models and one massively multilingual model for most language pairs, paving the way towards universal machine translation.
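
A residual adapter of the kind described, a small bottleneck layer injected into a frozen pre-trained model, can be sketched in a few lines of PyTorch. The placement and sizes used in the paper may differ, so treat this as a generic illustration rather than the authors' exact module.

import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Tiny bottleneck adapter added after a (frozen) Transformer sub-layer."""
    def __init__(self, d_model=512, bottleneck=64):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)

    def forward(self, x):
        # residual connection keeps the pre-trained representation intact
        return x + self.up(torch.relu(self.down(self.norm(x))))

During adaptation, only the adapter parameters would be trained while the pre-trained model's parameters are kept frozen (requires_grad=False), which is what makes per-task adaptation cheap.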

216. Controlling Text Complexity in Neural Machine Translation [PDF] 返回目录
  EMNLP 2019.
  Sweta Agrawal, Marine Carpuat
This work introduces a machine translation task where the output is aimed at audiences of different levels of target language proficiency. We collect a high quality dataset of news articles available in English and Spanish, written for diverse grade levels and propose a method to align segments across comparable bilingual articles. The resulting dataset makes it possible to train multi-task sequence to sequence models that can translate and simplify text jointly. We show that these multi-task models outperform pipeline approaches that translate and simplify text independently.

217. Hierarchical Modeling of Global Context for Document-Level Neural Machine Translation [PDF] 返回目录
  EMNLP 2019.
  Xin Tan, Longyin Zhang, Deyi Xiong, Guodong Zhou
Document-level machine translation (MT) remains challenging due to the difficulty in efficiently using document context for translation. In this paper, we propose a hierarchical model to learn the global context for document-level neural machine translation (NMT). This is done through a sentence encoder to capture intra-sentence dependencies and a document encoder to model document-level inter-sentence consistency and coherence. With this hierarchical architecture, we feed back the extracted global document context to each word in a top-down fashion to distinguish different translations of a word according to its specific surrounding context. In addition, since large-scale in-domain document-level parallel corpora are usually unavailable, we use a two-step training strategy to take advantage of a large-scale corpus with out-of-domain parallel sentence pairs and a small-scale corpus with in-domain parallel document pairs to achieve domain adaptability. Experimental results on several benchmark corpora show that our proposed model can significantly improve document-level translation performance over several strong NMT baselines.

218. Evaluating Pronominal Anaphora in Machine Translation: An Evaluation Measure and a Test Suite [PDF] 返回目录
  EMNLP 2019.
  Prathyusha Jwalapuram, Shafiq Joty, Irina Temnikova, Preslav Nakov
The ongoing neural revolution in machine translation has made it easier to model larger contexts beyond the sentence-level, which can potentially help resolve some discourse-level ambiguities such as pronominal anaphora, thus enabling better translations. Unfortunately, even when the resulting improvements are seen as substantial by humans, they remain virtually unnoticed by traditional automatic evaluation measures like BLEU, as only a few words end up being affected. Thus, specialized evaluation measures are needed. With this aim in mind, we contribute an extensive, targeted dataset that can be used as a test suite for pronoun translation, covering multiple source languages and different pronoun errors drawn from real system translations, for English. We further propose an evaluation measure to differentiate good and bad pronoun translations. We also conduct a user study to report correlations with human judgments.

219. IMaT: Unsupervised Text Attribute Transfer via Iterative Matching and Translation [PDF] 返回目录
  EMNLP 2019.
  Zhijing Jin, Di Jin, Jonas Mueller, Nicholas Matthews, Enrico Santus
Text attribute transfer aims to automatically rewrite sentences such that they possess certain linguistic attributes, while simultaneously preserving their semantic content. This task remains challenging due to a lack of supervised parallel data. Existing approaches try to explicitly disentangle content and attribute information, but this is difficult and often results in poor content-preservation and ungrammaticality. In contrast, we propose a simpler approach, Iterative Matching and Translation (IMaT), which: (1) constructs a pseudo-parallel corpus by aligning a subset of semantically similar sentences from the source and the target corpora; (2) applies a standard sequence-to-sequence model to learn the attribute transfer; (3) iteratively improves the learned transfer function by refining imperfections in the alignment. In sentiment modification and formality transfer tasks, our method outperforms complex state-of-the-art systems by a large margin. As an auxiliary contribution, we produce a publicly-available test set with human-generated transfer references.
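
The matching step can be pictured as nearest-neighbour retrieval between the two corpora in a shared sentence-embedding space. The sketch below shows that generic retrieval step; the `embed` encoder and the similarity threshold are assumptions for illustration, not the authors' components.

import numpy as np

def match_corpora(src_sents, tgt_sents, embed, min_sim=0.7):
    """Build a pseudo-parallel corpus by pairing each source-attribute
    sentence with its most similar target-attribute sentence.

    embed: callable mapping a list of sentences to an (n, d) array of
    L2-normalised vectors (assumed)."""
    S, T = embed(src_sents), embed(tgt_sents)
    sims = S @ T.T                      # cosine similarity for unit vectors
    pairs = []
    for i, row in enumerate(sims):
        j = int(np.argmax(row))
        if row[j] >= min_sim:
            pairs.append((src_sents[i], tgt_sents[j]))
    return pairs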

220. The Challenges of Optimizing Machine Translation for Low Resource Cross-Language Information Retrieval [PDF] 返回目录
  EMNLP 2019.
  Constantine Lignos, Daniel Cohen, Yen-Chieh Lien, Pratik Mehta, W. Bruce Croft, Scott Miller
When performing cross-language information retrieval (CLIR) for lower-resourced languages, a common approach is to retrieve over the output of machine translation (MT). However, there is no established guidance on how to optimize the resulting MT-IR system. In this paper, we examine the relationship between the performance of MT systems and both neural and term frequency-based IR models to identify how CLIR performance can be best predicted from MT quality. We explore performance at varying amounts of MT training data, byte pair encoding (BPE) merge operations, and across two IR collections and retrieval models. We find that the choice of IR collection can substantially affect the predictive power of MT tuning decisions and evaluation, potentially introducing dissociations between MT-only and overall CLIR performance.

221. A Multi-Pairwise Extension of Procrustes Analysis for Multilingual Word Translation [PDF] 返回目录
  EMNLP 2019.
  Hagai Taitelbaum, Gal Chechik, Jacob Goldberger
In this paper we present a novel approach to simultaneously representing multiple languages in a common space. Procrustes Analysis (PA) is commonly used to find the optimal orthogonal word mapping in the bilingual case. The proposed Multi Pairwise Procrustes Analysis (MPPA) is a natural extension of the PA algorithm to multilingual word mapping. Unlike previous PA extensions that require a k-way dictionary, this approach requires only pairwise bilingual dictionaries that are much easier to construct.
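
For reference, the bilingual building block (orthogonal Procrustes) has a closed-form SVD solution; the multilingual extension in the paper combines such pairwise mappings, but the pairwise step itself can be sketched as follows with numpy.

import numpy as np

def procrustes(X, Y):
    """Orthogonal Procrustes: find W minimising ||X W - Y||_F with W orthogonal.

    X, Y: (n, d) matrices of embeddings for n dictionary pairs
    (source words in X, their translations in Y)."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt   # (d, d) orthogonal mapping; map with X @ W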

222. Exploiting Monolingual Data at Scale for Neural Machine Translation [PDF] 返回目录
  EMNLP 2019.
  Lijun Wu, Yiren Wang, Yingce Xia, Tao Qin, Jianhuang Lai, Tie-Yan Liu
While target-side monolingual data has been proven to be very useful to improve neural machine translation (briefly, NMT) through back translation, source-side monolingual data is not well investigated. In this work, we study how to use both the source-side and target-side monolingual data for NMT, and propose an effective strategy leveraging both of them. First, we generate synthetic bitext by translating monolingual data from the two domains into the other domain using the models pretrained on genuine bitext. Next, a model is trained on a noised version of the concatenated synthetic bitext where each source sequence is randomly corrupted. Finally, the model is fine-tuned on the genuine bitext and a clean version of a subset of the synthetic bitext without adding any noise. Our approach achieves state-of-the-art results on WMT16, WMT17, WMT18 English↔German translations and WMT19 German→French translations, which demonstrate the effectiveness of our method. We also conduct a comprehensive study on how each part in the pipeline works.
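
The "noised" training stage can be illustrated with a simple corruption function that drops, blanks, and locally shuffles source tokens before training on the synthetic bitext. The noise types and rates below are placeholders, not the paper's exact settings.

import random

def corrupt(tokens, p_drop=0.1, p_blank=0.1, shuffle_window=3, unk='<unk>'):
    """Randomly corrupt a source-side token list (word drop, word blanking,
    local shuffling) before training on synthetic bitext."""
    out = []
    for tok in tokens:
        r = random.random()
        if r < p_drop:
            continue                 # drop the word entirely
        out.append(unk if r < p_drop + p_blank else tok)
    # light local shuffle: sort by position plus a small random jitter
    keys = [i + random.uniform(0, shuffle_window) for i in range(len(out))]
    return [tok for _, tok in sorted(zip(keys, out))]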

223. Machine Translation With Weakly Paired Documents [PDF] 返回目录
  EMNLP 2019.
  Lijun Wu, Jinhua Zhu, Di He, Fei Gao, Tao Qin, Jianhuang Lai, Tie-Yan Liu
Neural machine translation, which achieves near human-level performance in some languages, strongly relies on large amounts of parallel sentences, which hinders its applicability to low-resource language pairs. Recent works explore the possibility of unsupervised machine translation with monolingual data only, leading to much lower accuracy compared with the supervised one. Observing that weakly paired bilingual documents are much easier to collect than bilingual sentences, e.g., from Wikipedia, news websites or books, in this paper we investigate training translation models with weakly paired bilingual documents. Our approach contains two components. 1) We provide a simple approach to mine implicitly bilingual sentence pairs from document pairs, which can then be used as supervised training signals. 2) We leverage the topic consistency of two weakly paired documents and learn the sentence translation model by constraining the word distribution-level alignments. We evaluate our method on weakly paired documents from Wikipedia on six tasks, the widely used WMT16 German↔English, WMT13 Spanish↔English and WMT16 Romanian↔English translation tasks. We obtain 24.1/30.3, 28.1/27.6 and 30.1/27.6 BLEU points separately, outperforming previous results by more than 5 BLEU points in each direction and reducing the gap between unsupervised translation and supervised translation by up to 50%.

224. The Bottom-up Evolution of Representations in the Transformer: A Study with Machine Translation and Language Modeling Objectives [PDF] 返回目录
  EMNLP 2019.
  Elena Voita, Rico Sennrich, Ivan Titov
We seek to understand how the representations of individual tokens and the structure of the learned feature space evolve between layers in deep neural networks under different learning objectives. We chose the Transformer for our analysis as it has been shown effective with various tasks, including machine translation (MT), standard left-to-right language models (LM) and masked language modeling (MLM). Previous work used black-box probing tasks to show that the representations learned by the Transformer differ significantly depending on the objective. In this work, we use canonical correlation analysis and mutual information estimators to study how information flows across Transformer layers and observe that the choice of the objective determines this process. For example, as we go from bottom to top layers, information about the past in left-to-right language models vanishes and predictions about the future are formed. In contrast, for MLM, representations initially acquire information about the context around the token, partially forgetting the token identity and producing a more generalized token representation. The token identity then gets recreated at the top MLM layers.

225. Understanding Data Augmentation in Neural Machine Translation: Two Perspectives towards Generalization [PDF] 返回目录
  EMNLP 2019.
  Guanlin Li, Lemao Liu, Guoping Huang, Conghui Zhu, Tiejun Zhao
Many Data Augmentation (DA) methods have been proposed for neural machine translation. Existing works measure the superiority of DA methods in terms of their performance on a specific test set, but we find that some DA methods do not exhibit consistent improvements across translation tasks. Based on this observation, this paper makes an initial attempt to answer a fundamental question: what benefits, which are consistent across different methods and tasks, does DA in general obtain? Inspired by recent theoretic advances in deep learning, the paper understands DA from two perspectives towards the generalization ability of a model: input sensitivity and prediction margin, which are defined independently of a specific test set and thereby may lead to findings with relatively low variance. Extensive experiments show that relatively consistent benefits across five DA methods and four translation tasks are achieved regarding both perspectives.

226. Simple and Effective Noisy Channel Modeling for Neural Machine Translation [PDF] 返回目录
  EMNLP 2019.
  Kyra Yee, Yann Dauphin, Michael Auli
Previous work on neural noisy channel modeling relied on latent variable models that incrementally process the source and target sentence. This makes decoding decisions based on partial source prefixes even though the full source is available. We pursue an alternative approach based on standard sequence to sequence models which utilize the entire source. These models perform remarkably well as channel models, even though they have neither been trained on, nor designed to factor over incomplete target sentences. Experiments with neural language models trained on billions of words show that noisy channel models can outperform a direct model by up to 3.2 BLEU on WMT’17 German-English translation. We evaluate on four language pairs, and our channel models consistently outperform strong alternatives such as right-to-left reranking models and ensembles of direct models.
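
A simple way to read the noisy channel combination is as a reranking score over candidate translations, combining the direct model p(y|x), the channel model p(x|y) and a language model p(y). The weights and length normalisation below are illustrative choices, and the three scoring functions are assumed to come from separately trained models.

def noisy_channel_score(log_p_direct, log_p_channel, log_p_lm,
                        tgt_len, lam1=1.0, lam2=0.3, lam3=0.3):
    """Combine direct, channel and language model scores for one candidate y
    (all arguments are log-probabilities; weights are illustrative)."""
    return (lam1 * log_p_direct
            + lam2 * log_p_channel
            + lam3 * log_p_lm) / max(tgt_len, 1)

The candidate with the highest combined score is then selected, e.g. by calling this function on every hypothesis in an n-best list produced by the direct model.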

227. Hint-Based Training for Non-Autoregressive Machine Translation [PDF] 返回目录
  EMNLP 2019.
  Zhuohan Li, Zi Lin, Di He, Fei Tian, Tao Qin, Liwei Wang, Tie-Yan Liu
Due to the unparallelizable nature of the autoregressive factorization, AutoRegressive Translation (ART) models have to generate tokens sequentially during decoding and thus suffer from high inference latency. Non-AutoRegressive Translation (NART) models were proposed to reduce the inference time, but could only achieve inferior translation accuracy. In this paper, we proposed a novel approach to leveraging the hints from hidden states and word alignments to help the training of NART models. The results achieve significant improvement over previous NART models for the WMT14 En-De and De-En datasets and are even comparable to a strong LSTM-based ART baseline but one order of magnitude faster in inference.

228. The FLORES Evaluation Datasets for Low-Resource Machine Translation: Nepali–English and Sinhala–English [PDF] 返回目录
  EMNLP 2019.
  Francisco Guzmán, Peng-Jen Chen, Myle Ott, Juan Pino, Guillaume Lample, Philipp Koehn, Vishrav Chaudhary, Marc’Aurelio Ranzato
For machine translation, a vast majority of language pairs in the world are considered low-resource because they have little parallel data available. Besides the technical challenges of learning with limited supervision, it is difficult to evaluate methods trained on low-resource language pairs because of the lack of freely and publicly available benchmarks. In this work, we introduce the FLORES evaluation datasets for Nepali–English and Sinhala–English, based on sentences translated from Wikipedia. Compared to English, these are languages with very different morphology and syntax, for which little out-of-domain parallel data is available and for which relatively large amounts of monolingual data are freely available. We describe our process to collect and cross-check the quality of translations, and we report baseline performance using several learning settings: fully supervised, weakly supervised, semi-supervised, and fully unsupervised. Our experiments demonstrate that current state-of-the-art methods perform rather poorly on this benchmark, posing a challenge to the research community working on low-resource MT. Data and code to reproduce our experiments are available at https://github.com/facebookresearch/flores.

229. INMT: Interactive Neural Machine Translation Prediction [PDF] 返回目录
  EMNLP 2019. System Demonstrations
  Sebastin Santy, Sandipan Dandapat, Monojit Choudhury, Kalika Bali
In this paper, we demonstrate an Interactive Machine Translation interface, that assists human translators with on-the-fly hints and suggestions. This makes the end-to-end translation process faster, more efficient and creates high-quality translations. We augment the OpenNMT backend with a mechanism to accept the user input and generate conditioned translations.

230. Proceedings of the 6th Workshop on Asian Translation [PDF] 返回目录
  EMNLP 2019. the 6th Workshop on Asian Translation
  Toshiaki Nakazawa, Chenchen Ding, Raj Dabre, Anoop Kunchukuttan, Nobushige Doi, Yusuke Oda, Ondřej Bojar, Shantipriya Parida, Isao Goto, Hideya Mino


231. Overview of the 6th Workshop on Asian Translation [PDF] 返回目录
  EMNLP 2019. the 6th Workshop on Asian Translation
  Toshiaki Nakazawa, Nobushige Doi, Shohei Higashiyama, Chenchen Ding, Raj Dabre, Hideya Mino, Isao Goto, Win Pa Pa, Anoop Kunchukuttan, Yusuke Oda, Shantipriya Parida, Ondřej Bojar, Sadao Kurohashi
This paper presents the results of the shared tasks from the 6th workshop on Asian translation (WAT2019) including Ja↔En, Ja↔Zh scientific paper translation subtasks, Ja↔En, Ja↔Ko, Ja↔En patent translation subtasks, Hi↔En, My↔En, Km↔En, Ta↔En mixed domain subtasks and Ru↔Ja news commentary translation task. For the WAT2019, 25 teams participated in the shared tasks. We also received 10 research paper submissions out of which 61 were accepted. About 400 translation results were submitted to the automatic evaluation server, and selected submissions were manually evaluated.

232. Compact and Robust Models for Japanese-English Character-level Machine Translation [PDF] 返回目录
  EMNLP 2019. the 6th Workshop on Asian Translation
  Jinan Dai, Kazunori Yamaguchi
Character-level translation has been shown to achieve preferable translation quality without explicit segmentation, but training a character-level model needs a lot of hardware resources. In this paper, we introduce two character-level translation models, a mid-gated model and a multi-attention model, for Japanese-English translation. We show that the mid-gated model achieved the better performance with respect to BLEU scores. We also show that a relatively narrow beam of width 4 or 5 was sufficient for the mid-gated model. As for unknown words, we show that the mid-gated model could translate words containing Katakana by coining a close word. We also show that the model managed to produce tolerable results for heavily noised sentences, even though the model was trained on a dataset without noise.

233. Controlling Japanese Honorifics in English-to-Japanese Neural Machine Translation [PDF] 返回目录
  EMNLP 2019. the 6th Workshop on Asian Translation
  Weston Feely, Eva Hasler, Adrià de Gispert
In the Japanese language different levels of honorific speech are used to convey respect, deference, humility, formality and social distance. In this paper, we present a method for controlling the level of formality of Japanese output in English-to-Japanese neural machine translation (NMT). By using heuristics to identify honorific verb forms, we classify Japanese sentences as being one of three levels of informal, polite, or formal speech in parallel text. The English source side is marked with a feature that identifies the level of honorific speech present in the Japanese target side. We use this parallel text to train an English-Japanese NMT model capable of producing Japanese translations in different honorific speech styles for the same English input sentence.
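
The control mechanism amounts to prepending a style token to each English source sentence so that the model learns to condition on it. A minimal sketch of the data preparation is below; the tag strings and the honorific classifier are assumptions for illustration, not the authors' exact labels.

def tag_source(en_sentence, ja_sentence, classify_honorific):
    """Prefix the English source with a formality tag derived from the
    Japanese target, so an NMT model can learn to condition on it.

    classify_honorific(ja_sentence) is assumed to return one of
    'informal', 'polite', 'formal' based on verb-form heuristics."""
    level = classify_honorific(ja_sentence)
    return f"<{level}> {en_sentence}"

At test time, the desired tag is simply prepended to the input, e.g. translating "<formal> Please wait a moment." to obtain the formal Japanese rendering.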

234. English to Hindi Multi-modal Neural Machine Translation and Hindi Image Captioning [PDF] 返回目录
  EMNLP 2019. the 6th Workshop on Asian Translation
  Sahinur Rahman Laskar, Rohit Pratap Singh, Partha Pakray, Sivaji Bandyopadhyay
With the widespread use of Machine Translation (MT) techniques, attempts are being made to minimize the communication gap among people from diverse linguistic backgrounds. We participated in the Workshop on Asian Translation 2019 (WAT2019) multi-modal translation task. There are three submission tracks, namely multi-modal translation, Hindi-only image captioning, and text-only translation for English to Hindi. The main challenge is to provide a precise MT output. The multi-modal concept incorporates textual and visual features in the translation task. In this work, the multi-modal translation track relies on pre-trained convolutional neural networks (CNN) with the 19-layer Visual Geometry Group network (VGG19) to extract image features and an attention-based Neural Machine Translation (NMT) system for translation. A merge model of a recurrent neural network (RNN) and a CNN is used for the Hindi-only image captioning. The text-only translation track is based on the transformer model of the NMT system. The official results evaluated at the WAT2019 translation task show that our multi-modal NMT system achieved a Bilingual Evaluation Understudy (BLEU) score of 20.37, a Rank-based Intuitive Bilingual Evaluation Score (RIBES) of 0.642838, and an Adequacy-Fluency Metrics (AMFM) score of 0.668260 on the challenge test data, and a BLEU score of 40.55, RIBES of 0.760080, and an AMFM score of 0.770860 on the evaluation test data for English to Hindi multi-modal translation.

235. Supervised and Unsupervised Machine Translation for Myanmar-English and Khmer-English [PDF] 返回目录
  EMNLP 2019. the 6th Workshop on Asian Translation
  Benjamin Marie, Hour Kaing, Aye Myat Mon, Chenchen Ding, Atsushi Fujita, Masao Utiyama, Eiichiro Sumita
This paper presents the NICT’s supervised and unsupervised machine translation systems for the WAT2019 Myanmar-English and Khmer-English translation tasks. For all the translation directions, we built state-of-the-art supervised neural (NMT) and statistical (SMT) machine translation systems, using cleaned and normalized monolingual data. Our combination of NMT and SMT performed among the best systems for the four translation directions. We also investigated the feasibility of unsupervised machine translation for low-resource and distant language pairs and confirmed observations of previous work showing that unsupervised MT is still largely unable to deal with them.

236. English-Myanmar Supervised and Unsupervised NMT: NICT’s Machine Translation Systems at WAT-2019 [PDF] 返回目录
  EMNLP 2019. the 6th Workshop on Asian Translation
  Rui Wang, Haipeng Sun, Kehai Chen, Chenchen Ding, Masao Utiyama, Eiichiro Sumita
This paper presents the NICT’s participation (team ID: NICT) in the 6th Workshop on Asian Translation (WAT-2019) shared translation task, specifically the Myanmar (Burmese)-English task in both translation directions. We built neural machine translation (NMT) systems for these tasks. Our NMT systems were trained with language model pretraining, and back-translation was also applied. Our NMT systems rank third in English-to-Myanmar and second in Myanmar-to-English according to BLEU score.

237. UCSMNLP: Statistical Machine Translation for WAT 2019 [PDF] 返回目录
  EMNLP 2019. the 6th Workshop on Asian Translation
  Aye Thida, Nway Nway Han, Sheinn Thawtar Oo, Khin Thet Htar
This paper presents UCSMNLP’s submission to the WAT 2019 Translation Tasks, focusing on Myanmar-English translation. A phrase-based statistical machine translation (PBSMT) system is built using additional resources: a Named Entity Recognition (NER) corpus and a bilingual dictionary created with Google Translate (GT). The system is further augmented with a listwise reranking process to improve translation quality, and tuning is done by changing the initial distortion weight. The experimental results show that PBSMT with these additional resources, an initial distortion weight of 0.4, and the listwise reranking function outperforms the baseline system.

238. NTT Neural Machine Translation Systems at WAT 2019 [PDF] 返回目录
  EMNLP 2019. the 6th Workshop on Asian Translation
  Makoto Morishita, Jun Suzuki, Masaaki Nagata
In this paper, we describe our systems that were submitted to the translation shared tasks at WAT 2019. This year, we participated in two distinct types of subtasks, a scientific paper subtask and a timely disclosure subtask, where we only considered English-to-Japanese and Japanese-to-English translation directions. We submitted two systems (En-Ja and Ja-En) for the scientific paper subtask and two systems (Ja-En, texts, items) for the timely disclosure subtask. Three of our four systems obtained the best human evaluation performances. We also confirmed that our new additional web-crawled parallel corpus improves the performance in unconstrained settings.

239. Neural Machine Translation System using a Content-equivalently Translated Parallel Corpus for the Newswire Translation Tasks at WAT 2019 [PDF] 返回目录
  EMNLP 2019. the 6th Workshop on Asian Translation
  Hideya Mino, Hitoshi Ito, Isao Goto, Ichiro Yamada, Hideki Tanaka, Takenobu Tokunaga
This paper describes NHK and NHK Engineering System (NHK-ES)’s submission to the newswire translation tasks of WAT 2019 in both directions of Japanese→English and English→Japanese. In addition to the JIJI Corpus that was officially provided by the task organizer, we developed a corpus of 0.22M sentence pairs by manually translating Japanese news sentences into English in a content-equivalent manner. The content-equivalent corpus was effective for improving translation quality, and our systems achieved the best human evaluation scores in the newswire translation tasks at WAT 2019.

240. Facebook AI’s WAT19 Myanmar-English Translation Task Submission [PDF] 返回目录
  EMNLP 2019. the 6th Workshop on Asian Translation
  Peng-Jen Chen, Jiajun Shen, Matthew Le, Vishrav Chaudhary, Ahmed El-Kishky, Guillaume Wenzek, Myle Ott, Marc’Aurelio Ranzato
This paper describes Facebook AI’s submission to the WAT 2019 Myanmar-English translation task. Our baseline systems are BPE-based transformer models. We explore methods to leverage monolingual data to improve generalization, including self-training, back-translation and their combination. We further improve results by using noisy channel re-ranking and ensembling. We demonstrate that these techniques can significantly improve not only a system trained with additional monolingual data, but even the baseline system trained exclusively on the provided small parallel dataset. Our system ranks first in both directions according to human evaluation and BLEU, with a gain of over 8 BLEU points above the second best system.

241. Combining Translation Memory with Neural Machine Translation [PDF] 返回目录
  EMNLP 2019. the 6th Workshop on Asian Translation
  Akiko Eriguchi, Spencer Rarrick, Hitokazu Matsushita
In this paper, we report our submission systems (geoduck) to the Timely Disclosure task at the 6th Workshop on Asian Translation (WAT) (Nakazawa et al., 2019). Our system employs a combined approach of translation memory and Neural Machine Translation (NMT) models, where we can select final translation outputs from either a translation memory or an NMT system, when the similarity score of a test source sentence exceeds the predefined threshold. We observed that this combination approach significantly improves the translation performance on the Timely Disclosure corpus, as compared to a standalone NMT system. We also conducted source-based direct assessment on the final output, and we discuss the comparison between human references and each system’s output.

242. LTRC-MT Simple & Effective Hindi-English Neural Machine Translation Systems at WAT 2019 [PDF] 返回目录
  EMNLP 2019. the 6th Workshop on Asian Translation
  Vikrant Goyal, Dipti Misra Sharma
This paper describes the Neural Machine Translation systems of IIIT-Hyderabad (LTRC-MT) for WAT 2019 Hindi-English shared task. We experimented with both Recurrent Neural Networks & Transformer architectures. We also show the results of our experiments of training NMT models using additional data via backtranslation.

243. Supervised neural machine translation based on data augmentation and improved training & inference process [PDF] 返回目录
  EMNLP 2019. the 6th Workshop on Asian Translation
  Yixuan Tong, Liang Liang, Boyan Liu, Shanshan Jiang, Bin Dong
This is the second time SRCB has participated in WAT. This paper describes our neural machine translation systems for the shared translation tasks of WAT 2019. We participated in the ASPEC tasks and submitted results for four language pairs: English-Japanese, Japanese-English, Chinese-Japanese, and Japanese-Chinese. We employed the Transformer model as the baseline and experimented with relative position representations, data augmentation, deeper models, and ensembling. Experiments show that all these methods yield substantial improvements.

244. Our Neural Machine Translation Systems for WAT 2019 [PDF] 返回目录
  EMNLP 2019. the 6th Workshop on Asian Translation
  Wei Yang, Jun Ogata
In this paper, we describe our Neural Machine Translation (NMT) systems for the WAT 2019 translation tasks we focus on. This year we participate in the scientific paper tasks and focus on the English-Japanese language pair. We use the Transformer model throughout this work to explore the power of the Transformer architecture, which relies on the self-attention mechanism. We use different NMT toolkits/libraries to implement and train the Transformer model, and we use different subword segmentation strategies depending on the toolkit/library. We report not only the translation accuracy obtained with the absolute position encodings introduced in the original Transformer model, but also the improvements in translation accuracy obtained by replacing absolute position encodings with relative position representations. We also ensemble several independently trained Transformer models to further improve translation accuracy.

245. Japanese-Russian TMU Neural Machine Translation System using Multilingual Model for WAT 2019 [PDF] 返回目录
  EMNLP 2019. the 6th Workshop on Asian Translation
  Aizhan Imankulova, Masahiro Kaneko, Mamoru Komachi
We introduce our system that is submitted to the News Commentary task (Japanese<->Russian) of the 6th Workshop on Asian Translation. The goal of this shared task is to study extremely low resource situations for distant language pairs. It is known that using parallel corpora of a different language pair as training data is effective for multilingual neural machine translation models in extremely low resource scenarios. Therefore, to improve the translation quality of the Japanese<->Russian language pair, our method leverages other in-domain Japanese-English and English-Russian parallel corpora as additional training data for our multilingual NMT model.

246. NLPRL at WAT2019: Transformer-based Tamil – English Indic Task Neural Machine Translation System [PDF] 返回目录
  EMNLP 2019. the 6th Workshop on Asian Translation
  Amit Kumar, Anil Kumar Singh
This paper describes the Machine Translation system for the Tamil-English Indic Task organized at WAT 2019. We use a Transformer-based architecture for Neural Machine Translation.

247. Idiap NMT System for WAT 2019 Multimodal Translation Task [PDF] 返回目录
  EMNLP 2019. the 6th Workshop on Asian Translation
  Shantipriya Parida, Ondřej Bojar, Petr Motlicek
This paper describes the Idiap submission to WAT 2019 for the English-Hindi Multi-Modal Translation Task. We have used the state-of-the-art Transformer model and utilized the IITB English-Hindi parallel corpus as an additional data source. Among the different tracks of the multi-modal task, we have participated in the “Text-Only” track for the evaluation and challenge test sets. Our submission tops in its track among the competitors in terms of both automatic and manual evaluation. Based on automatic scores, our text-only submission also outperforms systems that consider visual information in the “multi-modal translation” task.

248. WAT2019: English-Hindi Translation on Hindi Visual Genome Dataset [PDF] 返回目录
  EMNLP 2019. the 6th Workshop on Asian Translation
  Loitongbam Sanayai Meetei, Thoudam Doren Singh, Sivaji Bandyopadhyay
Multimodal translation is the task of translating a source language into a target language with the help of a parallel text corpus paired with images that represent the contextual details of the text. In this paper, we carried out an extensive comparison to evaluate the benefits of using a multimodal approach for translating English text into a low-resource language, Hindi, as part of the WAT2019 shared task. We carried out the translation of English to Hindi in three separate tasks on both the evaluation and challenge datasets: first using only the parallel text corpora, then through an image caption generation approach, and finally with the multimodal approach. Our experiments show a significant improvement with the multimodal approach over the other approaches.

249. UCSYNLP-Lab Machine Translation Systems for WAT 2019 [PDF] 返回目录
  EMNLP 2019. the 6th Workshop on Asian Translation
  Yimon ShweSin, Win Pa Pa, KhinMar Soe
This paper describes the UCSYNLP-Lab submission to WAT 2019 for the Myanmar-English translation tasks in both directions. We used neural machine translation systems with an attention model, utilizing the UCSY corpus and the ALT corpus. In the attention-based NMT models, we use both word-level and syllable-level segmentation. Notably, the UCSY corpus was cleaned for WAT 2019, so the UCSY corpus used for WAT 2019 is not identical to the one used in WAT 2018. Experiments show that the translation systems produce substantial improvements.

250. Sentiment Aware Neural Machine Translation [PDF] 返回目录
  EMNLP 2019. the 6th Workshop on Asian Translation
  Chenglei Si, Kui Wu, Ai Ti Aw, Min-Yen Kan
Sentiment ambiguous lexicons refer to words where their polarity depends strongly on context. As such, when the context is absent, their translations or their embedded sentence ends up (incorrectly) being dependent on the training data. While neural machine translation (NMT) has achieved great progress in recent years, most systems aim to produce one single correct translation for a given source sentence. We investigate the translation variation in two sentiment scenarios. We perform experiments to study the preservation of sentiment during translation with three different methods that we propose. We conducted tests with both sentiment and non-sentiment bearing contexts to examine the effectiveness of our methods. We show that NMT can generate both positive- and negative-valent translations of a source sentence, based on a given input sentiment label. Empirical evaluations show that our valence-sensitive embedding (VSE) method significantly outperforms a sequence-to-sequence (seq2seq) baseline, both in terms of BLEU score and ambiguous word translation accuracy in test, given non-sentiment bearing contexts.
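One plausible way to realize a valence-sensitive embedding is to add a learned embedding of a sentence-level sentiment label to the token embeddings. The module below is a hedged sketch of that wiring, with assumed label conventions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ValenceSensitiveEmbedding(nn.Module):
    """Hedged sketch: token embeddings plus a learned embedding of a
    sentence-level sentiment label (0=negative, 1=neutral, 2=positive).
    An assumption about how a VSE-style input could be wired, not the
    authors' implementation."""
    def __init__(self, vocab_size: int, d_model: int, n_labels: int = 3):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        self.val = nn.Embedding(n_labels, d_model)

    def forward(self, token_ids, sentiment_label):
        # token_ids: (batch, seq_len); sentiment_label: (batch,)
        return self.tok(token_ids) + self.val(sentiment_label).unsqueeze(1)
```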

251. Overcoming the Rare Word Problem for low-resource language pairs in Neural Machine Translation [PDF] 返回目录
  EMNLP 2019. the 6th Workshop on Asian Translation
  Thi-Vinh Ngo, Thanh-Le Ha, Phuong-Thai Nguyen, Le-Minh Nguyen
Among the six challenges of neural machine translation (NMT) coined by Koehn and Knowles (2017), the rare-word problem is considered the most severe one, especially in translation of low-resource languages. In this paper, we propose three solutions to address rare words in neural machine translation systems. First, we enhance the source context to predict the target words by connecting the source embeddings directly to the output of the attention component in NMT. Second, we propose an algorithm to learn the morphology of unknown English words in a supervised way in order to minimize the adverse effect of the rare-word problem. Finally, we exploit synonymous relations from WordNet to overcome the out-of-vocabulary (OOV) problem of NMT. We evaluate our approaches on two low-resource language pairs: English-Vietnamese and Japanese-Vietnamese. In our experiments, we have achieved significant improvements of up to roughly +1.0 BLEU points in both language pairs.
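The WordNet step in particular can be sketched concretely: before translation, an out-of-vocabulary source word is replaced by an in-vocabulary synonym when one exists. The snippet below assumes NLTK with the WordNet corpus installed and simplifies the paper's procedure.

```python
# Simplified stand-in for the WordNet-based OOV handling; assumes NLTK with the
# WordNet corpus installed.
from nltk.corpus import wordnet as wn

def replace_oov(tokens, vocab):
    """Replace out-of-vocabulary tokens with an in-vocabulary WordNet synonym."""
    out = []
    for tok in tokens:
        if tok in vocab:
            out.append(tok)
            continue
        synonyms = {lemma.name().replace("_", " ")
                    for synset in wn.synsets(tok)
                    for lemma in synset.lemmas()}
        out.append(next((s for s in synonyms if s in vocab), tok))
    return out
```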

252. Neural Arabic Text Diacritization: State of the Art Results and a Novel Approach for Machine Translation [PDF] 返回目录
  EMNLP 2019. the 6th Workshop on Asian Translation
  Ali Fadel, Ibraheem Tuffaha, Bara’ Al-Jawarneh, Mahmoud Al-Ayyoub
In this work, we present several deep learning models for the automatic diacritization of Arabic text. Our models are built using two main approaches, viz. Feed-Forward Neural Network (FFNN) and Recurrent Neural Network (RNN), with several enhancements such as 100-hot encoding, embeddings, Conditional Random Field (CRF) and Block-Normalized Gradient (BNG). The models are tested on the only freely available benchmark dataset and the results show that our models are either better or on par with other models, which require language-dependent post-processing steps, unlike ours. Moreover, we show that diacritics in Arabic can be used to enhance the models of NLP tasks such as Machine Translation (MT) by proposing the Translation over Diacritization (ToD) approach.

253. Neural Speech Translation using Lattice Transformations and Graph Networks [PDF] 返回目录
  EMNLP 2019. the Thirteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-13)
  Daniel Beck, Trevor Cohn, Gholamreza Haffari
Speech translation systems usually follow a pipeline approach, using word lattices as an intermediate representation. However, previous work assumes access to the original transcriptions used to train the ASR system, which can limit applicability in real scenarios. In this work we propose an approach for speech translation through lattice transformations and neural models based on graph networks. Experimental results show that our approach reaches competitive performance without relying on transcriptions, while also being orders of magnitude faster than previous work.

254. Multilingual Whispers: Generating Paraphrases with Translation [PDF] 返回目录
  EMNLP 2019. the 5th Workshop on Noisy User-generated Text (W-NUT 2019)
  Christian Federmann, Oussama Elachqar, Chris Quirk
Naturally occurring paraphrase data, such as multiple news stories about the same event, is a useful but rare resource. This paper compares translation-based paraphrase gathering using human, automatic, or hybrid techniques to monolingual paraphrasing by experts and non-experts. We gather translations, paraphrases, and empirical human quality assessments of these approaches. Neural machine translation techniques, especially when pivoting through related languages, provide a relatively robust source of paraphrases with diversity comparable to expert human paraphrases. Surprisingly, human translators do not reliably outperform neural systems. The resulting data release will not only be a useful test set, but will also allow additional explorations in translation and paraphrase quality assessments and relationships.

255. Training on Synthetic Noise Improves Robustness to Natural Noise in Machine Translation [PDF] 返回目录
  EMNLP 2019. the 5th Workshop on Noisy User-generated Text (W-NUT 2019)
  Vladimir Karpukhin, Omer Levy, Jacob Eisenstein, Marjan Ghazvininejad
Contemporary machine translation systems achieve greater coverage by applying subword models such as BPE and character-level CNNs, but these methods are highly sensitive to orthographical variations such as spelling mistakes. We show how training on a mild amount of random synthetic noise can dramatically improve robustness to these variations, without diminishing performance on clean text. We focus on translation performance on natural typos, and show that robustness to such noise can be achieved using a balanced diet of simple synthetic noises at training time, without access to the natural noise data or distribution.
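A toy version of the synthetic-noise idea is easy to sketch: corrupt words at training time with simple random character edits. The noise types and rate below are illustrative assumptions, not the exact "diet" used in the paper.

```python
import random

def add_synthetic_noise(word: str, p: float = 0.1) -> str:
    """Apply one random character edit (swap / drop / duplicate) with probability p.
    Noise types and rate are illustrative, not the paper's exact settings."""
    if len(word) < 2 or random.random() > p:
        return word
    i = random.randrange(len(word) - 1)
    op = random.choice(["swap", "drop", "dup"])
    if op == "swap":
        return word[:i] + word[i + 1] + word[i] + word[i + 2:]
    if op == "drop":
        return word[:i] + word[i + 1:]
    return word[:i] + word[i] + word[i:]

noisy_line = " ".join(add_synthetic_noise(w) for w in "training on synthetic noise".split())
```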

256. Improving Neural Machine Translation Robustness via Data Augmentation: Beyond Back-Translation [PDF] 返回目录
  EMNLP 2019. the 5th Workshop on Noisy User-generated Text (W-NUT 2019)
  Zhenhao Li, Lucia Specia
Neural Machine Translation (NMT) models have proved strong when translating clean texts, but they are very sensitive to noise in the input. Improving the robustness of NMT models can be seen as a form of “domain” adaptation to noise. The recently created Machine Translation on Noisy Text task corpus provides noisy-clean parallel data for a few language pairs, but this data is very limited in size and diversity. The state-of-the-art approaches are heavily dependent on large volumes of back-translated data. This paper has two main contributions: firstly, we propose new data augmentation methods to extend limited noisy data and further improve NMT robustness to noise while keeping the models small. Secondly, we explore the effect of utilizing noise from external data in the form of speech transcripts and show that it could help robustness.

257. Phonetic Normalization for Machine Translation of User Generated Content [PDF] 返回目录
  EMNLP 2019. the 5th Workshop on Noisy User-generated Text (W-NUT 2019)
  José Carlos Rosales Núñez, Djamé Seddah, Guillaume Wisniewski
We present an approach to correct noisy User Generated Content (UGC) in French, aiming to produce a pretreatment pipeline that improves Machine Translation for this kind of non-canonical corpora. To do so, we implemented a character-based neural phonetizer that produces IPA pronunciations of words. In this way, we intend to correct grammar, vocabulary, and accentuation errors often present in noisy UGC corpora. Our method leverages the fact that some errors are due to confusion induced by words with similar pronunciation, which can be corrected using a phonetic look-up table to produce normalization candidates. These potential corrections are then encoded in a lattice and ranked using a language model to output the most probable corrected phrase. Compared to other phonetizers, our method boosts a Transformer-based machine translation system on UGC.

258. Proceedings of the 3rd Workshop on Neural Generation and Translation [PDF] 返回目录
  EMNLP 2019. the 3rd Workshop on Neural Generation and Translation
  Alexandra Birch, Andrew Finch, Hiroaki Hayashi, Ioannis Konstas, Thang Luong, Graham Neubig, Yusuke Oda, Katsuhito Sudoh


259. Findings of the Third Workshop on Neural Generation and Translation [PDF] 返回目录
  EMNLP 2019. the 3rd Workshop on Neural Generation and Translation
  Hiroaki Hayashi, Yusuke Oda, Alexandra Birch, Ioannis Konstas, Andrew Finch, Minh-Thang Luong, Graham Neubig, Katsuhito Sudoh
This document describes the findings of the Third Workshop on Neural Generation and Translation, held in concert with the annual Conference on Empirical Methods in Natural Language Processing (EMNLP 2019). First, we summarize the research trends of papers presented in the proceedings. Second, we describe the results of the two shared tasks: 1) efficient neural machine translation (NMT), where participants were tasked with creating NMT systems that are both accurate and efficient, and 2) document generation and translation (DGT), where participants were tasked with developing systems that generate summaries from structured data, potentially with assistance from text in another language.

260. Recycling a Pre-trained BERT Encoder for Neural Machine Translation [PDF] 返回目录
  EMNLP 2019. the 3rd Workshop on Neural Generation and Translation
  Kenji Imamura, Eiichiro Sumita
In this paper, a pre-trained Bidirectional Encoder Representations from Transformers (BERT) model is applied to Transformer-based neural machine translation (NMT). In contrast to monolingual tasks, the number of unlearned model parameters in an NMT decoder is as large as the number of learned parameters in the BERT model. To train all the models appropriately, we employ two-stage optimization, which first trains only the unlearned parameters by freezing the BERT model, and then fine-tunes all the sub-models. In our experiments, stable two-stage optimization was achieved, whereas the BLEU scores of direct fine-tuning were extremely low. Consequently, the BLEU scores of the proposed method were better than those of the Transformer base model and the same model without pre-training. Additionally, we confirmed that NMT with the BERT encoder is more effective in low-resource settings.
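The two-stage optimization can be sketched as follows; `model.bert_encoder` and `train_fn` are placeholder names for this illustration, and the learning rates are assumptions rather than the paper's settings.

```python
import torch

def two_stage_finetune(model, train_fn, stage1_steps: int, stage2_steps: int):
    """Hedged sketch of two-stage optimization. `model.bert_encoder` and
    `train_fn(model, optimizer, steps)` are placeholder names; learning rates
    are assumptions, not the paper's settings."""
    # Stage 1: freeze the pre-trained BERT encoder and train only the
    # remaining (unlearned) parameters, e.g. the decoder.
    for p in model.bert_encoder.parameters():
        p.requires_grad = False
    opt = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-4)
    train_fn(model, opt, stage1_steps)

    # Stage 2: unfreeze everything and fine-tune the full model.
    for p in model.parameters():
        p.requires_grad = True
    opt = torch.optim.Adam(model.parameters(), lr=2e-5)
    train_fn(model, opt, stage2_steps)
```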

261. Domain Differential Adaptation for Neural Machine Translation [PDF] 返回目录
  EMNLP 2019. the 3rd Workshop on Neural Generation and Translation
  Zi-Yi Dou, Xinyi Wang, Junjie Hu, Graham Neubig
Neural networks are known to be data hungry and domain sensitive, but it is nearly impossible to obtain large quantities of labeled data for every domain we are interested in. This necessitates the use of domain adaptation strategies. One common strategy encourages generalization by aligning the global distribution statistics between source and target domains, but one drawback is that the statistics of different domains or tasks are inherently divergent, and smoothing over these differences can lead to sub-optimal performance. In this paper, we propose the framework of Domain Differential Adaptation (DDA), where instead of smoothing over these differences we embrace them, directly modeling the difference between domains using models in a related task. We then use these learned domain differentials to adapt models for the target task accordingly. Experimental results on domain adaptation for neural machine translation demonstrate the effectiveness of this strategy, achieving consistent improvements over other alternative adaptation strategies in multiple experimental settings.

262. Zero-Resource Neural Machine Translation with Monolingual Pivot Data [PDF] 返回目录
  EMNLP 2019. the 3rd Workshop on Neural Generation and Translation
  Anna Currey, Kenneth Heafield
Zero-shot neural machine translation (NMT) is a framework that uses source-pivot and target-pivot parallel data to train a source-target NMT system. An extension to zero-shot NMT is zero-resource NMT, which generates pseudo-parallel corpora using a zero-shot system and further trains the zero-shot system on that data. In this paper, we expand on zero-resource NMT by incorporating monolingual data in the pivot language into training; since the pivot language is usually the highest-resource language of the three, we expect monolingual pivot-language data to be most abundant. We propose methods for generating pseudo-parallel corpora using pivot-language monolingual data and for leveraging the pseudo-parallel corpora to improve the zero-shot NMT system. We evaluate these methods for a high-resource language pair (German-Russian) using English as the pivot. We show that our proposed methods yield consistent improvements over strong zero-shot and zero-resource baselines and even catch up to pivot-based models in BLEU (while not requiring the two-pass inference that pivot models require).

263. On the use of BERT for Neural Machine Translation [PDF] 返回目录
  EMNLP 2019. the 3rd Workshop on Neural Generation and Translation
  Stephane Clinchant, Kweon Woo Jung, Vassilina Nikoulina
Exploiting large pretrained models for various NMT tasks has gained a lot of visibility recently. In this work we study how BERT pretrained models could be exploited for supervised Neural Machine Translation. We compare various ways to integrate a pretrained BERT model with an NMT model and study the impact of the monolingual data used for BERT training on the final translation quality. We use the WMT-14 English-German, IWSLT15 English-German and IWSLT14 English-Russian datasets for these experiments. In addition to standard task test set evaluation, we perform evaluation on out-of-domain test sets and noise-injected test sets, in order to assess how BERT pretrained representations affect model robustness.

264. Machine Translation of Restaurant Reviews: New Corpus for Domain Adaptation and Robustness [PDF] 返回目录
  EMNLP 2019. the 3rd Workshop on Neural Generation and Translation
  Alexandre Berard, Ioan Calapodescu, Marc Dymetman, Claude Roux, Jean-Luc Meunier, Vassilina Nikoulina
We share a French-English parallel corpus of Foursquare restaurant reviews, and define a new task to encourage research on Neural Machine Translation robustness and domain adaptation, in a real-world scenario where better-quality MT would be greatly beneficial. We discuss the challenges of such user-generated content, and train good baseline models that build upon the latest techniques for MT robustness. We also perform an extensive evaluation (automatic and human) that shows significant improvements over existing online systems. Finally, we propose task-specific metrics based on sentiment analysis or translation accuracy of domain-specific polysemous words.

265. Adaptively Scheduled Multitask Learning: The Case of Low-Resource Neural Machine Translation [PDF] 返回目录
  EMNLP 2019. the 3rd Workshop on Neural Generation and Translation
  Poorya Zaremoodi, Gholamreza Haffari
Neural Machine Translation (NMT), a data-hungry technology, suffers from the lack of bilingual data in low-resource scenarios. Multitask learning (MTL) can alleviate this issue by injecting inductive biases into NMT, using auxiliary syntactic and semantic tasks. However, an effective training schedule is required to balance the importance of tasks to get the best use of the training signal. The role of the training schedule becomes even more crucial in biased-MTL, where the goal is to improve one (or a subset) of tasks the most, e.g. translation quality. Current approaches for biased-MTL are based on brittle hand-engineered heuristics that require trial and error, and should be (re-)designed for each learning scenario. To the best of our knowledge, ours is the first work on adaptively and dynamically changing the training schedule in biased-MTL. We propose a rigorous approach for automatically reweighting the training data of the main and auxiliary tasks throughout the training process based on their contributions to the generalisability of the main NMT task. Our experiments on translating from English to Vietnamese/Turkish/Spanish show improvements of up to +1.2 BLEU points, compared to strong baselines. Additionally, our analyses shed light on the dynamics of needs throughout the training of NMT: from syntax to semantics.

266. On the Importance of Word Boundaries in Character-level Neural Machine Translation [PDF] 返回目录
  EMNLP 2019. the 3rd Workshop on Neural Generation and Translation
  Duygu Ataman, Orhan Firat, Mattia A. Di Gangi, Marcello Federico, Alexandra Birch
Neural Machine Translation (NMT) models generally perform translation using a fixed-size lexical vocabulary, which is an important bottleneck on their generalization capability and overall translation quality. The standard approach to overcome this limitation is to segment words into subword units, typically using some external tools with arbitrary heuristics, resulting in vocabulary units not optimized for the translation task. Recent studies have shown that the same approach can be extended to perform NMT directly at the level of characters, which can deliver translation accuracy on par with subword-based models; on the other hand, this requires relatively deeper networks. In this paper, we propose a more computationally efficient solution for character-level NMT which implements a hierarchical decoding architecture where translations are subsequently generated at the level of words and characters. We evaluate different methods for open-vocabulary NMT in the machine translation task from English into five languages with distinct morphological typology, and show that the hierarchical decoding model can reach higher translation accuracy than the subword-level NMT model using significantly fewer parameters, while demonstrating better capacity in learning longer-distance contextual and grammatical dependencies than the standard character-level NMT model.

267. A Margin-based Loss with Synthetic Negative Samples for Continuous-output Machine Translation [PDF] 返回目录
  EMNLP 2019. the 3rd Workshop on Neural Generation and Translation
  Gayatri Bhat, Sachin Kumar, Yulia Tsvetkov
Neural models that eliminate the softmax bottleneck by generating word embeddings (rather than multinomial distributions over a vocabulary) attain faster training with fewer learnable parameters. These models are currently trained by maximizing densities of pretrained target embeddings under von Mises-Fisher distributions parameterized by corresponding model-predicted embeddings. This work explores the utility of margin-based loss functions in optimizing such models. We present syn-margin loss, a novel margin-based loss that uses a synthetic negative sample constructed from only the predicted and target embeddings at every step. The loss is efficient to compute, and we use a geometric analysis to argue that it is more consistent and interpretable than other margin-based losses. Empirically, we find that syn-margin provides small but significant improvements over both vMF and standard margin-based losses in continuous-output neural machine translation.
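A hedged sketch of a margin loss with a synthetic negative built from the predicted and target embeddings is shown below; here the negative is taken as the component of the prediction orthogonal to the target, which may differ from the paper's exact construction.

```python
import torch
import torch.nn.functional as F

def syn_margin_loss(pred: torch.Tensor, target: torch.Tensor, margin: float = 0.5):
    """Sketch of a margin loss with a synthetic negative built from the predicted
    and target embeddings; the exact construction in the paper may differ.
    pred, target: (batch, dim) embeddings."""
    p = F.normalize(pred, dim=-1)
    t = F.normalize(target, dim=-1)
    # synthetic negative: remove the target direction from the prediction
    neg = F.normalize(p - (p * t).sum(-1, keepdim=True) * t, dim=-1)
    pos_sim = (p * t).sum(-1)
    neg_sim = (p * neg).sum(-1)
    return F.relu(margin - pos_sim + neg_sim).mean()
```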

268. Mixed Multi-Head Self-Attention for Neural Machine Translation [PDF] 返回目录
  EMNLP 2019. the 3rd Workshop on Neural Generation and Translation
  Hongyi Cui, Shohei Iida, Po-Hsuan Hung, Takehito Utsuro, Masaaki Nagata
Recently, the Transformer has become a state-of-the-art architecture in the field of neural machine translation (NMT). A key to its high performance is the multi-head self-attention, which is supposed to allow the model to independently attend to information from different representation subspaces. However, there is no explicit mechanism to ensure that different attention heads indeed capture different features, and in practice, redundancy occurs across multiple heads. In this paper, we argue that using the same global attention in multiple heads limits multi-head self-attention’s capacity for learning distinct features. In order to improve the expressiveness of multi-head self-attention, we propose a novel Mixed Multi-Head Self-Attention (MMA) which models not only global and local attention but also forward and backward attention in different attention heads. This enables the model to learn distinct representations explicitly among multiple heads. In our experiments on both the WAT17 English-Japanese and IWSLT14 German-English translation tasks, we show that, without increasing the number of parameters, our models yield consistent and significant improvements (0.9 BLEU scores on average) over the strong Transformer baseline.
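The head-specific attention patterns can be illustrated with simple boolean masks; how MMA actually assigns global, local, forward, and backward patterns to heads is an assumption of this sketch.

```python
import torch

def head_masks(seq_len: int, window: int = 3):
    """Illustrative attention masks for global / local / forward / backward heads
    (True = position may be attended to). How MMA assigns these patterns to heads
    is an assumption of this sketch."""
    idx = torch.arange(seq_len)
    dist = (idx[:, None] - idx[None, :]).abs()
    global_mask = torch.ones(seq_len, seq_len, dtype=torch.bool)
    local_mask = dist <= window                   # only nearby positions
    forward_mask = idx[None, :] >= idx[:, None]   # current and later positions
    backward_mask = idx[None, :] <= idx[:, None]  # current and earlier positions
    return global_mask, local_mask, forward_mask, backward_mask

# Each mask is applied to its head's attention logits before the softmax, e.g.
# logits.masked_fill_(~mask, float("-inf")).
```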

269. Interrogating the Explanatory Power of Attention in Neural Machine Translation [PDF] 返回目录
  EMNLP 2019. the 3rd Workshop on Neural Generation and Translation
  Pooya Moradi, Nishant Kambhatla, Anoop Sarkar
Attention models have become a crucial component in neural machine translation (NMT). They are often implicitly or explicitly used to justify the model’s decision in generating a specific token but it has not yet been rigorously established to what extent attention is a reliable source of information in NMT. To evaluate the explanatory power of attention for NMT, we examine the possibility of yielding the same prediction but with counterfactual attention models that modify crucial aspects of the trained attention model. Using these counterfactual attention mechanisms we assess the extent to which they still preserve the generation of function and content words in the translation process. Compared to a state of the art attention model, our counterfactual attention models produce 68% of function words and 21% of content words in our German-English dataset. Our experiments demonstrate that attention models by themselves cannot reliably explain the decisions made by a NMT model.

270. Auto-Sizing the Transformer Network: Improving Speed, Efficiency, and Performance for Low-Resource Machine Translation [PDF] 返回目录
  EMNLP 2019. the 3rd Workshop on Neural Generation and Translation
  Kenton Murray, Jeffery Kinnison, Toan Q. Nguyen, Walter Scheirer, David Chiang
Neural sequence-to-sequence models, particularly the Transformer, are the state of the art in machine translation. Yet these neural networks are very sensitive to architecture and hyperparameter settings. Optimizing these settings by grid or random search is computationally expensive because it requires many training runs. In this paper, we incorporate architecture search into a single training run through auto-sizing, which uses regularization to delete neurons in a network over the course of training. On very low-resource language pairs, we show that auto-sizing can improve BLEU scores by up to 3.9 points while removing one-third of the parameters from the model.
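The core of auto-sizing is a regularizer that drives whole neurons to zero so they can be deleted; below is a minimal sketch of a row-wise group-lasso penalty, omitting the proximal updates and the L-infinity variant the paper also discusses.

```python
import torch

def group_l2_penalty(weight: torch.Tensor) -> torch.Tensor:
    """Minimal sketch of an auto-sizing-style regularizer: an L2,1 (group lasso)
    penalty over the rows of a weight matrix, so whole neurons are driven to zero
    and can be deleted after training."""
    return weight.norm(p=2, dim=1).sum()

# Added to the training objective, e.g.:
#   loss = nll_loss + lam * sum(group_l2_penalty(W) for W in feedforward_weights)
```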

271. Learning to Generate Word- and Phrase-Embeddings for Efficient Phrase-Based Neural Machine Translation [PDF] 返回目录
  EMNLP 2019. the 3rd Workshop on Neural Generation and Translation
  Chan Young Park, Yulia Tsvetkov
Neural machine translation (NMT) often fails in one-to-many translation, e.g., in the translation of multi-word expressions, compounds, and collocations. To improve the translation of phrases, phrase-based NMT systems have been proposed; these typically combine word-based NMT with external phrase dictionaries or with phrase tables from phrase-based statistical MT systems. These solutions introduce a significant overhead of additional resources and computational costs. In this paper, we introduce a phrase-based NMT model built upon continuous-output NMT, in which the decoder generates embeddings of words or phrases. The model uses a fertility module, which guides the decoder to generate embeddings of sequences of varying lengths. We show that our model learns to translate phrases better, performing on par with state of the art phrase-based NMT. Since our model does not resort to softmax computation over a huge vocabulary of phrases, its training time is about 112x faster than the baseline.

272. Monash University’s Submissions to the WNGT 2019 Document Translation Task [PDF] 返回目录
  EMNLP 2019. the 3rd Workshop on Neural Generation and Translation
  Sameen Maruf, Gholamreza Haffari
We describe the work of Monash University for the shared task of Rotowire document translation organised by the 3rd Workshop on Neural Generation and Translation (WNGT 2019). We submitted systems for both directions of the English-German language pair. Our main focus is on employing an established document-level neural machine translation model for this task. We achieve a BLEU score of 39.83 (41.46 BLEU per WNGT evaluation) for En-De and 45.06 (47.39 BLEU per WNGT evaluation) for De-En translation directions on the Rotowire test set. All experiments conducted in the process are also described.

273. University of Edinburgh’s submission to the Document-level Generation and Translation Shared Task [PDF] 返回目录
  EMNLP 2019. the 3rd Workshop on Neural Generation and Translation
  Ratish Puduppully, Jonathan Mallinson, Mirella Lapata
The University of Edinburgh participated in all six tracks: NLG, MT, and MT+NLG with both English and German as targeted languages. For the NLG track, we submitted a multilingual system based on the Content Selection and Planning model of Puduppully et al (2019). For the MT track, we submitted Transformer-based Neural Machine Translation models, where out-of-domain parallel data was augmented with in-domain data extracted from monolingual corpora. Our MT+NLG systems disregard the structured input data and instead rely exclusively on the source summaries.

274. Naver Labs Europe’s Systems for the Document-Level Generation and Translation Task at WNGT 2019 [PDF] 返回目录
  EMNLP 2019. the 3rd Workshop on Neural Generation and Translation
  Fahimeh Saleh, Alexandre Berard, Ioan Calapodescu, Laurent Besacier
Recently, neural models have led to significant improvements in both machine translation (MT) and natural language generation tasks (NLG). However, generation of long descriptive summaries conditioned on structured data remains an open challenge. Likewise, MT that goes beyond sentence-level context is still an open issue (e.g., document-level MT or MT with metadata). To address these challenges, we propose to leverage data from both tasks and do transfer learning between MT, NLG, and MT with source-side metadata (MT+NLG). First, we train document-based MT systems with large amounts of parallel data. Then, we adapt these models to pure NLG and MT+NLG tasks by fine-tuning with smaller amounts of domain-specific data. This end-to-end NLG approach, without data selection and planning, outperforms the previous state of the art on the Rotowire NLG task. We participated in the “Document Generation and Translation” task at WNGT 2019, and ranked first in all tracks.

275. From Research to Production and Back: Ludicrously Fast Neural Machine Translation [PDF] 返回目录
  EMNLP 2019. the 3rd Workshop on Neural Generation and Translation
  Young Jin Kim, Marcin Junczys-Dowmunt, Hany Hassan, Alham Fikri Aji, Kenneth Heafield, Roman Grundkiewicz, Nikolay Bogoychev
This paper describes the submissions of the “Marian” team to the WNGT 2019 efficiency shared task. Taking our dominating submissions to the previous edition of the shared task as a starting point, we develop improved teacher-student training via multi-agent dual-learning and noisy backward-forward translation for Transformer-based student models. For efficient CPU-based decoding, we propose pre-packed 8-bit matrix products, improved batched decoding, cache-friendly student architectures with parameter sharing and light-weight RNN-based decoder architectures. GPU-based decoding benefits from the same architecture changes, as well as from pervasive 16-bit inference and concurrent streams. These modifications together with profiler-based C++ code optimization allow us to push the Pareto frontier established during the 2018 edition towards 24x (CPU) and 14x (GPU) faster models at comparable or higher BLEU values. Our fastest CPU model is more than 4x faster than last year’s fastest submission at more than 3 points higher BLEU. Our fastest GPU model at 1.5 seconds translation time is slightly faster than last year’s fastest RNN-based submissions, but outperforms them by more than 4 BLEU and 10 BLEU points respectively.

276. Selecting, Planning, and Rewriting: A Modular Approach for Data-to-Document Generation and Translation [PDF] 返回目录
  EMNLP 2019. the 3rd Workshop on Neural Generation and Translation
  Lesly Miculicich, Marc Marone, Hany Hassan
In this paper, we report our system submissions to all 6 tracks of the WNGT 2019 shared task on Document-Level Generation and Translation. The objective is to generate a textual document from either structured data: generation task, or a document in a different language: translation task. For the translation task, we focused on adapting a large scale system trained on WMT data by fine tuning it on the RotoWire data. For the generation task, we participated with two systems based on a selection and planning model followed by (a) a simple language model generation, and (b) a GPT-2 pre-trained language model approach. The selection and planning module chooses a subset of table records in order, and the language models produce text given such a subset.

277. Back-Translation as Strategy to Tackle the Lack of Corpus in Natural Language Generation from Semantic Representations [PDF] 返回目录
  EMNLP 2019. the 2nd Workshop on Multilingual Surface Realisation (MSR 2019)
  Marco Antonio Sobrevilla Cabezudo, Simon Mille, Thiago Pardo
This paper presents an exploratory study that aims to evaluate the usefulness of back-translation in Natural Language Generation (NLG) from semantic representations for non-English languages. Specifically, Abstract Meaning Representation and Brazilian Portuguese (BP) are chosen as semantic representation and language, respectively. Two methods (focused on Statistical and Neural Machine Translation) are evaluated on two datasets (one automatically generated and another one human-generated) to compare the performance in a real context. Also, several cuts according to quality measures are performed to evaluate the importance (or not) of the data quality in NLG. Results show that there are still many improvements to be made but this is a promising approach.

278. Understanding the Effect of Textual Adversaries in Multimodal Machine Translation [PDF] 返回目录
  EMNLP 2019. the Beyond Vision and LANguage: inTEgrating Real-world kNowledge (LANTERN)
  Koel Dutta Chowdhury, Desmond Elliott
It is assumed that multimodal machine translation systems are better than text-only systems at translating phrases that have a direct correspondence in the image. This assumption has been challenged in experiments demonstrating that state-of-the-art multimodal systems perform equally well in the presence of randomly selected images, but, more recently, it has been shown that masking entities from the source language sentence during training can help to overcome this problem. In this paper, we conduct experiments with both visual and textual adversaries in order to understand the role of incorrect textual inputs to such systems. Our results show that when the source language sentence contains mistakes, multimodal translation systems do not leverage the additional visual signal to produce the correct translation. We also find that the degradation of translation performance caused by textual adversaries is significantly higher than by visual adversaries.

279. Proceedings of the Fourth Workshop on Discourse in Machine Translation (DiscoMT 2019) [PDF] 返回目录
  EMNLP 2019. the Fourth Workshop on Discourse in Machine Translation (DiscoMT 2019)
  Andrei Popescu-Belis, Sharid Loáiciga, Christian Hardmeier, Deyi Xiong


280. Context-Aware Neural Machine Translation Decoding [PDF] 返回目录
  EMNLP 2019. the Fourth Workshop on Discourse in Machine Translation (DiscoMT 2019)
  Eva Martínez Garcia, Carles Creus, Cristina España-Bonet
This work presents a decoding architecture that fuses the information from a neural translation model and the context semantics enclosed in a semantic space language model based on word embeddings. The method extends the beam search decoding process and therefore can be applied to any neural machine translation framework. With this, we sidestep two drawbacks of current document-level systems: (i) we do not modify the training process so there is no increment in training time, and (ii) we do not require document-level annotated data. We analyze the impact of the fusion system approach and its parameters on the final translation quality for English–Spanish. We obtain consistent and statistically significant improvements in terms of BLEU and METEOR and we observe how the fused systems are able to handle synonyms to propose more adequate translations as well as help the system to disambiguate among several translation candidates for a word.

281. When and Why is Document-level Context Useful in Neural Machine Translation? [PDF] 返回目录
  EMNLP 2019. the Fourth Workshop on Discourse in Machine Translation (DiscoMT 2019)
  Yunsu Kim, Duc Thanh Tran, Hermann Ney
Document-level context has received lots of attention for compensating neural machine translation (NMT) of isolated sentences. However, recent advances in document-level NMT focus on sophisticated integration of the context, explaining its improvement with only a few selected examples or targeted test sets. We extensively quantify the causes of improvements by a document-level model in general test sets, clarifying the limit of the usefulness of document-level context in NMT. We show that most of the improvements are not interpretable as utilizing the context. We also show that a minimal encoding is sufficient for the context modeling and very long context is not helpful for NMT.

282. Data augmentation using back-translation for context-aware neural machine translation [PDF] 返回目录
  EMNLP 2019. the Fourth Workshop on Discourse in Machine Translation (DiscoMT 2019)
  Amane Sugiyama, Naoki Yoshinaga
A single sentence does not always convey information that is enough to translate it into other languages. Some target languages need to add or specialize words that are omitted or ambiguous in the source languages (e.g., zero pronouns in translating Japanese to English or epicene pronouns in translating English to French). To translate such ambiguous sentences, we need contexts beyond a single sentence, and have so far explored context-aware neural machine translation (NMT). However, a large amount of parallel corpora is not easily available to train accurate context-aware NMT models. In this study, we first obtain large-scale pseudo parallel corpora by back-translating monolingual data, and then investigate its impact on the translation accuracy of context-aware NMT models. We evaluated context-aware NMT models trained with small parallel corpora and the large-scale pseudo parallel corpora on English-Japanese and English-French datasets to demonstrate the large impact of the data augmentation for context-aware NMT models.

283. Context-aware Neural Machine Translation with Coreference Information [PDF] 返回目录
  EMNLP 2019. the Fourth Workshop on Discourse in Machine Translation (DiscoMT 2019)
  Takumi Ohtani, Hidetaka Kamigaito, Masaaki Nagata, Manabu Okumura
We present neural machine translation models for translating a sentence in a text by using a graph-based encoder which can consider coreference relations provided within the text explicitly. The graph-based encoder can dynamically encode the source text without attending to all tokens in the text. In experiments, our proposed models provide statistically significant improvement to the previous approach of at most 0.9 points in the BLEU score on the OpenSubtitle2018 English-to-Japanese data set. Experimental results also show that the graph-based encoder can handle a longer text well, compared with the previous approach.

284. Learning Multimodal Graph-to-Graph Translation for Molecule Optimization [PDF] 返回目录
  ICLR 2019.
  Wengong Jin, Kevin Yang, Regina Barzilay, Tommi S. Jaakkola
We view molecule optimization as a graph-to-graph translation problem. The goal is to learn to map from one molecular graph to another with better properties based on an available corpus of paired molecules. Since molecules can be optimized in different ways, there are multiple viable translations for each input graph. A key challenge is therefore to model diverse translation outputs. Our primary contributions include a junction tree encoder-decoder for learning diverse graph translations along with a novel adversarial training method for aligning distributions of molecules. Diverse output distributions in our model are explicitly realized by low-dimensional latent vectors that modulate the translation process. We evaluate our model on multiple molecule optimization tasks and show that our model outperforms previous state-of-the-art baselines by a significant margin.

285. Identifying and Controlling Important Neurons in Neural Machine Translation [PDF] 返回目录
  ICLR 2019.
  Anthony Bau, Yonatan Belinkov, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, James R. Glass
Neural machine translation (NMT) models learn representations containing substantial linguistic information. However, it is not clear if such information is fully distributed or if some of it can be attributed to individual neurons. We develop unsupervised methods for discovering important neurons in NMT models. Our methods rely on the intuition that different models learn similar properties, and do not require any costly external supervision. We show experimentally that translation quality depends on the discovered neurons, and find that many of them capture common linguistic phenomena. Finally, we show how to control NMT translations in predictable ways, by modifying activations of individual neurons.
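Controlling a translation by overwriting a single neuron's activation can be sketched with a forward hook; the layer index, neuron index, and model attribute names below are placeholders, not the neurons identified in the paper.

```python
def make_neuron_override(neuron_idx: int, value: float):
    """Return a PyTorch forward hook that overwrites one activation dimension.
    The layer index, neuron index, and model attributes in the usage example are
    placeholders, not the neurons identified in the paper."""
    def hook(module, inputs, output):
        output = output.clone()
        output[..., neuron_idx] = value  # e.g. a neuron correlated with tense or gender
        return output
    return hook

# Hypothetical usage (attribute names are assumptions):
# handle = model.encoder.layers[2].register_forward_hook(make_neuron_override(137, 1.5))
# translation = translate(model, "They walked to the station.")
# handle.remove()
```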

286. A Universal Music Translation Network [PDF] 返回目录
  ICLR 2019.
  Noam Mor, Lior Wolf, Adam Polyak, Yaniv Taigman
We present a method for translating music across musical instruments and styles. This method is based on unsupervised training of a multi-domain wavenet autoencoder, with a shared encoder and a domain-independent latent space that is trained end-to-end on waveforms. Employing a diverse training dataset and large net capacity, the single encoder allows us to translate also from musical domains that were not seen during training. We evaluate our method on a dataset collected from professional musicians, and achieve convincing translations. We also study the properties of the obtained translation and demonstrate translating even from a whistle, potentially enabling the creation of instrumental music by untrained humans.

287. Harmonic Unpaired Image-to-image Translation [PDF] 返回目录
  ICLR 2019.
  Rui Zhang, Tomas Pfister, Jia Li
The recent direction of unpaired image-to-image translation is on one hand very exciting as it alleviates the big burden in obtaining label-intensive pixel-to-pixel supervision, but it is on the other hand not fully satisfactory due to the presence of artifacts and degenerated transformations. In this paper, we take a manifold view of the problem by introducing a smoothness term over the sample graph to attain harmonic functions to enforce consistent mappings during the translation. We develop HarmonicGAN to learn bi-directional translations between the source and the target domains. With the help of similarity-consistency, the inherent self-consistency property of samples can be maintained. Distance metrics defined on two types of features including histogram and CNN are exploited. Under an identical problem setting as CycleGAN, without additional manual inputs and only at a small training-time cost, HarmonicGAN demonstrates a significant qualitative and quantitative improvement over the state of the art, as well as improved interpretability. We show experimental results in a number of applications including medical imaging, object transfiguration, and semantic labeling. We outperform the competing methods in all tasks, and for a medical imaging task in particular our method turns CycleGAN from a failure to a success, halving the mean-squared error, and generating images that radiologists prefer over competing methods in 95% of cases.

288. Multilingual Neural Machine Translation with Knowledge Distillation [PDF] 返回目录
  ICLR 2019.
  Xu Tan, Yi Ren, Di He, Tao Qin, Zhou Zhao, Tie-Yan Liu
Multilingual machine translation, which translates multiple languages with a single model, has attracted much attention due to its efficiency of offline training and online serving. However, traditional multilingual translation usually yields inferior accuracy compared with the counterpart using individual models for each language pair, due to language diversity and model capacity limitations. In this paper, we propose a distillation-based approach to boost the accuracy of multilingual machine translation. Specifically, individual models are first trained and regarded as teachers, and then the multilingual model is trained to fit the training data and match the outputs of individual models simultaneously through knowledge distillation. Experiments on IWSLT, WMT and Ted talk translation datasets demonstrate the effectiveness of our method. Particularly, we show that one model is enough to handle multiple languages (up to 44 languages in our experiment), with comparable or even better accuracy than individual models.
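The distillation objective described here, fitting the gold data while matching the per-token distribution of a language-pair-specific teacher, can be sketched as a standard combined loss; alpha and the temperature are illustrative hyperparameters, not the paper's values.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, gold_ids, alpha=0.5, T=1.0):
    """Sketch of the combined objective: cross-entropy on the gold target plus a
    KL term matching the teacher's per-token distribution.
    student_logits, teacher_logits: (batch, seq_len, vocab); gold_ids: (batch, seq_len)."""
    vocab = student_logits.size(-1)
    ce = F.cross_entropy(student_logits.view(-1, vocab), gold_ids.view(-1))
    kl = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                  F.softmax(teacher_logits / T, dim=-1),
                  reduction="batchmean") * (T * T)
    return (1 - alpha) * ce + alpha * kl
```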

289. Exemplar Guided Unsupervised Image-to-Image Translation with Semantic Consistency [PDF] 返回目录
  ICLR 2019.
  Liqian Ma, Xu Jia, Stamatios Georgoulis, Tinne Tuytelaars, Luc Van Gool
Image-to-image translation has recently received significant attention due to advances in deep learning. Most works focus on learning either a one-to-one mapping in an unsupervised way or a many-to-many mapping in a supervised way. However, a more practical setting is many-to-many mapping in an unsupervised way, which is harder due to the lack of supervision and the complex inner- and cross-domain variations. To alleviate these issues, we propose the Exemplar Guided & Semantically Consistent Image-to-image Translation (EGSC-IT) network which conditions the translation process on an exemplar image in the target domain. We assume that an image comprises a content component which is shared across domains, and a style component specific to each domain. Under the guidance of an exemplar from the target domain we apply Adaptive Instance Normalization to the shared content component, which allows us to transfer the style information of the target domain to the source domain. To avoid semantic inconsistencies during translation that naturally appear due to the large inner- and cross-domain variations, we introduce the concept of feature masks that provide coarse semantic guidance without requiring the use of any semantic labels. Experimental results on various datasets show that EGSC-IT does not only translate the source image to diverse instances in the target domain, but also preserves the semantic consistency during the process.

290. Multilingual Neural Machine Translation With Soft Decoupled Encoding [PDF] 返回目录
  ICLR 2019.
  Xinyi Wang, Hieu Pham, Philip Arthur, Graham Neubig
Multilingual training of neural machine translation (NMT) systems has led to impressive accuracy improvements on low-resource languages. However, there are still significant challenges in efficiently learning word representations in the face of paucity of data. In this paper, we propose Soft Decoupled Encoding (SDE), a multilingual lexicon encoding framework specifically designed to share lexical-level information intelligently without requiring heuristic preprocessing such as pre-segmenting the data. SDE represents a word by its spelling through a character encoding, and its semantic meaning through a latent embedding space shared by all languages. Experiments on a standard dataset of four low-resource languages show consistent improvements over strong multilingual NMT baselines, with gains of up to 2 BLEU on one of the tested languages, achieving the new state-of-the-art on all four language pairs.

291. InstaGAN: Instance-aware Image-to-Image Translation [PDF] 返回目录
  ICLR 2019.
  Sangwoo Mo, Minsu Cho, Jinwoo Shin
Unsupervised image-to-image translation has gained considerable attention due to the recent impressive progress based on generative adversarial networks (GANs). However, previous methods often fail in challenging cases, in particular, when an image has multiple target instances and a translation task involves significant changes in shape, e.g., translating pants to skirts in fashion images. To tackle the issues, we propose a novel method, coined instance-aware GAN (InstaGAN), that incorporates the instance information (e.g., object segmentation masks) and improves multi-instance transfiguration. The proposed method translates both an image and the corresponding set of instance attributes while maintaining the permutation invariance property of the instances. To this end, we introduce a context preserving loss that encourages the network to learn the identity function outside of target instances. We also propose a sequential mini-batch inference/training technique that handles multiple instances with a limited GPU memory and enhances the network to generalize better for multiple instances. Our comparative evaluation demonstrates the effectiveness of the proposed method on different image datasets, in particular, in the aforementioned challenging cases. Code and results are available in https://github.com/sangwoomo/instagan
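
The context preserving loss mentioned above can be illustrated as an identity penalty on pixels outside all instance masks. A rough sketch follows, assuming binary masks broadcastable over the image tensors; the exact formulation and weighting in the paper may differ.

```python
import torch

def context_preserving_loss(x, y, src_masks, tgt_masks):
    """Penalize changes outside the union of source/target instance masks,
    so the generator behaves like the identity on background pixels.
    x, y: images of shape (N, C, H, W); masks: (N, 1, H, W), binary."""
    background = (1 - src_masks) * (1 - tgt_masks)   # 1 where no instance is present
    return torch.mean(background * torch.abs(x - y))
```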

292. Transfer Learning for Related Reinforcement Learning Tasks via Image-to-Image Translation [PDF] 返回目录
  ICML 2019.
  Shani Gamrian, Yoav Goldberg
Despite the remarkable success of Deep RL in learning control policies from raw pixels, the resulting models do not generalize. We demonstrate that a trained agent fails completely when facing small visual changes, and that fine-tuning—the common transfer learning paradigm—fails to adapt to these changes, to the extent that it is faster to re-train the model from scratch. We show that by separating the visual transfer task from the control policy we achieve substantially better sample efficiency and transfer behavior, allowing an agent trained on the source task to transfer well to the target tasks. The visual mapping from the target to the source domain is performed using unaligned GANs, resulting in a control policy that can be further improved using imitation learning from imperfect demonstrations. We demonstrate the approach on synthetic visual variants of the Breakout game, as well as on transfer between subsequent levels of Road Fighter, a Nintendo car-driving game. A visualization of our approach can be seen at https://youtu.be/4mnkzYyXMn4 and https://youtu.be/KCGTrQi6Ogo.

293. Mixture Models for Diverse Machine Translation: Tricks of the Trade [PDF] 返回目录
  ICML 2019.
  Tianxiao Shen, Myle Ott, Michael Auli, Marc'Aurelio Ranzato
Mixture models trained via EM are among the simplest, most widely used and well understood latent variable models in the machine learning literature. Surprisingly, these models have been hardly explored in text generation applications such as machine translation. In principle, they provide a latent variable to control generation and produce a diverse set of hypotheses. In practice, however, mixture models are prone to degeneracies—often only one component gets trained or the latent variable is simply ignored. We find that disabling dropout noise in responsibility computation is critical to successful training. In addition, the design choices of parameterization, prior distribution, hard versus soft EM and online versus offline assignment can dramatically affect model performance. We develop an evaluation protocol to assess both quality and diversity of generations against multiple references, and provide an extensive empirical study of several mixture model variants. Our analysis shows that certain types of mixture models are more robust and offer the best trade-off between translation quality and diversity compared to variational models and diverse decoding approaches. Code to reproduce the results in this paper is available at https://github.com/pytorch/fairseq.
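
The training trick highlighted above, disabling dropout when computing responsibilities, can be sketched as a hard-EM E-step run in eval mode. In the sketch below, `model.log_likelihood` is a hypothetical per-component scorer, not an actual fairseq API.

```python
import torch

def assign_components(model, src, tgt, num_components):
    """Hard-EM E-step: assign each sentence pair to the mixture component with
    the highest likelihood, computed with dropout disabled."""
    model.eval()                      # turn off dropout noise for responsibilities
    with torch.no_grad():
        # log_likelihood(src, tgt, z) is a hypothetical per-component scorer.
        scores = torch.stack([model.log_likelihood(src, tgt, z)
                              for z in range(num_components)], dim=-1)
    model.train()                     # dropout back on for the M-step update
    return scores.argmax(dim=-1)      # responsibilities as hard assignments
```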

294. Dense Temporal Convolution Network for Sign Language Translation [PDF] 返回目录
  IJCAI 2019.
  Dan Guo, Shuo Wang, Qi Tian, Meng Wang
The sign language translation (SLT) which aims at translating a sign language video into natural language is a weakly supervised task, given that there is no exact mapping relationship between visual actions and textual words in a sentence label. To align the sign language actions and translate them into the respective words automatically, this paper proposes a dense temporal convolution network, termed DenseTCN, which captures the actions in hierarchical views. Within this network, a temporal convolution (TC) is designed to learn the short-term correlation among adjacent features and further extended to a dense hierarchical structure. In the kth TC layer, we integrate the outputs of all preceding layers together: (1) The TC in a deeper layer essentially has larger receptive fields, which captures long-term temporal context by the hierarchical content transition. (2) The integration addresses the SLT problem by different views, including embedded short-term and extended long-term sequential learning. Finally, we adopt the CTC loss and a fusion strategy to learn the feature-wise classification and generate the translated sentence. The experimental results on two popular sign language benchmarks, i.e., PHOENIX and USTCConSents, demonstrate the effectiveness of our proposed method in terms of various measurements.

295. Connectionist Temporal Modeling of Video and Language: a Joint Model for Translation and Sign Labeling [PDF] 返回目录
  IJCAI 2019.
  Dan Guo, Shengeng Tang, Meng Wang
Online sign interpretation suffers from challenges presented by hybrid semantics learning among sequential variations of visual representations, sign linguistics, and textual grammars. This paper proposes a Connectionist Temporal Modeling (CTM) network for sentence translation and sign labeling. To acquire short-term temporal correlations, a Temporal Convolution Pyramid (TCP) module is performed on 2D CNN features to realize (2D+1D) 'pseudo 3D' CNN features. CTM aligns the 'pseudo 3D' features with the original 3D CNN clip features and fuses them. Next, we implement a connectionist decoding scheme for long-term sequential learning. Here, we embed dynamic programming into the decoding scheme, which learns temporal mapping among features, sign labels, and the generated sentence directly. The sign labeling produced by dynamic programming is treated as pseudo labels. Finally, we utilize the pseudo supervision cues in an end-to-end framework. A joint objective function is designed to measure feature correlation, entropy regularization on sign labeling, and probability maximization on sentence decoding. The experimental results using the RWTH-PHOENIX-Weather and USTC-CSL datasets demonstrate the effectiveness of the proposed approach.

296. Deliberation Learning for Image-to-Image Translation [PDF] 返回目录
  IJCAI 2019.
  Tianyu He, Yingce Xia, Jianxin Lin, Xu Tan, Di He, Tao Qin, Zhibo Chen
Image-to-image translation, which transfers an image from a source domain to a target one, has attracted much attention in both academia and industry. The major approach is to adopt an encoder-decoder based framework, where the encoder extracts features from the input image and then the decoder decodes the features and generates an image in the target domain as the output. In this paper, we go beyond this learning framework by considering an additional polishing step on the output image. Polishing an image is very common in everyday life, such as editing and beautifying a photo in Photoshop after taking or generating it with a digital camera. Such a deliberation process is shown to be very helpful and important in practice and thus we believe it will also be helpful for image translation. Inspired by the success of deliberation networks in natural language processing, we extend the deliberation process to the field of image translation. We verify our proposed method on four two-domain translation tasks and one multi-domain translation task. Both the qualitative and quantitative results demonstrate the effectiveness of our method.

297. Image-to-Image Translation with Multi-Path Consistency Regularization [PDF] 返回目录
  IJCAI 2019.
  Jianxin Lin, Yingce Xia, Yijun Wang, Tao Qin, Zhibo Chen
Image translation across different domains has attracted much attention in both machine learning and computer vision communities. Taking the translation from a source domain to a target domain as an example, existing algorithms mainly rely on two kinds of loss for training: One is the discrimination loss, which is used to differentiate images generated by the models and natural images; the other is the reconstruction loss, which measures the difference between an original image and the reconstructed version. In this work, we introduce a new kind of loss, multi-path consistency loss, which evaluates the differences between direct translation from source domain to target domain and indirect translation from source domain to an auxiliary domain to target domain, to regularize training. For multi-domain translation (at least, three) which focuses on building translation models between any two domains, at each training iteration, we randomly select three domains, set them respectively as the source, auxiliary and target domains, build the multi-path consistency loss and optimize the network. For two-domain translation, we need to introduce an additional auxiliary domain and construct the multi-path consistency loss. We conduct various experiments to demonstrate the effectiveness of our proposed methods, including face-to-face translation, paint-to-photo translation, and de-raining/de-noising translation.
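
The multi-path consistency loss compares a direct translation with an indirect one through an auxiliary domain. A minimal sketch follows, with `g_st`, `g_sa`, and `g_at` as hypothetical generators for the source-to-target, source-to-auxiliary, and auxiliary-to-target directions; the paper may use a different distance than L1.

```python
import torch

def multi_path_consistency_loss(x_s, g_st, g_sa, g_at):
    """Regularize training by comparing the direct path S->T with the
    indirect path S->A->T for the same source image x_s."""
    direct = g_st(x_s)                 # source -> target
    indirect = g_at(g_sa(x_s))         # source -> auxiliary -> target
    return torch.mean(torch.abs(direct - indirect))   # L1 consistency term
```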

298. From Words to Sentences: A Progressive Learning Approach for Zero-resource Machine Translation with Visual Pivots [PDF] 返回目录
  IJCAI 2019.
  Shizhe Chen, Qin Jin, Jianlong Fu
The neural machine translation model has suffered from the lack of large-scale parallel corpora. In contrast, we humans can learn multi-lingual translations even without parallel texts by referring our languages to the external world. To mimic such human learning behavior, we employ images as pivots to enable zero-resource translation learning. However, a picture tells a thousand words, which makes multi-lingual sentences pivoted by the same image noisy as mutual translations and thus hinders the translation model learning. In this work, we propose a progressive learning approach for image-pivoted zero-resource machine translation. Since words are less diverse when grounded in the image, we first learn word-level translation with image pivots, and then progress to learn the sentence-level translation by utilizing the learned word translation to suppress noises in image-pivoted multi-lingual sentences. Experimental results on two widely used image-pivot translation datasets, IAPR-TC12 and Multi30k, show that the proposed approach significantly outperforms other state-of-the-art methods.

299. Polygon-Net: A General Framework for Jointly Boosting Multiple Unsupervised Neural Machine Translation Models [PDF] 返回目录
  IJCAI 2019.
  Chang Xu, Tao Qin, Gang Wang, Tie-Yan Liu
Neural machine translation (NMT) has achieved great success. However, collecting large-scale parallel data for training is costly and laborious. Recently, unsupervised neural machine translation has attracted more and more attention, due to its demand for monolingual corpora only, which are common and easy to obtain, and its great potential for low-resource or even zero-resource machine translation. In this work, we propose a general framework called Polygon-Net, which leverages multiple auxiliary languages for jointly boosting unsupervised neural machine translation models. Specifically, we design a novel loss function for multi-language unsupervised neural machine translation. In addition, unlike prior work that updates only one or two models individually, Polygon-Net for the first time enables multiple unsupervised models in the framework to update in turn and enhance each other. In this way, multiple unsupervised translation models are associated with each other for training to achieve better performance. Experiments on the benchmark datasets including UN Corpus and WMT show that our approach significantly improves over the two-language based methods, and achieves better performance with more languages introduced to the framework.

300. Pre-training on high-resource speech recognition improves low-resource speech-to-text translation [PDF] 返回目录
  NAACL 2019.
  Sameer Bansal, Herman Kamper, Karen Livescu, Adam Lopez, Sharon Goldwater
We present a simple approach to improve direct speech-to-text translation (ST) when the source language is low-resource: we pre-train the model on a high-resource automatic speech recognition (ASR) task, and then fine-tune its parameters for ST. We demonstrate that our approach is effective by pre-training on 300 hours of English ASR data to improve Spanish-English ST from 10.8 to 20.2 BLEU when only 20 hours of Spanish-English ST training data are available. Through an ablation study, we find that the pre-trained encoder (acoustic model) accounts for most of the improvement, despite the fact that the shared language in these tasks is the target language text, not the source language audio. Applying this insight, we show that pre-training on ASR helps ST even when the ASR language differs from both source and target ST languages: pre-training on French ASR also improves Spanish-English ST. Finally, we show that the approach improves performance on a true low-resource task: pre-training on a combination of English ASR and French ASR improves Mboshi-French ST, where only 4 hours of data are available, from 3.5 to 7.1 BLEU.

301. ReWE: Regressing Word Embeddings for Regularization of Neural Machine Translation Systems [PDF] 返回目录
  NAACL 2019.
  Inigo Jauregi Unanue, Ehsan Zare Borzeshi, Nazanin Esmaili, Massimo Piccardi
Regularization of neural machine translation is still a significant problem, especially in low-resource settings. To mollify this problem, we propose regressing word embeddings (ReWE) as a new regularization technique in a system that is jointly trained to predict the next word in the translation (categorical value) and its word embedding (continuous value). Such a joint training allows the proposed system to learn the distributional properties represented by the word embeddings, empirically improving the generalization to unseen sentences. Experiments over three translation datasets have shown a consistent improvement over a strong baseline, ranging between 0.91 and 2.4 BLEU points, and also a marked improvement over a state-of-the-art system.
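
ReWE adds a continuous word-embedding regression term next to the usual categorical loss. Below is a hedged sketch of one plausible instantiation (cosine distance on pretrained embeddings); the exact regression loss and weighting used in the paper may differ.

```python
import torch
import torch.nn.functional as F

def rewe_loss(logits, pred_embeddings, target_ids, embedding_table, lam=0.1, pad_id=0):
    """Joint objective: predict the next token (categorical) and regress its
    pretrained embedding (continuous). pred_embeddings is an extra head output
    of shape (batch, seq_len, emb_dim); embedding_table is a (vocab, emb_dim) tensor."""
    vocab = logits.size(-1)
    ce = F.cross_entropy(logits.view(-1, vocab), target_ids.view(-1),
                         ignore_index=pad_id)
    gold_emb = embedding_table[target_ids]            # (batch, seq_len, emb_dim)
    # Cosine-style regression term on the word embeddings (one possible choice;
    # padding positions are not masked here for brevity).
    rewe = 1 - F.cosine_similarity(pred_embeddings, gold_emb, dim=-1).mean()
    return ce + lam * rewe
```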

302. Lost in Machine Translation: A Method to Reduce Meaning Loss [PDF] 返回目录
  NAACL 2019.
  Reuben Cohn-Gordon, Noah Goodman
A desideratum of high-quality translation systems is that they preserve meaning, in the sense that two sentences with different meanings should not translate to one and the same sentence in another language. However, state-of-the-art systems often fail in this regard, particularly in cases where the source and target languages partition the “meaning space” in different ways. For instance, “I cut my finger.” and “I cut my finger off.” describe different states of the world but are translated to French (by both Fairseq and Google Translate) as “Je me suis coupé le doigt.”, which is ambiguous as to whether the finger is detached. More generally, translation systems are typically many-to-one (non-injective) functions from source to target language, which in many cases results in important distinctions in meaning being lost in translation. Building on Bayesian models of informative utterance production, we present a method to define a less ambiguous translation system in terms of an underlying pre-trained neural sequence-to-sequence model. This method increases injectivity, resulting in greater preservation of meaning as measured by improvement in cycle-consistency, without impeding translation quality (measured by BLEU score).

303. Bi-Directional Differentiable Input Reconstruction for Low-Resource Neural Machine Translation [PDF] 返回目录
  NAACL 2019.
  Xing Niu, Weijia Xu, Marine Carpuat
We aim to better exploit the limited amounts of parallel text available in low-resource settings by introducing a differentiable reconstruction loss for neural machine translation (NMT). This loss compares original inputs to reconstructed inputs, obtained by back-translating translation hypotheses into the input language. We leverage differentiable sampling and bi-directional NMT to train models end-to-end, without introducing additional parameters. This approach achieves small but consistent BLEU improvements on four language pairs in both translation directions, and outperforms an alternative differentiable reconstruction strategy based on hidden states.

304. Code-Switching for Enhancing NMT with Pre-Specified Translation [PDF] 返回目录
  NAACL 2019.
  Kai Song, Yue Zhang, Heng Yu, Weihua Luo, Kun Wang, Min Zhang
Leveraging user-provided translation to constrain NMT has practical significance. Existing methods can be classified into two main categories, namely the use of placeholder tags for lexicon words and the use of hard constraints during decoding. Both methods can hurt translation fidelity for various reasons. We investigate a data augmentation method, making code-switched training data by replacing source phrases with their target translations. Our method does not change the NMT model or decoding algorithm, allowing the model to learn lexicon translations by copying source-side target words. Extensive experiments show that our method achieves consistent improvements over existing approaches, improving translation of constrained words without hurting unconstrained words.
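
The data augmentation idea can be illustrated as replacing lexicon-matched source tokens with their target-side translations before training. This is a toy word-level sketch (the paper operates on phrases drawn from user-provided constraints); names and the replacement probability are illustrative.

```python
import random

def code_switch(source_tokens, lexicon, prob=0.3, seed=None):
    """Build code-switched training data: replace source words that have an
    entry in a bilingual lexicon with their target-side translation, so the
    model learns to copy pre-specified translations."""
    rng = random.Random(seed)
    out = []
    for tok in source_tokens:
        if tok in lexicon and rng.random() < prob:
            out.append(lexicon[tok])   # switch to the target-language word
        else:
            out.append(tok)
    return out

# Example: code_switch(["das", "haus", "ist", "rot"], {"haus": "house"})
```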

305. Understanding and Improving Hidden Representations for Neural Machine Translation [PDF] 返回目录
  NAACL 2019.
  Guanlin Li, Lemao Liu, Xintong Li, Conghui Zhu, Tiejun Zhao, Shuming Shi
Multilayer architectures are currently the gold standard for large-scale neural machine translation. Existing works have explored some methods for understanding the hidden representations, however, they have not sought to improve the translation quality rationally according to their understanding. Towards understanding for performance improvement, we first artificially construct a sequence of nested relative tasks and measure the feature generalization ability of the learned hidden representation over these tasks. Based on our understanding, we then propose to regularize the layer-wise representations with all tree-induced tasks. To overcome the computational bottleneck resulting from the large number of regularization terms, we design efficient approximation methods by selecting a few coarse-to-fine tasks for regularization. Extensive experiments on two widely-used datasets demonstrate the proposed methods only lead to small extra overheads in training but no additional overheads in testing, and achieve consistent improvements (up to +1.3 BLEU) compared to the state-of-the-art translation model.

306. Improved Lexically Constrained Decoding for Translation and Monolingual Rewriting [PDF] 返回目录
  NAACL 2019.
  J. Edward Hu, Huda Khayrallah, Ryan Culkin, Patrick Xia, Tongfei Chen, Matt Post, Benjamin Van Durme
Lexically-constrained sequence decoding allows for explicit positive or negative phrase-based constraints to be placed on target output strings in generation tasks such as machine translation or monolingual text rewriting. We describe vectorized dynamic beam allocation, which extends work in lexically-constrained decoding to work with batching, leading to a five-fold improvement in throughput when working with positive constraints. Faster decoding enables faster exploration of constraint strategies: we illustrate this via data augmentation experiments with a monolingual rewriter applied to the tasks of natural language inference, question answering and machine translation, showing improvements in all three.

307. Syntax-Enhanced Neural Machine Translation with Syntax-Aware Word Representations [PDF] 返回目录
  NAACL 2019.
  Meishan Zhang, Zhenghua Li, Guohong Fu, Min Zhang
Syntax has been demonstrated to be highly effective in neural machine translation (NMT). Previous NMT models integrate syntax by representing 1-best tree outputs from a well-trained parsing system, e.g., the representative Tree-RNN and Tree-Linearization methods, which may suffer from error propagation. In this work, we propose a novel method to integrate source-side syntax implicitly for NMT. The basic idea is to use the intermediate hidden representations of a well-trained end-to-end dependency parser, which are referred to as syntax-aware word representations (SAWRs). Then, we simply concatenate such SAWRs with ordinary word embeddings to enhance basic NMT models. The method can be straightforwardly integrated into the widely-used sequence-to-sequence (Seq2Seq) NMT models. We start with a representative RNN-based Seq2Seq baseline system, and test the effectiveness of our proposed method on two benchmark datasets of the Chinese-English and English-Vietnamese translation tasks, respectively. Experimental results show that the proposed approach is able to bring significant BLEU score improvements on the two datasets compared with the baseline, 1.74 points for Chinese-English translation and 0.80 point for English-Vietnamese translation, respectively. In addition, the approach also outperforms the explicit Tree-RNN and Tree-Linearization methods.
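
The SAWR integration amounts to concatenating frozen parser hidden states with the ordinary word embeddings that feed the NMT encoder. A minimal sketch follows; the linear projection used to recover the encoder's input size is an assumption for illustration, not necessarily what the paper does.

```python
import torch

def sawr_augmented_embeddings(word_embeddings, parser_hidden_states, projection):
    """Concatenate ordinary word embeddings with syntax-aware word representations
    (hidden states of a pretrained dependency parser), then project back to the
    encoder dimension. Shapes: (batch, seq_len, dim); projection: torch.nn.Linear
    over the concatenated size."""
    sawr = parser_hidden_states.detach()        # the parser is kept fixed
    combined = torch.cat([word_embeddings, sawr], dim=-1)
    return projection(combined)
```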

308. Competence-based Curriculum Learning for Neural Machine Translation [PDF] 返回目录
  NAACL 2019.
  Emmanouil Antonios Platanios, Otilia Stretcu, Graham Neubig, Barnabas Poczos, Tom Mitchell
Current state-of-the-art NMT systems use large neural networks that are not only slow to train, but also often require many heuristics and optimization tricks, such as specialized learning rate schedules and large batch sizes. This is undesirable as it requires extensive hyperparameter tuning. In this paper, we propose a curriculum learning framework for NMT that reduces training time, reduces the need for specialized heuristics or large batch sizes, and results in overall better performance. Our framework consists of a principled way of deciding which training samples are shown to the model at different times during training, based on the estimated difficulty of a sample and the current competence of the model. Filtering training samples in this manner prevents the model from getting stuck in bad local optima, making it converge faster and reach a better solution than the common approach of uniformly sampling training examples. Furthermore, the proposed method can be easily applied to existing NMT models by simply modifying their input data pipelines. We show that our framework can help improve the training time and the performance of both recurrent neural network models and Transformers, achieving up to a 70% decrease in training time, while at the same time obtaining accuracy improvements of up to 2.2 BLEU.
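
The sampling rule can be sketched as: at step t, only examples whose difficulty (expressed as a CDF value in [0, 1]) falls below the model's current competence are eligible. The square-root schedule below is one of the schedules this line of work considers; the constants are illustrative.

```python
import math
import random

def competence(t, total_steps, c0=0.01):
    """Square-root competence schedule: the fraction of the difficulty-sorted
    training data the model is allowed to see at step t."""
    return min(1.0, math.sqrt(t * (1 - c0 ** 2) / total_steps + c0 ** 2))

def sample_batch(examples_with_cdf, t, total_steps, batch_size):
    """examples_with_cdf: list of (example, difficulty_cdf) pairs, where
    difficulty_cdf is the example's difficulty as a CDF value in [0, 1]."""
    c = competence(t, total_steps)
    eligible = [ex for ex, cdf in examples_with_cdf if cdf <= c]
    return random.sample(eligible, min(batch_size, len(eligible)))
```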

309. Extract and Edit: An Alternative to Back-Translation for Unsupervised Neural Machine Translation [PDF] 返回目录
  NAACL 2019.
  Jiawei Wu, Xin Wang, William Yang Wang
The overreliance on large parallel corpora significantly limits the applicability of machine translation systems to the majority of language pairs. Back-translation has been dominantly used in previous approaches for unsupervised neural machine translation, where pseudo sentence pairs are generated to train the models with a reconstruction loss. However, the pseudo sentences are usually of low quality as translation errors accumulate during training. To avoid this fundamental issue, we propose an alternative but more effective approach, extract-edit, to extract and then edit real sentences from the target monolingual corpora. Furthermore, we introduce a comparative translation loss to evaluate the translated target sentences and thus train the unsupervised translation systems. Experiments show that the proposed approach consistently outperforms the previous state-of-the-art unsupervised machine translation systems across two benchmarks (English-French and English-German) and two low-resource language pairs (English-Romanian and English-Russian) by more than 2 (up to 3.63) BLEU points.

310. Consistency by Agreement in Zero-Shot Neural Machine Translation [PDF] 返回目录
  NAACL 2019.
  Maruan Al-Shedivat, Ankur Parikh
Generalization and reliability of multilingual translation often highly depend on the amount of available parallel data for each language pair of interest. In this paper, we focus on zero-shot generalization—a challenging setup that tests models on translation directions they have not been optimized for at training time. To solve the problem, we (i) reformulate multilingual translation as probabilistic inference, (ii) define the notion of zero-shot consistency and show why standard training often results in models unsuitable for zero-shot tasks, and (iii) introduce a consistent agreement-based training method that encourages the model to produce equivalent translations of parallel sentences in auxiliary languages. We test our multilingual NMT models on multiple public zero-shot translation benchmarks (IWSLT17, UN corpus, Europarl) and show that agreement-based learning often results in 2-3 BLEU zero-shot improvement over strong baselines without any loss in performance on supervised translation directions.

311. Learning to Stop in Structured Prediction for Neural Machine Translation [PDF] 返回目录
  NAACL 2019.
  Mingbo Ma, Renjie Zheng, Liang Huang
Beam search optimization (Wiseman and Rush, 2016) resolves many issues in neural machine translation. However, this method lacks principled stopping criteria and does not learn how to stop during training, and the model naturally prefers longer hypotheses during the testing time in practice since they use the raw score instead of the probability-based score. We propose a novel ranking method which enables an optimal beam search stopping criterion. We further introduce a structured prediction loss function which penalizes suboptimal finished candidates produced by beam search during training. Experiments of neural machine translation on both synthetic data and real languages (German→English and Chinese→English) demonstrate our proposed methods lead to better length and BLEU score.

312. Curriculum Learning for Domain Adaptation in Neural Machine Translation [PDF] 返回目录
  NAACL 2019.
  Xuan Zhang, Pamela Shapiro, Gaurav Kumar, Paul McNamee, Marine Carpuat, Kevin Duh
We introduce a curriculum learning approach to adapt generic neural machine translation models to a specific domain. Samples are grouped by their similarities to the domain of interest and each group is fed to the training algorithm with a particular schedule. This approach is simple to implement on top of any neural framework or architecture, and consistently outperforms both unadapted and adapted baselines in experiments with two distinct domains and two language pairs.

313. Improving Robustness of Machine Translation with Synthetic Noise [PDF] 返回目录
  NAACL 2019.
  Vaibhav Vaibhav, Sumeet Singh, Craig Stewart, Graham Neubig
Modern Machine Translation (MT) systems perform remarkably well on clean, in-domain text. However, most human-generated text, particularly in the realm of social media, is full of typos, slang, dialect, idiolect and other noise, which can have a disastrous impact on the accuracy of MT. In this paper we propose methods to enhance the robustness of MT systems by emulating naturally occurring noise in otherwise clean data. By synthesizing noise in this manner, we are ultimately able to make a vanilla MT system more resilient to naturally occurring noise, partially mitigating the resulting loss in accuracy.
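
One simple way to emulate such noise is random character deletion and adjacent-character swaps on clean source text. This is a toy sketch; the paper also draws on noise patterns observed in real social-media data, which this does not cover.

```python
import random

def add_synthetic_noise(tokens, drop_prob=0.05, swap_prob=0.05, seed=None):
    """Perturb clean source tokens: randomly drop characters and swap adjacent
    characters inside words, producing noisy training data."""
    rng = random.Random(seed)
    noisy = []
    for tok in tokens:
        chars = list(tok)
        # Random character deletion (fall back to the original token if all dropped).
        chars = [c for c in chars if rng.random() > drop_prob] or list(tok)
        # Random adjacent-character swap.
        if len(chars) > 1 and rng.random() < swap_prob:
            i = rng.randrange(len(chars) - 1)
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
        noisy.append("".join(chars))
    return noisy
```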

314. Non-Parametric Adaptation for Neural Machine Translation [PDF] 返回目录
  NAACL 2019.
  Ankur Bapna, Orhan Firat
Neural Networks trained with gradient descent are known to be susceptible to catastrophic forgetting caused by parameter shift during the training process. In the context of Neural Machine Translation (NMT) this results in poor performance on heterogeneous datasets and on sub-tasks like rare phrase translation. On the other hand, non-parametric approaches are immune to forgetting, perfectly complementing the generalization ability of NMT. However, attempts to combine non-parametric or retrieval based approaches with NMT have only been successful on narrow domains, possibly due to over-reliance on sentence level retrieval. We propose a novel n-gram level retrieval approach that relies on local phrase level similarities, allowing us to retrieve neighbors that are useful for translation even when overall sentence similarity is low. We complement this with an expressive neural network, allowing our model to extract information from the noisy retrieved context. We evaluate our Semi-parametric NMT approach on a heterogeneous dataset composed of WMT, IWSLT, JRC-Acquis and OpenSubtitles, and demonstrate gains on all 4 evaluation sets. The Semi-parametric nature of our approach also opens the door for non-parametric domain adaptation, demonstrating strong inference-time adaptation performance on new domains without the need for any parameter updates.

315. Online Distilling from Checkpoints for Neural Machine Translation [PDF] 返回目录
  NAACL 2019.
  Hao-Ran Wei, Shujian Huang, Ran Wang, Xin-yu Dai, Jiajun Chen
Current predominant neural machine translation (NMT) models often have a deep structure with large numbers of parameters, making these models hard to train and prone to over-fitting. A common practice is to utilize a validation set to evaluate the training process and select the best checkpoint. Averaging and ensembling techniques over checkpoints can lead to further performance improvement. However, as these methods do not affect the training process, the system performance is restricted to the checkpoints generated in the original training procedure. In contrast, we propose an online knowledge distillation method. Our method generates a teacher model on the fly from checkpoints, guiding the training process to obtain better performance. Experiments on several datasets and language pairs show steady improvement over a strong self-attention-based baseline system. We also provide analysis of the data-limited setting with respect to over-fitting. Furthermore, our method leads to an improvement in a machine reading experiment as well.

316. MuST-C: a Multilingual Speech Translation Corpus [PDF] 返回目录
  NAACL 2019.
  Mattia A. Di Gangi, Roldano Cattoni, Luisa Bentivogli, Matteo Negri, Marco Turchi
Current research on spoken language translation (SLT) has to confront the scarcity of sizeable and publicly available training corpora. This problem hinders the adoption of neural end-to-end approaches, which represent the state of the art in the two parent tasks of SLT: automatic speech recognition and machine translation. To fill this gap, we created MuST-C, a multilingual speech translation corpus whose size and quality will facilitate the training of end-to-end systems for SLT from English into 8 languages. For each target language, MuST-C comprises at least 385 hours of audio recordings from English TED Talks, which are automatically aligned at the sentence level with their manual transcriptions and translations. Together with a description of the corpus creation methodology (scalable to add new data and cover new languages), we provide an empirical verification of its quality and SLT results computed with a state-of-the-art approach on each language direction.

317. Improving Neural Machine Translation with Neural Syntactic Distance [PDF] 返回目录
  NAACL 2019.
  Chunpeng Ma, Akihiro Tamura, Masao Utiyama, Eiichiro Sumita, Tiejun Zhao
The explicit use of syntactic information has been proved useful for neural machine translation (NMT). However, previous methods resort to either tree-structured neural networks or long linearized sequences, both of which are inefficient. Neural syntactic distance (NSD) enables us to represent a constituent tree using a sequence whose length is identical to the number of words in the sentence. NSD has been used for constituent parsing, but not in machine translation. We propose five strategies to improve NMT with NSD. Experiments show that it is not trivial to improve NMT with NSD; however, the proposed strategies are shown to improve translation performance of the baseline model (+2.1 (En–Ja), +1.3 (Ja–En), +1.2 (En–Ch), and +1.0 (Ch–En) BLEU).

318. Measuring Immediate Adaptation Performance for Neural Machine Translation [PDF] 返回目录
  NAACL 2019.
  Patrick Simianer, Joern Wuebker, John DeNero
Incremental domain adaptation, in which a system learns from the correct output for each input immediately after making its prediction for that input, can dramatically improve system performance for interactive machine translation. Users of interactive systems are sensitive to the speed of adaptation and how often a system repeats mistakes, despite being corrected. Adaptation is most commonly assessed using corpus-level BLEU- or TER-derived metrics that do not explicitly take adaptation speed into account. We find that these metrics often do not capture immediate adaptation effects, such as zero-shot and one-shot learning of domain-specific lexical items. To this end, we propose new metrics that directly evaluate immediate adaptation performance for machine translation. We use these metrics to choose the most suitable adaptation method from a range of different adaptation techniques for neural machine translation systems.

319. Differentiable Sampling with Flexible Reference Word Order for Neural Machine Translation [PDF] 返回目录
  NAACL 2019.
  Weijia Xu, Xing Niu, Marine Carpuat
Despite some empirical success at correcting exposure bias in machine translation, scheduled sampling algorithms suffer from a major drawback: they incorrectly assume that words in the reference translations and in sampled sequences are aligned at each time step. Our new differentiable sampling algorithm addresses this issue by optimizing the probability that the reference can be aligned with the sampled output, based on a soft alignment predicted by the model itself. As a result, the output distribution at each time step is evaluated with respect to the whole predicted sequence. Experiments on IWSLT translation tasks show that our approach improves BLEU compared to maximum likelihood and scheduled sampling baselines. In addition, our approach is simpler to train with no need for sampling schedule and yields models that achieve larger improvements with smaller beam sizes.

320. Reinforcement Learning based Curriculum Optimization for Neural Machine Translation [PDF] 返回目录
  NAACL 2019.
  Gaurav Kumar, George Foster, Colin Cherry, Maxim Krikun
We consider the problem of making efficient use of heterogeneous training data in neural machine translation (NMT). Specifically, given a training dataset with a sentence-level feature such as noise, we seek an optimal curriculum, or order for presenting examples to the system during training. Our curriculum framework allows examples to appear an arbitrary number of times, and thus generalizes data weighting, filtering, and fine-tuning schemes. Rather than relying on prior knowledge to design a curriculum, we use reinforcement learning to learn one automatically, jointly with the NMT system, in the course of a single training run. We show that this approach can beat uniform baselines on Paracrawl and WMT English-to-French datasets by +3.4 and +1.3 BLEU respectively. Additionally, we match the performance of strong filtering baselines and hand-designed, state-of-the-art curricula.

321. Overcoming Catastrophic Forgetting During Domain Adaptation of Neural Machine Translation [PDF] 返回目录
  NAACL 2019.
  Brian Thompson, Jeremy Gwinnup, Huda Khayrallah, Kevin Duh, Philipp Koehn
Continued training is an effective method for domain adaptation in neural machine translation. However, in-domain gains from adaptation come at the expense of general-domain performance. In this work, we interpret the drop in general-domain performance as catastrophic forgetting of general-domain knowledge. To mitigate it, we adapt Elastic Weight Consolidation (EWC)—a machine learning method for learning a new task without forgetting previous tasks. Our method retains the majority of general-domain performance lost in continued training without degrading in-domain performance, outperforming the previous state-of-the-art. We also explore the full range of general-domain performance available when some in-domain degradation is acceptable.
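
EWC adds a quadratic penalty that anchors parameters deemed important for the general-domain model by the diagonal Fisher information. Below is a minimal PyTorch-style sketch following the usual EWC recipe, not necessarily the paper's exact settings; how the Fisher estimates are computed is left outside the snippet.

```python
import torch

def ewc_penalty(model, old_params, fisher, lam=1.0):
    """Elastic Weight Consolidation term added to the in-domain loss.
    old_params and fisher are dicts keyed by parameter name, captured from the
    general-domain model before adaptation."""
    penalty = 0.0
    for name, param in model.named_parameters():
        if name in fisher:
            # Penalize movement of parameters with high Fisher importance.
            penalty = penalty + (fisher[name] * (param - old_params[name]) ** 2).sum()
    return lam * penalty
```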

322. Learning to Navigate Unseen Environments: Back Translation with Environmental Dropout [PDF] 返回目录
  NAACL 2019.
  Hao Tan, Licheng Yu, Mohit Bansal
A grand goal in AI is to build a robot that can accurately navigate based on natural language instructions, which requires the agent to perceive the scene, understand and ground language, and act in the real-world environment. One key challenge here is to learn to navigate in new environments that are unseen during training. Most of the existing approaches perform dramatically worse in unseen environments as compared to seen ones. In this paper, we present a generalizable navigational agent. Our agent is trained in two stages. The first stage is training via mixed imitation and reinforcement learning, combining the benefits from both off-policy and on-policy optimization. The second stage is fine-tuning via newly-introduced ‘unseen’ triplets (environment, path, instruction). To generate these unseen triplets, we propose a simple but effective ‘environmental dropout’ method to mimic unseen environments, which overcomes the problem of limited seen environment variability. Next, we apply semi-supervised learning (via back-translation) on these dropout environments to generate new paths and instructions. Empirically, we show that our agent generalizes substantially better when fine-tuned with these triplets, outperforming the state-of-the-art approaches by a large margin on the private unseen test set of the Room-to-Room task, and achieving the top rank on the leaderboard.

323. Fluent Translations from Disfluent Speech in End-to-End Speech Translation [PDF] 返回目录
  NAACL 2019.
  Elizabeth Salesky, Matthias Sperber, Alexander Waibel
Spoken language translation applications for speech suffer due to conversational speech phenomena, particularly the presence of disfluencies. With the rise of end-to-end speech translation models, processing steps such as disfluency removal that were previously an intermediate step between speech recognition and machine translation need to be incorporated into model architectures. We use a sequence-to-sequence model to translate from noisy, disfluent speech to fluent text with disfluencies removed using the recently collected ‘copy-edited’ references for the Fisher Spanish-English dataset. We are able to directly generate fluent translations and introduce considerations about how to evaluate success on this task. This work provides a baseline for a new task, implicitly removing disfluencies in end-to-end translation of conversational speech.

324. Neural Machine Translation of Text from Non-Native Speakers [PDF] 返回目录
  NAACL 2019.
  Antonios Anastasopoulos, Alison Lui, Toan Q. Nguyen, David Chiang
Neural Machine Translation (NMT) systems are known to degrade when confronted with noisy data, especially when the system is trained only on clean data. In this paper, we show that augmenting training data with sentences containing artificially-introduced grammatical errors can make the system more robust to such errors. In combination with an automatic grammar error correction system, we can recover 1.0 BLEU out of 2.4 BLEU lost due to grammatical errors. We also present a set of Spanish translations of the JFLEG grammar error correction corpus, which allows for testing NMT robustness to real grammatical errors.

325. Improving Domain Adaptation Translation with Domain Invariant and Specific Information [PDF] 返回目录
  NAACL 2019.
  Shuhao Gu, Yang Feng, Qun Liu
In domain adaptation for neural machine translation, translation performance can benefit from separating features into domain-specific features and common features. In this paper, we propose a method to explicitly model the two kinds of information in the encoder-decoder framework so as to exploit out-of-domain data in in-domain training. In our method, we maintain a private encoder and a private decoder for each domain which are used to model domain-specific information. In the meantime, we introduce a common encoder and a common decoder shared by all the domains which can only have domain-independent information flow through. Besides, we add a discriminator to the shared encoder and employ adversarial training for the whole model to reinforce the performance of information separation and machine translation simultaneously. Experiment results show that our method can outperform competitive baselines greatly on multiple data sets.

326. Selective Attention for Context-aware Neural Machine Translation [PDF] 返回目录
  NAACL 2019.
  Sameen Maruf, André F. T. Martins, Gholamreza Haffari
Despite the progress made in sentence-level NMT, current systems still fall short at achieving fluent, good quality translation for a full document. Recent works in context-aware NMT consider only a few previous sentences as context and may not scale to entire documents. To this end, we propose a novel and scalable top-down approach to hierarchical attention for context-aware NMT which uses sparse attention to selectively focus on relevant sentences in the document context and then attends to key words in those sentences. We also propose single-level attention approaches based on sentence or word-level information in the context. The document-level context representation, produced from these attention modules, is integrated into the encoder or decoder of the Transformer model depending on whether we use monolingual or bilingual context. Our experiments and evaluation on English-German datasets in different document MT settings show that our selective attention approach not only significantly outperforms context-agnostic baselines but also surpasses context-aware baselines in most cases.

327. Unsupervised Extraction of Partial Translations for Neural Machine Translation [PDF] 返回目录
  NAACL 2019.
  Benjamin Marie, Atsushi Fujita
In neural machine translation (NMT), monolingual data are usually exploited through a so-called back-translation: sentences in the target language are translated into the source language to synthesize new parallel data. While this method provides more training data to better model the target language, on the source side, it only exploits translations that the NMT system is already able to generate using a model trained on existing parallel data. In this work, we assume that new translation knowledge can be extracted from monolingual data, without relying at all on existing parallel data. We propose a new algorithm for extracting from monolingual data what we call partial translations: pairs of source and target sentences that contain sequences of tokens that are translations of each other. Our algorithm is fully unsupervised and takes only source and target monolingual data as input. Our empirical evaluation points out that our partial translations can be used in combination with back-translation to further improve NMT models. Furthermore, while partial translations are particularly useful for low-resource language pairs, they can also be successfully exploited in resource-rich scenarios to improve translation quality.

328. Revisiting Adversarial Autoencoder for Unsupervised Word Translation with Cycle Consistency and Improved Training [PDF] 返回目录
  NAACL 2019.
  Tasnim Mohiuddin, Shafiq Joty
Adversarial training has shown impressive success in learning bilingual dictionary without any parallel data by mapping monolingual embeddings to a shared space. However, recent work has shown superior performance for non-adversarial methods in more challenging language pairs. In this work, we revisit adversarial autoencoder for unsupervised word translation and propose two novel extensions to it that yield more stable training and improved results. Our method includes regularization terms to enforce cycle consistency and input reconstruction, and puts the target encoders as an adversary against the corresponding discriminator. Extensive experiments with European, non-European and low-resource languages show that our method is more robust and achieves better performance than recently proposed adversarial and non-adversarial approaches.

329. Addressing word-order Divergence in Multilingual Neural Machine Translation for extremely Low Resource Languages [PDF] 返回目录
  NAACL 2019.
  Rudra Murthy, Anoop Kunchukuttan, Pushpak Bhattacharyya
Transfer learning approaches for Neural Machine Translation (NMT) train an NMT model on an assisting language-target language pair (parent model) which is later fine-tuned for the source language-target language pair of interest (child model), with the target language being the same. In many cases, the assisting language has a different word order from the source language. We show that divergent word order adversely limits the benefits from transfer learning when little to no parallel corpus between the source and target language is available. To bridge this divergence, we propose to pre-order the assisting language sentences to match the word order of the source language and train the parent model. Our experiments on many language pairs show that bridging the word order gap leads to significant improvement in the translation quality in extremely low-resource scenarios.

330. Massively Multilingual Neural Machine Translation [PDF] 返回目录
  NAACL 2019.
  Roee Aharoni, Melvin Johnson, Orhan Firat
Multilingual Neural Machine Translation enables training a single model that supports translation from multiple source languages into multiple target languages. We perform extensive experiments in training massively multilingual NMT models, involving up to 103 distinct languages and 204 translation directions simultaneously. We explore different setups for training such models and analyze the trade-offs between translation quality and various modeling decisions. We report results on the publicly available TED talks multilingual corpus where we show that massively multilingual many-to-many models are effective in low resource settings, outperforming the previous state-of-the-art while supporting up to 59 languages in 116 translation directions in a single model. Our experiments on a large-scale dataset with 103 languages, 204 trained directions and up to one million examples per direction also show promising results, surpassing strong bilingual baselines and encouraging future work on massively multilingual NMT.

331. Probing the Need for Visual Context in Multimodal Machine Translation [PDF] 返回目录
  NAACL 2019.
  Ozan Caglayan, Pranava Madhyastha, Lucia Specia, Loïc Barrault
Current work on multimodal machine translation (MMT) has suggested that the visual modality is either unnecessary or only marginally beneficial. We posit that this is a consequence of the very simple, short and repetitive sentences used in the only available dataset for the task (Multi30K), rendering the source text sufficient as context. In the general case, however, we believe that it is possible to combine visual and textual information in order to ground translations. In this paper we probe the contribution of the visual modality to state-of-the-art MMT models by conducting a systematic analysis where we partially deprive the models from source-side textual context. Our results show that under limited textual context, models are capable of leveraging the visual input to generate better translations. This contradicts the current belief that MMT models disregard the visual modality because of either the quality of the image features or the way they are integrated into the model.

332. Multimodal Machine Translation with Embedding Prediction [PDF] 返回目录
  NAACL 2019. Student Research Workshop
  Tosho Hirasawa, Hayahide Yamagishi, Yukio Matsumura, Mamoru Komachi
Multimodal machine translation is an attractive application of neural machine translation (NMT). It helps computers to deeply understand visual objects and their relations with natural languages. However, multimodal NMT systems suffer from a shortage of available training data, resulting in poor performance for translating rare words. In NMT, pretrained word embeddings have been shown to improve NMT of low-resource domains, and a search-based approach is proposed to address the rare word problem. In this study, we effectively combine these two approaches in the context of multimodal NMT and explore how we can take full advantage of pretrained word embeddings to better translate rare words. We report overall performance improvements of 1.24 METEOR and 2.49 BLEU and achieve an improvement of 7.67 F-score for rare word translation.

333. Train, Sort, Explain: Learning to Diagnose Translation Models [PDF] 返回目录
  NAACL 2019. Demonstrations
  Robert Schwarzenberg, David Harbecke, Vivien Macketanz, Eleftherios Avramidis, Sebastian Möller
Evaluating translation models is a trade-off between effort and detail. On the one end of the spectrum there are automatic count-based methods such as BLEU, on the other end linguistic evaluations by humans, which arguably are more informative but also require a disproportionately high effort. To narrow the spectrum, we propose a general approach on how to automatically expose systematic differences between human and machine translations to human experts. Inspired by adversarial settings, we train a neural text classifier to distinguish human from machine translations. A classifier that performs and generalizes well after training should recognize systematic differences between the two classes, which we uncover with neural explainability methods. Our proof-of-concept implementation, DiaMaT, is open source. Applied to a dataset translated by a state-of-the-art neural Transformer model, DiaMaT achieves a classification accuracy of 75% and exposes meaningful differences between humans and the Transformer, amidst the current discussion about human parity.

334. Neural Machine Translation between Myanmar (Burmese) and Rakhine (Arakanese) [PDF] 返回目录
  NAACL 2019. the Sixth Workshop on NLP for Similar Languages, Varieties and Dialects
  Thazin Myint Oo, Ye Kyaw Thu, Khin Mar Soe
This work explores neural machine translation between Myanmar (Burmese) and Rakhine (Arakanese). Rakhine is a language closely related to Myanmar, often considered a dialect. We implemented three prominent neural machine translation (NMT) systems: recurrent neural networks (RNN), transformer, and convolutional neural networks (CNN). The systems were evaluated on a Myanmar-Rakhine parallel text corpus developed by us. In addition, two types of word segmentation schemes for word embeddings were studied: Word-BPE and Syllable-BPE segmentation. Our experimental results clearly show that the highest quality NMT and statistical machine translation (SMT) performances are obtained with Syllable-BPE segmentation for both types of translations. If we focus on NMT, we find that the transformer with Word-BPE segmentation outperforms CNN and RNN for both Myanmar-Rakhine and Rakhine-Myanmar translation. However, CNN with Syllable-BPE segmentation obtains a higher score than the RNN and transformer.

335. Comparing Pipelined and Integrated Approaches to Dialectal Arabic Neural Machine Translation [PDF] 返回目录
  NAACL 2019. the Sixth Workshop on NLP for Similar Languages, Varieties and Dialects
  Pamela Shapiro, Kevin Duh
When translating diglossic languages such as Arabic, situations may arise where we would like to translate a text but do not know which dialect it is. A traditional approach to this problem is to design dialect identification systems and dialect-specific machine translation systems. However, under the recent paradigm of neural machine translation, shared multi-dialectal systems have become a natural alternative. Here we explore under which conditions it is beneficial to perform dialect identification for Arabic neural machine translation versus using a general system for all dialects.

336. A Blissymbolics Translation System [PDF] 返回目录
  NAACL 2019. the Eighth Workshop on Speech and Language Processing for Assistive Technologies
  Usman Sohail, David Traum
Blissymbolics (Bliss) is a pictographic writing system that is used by people with communication disorders. Bliss attempts to create a writing system that makes words easier to distinguish by using pictographic symbols that encapsulate meaning rather than sound, as the English alphabet does for example. Users of Bliss rely on human interpreters to use Bliss. We created a translation system from Bliss to natural English with the hopes of decreasing the reliance on human interpreters by the Bliss community. We first discuss the basic rules of Blissymbolics. Then we point out some of the challenges associated with developing computer assisted tools for Blissymbolics. Next we talk about our ongoing work in developing a translation system, including current limitations, and future work. We conclude with a set of examples showing the current capabilities of our translation system.

337. Grounded Word Sense Translation [PDF] 返回目录
  NAACL 2019. the Second Workshop on Shortcomings in Vision and Language
  Chiraag Lala, Pranava Madhyastha, Lucia Specia
Recent work on visually grounded language learning has focused on broader applications of grounded representations, such as visual question answering and multimodal machine translation. In this paper we consider grounded word sense translation, i.e. the task of correctly translating an ambiguous source word given the corresponding textual and visual context. Our main objective is to investigate the extent to which images help improve word-level (lexical) translation quality. We do so by first studying the dataset for this task to understand the scope and challenges of the task. We then explore different data settings, image features, and ways of grounding to investigate the gain from using images in each of the combinations. We find that grounding on the image is especially beneficial in weaker unidirectional recurrent translation models. We observe that adding structured image information leads to stronger gains in lexical translation accuracy.

338. Multi-mapping Image-to-Image Translation via Learning Disentanglement [PDF] 返回目录
  NeurIPS 2019.
  Xiaoming Yu, Yuanqi Chen, Shan Liu, Thomas Li, Ge Li
Recent advances of image-to-image translation focus on learning the one-to-many mapping from two aspects: multi-modal translation and multi-domain translation. However, the existing methods only consider one of the two perspectives, which makes them unable to solve each other's problem. To address this issue, we propose a novel unified model, which bridges these two objectives. First, we disentangle the input images into the latent representations by an encoder-decoder architecture with a conditional adversarial training in the feature space. Then, we encourage the generator to learn multi-mappings by a random cross-domain translation. As a result, we can manipulate different parts of the latent representations to perform multi-modal and multi-domain translations simultaneously. Experiments demonstrate that our method outperforms state-of-the-art methods.

339. Flow-based Image-to-Image Translation with Feature Disentanglement [PDF] 返回目录
  NeurIPS 2019.
  Ruho Kondo, Keisuke Kawano, Satoshi Koide, Takuro Kutsuna
Learning non-deterministic dynamics and intrinsic factors from images obtained through physical experiments is at the intersection of machine learning and material science. Disentangling the origins of uncertainties involved in microstructure growth, for example, is of great interest because future states vary due to thermal fluctuation and other environmental factors. To this end we propose a flow-based image-to-image model, called Flow U-Net with Squeeze modules (FUNS), that allows us to disentangle the features while retaining the ability to generate high-quality diverse images from condition images. Our model successfully captures probabilistic phenomena by incorporating a U-Net-like architecture into the flow-based model. In addition, our model automatically separates the diversity of target images into condition-dependent/independent parts. We demonstrate that the quality and diversity of the images generated for microstructure growth and CelebA datasets outperform existing variational generative models.

340. Comparing Unsupervised Word Translation Methods Step by Step [PDF] 返回目录
  NeurIPS 2019.
  Mareike Hartmann, Yova Kementchedjhieva, Anders Søgaard
Cross-lingual word vector space alignment is the task of mapping the vocabularies of two languages into a shared semantic space, which can be used for dictionary induction, unsupervised machine translation, and transfer learning. In the unsupervised regime, an initial seed dictionary is learned in the absence of any known correspondences between words, through distribution matching, and the seed dictionary is then used to supervise the induction of the final alignment in what is typically referred to as a (possibly iterative) refinement step. We focus on the first step and compare distribution matching techniques in the context of language pairs for which mixed training stability and evaluation scores have been reported. We show that, surprisingly, when looking at this initial step in isolation, vanilla GANs are superior to more recent methods, both in terms of precision and robustness. The improvements reported by more recent methods thus stem from the refinement techniques, and we show that we can obtain state-of-the-art performance combining vanilla GANs with such refinement techniques.
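
The sketch below illustrates, on toy data, the two stages the abstract refers to: adversarial distribution matching with a vanilla GAN to learn a linear mapping between embedding spaces, followed by a closed-form orthogonal Procrustes refinement from a seed dictionary. The hyperparameters, random embeddings, and random "seed dictionary" are placeholders, not the paper's experimental setup.
```python
# Sketch of GAN-based distribution matching + Procrustes refinement for
# cross-lingual embedding alignment (toy data, illustrative settings only).
import torch
import torch.nn as nn

d = 64
src = torch.randn(5000, d)                     # toy source-language embeddings
tgt = torch.randn(5000, d)                     # toy target-language embeddings

W = nn.Linear(d, d, bias=False)                # the mapping to be learned
D = nn.Sequential(nn.Linear(d, 128), nn.ReLU(), nn.Linear(128, 1))  # discriminator
opt_w = torch.optim.Adam(W.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(200):                        # stage 1: distribution matching
    xs = src[torch.randint(0, len(src), (128,))]
    xt = tgt[torch.randint(0, len(tgt), (128,))]
    # discriminator tries to tell mapped source (label 0) from real target (label 1)
    d_loss = bce(D(W(xs).detach()), torch.zeros(128, 1)) + bce(D(xt), torch.ones(128, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # mapping tries to fool the discriminator
    g_loss = bce(D(W(xs)), torch.ones(128, 1))
    opt_w.zero_grad(); g_loss.backward(); opt_w.step()

# Stage 2: refinement. Given a seed dictionary of (source, target) index pairs,
# the orthogonal Procrustes problem has a closed-form SVD solution; the
# "dictionary" here is random purely for illustration.
src_idx = torch.randint(0, len(src), (500,))
tgt_idx = torch.randint(0, len(tgt), (500,))
U, _, Vt = torch.linalg.svd(tgt[tgt_idx].T @ src[src_idx])
W_refined = U @ Vt                             # orthogonal mapping fitted to the seed pairs
```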

341. Neural Machine Translation with Soft Prototype [PDF] 返回目录
  NeurIPS 2019.
  Yiren Wang, Yingce Xia, Fei Tian, Fei Gao, Tao Qin, Cheng Xiang Zhai, Tie-Yan Liu
Neural machine translation models usually use the encoder-decoder framework and generate translation from left to right (or right to left) without fully utilizing the target-side global information. A few recent approaches seek to exploit the global information through two-pass decoding, yet have limitations in translation quality and model efficiency. In this work, we propose a new framework that introduces a soft prototype into the encoder-decoder architecture, which allows the decoder to have indirect access to both past and future information, such that each target word can be generated based on a better global understanding. We further provide an efficient and effective method to generate the prototype. Empirical studies on various neural machine translation tasks show that our approach brings significant improvement in generation quality over the baseline model, with little extra cost in storage and inference time, demonstrating the effectiveness of our proposed framework. In particular, we achieve state-of-the-art results on WMT 2014, 2015, and 2017 English-to-German translation.
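
One way to read the "soft prototype" idea is as an expected target-word embedding computed from a per-position probability distribution over the target vocabulary, which the decoder can attend over alongside the encoder states. The sketch below is an illustrative simplification with a stubbed-out probability generator; the module choices and sizes are assumptions, not the paper's exact architecture.
```python
# Sketch: a soft prototype as a probability-weighted mixture of target embeddings
# that the decoder can attend over together with the encoder states.
import torch
import torch.nn as nn

vocab, emb_dim, src_len, proto_len = 10000, 512, 9, 9
tgt_embedding = nn.Embedding(vocab, emb_dim)

# Suppose some lightweight generator produces, for each prototype position,
# a distribution over the target vocabulary (random here, for illustration).
probs = torch.softmax(torch.randn(2, proto_len, vocab), dim=-1)

# Soft prototype = expected target-word embedding at each position.
prototype = probs @ tgt_embedding.weight              # (batch, proto_len, emb_dim)

# The decoder attends over [encoder states; prototype], so every target word
# sees an approximation of the whole translation, not just the history.
encoder_states = torch.randn(2, src_len, emb_dim)
memory = torch.cat([encoder_states, prototype], dim=1)
attn = nn.MultiheadAttention(emb_dim, num_heads=8, batch_first=True)
query = torch.randn(2, 5, emb_dim)                    # partial decoder states
context, _ = attn(query, memory, memory)
print(context.shape)                                  # torch.Size([2, 5, 512])
```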

342. Explicitly disentangling image content from translation and rotation with spatial-VAE [PDF] 返回目录
  NeurIPS 2019.
  Tristan Bepler, Ellen Zhong, Kotaro Kelley, Edward Brignole, Bonnie Berger
Given an image dataset, we are often interested in finding data generative factors that encode semantic content independently from pose variables such as rotation and translation. However, current disentanglement approaches do not impose any specific structure on the learned latent representations. We propose a method for explicitly disentangling image rotation and translation from other unstructured latent factors in a variational autoencoder (VAE) framework. By formulating the generative model as a function of the spatial coordinate, we make the reconstruction error differentiable with respect to latent translation and rotation parameters. This formulation allows us to train a neural network to perform approximate inference on these latent variables while explicitly constraining them to only represent rotation and translation. We demonstrate that this framework, termed spatial-VAE, effectively learns latent representations that disentangle image rotation and translation from content and improves reconstruction over standard VAEs on several benchmark datasets, including applications to modeling continuous 2-D views of proteins from single particle electron microscopy and galaxies in astronomical images.
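
The key trick described above, making reconstruction differentiable in pose by decoding each pixel from its rotated and translated spatial coordinate plus an unstructured latent code, can be sketched as follows. The MLP decoder, dimensions, and toy coordinate grid are illustrative assumptions rather than the authors' released implementation.
```python
# Sketch: decode pixel intensities as a function of (rotated, translated)
# spatial coordinates and an unstructured latent code, so the reconstruction
# is differentiable with respect to the pose parameters.
import torch
import torch.nn as nn

class SpatialDecoder(nn.Module):
    def __init__(self, z_dim=8, hid=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 + z_dim, hid), nn.Tanh(),
            nn.Linear(hid, hid), nn.Tanh(),
            nn.Linear(hid, 1),                 # grayscale intensity per coordinate
        )

    def forward(self, coords, z, theta, shift):
        # coords: (n_pix, 2) fixed pixel grid; z: (batch, z_dim)
        # theta: (batch,) rotation angle; shift: (batch, 2) translation
        cos, sin = torch.cos(theta), torch.sin(theta)
        rot = torch.stack([torch.stack([cos, -sin], -1),
                           torch.stack([sin,  cos], -1)], -2)      # (batch, 2, 2)
        xy = coords.unsqueeze(0) @ rot.transpose(1, 2) + shift.unsqueeze(1)
        zz = z.unsqueeze(1).expand(-1, coords.size(0), -1)
        return self.mlp(torch.cat([xy, zz], dim=-1)).squeeze(-1)   # (batch, n_pix)

# Toy usage: an 8x8 coordinate grid, a batch of 4 latent codes and poses.
grid = torch.stack(torch.meshgrid(torch.linspace(-1, 1, 8),
                                  torch.linspace(-1, 1, 8),
                                  indexing="ij"), -1).reshape(-1, 2)
dec = SpatialDecoder()
img = dec(grid, torch.randn(4, 8), torch.rand(4) * 3.14, torch.zeros(4, 2))
print(img.shape)  # torch.Size([4, 64])
```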

343. Semantic Neural Machine Translation Using AMR [PDF] 返回目录
  TACL 2019.
  Linfeng Song, Daniel Gildea, Yue Zhang, Zhiguo Wang, Jinsong Su
It is intuitive that semantic representations can be useful for machine translation, mainly because they can help in enforcing meaning preservation and handling data sparsity (many sentences correspond to one meaning) of machine translation models. On the other hand, little work has been done on leveraging semantics for neural machine translation (NMT). In this work, we study the usefulness of AMR (abstract meaning representation) on NMT. Experiments on a standard English-to-German dataset show that incorporating AMR as additional knowledge can significantly improve a strong attention-based sequence-to-sequence neural translation model.

344. Synchronous Bidirectional Neural Machine Translation [PDF] 返回目录
  TACL 2019.
  Long Zhou, Jiajun Zhang, Chengqing Zong
Existing approaches to neural machine translation (NMT) generate the target language sequence token-by-token from left to right. However, this kind of unidirectional decoding framework cannot make full use of the target-side future contexts that can be produced in a right-to-left decoding direction, and thus suffers from the issue of unbalanced outputs. In this paper, we introduce synchronous bidirectional neural machine translation (SB-NMT), which predicts its outputs using left-to-right and right-to-left decoding simultaneously and interactively, in order to leverage both history and future information at the same time. Specifically, we first propose a new algorithm that enables synchronous bidirectional decoding in a single model. Then, we present an interactive decoding model in which left-to-right (right-to-left) generation depends not only on its previously generated outputs, but also on future contexts predicted by right-to-left (left-to-right) decoding. We extensively evaluate the proposed SB-NMT model on large-scale NIST Chinese–English, WMT14 English–German, and WMT18 Russian–English translation tasks. Experimental results demonstrate that our model achieves significant improvements over the strong Transformer model by 3.92, 1.49, and 1.04 BLEU points, respectively, and obtains state-of-the-art performance on the Chinese–English and English–German translation tasks.

345. Attention-Passing Models for Robust and Data-Efficient End-to-End Speech Translation [PDF] 返回目录
  TACL 2019.
  Matthias Sperber, Graham Neubig, Jan Niehues, Alex Waibel
Speech translation has traditionally been approached through cascaded models consisting of a speech recognizer trained on a corpus of transcribed speech, and a machine translation system trained on parallel texts. Several recent works have shown the feasibility of collapsing the cascade into a single, direct model that can be trained in an end-to-end fashion on a corpus of translated speech. However, experiments are inconclusive on whether the cascade or the direct model is stronger, and have only been conducted under the unrealistic assumption that both are trained on equal amounts of data, ignoring other available speech recognition and machine translation corpora. In this paper, we demonstrate that direct speech translation models require more data to perform well than cascaded models, and although they allow including auxiliary data through multi-task training, they are poor at exploiting such data, putting them at a severe disadvantage. As a remedy, we propose the use of end-to-end trainable models with two attention mechanisms, the first establishing source speech to source text alignments, the second modeling source to target text alignment. We show that such models naturally decompose into multi-task–trainable recognition and translation tasks and propose an attention-passing technique that alleviates error propagation issues in a previous formulation of a model with two attention stages. Our proposed model outperforms all examined baselines and is able to exploit auxiliary training data much more effectively than direct attentional models.
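
A rough sketch of the two-attention cascade the abstract describes: a recognition-side decoder attends over speech-frame encodings, and the translation decoder then attends over the recognition decoder's continuous hidden states rather than a discrete transcript, which avoids committing to hard recognition errors. The module choices, dimensions, and random "queries" below are placeholders for illustration only, not the paper's architecture.
```python
# Sketch: two-stage attention for speech translation, where continuous ASR
# decoder states (not discrete transcripts) are passed to the translation stage.
import torch
import torch.nn as nn

hid = 256
speech_enc = nn.GRU(40, hid, batch_first=True)            # 40-dim filterbank frames
asr_attn   = nn.MultiheadAttention(hid, 4, batch_first=True)
asr_rnn    = nn.GRU(hid, hid, batch_first=True)
mt_attn    = nn.MultiheadAttention(hid, 4, batch_first=True)
mt_rnn     = nn.GRU(hid, hid, batch_first=True)
to_src_vocab, to_tgt_vocab = nn.Linear(hid, 500), nn.Linear(hid, 500)

frames = torch.randn(2, 120, 40)                          # (batch, time, feat)
enc, _ = speech_enc(frames)                               # (2, 120, hid)

# Stage 1: recognition decoder (queries stand in for previous ASR states).
asr_queries = torch.randn(2, 15, hid)
asr_ctx, _ = asr_attn(asr_queries, enc, enc)              # speech -> source-text attention
asr_states, _ = asr_rnn(asr_ctx)                          # continuous states to pass on
src_logits = to_src_vocab(asr_states)                     # auxiliary recognition loss target

# Stage 2: translation decoder attends over the passed ASR states.
mt_queries = torch.randn(2, 12, hid)
mt_ctx, _ = mt_attn(mt_queries, asr_states, asr_states)   # source -> target attention
mt_states, _ = mt_rnn(mt_ctx)
tgt_logits = to_tgt_vocab(mt_states)
print(src_logits.shape, tgt_logits.shape)
```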

Note: this paper list was compiled using the AC论文搜索器 (AC paper search tool).