
【NLP】 2014-2020: Collected Papers on Simultaneous Translation (Simultaneous Interpretation)

Contents

1. Opportunistic Decoding with Timely Correction for Simultaneous Translation, ACL 2020 [PDF] Abstract
2. Simultaneous Translation Policies: From Fixed to Adaptive, ACL 2020 [PDF] Abstract
3. SimulSpeech: End-to-End Simultaneous Speech to Text Translation, ACL 2020 [PDF] Abstract
4. Proceedings of the First Workshop on Automatic Simultaneous Translation, ACL 2020 [PDF] Abstract
5. Dynamic Sentence Boundary Detection for Simultaneous Translation, ACL 2020 [PDF] Abstract
6. ON-TRAC Consortium for End-to-End and Simultaneous Speech Translation Challenge Tasks at IWSLT 2020, ACL 2020 [PDF] Abstract
7. End-to-End Simultaneous Translation System for IWSLT2020 Using Modality Agnostic Meta-Learning, ACL 2020 [PDF] Abstract
8. Re-translation versus Streaming for Simultaneous Translation, ACL 2020 [PDF] Abstract
9. Towards Stream Translation: Adaptive Computation Time for Simultaneous Machine Translation, ACL 2020 [PDF] Abstract
10. Neural Simultaneous Speech Translation Using Alignment-Based Chunking, ACL 2020 [PDF] Abstract
11. The JHU Submission to the 2020 Duolingo Shared Task on Simultaneous Translation and Paraphrase for Language Education, ACL 2020 [PDF] Abstract
12. Simultaneous paraphrasing and translation by fine-tuning Transformer models, ACL 2020 [PDF] Abstract
13. Simultaneous Translation and Paraphrase for Language Education, ACL 2020 [PDF] Abstract
14. Simultaneous Machine Translation with Visual Context, EMNLP 2020 [PDF] Abstract
15. Learning Adaptive Segmentation Policy for Simultaneous Translation, EMNLP 2020 [PDF] Abstract
16. Fluent and Low-latency Simultaneous Speech-to-Speech Translation with Self-adaptive Training, EMNLP 2020 [PDF] Abstract
17. Monotonic Infinite Lookback Attention for Simultaneous Machine Translation, ACL 2019 [PDF] Abstract
18. STACL: Simultaneous Translation with Implicit Anticipation and Controllable Latency using Prefix-to-Prefix Framework, ACL 2019 [PDF] Abstract
19. Simultaneous Translation with Flexible Policy via Restricted Imitation Learning, ACL 2019 [PDF] Abstract
20. Simpler and Faster Learning of Adaptive Policies for Simultaneous Translation, EMNLP 2019 [PDF] Abstract
21. Speculative Beam Search for Simultaneous Translation, EMNLP 2019 [PDF] Abstract
22. Lost in Interpretation: Predicting Untranslated Terminology in Simultaneous Interpretation, NAACL 2019 [PDF] Abstract
23. Prediction Improves Simultaneous Neural Machine Translation, EMNLP 2018 [PDF] Abstract
24. Incremental Decoding and Training Methods for Simultaneous Translation in Neural Machine Translation, NAACL 2018 [PDF] Abstract
25. Interpretese vs. Translationese: The Uniqueness of Human Strategies in Simultaneous Interpretation, NAACL 2016 [PDF] Abstract
26. Lecture Translator - Speech translation framework for simultaneous lecture translation, NAACL 2016 [PDF] Abstract
27. Syntax-based Simultaneous Translation through Prediction of Unseen Syntactic Constituents, ACL 2015 [PDF] Abstract
28. Syntax-based Rewriting for Simultaneous Machine Translation, EMNLP 2015 [PDF] Abstract
29. Optimizing Segmentation Strategies for Simultaneous Speech Translation, ACL 2014 [PDF] Abstract
30. Don’t Until the Final Verb Wait: Reinforcement Learning for Simultaneous Machine Translation, EMNLP 2014 [PDF] Abstract

Abstracts

1. Opportunistic Decoding with Timely Correction for Simultaneous Translation [PDF] Back to Contents
  ACL 2020.
  Renjie Zheng, Mingbo Ma, Baigong Zheng, Kaibo Liu, Liang Huang
Simultaneous translation has many important application scenarios and has recently attracted much attention from both academia and industry. Most existing frameworks, however, have difficulty balancing translation quality and latency; i.e., the decoding policy is usually either too aggressive or too conservative. We propose an opportunistic decoding technique with timely correction ability, which always (over-)generates a certain amount of extra words at each step to keep the audience on track with the latest information. At the same time, it corrects, in a timely fashion, mistakes in the previously overgenerated words as more source context is observed, to ensure high translation quality. Experiments show our technique achieves a substantial reduction in latency and up to a +3.1 BLEU increase, with a revision rate under 8%, on Chinese-to-English and English-to-Chinese translation.
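
To make the commit-and-revise behaviour concrete, here is a hypothetical sketch (not the authors' implementation) of a display buffer that over-generates a couple of speculative words each step and counts how often an already-shown word later gets corrected; `opportunistic_display` and its inputs are illustrative names only.

```python
# Hypothetical illustration of opportunistic decoding with timely correction:
# each step shows the committed prefix plus a few speculative extra words, and
# later steps may revise the speculative part once more source is observed.

def opportunistic_display(step_hypotheses, commit_per_step=1, extra=2):
    """step_hypotheses: one hypothesis (list of target words) per decoding
    step, each produced after reading more source context."""
    prev_shown, frames, revisions = [], [], 0
    for t, hyp in enumerate(step_hypotheses):
        committed_len = min(len(hyp), (t + 1) * commit_per_step)  # never revised
        shown = hyp[: committed_len + extra]        # over-generate `extra` words
        # a revision = an already-displayed position whose word changed
        revisions += sum(1 for a, b in zip(prev_shown, shown) if a != b)
        prev_shown = shown
        frames.append(" ".join(shown))
    return frames, revisions

frames, revisions = opportunistic_display([
    ["I", "went", "to"],
    ["I", "went", "to", "a", "store"],
    ["I", "went", "to", "the", "store", "yesterday"],
])
print(frames)      # progressively longer output, shown early
print(revisions)   # 1: the speculative word "a" was later corrected to "the"
```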

2. Simultaneous Translation Policies: From Fixed to Adaptive [PDF] Back to Contents
  ACL 2020.
  Baigong Zheng, Kaibo Liu, Renjie Zheng, Mingbo Ma, Hairong Liu, Liang Huang
Adaptive policies are better than fixed policies for simultaneous translation, since they can flexibly balance the trade-off between translation quality and latency based on the current context information. But previous methods of obtaining adaptive policies either rely on a complicated training process or underperform simple fixed policies. We design an algorithm to achieve adaptive policies via a simple heuristic composition of a set of fixed policies. Experiments on Chinese -> English and German -> English show that our adaptive policies can outperform fixed ones by up to 4 BLEU points for the same latency, and, more surprisingly, they even surpass the BLEU score of full-sentence translation in greedy mode (and come very close to beam mode), but with much lower latency.
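
As a rough, hedged illustration of composing fixed wait-k policies into an adaptive one: at each step the sketch below consults the wait-k model matching the current lag and writes only if that model is confident enough, otherwise it reads. The `models[k].next_word_prob(...)` interface, the fixed threshold, and the lag-to-k mapping are all simplifying assumptions, not the paper's exact heuristic.

```python
# Hedged sketch: an adaptive READ/WRITE policy composed from a set of fixed
# wait-k models.  `models[k].next_word_prob(src, tgt)` is a hypothetical
# interface returning (best_next_word, its_probability) for the wait-k model.

def adaptive_policy(source_stream, models, k_min=1, k_max=5, threshold=0.6,
                    max_len=200):
    src_prefix, tgt_prefix = [], []
    source_iter, src_done = iter(source_stream), False
    while len(tgt_prefix) < max_len:
        lag = len(src_prefix) - len(tgt_prefix)   # current effective k
        k = min(max(lag, k_min), k_max)           # pick the matching wait-k model
        word, prob = models[k].next_word_prob(src_prefix, tgt_prefix)
        if src_done or (lag >= k_min and prob >= threshold):
            if word == "</s>":                    # WRITE (or stop)
                break
            tgt_prefix.append(word)
        else:                                     # READ one more source word
            try:
                src_prefix.append(next(source_iter))
            except StopIteration:
                src_done = True
    return tgt_prefix
```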

3. SimulSpeech: End-to-End Simultaneous Speech to Text Translation [PDF] Back to Contents
  ACL 2020.
  Yi Ren, Jinglin Liu, Xu Tan, Chen Zhang, Tao Qin, Zhou Zhao, Tie-Yan Liu
In this work, we develop SimulSpeech, an end-to-end simultaneous speech-to-text translation system which translates speech in the source language into text in the target language concurrently. SimulSpeech consists of a speech encoder, a speech segmenter and a text decoder, where 1) the segmenter builds upon the encoder and leverages a connectionist temporal classification (CTC) loss to split the input streaming speech in real time, and 2) the encoder-decoder attention adopts a wait-k strategy for simultaneous translation. SimulSpeech is more challenging than previous cascaded systems (with simultaneous automatic speech recognition (ASR) and simultaneous neural machine translation (NMT)). We introduce two novel knowledge distillation methods to ensure its performance: 1) attention-level knowledge distillation transfers the knowledge from the multiplication of the attention matrices of simultaneous NMT and ASR models to help the training of the attention mechanism in SimulSpeech; 2) data-level knowledge distillation transfers the knowledge from the full-sentence NMT model and also reduces the complexity of the data distribution to help the optimization of SimulSpeech. Experiments on the MuST-C English-Spanish and English-German spoken language translation datasets show that SimulSpeech achieves reasonable BLEU scores and lower delay compared to full-sentence end-to-end speech to text translation (without simultaneous translation), and better performance than the two-stage cascaded simultaneous translation model in terms of BLEU scores and translation delay.
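
The attention-level distillation target mentioned above (the product of the NMT and ASR attention matrices) can be pictured in a few lines of NumPy. The shapes and variable names below are illustrative assumptions; the point is only that composing a target-to-transcript attention with a transcript-to-frame attention yields a target-to-frame distribution usable as a teacher signal.

```python
import numpy as np

# Illustrative shapes (assumptions): T_tgt target tokens, S_src source-transcript
# tokens, F speech frames.
T_tgt, S_src, F = 6, 8, 50
rng = np.random.default_rng(0)

def row_softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

att_nmt = row_softmax(rng.normal(size=(T_tgt, S_src)))  # target -> source text
att_asr = row_softmax(rng.normal(size=(S_src, F)))      # source text -> speech frames

# Teacher signal for the speech-to-text translation attention: align each target
# token directly to speech frames by composing the two attention matrices.
att_teacher = att_nmt @ att_asr                         # shape (T_tgt, F)
assert np.allclose(att_teacher.sum(axis=-1), 1.0)       # rows stay distributions
```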

4. Proceedings of the First Workshop on Automatic Simultaneous Translation [PDF] Back to Contents
  ACL 2020. the First Workshop on Automatic Simultaneous Translation
  Hua Wu, Colin Cherry, Liang Huang, Zhongjun He, Mark Liberman, James Cross, Yang Liu


5. Dynamic Sentence Boundary Detection for Simultaneous Translation [PDF] Back to Contents
  ACL 2020. the First Workshop on Automatic Simultaneous Translation
  Ruiqing Zhang, Chuanqiang Zhang
Simultaneous translation is a great challenge in which translation starts before the source sentence is finished. Most studies take the transcription as input and focus on balancing translation quality and latency for each sentence. However, most ASR systems cannot provide accurate sentence boundaries in real time. Thus, segmenting the word stream into sentences before translation is a key problem. In this paper, we propose a novel method for sentence boundary detection that treats it as a multi-class classification task under an end-to-end pre-training framework. Experiments show significant improvements in terms of both translation quality and latency.

6. ON-TRAC Consortium for End-to-End and Simultaneous Speech Translation Challenge Tasks at IWSLT 2020 [PDF] Back to Contents
  ACL 2020. the 17th International Conference on Spoken Language Translation
  Maha Elbayad, Ha Nguyen, Fethi Bougares, Natalia Tomashenko, Antoine Caubrière, Benjamin Lecouteux, Yannick Estève, Laurent Besacier
This paper describes the ON-TRAC Consortium translation systems developed for two challenge tracks featured in the Evaluation Campaign of IWSLT 2020, offline speech translation and simultaneous speech translation. ON-TRAC Consortium is composed of researchers from three French academic laboratories: LIA (Avignon Université), LIG (Université Grenoble Alpes), and LIUM (Le Mans Université). Attention-based encoder-decoder models, trained end-to-end, were used for our submissions to the offline speech translation track. Our contributions focused on data augmentation and ensembling of multiple models. In the simultaneous speech translation track, we build on Transformer-based wait-k models for the text-to-text subtask. For speech-to-text simultaneous translation, we attach a wait-k MT system to a hybrid ASR system. We propose an algorithm to control the latency of the ASR+MT cascade and achieve a good latency-quality trade-off on both subtasks.

7. End-to-End Simultaneous Translation System for IWSLT2020 Using Modality Agnostic Meta-Learning [PDF] Back to Contents
  ACL 2020. the 17th International Conference on Spoken Language Translation
  Hou Jeung Han, Mohd Abbas Zaidi, Sathish Reddy Indurthi, Nikhil Kumar Lakumarapu, Beomseok Lee, Sangha Kim
In this paper, we describe end-to-end simultaneous speech-to-text and text-to-text translation systems submitted to IWSLT2020 online translation challenge. The systems are built by adding wait-k and meta-learning approaches to the Transformer architecture. The systems are evaluated on different latency regimes. The simultaneous text-to-text translation achieved a BLEU score of 26.38 compared to the competition baseline score of 14.17 on the low latency regime (Average latency ≤ 3). The simultaneous speech-to-text system improves the BLEU score by 7.7 points over the competition baseline for the low latency regime (Average Latency ≤ 1000).
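
For reference, the latency regimes quoted above are, as far as we know, measured with Average Lagging (AL) as defined by Ma et al. (2019) (in words for text input, milliseconds for speech). Written out, with g(t) the amount of source consumed when target word t is emitted and r = |y|/|x|:

```latex
% Average Lagging (AL), Ma et al. (2019) -- quoted here for reference only.
% g(t): source tokens (or ms of speech) consumed when target word t is emitted,
% r = |y|/|x|, and tau is the first step at which the full source has been read.
\[
  \mathrm{AL} \;=\; \frac{1}{\tau}\sum_{t=1}^{\tau}\Bigl(g(t) - \frac{t-1}{r}\Bigr),
  \qquad
  \tau \;=\; \min\{\, t : g(t) = |x| \,\}.
\]
```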

8. Re-translation versus Streaming for Simultaneous Translation [PDF] Back to Contents
  ACL 2020. the 17th International Conference on Spoken Language Translation
  Naveen Arivazhagan, Colin Cherry, Wolfgang Macherey, George Foster
There has been great progress in improving streaming machine translation, a simultaneous paradigm where the system appends to a growing hypothesis as more source content becomes available. We study a related problem in which revisions to the hypothesis beyond strictly appending words are permitted. This is suitable for applications such as live captioning an audio feed. In this setting, we compare custom streaming approaches to re-translation, a straightforward strategy where each new source token triggers a distinct translation from scratch. We find re-translation to be as good or better than state-of-the-art streaming systems, even when operating under constraints that allow very few revisions. We attribute much of this success to a previously proposed data-augmentation technique that adds prefix-pairs to the training data, which alongside wait-k inference forms a strong baseline for streaming translation. We also highlight re-translation’s ability to wrap arbitrarily powerful MT systems with an experiment showing large improvements from an upgrade to its base model.
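
The re-translation strategy compared in this paper is simple enough to state as a loop: every new source token triggers a full translation of the prefix read so far, and the displayed hypothesis is replaced wholesale. A minimal sketch, assuming a hypothetical full-sentence `translate(prefix)` function:

```python
def retranslation_stream(source_tokens, translate):
    """Re-translation baseline: each new source token triggers a fresh
    translation of the whole prefix; the previous hypothesis may be revised
    arbitrarily.  `translate(prefix)` is a hypothetical full-sentence MT call
    returning a list of target words."""
    prefix, previous, frames, revised_words = [], [], [], 0
    for token in source_tokens:
        prefix.append(token)
        hypothesis = translate(prefix)
        # count displayed positions whose word changed (a "revision")
        overlap = min(len(previous), len(hypothesis))
        revised_words += sum(1 for i in range(overlap)
                             if previous[i] != hypothesis[i])
        previous = hypothesis
        frames.append(" ".join(hypothesis))
    return frames, revised_words
```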

9. Towards Stream Translation: Adaptive Computation Time for Simultaneous Machine Translation [PDF] Back to Contents
  ACL 2020. the 17th International Conference on Spoken Language Translation
  Felix Schneider, Alexander Waibel
Simultaneous machine translation systems rely on a policy to schedule read and write operations in order to begin translating a source sentence before it is complete. In this paper, we demonstrate the use of Adaptive Computation Time (ACT) as an adaptive, learned policy for simultaneous machine translation using the transformer model and as a more numerically stable alternative to Monotonic Infinite Lookback Attention (MILk). We achieve state-of-the-art results in terms of latency-quality tradeoffs. We also propose a method to use our model on unsegmented input, i.e. without sentence boundaries, simulating the condition of translating output from automatic speech recognition. We present first benchmark results on this task.

10. Neural Simultaneous Speech Translation Using Alignment-Based Chunking [PDF] Back to Contents
  ACL 2020. the 17th International Conference on Spoken Language Translation
  Patrick Wilken, Tamer Alkhouli, Evgeny Matusov, Pavel Golik
In simultaneous machine translation, the objective is to determine when to produce a partial translation given a continuous stream of source words, with a trade-off between latency and quality. We propose a neural machine translation (NMT) model that makes dynamic decisions on when to continue consuming input or to generate output words. The model is composed of two main components: one to dynamically decide on ending a source chunk, and another that translates the consumed chunk. We train the components jointly and in a manner consistent with the inference conditions. To generate chunked training data, we propose a method that utilizes word alignment while also preserving enough context. We compare models with bidirectional and unidirectional encoders of different depths, both on real speech and text input. Our results on the IWSLT 2020 English-to-German task outperform a wait-k baseline by 2.6 to 3.7% BLEU absolute.

11. The JHU Submission to the 2020 Duolingo Shared Task on Simultaneous Translation and Paraphrase for Language Education [PDF] Back to Contents
  ACL 2020. the Fourth Workshop on Neural Generation and Translation
  Huda Khayrallah, Jacob Bremerman, Arya D. McCarthy, Kenton Murray, Winston Wu, Matt Post
This paper presents the Johns Hopkins University submission to the 2020 Duolingo Shared Task on Simultaneous Translation and Paraphrase for Language Education (STAPLE). We participated in all five language tasks, placing first in each. Our approach involved a language-agnostic pipeline of three components: (1) building strong machine translation systems on general-domain data, (2) fine-tuning on Duolingo-provided data, and (3) generating n-best lists which are then filtered with various score-based techniques. In addition to the language-agnostic pipeline, we attempted a number of linguistically-motivated approaches, with, unfortunately, little success. We also find that improving BLEU performance of the beam-search generated translation does not necessarily improve on the task metric—weighted macro F1 of an n-best list.

12. Simultaneous paraphrasing and translation by fine-tuning Transformer models [PDF] Back to Contents
  ACL 2020. the Fourth Workshop on Neural Generation and Translation
  Rakesh Chada
This paper describes the third place submission to the shared task on simultaneous translation and paraphrasing for language education at the 4th workshop on Neural Generation and Translation (WNGT) for ACL 2020. The final system leverages pre-trained translation models and uses a Transformer architecture combined with an oversampling strategy to achieve a competitive performance. This system significantly outperforms the baseline on Hungarian (27% absolute improvement in Weighted Macro F1 score) and Portuguese (33% absolute improvement) languages.

13. Simultaneous Translation and Paraphrase for Language Education [PDF] Back to Contents
  ACL 2020. the Fourth Workshop on Neural Generation and Translation
  Stephen Mayhew, Klinton Bicknell, Chris Brust, Bill McDowell, Will Monroe, Burr Settles
We present the task of Simultaneous Translation and Paraphrasing for Language Education (STAPLE). Given a prompt in one language, the goal is to generate a diverse set of correct translations that language learners are likely to produce. This is motivated by the need to create and maintain large, high-quality sets of acceptable translations for exercises in a language-learning application, and synthesizes work spanning machine translation, MT evaluation, automatic paraphrasing, and language education technology. We developed a novel corpus with unique properties for five languages (Hungarian, Japanese, Korean, Portuguese, and Vietnamese), and report on the results of a shared task challenge which attracted 20 teams to solve the task. In our meta-analysis, we focus on three aspects of the resulting systems: external training corpus selection, model architecture and training decisions, and decoding and filtering strategies. We find that strong systems start with a large amount of generic training data, and then fine-tune with in-domain data, sampled according to our provided learner response frequencies.

14. Simultaneous Machine Translation with Visual Context [PDF] Back to Contents
  EMNLP 2020. Long Paper
  Ozan Caglayan, Julia Ive, Veneta Haralampieva, Pranava Madhyastha, Loïc Barrault, Lucia Specia
Simultaneous machine translation (SiMT) aims to translate a continuous input text stream into another language with the lowest latency and highest quality possible. The translation thus has to start with an incomplete source text, which is read progressively, creating the need for anticipation. In this paper, we seek to understand whether the addition of visual information can compensate for the missing source context. To this end, we analyse the impact of different multimodal approaches and visual features on state-of-the-art SiMT frameworks. Our results show that visual context is helpful and that visually-grounded models based on explicit object region information are much better than commonly used global features, reaching up to 3 BLEU points improvement under low latency scenarios. Our qualitative analysis illustrates cases where only the multimodal systems are able to translate correctly from English into gender-marked languages, as well as deal with differences in word order, such as adjective-noun placement between English and French.

15. Learning Adaptive Segmentation Policy for Simultaneous Translation [PDF] Back to Contents
  EMNLP 2020. Long Paper
  Ruiqing Zhang, Chuanqiang Zhang, Zhongjun He, Hua Wu, Haifeng Wang
Balancing accuracy and latency is a great challenge for simultaneous translation. To achieve high accuracy, the model usually needs to wait for more streaming text before translation, which results in increased latency. However, keeping low latency would probably hurt accuracy. Therefore, it is essential to segment the ASR output into appropriate units for translation. Inspired by human interpreters, we propose a novel adaptive segmentation policy for simultaneous translation. The policy learns to segment the source text by considering possible translations produced by the translation model, maintaining consistency between the segmentation and translation. Experimental results on Chinese-English and German-English translation show that our method achieves a better accuracy-latency trade-off over recently proposed state-of-the-art methods.

16. Fluent and Low-latency Simultaneous Speech-to-Speech Translation with Self-adaptive Training [PDF] Back to Contents
  EMNLP 2020. Findings Short Paper
  Renjie Zheng, Mingbo Ma, Baigong Zheng, Kaibo Liu, Jiahong Yuan, Kenneth Church, Liang Huang
Simultaneous speech-to-speech translation is an extremely challenging but widely useful scenario that aims to generate target-language speech only a few seconds behind the source-language speech. In addition, we have to continuously translate speech consisting of multiple sentences, but all recent solutions merely focus on the single-sentence scenario. As a result, current approaches accumulate more and more latency in later sentences when the speaker talks faster, and introduce unnatural pauses into the translated speech when the speaker talks slower. To overcome these issues, we propose Self-Adaptive Translation, which flexibly adjusts the length of translations to accommodate different source speech rates. At similar levels of translation quality (as measured by BLEU), our method generates more fluent target speech with lower latency than the baseline, in both Zh<->En directions.

17. Monotonic Infinite Lookback Attention for Simultaneous Machine Translation [PDF] Back to Contents
  ACL 2019.
  Naveen Arivazhagan, Colin Cherry, Wolfgang Macherey, Chung-Cheng Chiu, Semih Yavuz, Ruoming Pang, Wei Li, Colin Raffel
Simultaneous machine translation begins to translate each source sentence before the source speaker is finished speaking, with applications to live and streaming scenarios. Simultaneous systems must carefully schedule their reading of the source sentence to balance quality against latency. We present the first simultaneous translation system to learn an adaptive schedule jointly with a neural machine translation (NMT) model that attends over all source tokens read thus far. We do so by introducing Monotonic Infinite Lookback (MILk) attention, which maintains both a hard, monotonic attention head to schedule the reading of the source sentence, and a soft attention head that extends from the monotonic head back to the beginning of the source. We show that MILk’s adaptive schedule allows it to arrive at latency-quality trade-offs that compare favorably to those of a recently proposed wait-k strategy for many latency values.
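
A toy illustration of the attention structure described above: a hard monotonic head picks how far into the source to read, and a soft head then attends over everything read so far. The NumPy sketch below shows only this inference-time picture (a plain softmax over the prefix ending at the monotonic position), not the expected-attention training computation used in the paper.

```python
import numpy as np

def milk_style_context(encoder_states, query, monotonic_pos):
    """Soft attention over the full prefix ending at the hard monotonic head.

    encoder_states -- (S, d) states for the source tokens read so far
    query          -- (d,) decoder query vector
    monotonic_pos  -- index chosen by the hard monotonic attention head
    """
    prefix = encoder_states[: monotonic_pos + 1]   # infinite lookback over 1..t_i
    scores = prefix @ query                        # dot-product energies
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ prefix                        # context vector

rng = np.random.default_rng(0)
states = rng.normal(size=(10, 4))                  # 10 source tokens read so far
print(milk_style_context(states, rng.normal(size=4), monotonic_pos=6).shape)  # (4,)
```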

18. STACL: Simultaneous Translation with Implicit Anticipation and Controllable Latency using Prefix-to-Prefix Framework [PDF] Back to Contents
  ACL 2019.
  Mingbo Ma, Liang Huang, Hao Xiong, Renjie Zheng, Kaibo Liu, Baigong Zheng, Chuanqiang Zhang, Zhongjun He, Hairong Liu, Xing Li, Hua Wu, Haifeng Wang
Simultaneous translation, which translates sentences before they are finished, is useful in many scenarios but is notoriously difficult due to word-order differences. While the conventional seq-to-seq framework is only suitable for full-sentence translation, we propose a novel prefix-to-prefix framework for simultaneous translation that implicitly learns to anticipate in a single translation model. Within this framework, we present a very simple yet surprisingly effective “wait-k” policy trained to generate the target sentence concurrently with the source sentence, but always k words behind. Experiments show our strategy achieves low latency and reasonable quality (compared to full-sentence translation) on 4 directions: zh↔en and de↔en.
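
The wait-k policy is easy to state: after an initial wait of k source words, emit one target word per newly read source word, so the output always trails the input by roughly k words. A minimal prefix-to-prefix decoding loop, assuming a hypothetical `predict_next(src_prefix, tgt_prefix)` model call:

```python
def wait_k_decode(source_tokens, predict_next, k=3, max_len=200):
    """Prefix-to-prefix wait-k decoding loop.

    predict_next(src_prefix, tgt_prefix) is a hypothetical model call returning
    the next target word given only the source prefix read so far."""
    target = []
    for t in range(max_len):
        # at step t the model may see only the first k + t source words
        visible = source_tokens[: min(k + t, len(source_tokens))]
        word = predict_next(visible, target)
        if word == "</s>":
            break
        target.append(word)
    return target
```

The loop above only shows the decoding schedule; in the paper the model itself is trained with the same prefix-to-prefix objective, which is how it implicitly learns to anticipate missing source material.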

19. Simultaneous Translation with Flexible Policy via Restricted Imitation Learning [PDF] Back to Contents
  ACL 2019.
  Baigong Zheng, Renjie Zheng, Mingbo Ma, Liang Huang
Simultaneous translation is widely useful but remains one of the most difficult tasks in NLP. Previous work either uses fixed-latency policies or trains a complicated two-stage model using reinforcement learning. We propose a much simpler single model that adds a “delay” token to the target vocabulary, and design a restricted dynamic oracle to greatly simplify training. Experiments on Chinese <-> English simultaneous translation show that our work leads to flexible policies that achieve better BLEU scores and lower latencies compared to both fixed and RL-learned policies.
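
The flexibility here comes from a single extra target-vocabulary symbol: whenever the model emits a delay token, the decoder reads one more source word instead of producing output. A hedged sketch of that inference loop, again with a hypothetical `predict_next` call:

```python
DELAY = "<delay>"   # extra symbol added to the target vocabulary

def delay_token_decode(source_stream, predict_next, max_steps=500):
    """Single-model adaptive policy: emitting <delay> means READ, anything
    else means WRITE.  `predict_next(src_prefix, tgt_prefix)` is hypothetical."""
    src_prefix, target = [], []
    source_iter = iter(source_stream)
    for _ in range(max_steps):
        word = predict_next(src_prefix, target)
        if word == DELAY:                       # READ: wait for one more source word
            try:
                src_prefix.append(next(source_iter))
            except StopIteration:
                continue                        # source finished; keep decoding
        elif word == "</s>":
            break
        else:                                   # WRITE: commit a target word
            target.append(word)
    return target
```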

20. Simpler and Faster Learning of Adaptive Policies for Simultaneous Translation [PDF] Back to Contents
  EMNLP 2019.
  Baigong Zheng, Renjie Zheng, Mingbo Ma, Liang Huang
Simultaneous translation is widely useful but remains challenging. Previous work falls into two main categories: (a) fixed-latency policies such as Ma et al. (2019) and (b) adaptive policies such as Gu et al. (2017). The former are simple and effective, but have to aggressively predict future content due to diverging source-target word order; the latter do not anticipate, but suffer from unstable and inefficient training. To combine the merits of both approaches, we propose a simple supervised-learning framework to learn an adaptive policy from oracle READ/WRITE sequences generated from parallel text. At each step, such an oracle sequence chooses to WRITE the next target word if the available source sentence context provides enough information to do so, otherwise READ the next source word. Experiments on German<=>English show that our method, without retraining the underlying NMT model, can learn flexible policies with better BLEU scores and similar latencies compared to previous work.
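
One way to picture how such oracle action sequences could be generated: walk through a parallel sentence pair with a pretrained NMT model and record WRITE when the model can already produce the next reference word from the current source prefix, READ otherwise. The top-1-match criterion below is an illustrative assumption standing in for the paper's notion of "enough information":

```python
def make_oracle_actions(src, ref, predict_next):
    """Generate an oracle READ/WRITE action sequence from one parallel pair.

    predict_next(src_prefix, tgt_prefix) is a hypothetical NMT call returning
    the model's most likely next target word.  WRITE when that word already
    matches the reference, otherwise READ more source (assumption: a top-1
    match is our proxy for "enough information to write")."""
    actions, i, j = [], 0, 0    # i: source words read, j: target words written
    while j < len(ref):
        if i < len(src) and predict_next(src[:i], ref[:j]) != ref[j]:
            actions.append("READ")
            i += 1
        else:
            actions.append("WRITE")
            j += 1
    return actions
```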

21. Speculative Beam Search for Simultaneous Translation [PDF] Back to Contents
  EMNLP 2019.
  Renjie Zheng, Mingbo Ma, Baigong Zheng, Liang Huang
Beam search is universally used in (full-sentence) machine translation but its application to simultaneous translation remains highly non-trivial, where output words are committed on the fly. In particular, the recently proposed wait-k policy (Ma et al., 2018) is a simple and effective method that (after an initial wait) commits one output word on receiving each input word, making beam search seemingly inapplicable. To address this challenge, we propose a new speculative beam search algorithm that hallucinates several steps into the future in order to reach a more accurate decision by implicitly benefiting from a target language model. This idea makes beam search applicable for the first time to the generation of a single word in each step. Experiments over diverse language pairs show large improvement compared to previous work.
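
The core idea can be sketched as: beam-search a few steps beyond the word that must be committed, then keep only the first word of the best speculative continuation and discard the hallucinated tail. A minimal sketch, assuming a hypothetical `beam_search(src_prefix, tgt_prefix, steps, beam)` that returns the best continuation:

```python
def speculative_commit(src_prefix, tgt_prefix, beam_search, lookahead=3, beam=5):
    """Commit one word per step (wait-k style), choosing it by speculating
    `lookahead` words into the future with beam search.

    beam_search(src_prefix, tgt_prefix, steps, beam) is a hypothetical call
    returning the highest-scoring continuation (a list of words)."""
    continuation = beam_search(src_prefix, tgt_prefix,
                               steps=1 + lookahead, beam=beam)
    # Only the first speculative word is committed; the hallucinated tail is
    # discarded and re-speculated at the next step with more source context.
    return continuation[0] if continuation else "</s>"
```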

22. Lost in Interpretation: Predicting Untranslated Terminology in Simultaneous Interpretation [PDF] Back to Contents
  NAACL 2019.
  Nikolai Vogler, Craig Stewart, Graham Neubig
Simultaneous interpretation, the translation of speech from one language to another in real-time, is an inherently difficult and strenuous task. One of the greatest challenges faced by interpreters is the accurate translation of difficult terminology like proper names, numbers, or other entities. Intelligent computer-assisted interpreting (CAI) tools that could analyze the spoken word and detect terms likely to be untranslated by an interpreter could reduce translation error and improve interpreter performance. In this paper, we propose a task of predicting which terminology simultaneous interpreters will leave untranslated, and examine methods that perform this task using supervised sequence taggers. We describe a number of task-specific features explicitly designed to indicate when an interpreter may struggle with translating a word. Experimental results on a newly-annotated version of the NAIST Simultaneous Translation Corpus (Shimizu et al., 2014) indicate the promise of our proposed method.

23. Prediction Improves Simultaneous Neural Machine Translation [PDF] Back to Contents
  EMNLP 2018.
  Ashkan Alinejad, Maryam Siahbani, Anoop Sarkar
Simultaneous speech translation aims to maintain translation quality while minimizing the delay between reading input and incrementally producing the output. We propose a new general-purpose prediction action which predicts future words in the input to improve quality and minimize delay in simultaneous translation. We train this agent using reinforcement learning with a novel reward function. Our agent with prediction has better translation quality and less delay compared to an agent-based simultaneous translation system without prediction.

24. Incremental Decoding and Training Methods for Simultaneous Translation in Neural Machine Translation [PDF] Back to Contents
  NAACL 2018. Short Papers
  Fahim Dalvi, Nadir Durrani, Hassan Sajjad, Stephan Vogel
We address the problem of simultaneous translation by modifying the Neural MT decoder to operate with a dynamically built encoder and attention. We propose a tunable agent which decides the best segmentation strategy for a user-defined BLEU loss and Average Proportion (AP) constraint. Our agent outperforms the previously proposed Wait-if-diff and Wait-if-worse agents (Cho and Esipova, 2016) on BLEU with lower latency. Secondly, we propose data-driven changes to Neural MT training to better match the incremental decoding framework.
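
For context, Average Proportion (AP), the latency constraint referenced above (Cho and Esipova, 2016), averages how much of the source had been consumed when each target word was produced; with g(t) the number of source words read when target word t is emitted:

```latex
% Average Proportion (AP), Cho and Esipova (2016) -- quoted for reference.
% g(t): number of source words read when target word t is emitted.
\[
  \mathrm{AP} \;=\; \frac{1}{|x|\,|y|}\sum_{t=1}^{|y|} g(t).
\]
```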

25. Interpretese vs. Translationese: The Uniqueness of Human Strategies in Simultaneous Interpretation [PDF] Back to Contents
  NAACL 2016.
  He He, Jordan Boyd-Graber, Hal Daumé III


26. Lecture Translator - Speech translation framework for simultaneous lecture translation [PDF] Back to Contents
  NAACL 2016. Demonstrations
  Markus Müller, Thai Son Nguyen, Jan Niehues, Eunah Cho, Bastian Krüger, Thanh-Le Ha, Kevin Kilgour, Matthias Sperber, Mohammed Mediani, Sebastian Stüker, Alex Waibel


27. Syntax-based Simultaneous Translation through Prediction of Unseen Syntactic Constituents [PDF] Back to Contents
  ACL 2015. Long Papers
  Yusuke Oda, Graham Neubig, Sakriani Sakti, Tomoki Toda, Satoshi Nakamura


28. Syntax-based Rewriting for Simultaneous Machine Translation [PDF] Back to Contents
  EMNLP 2015.
  He He, Alvin Grissom II, John Morgan, Jordan Boyd-Graber, Hal Daumé III


29. Optimizing Segmentation Strategies for Simultaneous Speech Translation [PDF] Back to Contents
  ACL 2014. Short Papers
  Yusuke Oda, Graham Neubig, Sakriani Sakti, Tomoki Toda, Satoshi Nakamura


30. Don’t Until the Final Verb Wait: Reinforcement Learning for Simultaneous Machine Translation [PDF] Back to Contents
  EMNLP 2014.
  Alvin Grissom II, He He, Jordan Boyd-Graber, John Morgan, Hal Daumé III


Note: this paper list was compiled with the AC paper search tool.