Automatic Post-Editing for Translation - English-German

Open-source URL: https://modelscope.cn/models/iic/nlp_automatic_post_editing_for_translation_en2de
License: Apache License 2.0


English-German Automatic Post-Editing Model for Translation

This model performs automatic post-editing (APE) for machine translation: given the source sentence, it further corrects the machine-translated output to produce a better translation. The ensemble system built on this model (MinD) took second place in the English-German track of the Automatic Post-Editing task at WMT 2020.

Model Description

The model uses a memory-augmented encoder to jointly encode the source sentence and the machine-translated output, and a decoder to generate the revised translation. The memory encoder is pre-trained on a large amount of parallel data, and the decoder is then trained on <source, MT output, post-edited translation> triplets. The model is trained with the OpenNMT-tf framework.
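
As a rough illustration only, the sketch below shows the general idea of jointly encoding the source sentence and the MT output and decoding a corrected translation. It is a minimal seq2seq toy model written against TF 1.x Keras: the memory mechanism, the pre-training stage, and all layer sizes are omitted or made up here and do not reflect the released model's actual OpenNMT-tf configuration.

import tensorflow as tf

# Assumed toy hyper-parameters, not the released model's configuration.
VOCAB = 32000  # joint sub-word vocabulary size (assumption)
EMB, HID = 256, 512

src_ids = tf.keras.Input(shape=(None,), dtype="int32", name="src_ids")  # source tokens
mt_ids = tf.keras.Input(shape=(None,), dtype="int32", name="mt_ids")    # MT output tokens
pe_ids = tf.keras.Input(shape=(None,), dtype="int32", name="pe_ids")    # shifted post-edit tokens

embed = tf.keras.layers.Embedding(VOCAB, EMB)

# "Joint" encoding is approximated here by concatenating the two sequences in time
# before a single encoder; the real model uses a memory-augmented encoder instead.
joint = tf.keras.layers.Concatenate(axis=1)([embed(src_ids), embed(mt_ids)])
enc_out, state_h, state_c = tf.keras.layers.LSTM(HID, return_sequences=True,
                                                 return_state=True)(joint)

# The decoder, conditioned on the encoder state, generates the corrected translation.
dec_out = tf.keras.layers.LSTM(HID, return_sequences=True)(
    embed(pe_ids), initial_state=[state_h, state_c])
probs = tf.keras.layers.Dense(VOCAB, activation="softmax")(dec_out)

model = tf.keras.Model([src_ids, mt_ids, pe_ids], probs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()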

Intended Use and Applicable Scope

The model can be used for post-editing English-to-German machine translation. The input is the source sentence together with its machine translation, and the output is the revised, improved translation.

How to Use

The model can be used once ModelScope is installed. Note that it only supports Python 3.6 and TensorFlow 1.12-1.15 and cannot be used in a TensorFlow 2.x environment.
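
Before running, it may help to confirm the environment satisfies this constraint; a quick, rough check:

import sys
import tensorflow as tf

# The model card states Python 3.6 and TensorFlow 1.12-1.15 are required.
print("Python:", sys.version.split()[0], "| TensorFlow:", tf.__version__)
assert sys.version_info[:2] == (3, 6), "Python 3.6 is required"
assert tf.__version__.startswith("1."), "TensorFlow 1.12-1.15 is required, not 2.x"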

Code Example

from modelscope.pipelines import pipeline
p = pipeline('translation', model='damo/nlp_automatic_post_editing_for_translation_en2de')
# join the source sentence and the MT output with '\005'
print(p('Simultaneously, the Legion took part to the pacification of Algeria, plagued by various tribal rebellions and razzias.\005Gleichzeitig nahm die Legion an der Befriedung Algeriens teil, die von verschiedenen Stammesaufständen und Rasias heimgesucht wurde.'))
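
For convenience, the '\005' concatenation can be wrapped in a small helper. The sketch below assumes the pipeline returns a dict with a 'translation' field, as ModelScope translation pipelines generally do; if your installed version differs, inspect the returned keys and adjust.

from modelscope.pipelines import pipeline

# same pipeline as in the example above
p = pipeline('translation', model='damo/nlp_automatic_post_editing_for_translation_en2de')

def post_edit(source: str, mt_output: str) -> str:
    """Join source and MT output with the 0x05 separator and run automatic post-editing."""
    result = p(source + '\005' + mt_output)
    # The 'translation' key is an assumption; check result.keys() on your installation.
    return result['translation']

print(post_edit(
    'Simultaneously, the Legion took part to the pacification of Algeria, '
    'plagued by various tribal rebellions and razzias.',
    'Gleichzeitig nahm die Legion an der Befriedung Algeriens teil, die von '
    'verschiedenen Stammesaufständen und Rasias heimgesucht wurde.'))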

Model Limitations and Possible Bias

The training data is mainly from the wiki domain, so performance in other domains may degrade.

Training Data

The WMT 2020 APE dataset. The training data consists of <source, MT output, post-edited translation> triplets.
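
For reference, WMT APE training data is usually distributed as three line-aligned plain-text files (source, MT output, post-edited reference, one sentence per line). A minimal loader under that assumption, with hypothetical file names:

def load_ape_triplets(src_path, mt_path, pe_path):
    """Yield line-aligned <source, MT output, post-edited translation> triplets."""
    with open(src_path, encoding='utf-8') as f_src, \
         open(mt_path, encoding='utf-8') as f_mt, \
         open(pe_path, encoding='utf-8') as f_pe:
        for src, mt, pe in zip(f_src, f_mt, f_pe):
            yield src.strip(), mt.strip(), pe.strip()

# Hypothetical file names, following the usual WMT APE naming convention:
# for src, mt, pe in load_ape_triplets('train.src', 'train.mt', 'train.pe'):
#     print(src, mt, pe)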

Evaluation and Results

The evaluation results of the ensemble system on the WMT 2020 test set are as follows:

Model           TER↓    BLEU↑
baseline (MT)   31.56   50.21
Ours            26.99   55.77
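
TER and BLEU scores like those above can be computed on your own outputs with, for example, the sacrebleu package. This is a generic sketch, not the official WMT scoring setup, so tokenization options (and thus exact numbers) may differ from the shared-task evaluation.

import sacrebleu  # pip install sacrebleu (a recent version with TER support)

# Post-edited hypotheses and reference post-edits, one sentence per list entry.
hypotheses = ['Gleichzeitig nahm die Legion an der Befriedung Algeriens teil.']
references = ['Gleichzeitig nahm die Legion an der Befriedung Algeriens teil.']

bleu = sacrebleu.corpus_bleu(hypotheses, [references])
ter = sacrebleu.corpus_ter(hypotheses, [references])  # lower is better
print('BLEU = %.2f  TER = %.2f' % (bleu.score, ter.score))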

Related Papers and Citation

If you use this model in your research, please cite the following two papers:

@inproceedings{wang-etal-2020-computer,
    title = "Computer Assisted Translation with Neural Quality Estimation and Automatic Post-Editing",
    author = "Wang, Ke  and
      Wang, Jiayi  and
      Ge, Niyu  and
      Shi, Yangbin  and
      Zhao, Yu  and
      Fan, Kai",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.197",
    doi = "10.18653/v1/2020.findings-emnlp.197",
    pages = "2175--2186",
    abstract = "With the advent of neural machine translation, there has been a marked shift towards leveraging and consuming the machine translation results. However, the gap between machine translation systems and human translators needs to be manually closed by post-editing. In this paper, we propose an end-to-end deep learning framework of the quality estimation and automatic post-editing of the machine translation output. Our goal is to provide error correction suggestions and to further relieve the burden of human translators through an interpretable model. To imitate the behavior of human translators, we design three efficient delegation modules {--} quality estimation, generative post-editing, and atomic operation post-editing and construct a hierarchical model based on them. We examine this approach with the English{--}German dataset from WMT 2017 APE shared task and our experimental results can achieve the state-of-the-art performance. We also verify that the certified translators can significantly expedite their post-editing processing with our model in human evaluation.",
}
@inproceedings{wang-etal-2020-alibabas,
    title = "{A}libaba{'}s Submission for the {WMT} 2020 {APE} Shared Task: Improving Automatic Post-Editing with Pre-trained Conditional Cross-Lingual {BERT}",
    author = "Wang, Jiayi  and
      Wang, Ke  and
      Fan, Kai  and
      Zhang, Yuqi  and
      Lu, Jun  and
      Ge, Xin  and
      Shi, Yangbin  and
      Zhao, Yu",
    booktitle = "Proceedings of the Fifth Conference on Machine Translation",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.wmt-1.84",
    pages = "789--796",
    abstract = "The goal of Automatic Post-Editing (APE) is basically to examine the automatic methods for correcting translation errors generated by an unknown machine translation (MT) system. This paper describes Alibaba{'}s submissions to the WMT 2020 APE Shared Task for the English-German language pair. We design a two-stage training pipeline. First, a BERT-like cross-lingual language model is pre-trained by randomly masking target sentences alone. Then, an additional neural decoder on the top of the pre-trained model is jointly fine-tuned for the APE task. We also apply an imitation learning strategy to augment a reasonable amount of pseudo APE training data, potentially preventing the model to overfit on the limited real training data and boosting the performance on held-out data. To verify our proposed model and data augmentation, we examine our approach with the well-known benchmarking English-German dataset from the WMT 2017 APE task. The experiment results demonstrate that our system significantly outperforms all other baselines and achieves the state-of-the-art performance. The final results on the WMT 2020 test dataset show that our submission can achieve +5.56 BLEU and -4.57 TER with respect to the official MT baseline.",
}