Residential College: false
Status: Published
Progressive Multi-Granularity Training for Non-Autoregressive Translation
Authors: Ding, Liang (1); Wang, Longyue (2); Liu, Xuebo (3); Wong, Derek F. (3); Tao, Dacheng (4); Tu, Zhaopeng (2)
Year: 2021
Conference Name: The Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021)
Source Publication: Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
Pages: 2797-2803
Conference Date: 1 August 2021 to 6 August 2021
Conference Place: Virtual
Abstract

Non-autoregressive translation (NAT) significantly accelerates the inference process by predicting the entire target sequence in parallel. However, recent studies show that NAT is weak at learning high-mode knowledge such as one-to-many translations. We argue that modes can be divided into various granularities which can be learned from easy to hard. In this study, we empirically show that NAT models are prone to learn fine-grained lower-mode knowledge, such as words and phrases, compared with sentences. Based on this observation, we propose progressive multi-granularity training for NAT. More specifically, to make the most of the training data, we break down the sentence-level examples into three types, i.e., words, phrases and sentences, and as training goes on, we progressively increase the granularity. Experiments on Romanian-English, English-German, Chinese-English and Japanese-English demonstrate that our approach improves phrase translation accuracy and model reordering ability, thus resulting in better translation quality against strong NAT baselines. We also show that more deterministic fine-grained knowledge can further enhance performance.
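The abstract describes a curriculum that begins with fine-grained units (words), then adds phrases, and finally whole sentences. Below is a minimal Python sketch of one way such a progressive granularity schedule could look; the data structures, function names, and the equal-thirds pacing are illustrative assumptions, not the authors' released implementation.

from dataclasses import dataclass

# Granularities ordered from easy (fine-grained, low-mode) to hard.
GRANULARITIES = ["word", "phrase", "sentence"]

@dataclass
class Example:
    source: str
    target: str
    granularity: str  # one of GRANULARITIES

def allowed_granularities(step: int, total_steps: int):
    """Progressively widen the curriculum: words first, then phrases,
    finally full sentences. Equal-thirds pacing is an assumption here;
    the paper may use a different schedule."""
    frac = step / max(total_steps, 1)
    n = 1 + min(int(frac * len(GRANULARITIES)), len(GRANULARITIES) - 1)
    return GRANULARITIES[:n]

def curriculum_pool(pool, step, total_steps):
    """Keep only the training units the model may see at this step."""
    allowed = set(allowed_granularities(step, total_steps))
    return [ex for ex in pool if ex.granularity in allowed]

# Usage: early in training, only word-level pairs survive the filter;
# by the end, all three granularities are mixed in.
pool = [
    Example("Haus", "house", "word"),
    Example("im Haus", "in the house", "phrase"),
    Example("Er ist im Haus.", "He is in the house.", "sentence"),
]
assert [ex.granularity for ex in curriculum_pool(pool, 0, 300000)] == ["word"]
assert len(curriculum_pool(pool, 299999, 300000)) == 3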

URL: View the original
Language: English
Scopus ID: 2-s2.0-85108849389
Document Type: Conference paper
Collection: DEPARTMENT OF COMPUTER AND INFORMATION SCIENCE
Corresponding Authors: Ding, Liang; Wang, Longyue
Affiliation1.The University of Sydney, Australia
2.Tencent AI Lab, China
3.University of Macau, Macao
4.JD Explore Academy, JD.com, China
Recommended Citation
GB/T 7714: Ding, Liang, Wang, Longyue, Liu, Xuebo, et al. Progressive Multi-Granularity Training for Non-Autoregressive Translation[C], 2021: 2797-2803.
APA: Ding, Liang, Wang, Longyue, Liu, Xuebo, Wong, Derek F., Tao, Dacheng, & Tu, Zhaopeng (2021). Progressive Multi-Granularity Training for Non-Autoregressive Translation. Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, 2797-2803.
Files in This Item:
There are no files associated with this item.
Related Services
Google Scholar
Similar articles in Google Scholar
[Ding, Liang]'s Articles
[Wang, Longyue]'s Articles
[Liu, Xuebo]'s Articles
Baidu academic
Similar articles in Baidu academic
[Ding, Liang]'s Articles
[Wang, Longyue]'s Articles
[Liu, Xuebo]'s Articles
Bing Scholar
Similar articles in Bing Scholar
[Ding, Liang]'s Articles
[Wang, Longyue]'s Articles
[Liu, Xuebo]'s Articles

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.