A Neural Grammatical Error Correction System Built On Better Pre-training and Sequential Transfer Learning

ACL 2019 Workshop on Innovative Use of NLP for Building Educational Applications (BEA)

Abstract

Grammatical error correction can be viewed as a low-resource sequence-to-sequence task, because publicly available parallel corpora are limited. To tackle this challenge, we first generate erroneous versions of large unannotated corpora using a realistic noising function. The resulting parallel corpora are subsequently used to pre-train Transformer models. Then, by sequentially applying transfer learning, we adapt these models to the domain and style of the test set. Combined with a context-aware neural spellchecker, our system achieves competitive results in both the Restricted and Low Resource tracks of the ACL 2019 BEA Shared Task. We release all of our code and materials for reproducibility.
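To make the data-generation step concrete, below is a minimal Python sketch of a token-level noising function that corrupts clean sentences into synthetic source sentences. The error rates (`p_drop`, `p_swap`, `p_sub`) and the preposition confusion set here are illustrative assumptions, not the paper's actual parameters; the system described above uses a more realistic noising function.

```python
import random

# Hypothetical confusion set for substitution errors; the actual noising
# function is designed to produce more realistic error distributions.
PREPOSITIONS = ["in", "on", "at", "for", "of", "to"]

def noise_sentence(tokens, p_drop=0.05, p_swap=0.05, p_sub=0.10, rng=random):
    """Return an artificially corrupted copy of a clean token sequence."""
    noised = []
    i = 0
    while i < len(tokens):
        tok = tokens[i]
        r = rng.random()
        if r < p_drop:
            # Deletion error: drop the token entirely.
            i += 1
            continue
        if r < p_drop + p_swap and i + 1 < len(tokens):
            # Transposition error: swap the token with its right neighbor.
            noised.extend([tokens[i + 1], tok])
            i += 2
            continue
        if r < p_drop + p_swap + p_sub and tok in PREPOSITIONS:
            # Substitution error: replace with a confusable preposition.
            tok = rng.choice([p for p in PREPOSITIONS if p != tok])
        noised.append(tok)
        i += 1
    return noised

clean = "she is good at playing the piano".split()
print(" ".join(noise_sentence(clean)))  # e.g. "she good on playing the piano"
```

Pairing each noised sentence with its original clean sentence yields the synthetic parallel corpus on which the Transformer models are pre-trained before transfer learning.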

Authors

Yo Joong Choe (Kakao), Jiyeon Ham (Kakao Brain), Kyubyong Park (Kakao Brain), Yeoil Yoon (Kakao Brain)

Keywords

NLP, GEC (Grammatical Error Correction), pre-training

Publication Date

2019.07.02