An empirical study on automatic post editing for neural machine translation (IEEE Access 2021)

IEEE Access

  • Impact Factor 2021: 3.47

Authors

  • Hyeonseok Moon, Chanjun Park, Sugyeong Eo, Jaehyung Seo, Heuiseok Lim

Abstract

Automatic post editing (APE) research aims to correct errors in machine translation output. Recently, APE research has proceeded mainly in two directions: noise-based APE and adapter-based APE. This study poses three questions based on existing APE studies and conducts a verification of each. The first concerns the optimal direction for APE research; we address it through a comparative analysis of previous studies on noise-based and adapter-based APE. The second concerns the actual effectiveness of the bottleneck adapter layer (BAL) in adapter-based APE. To verify it, we conduct experiments with BALs of different sizes and, based on these experiments, propose an optimal approach to adapter-based APE. The third asks why leveraging external knowledge is influential in APE. In this regard, we conduct several comparative experiments on how external data can be utilized in APE training to achieve better performance. The results reveal that performance can be improved by concatenating the external data with the existing data when training the APE model. Through an in-depth analysis of these experiments, this work proposes an optimal research direction for APE.
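
The bottleneck adapter layer examined in the paper follows the common adapter pattern: a down-projection, a non-linearity, an up-projection, and a residual connection, inserted into a frozen pretrained model. Below is a minimal sketch of that pattern in PyTorch; the class name, dimensions, and activation are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of a bottleneck adapter layer (BAL) as commonly used in
# adapter-based APE. The bottleneck size is the quantity varied in the
# paper's experiments; the concrete values below are illustrative only.
import torch
import torch.nn as nn


class BottleneckAdapter(nn.Module):
    def __init__(self, hidden_dim: int, bottleneck_dim: int):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)  # compress
        self.act = nn.ReLU()
        self.up = nn.Linear(bottleneck_dim, hidden_dim)    # expand back

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The residual connection keeps the pretrained representation
        # intact; only the small adapter weights are trained for APE.
        return x + self.up(self.act(self.down(x)))


# Example: apply the adapter to the output of a frozen transformer layer.
hidden = torch.randn(8, 32, 512)  # (batch, seq_len, hidden_dim)
adapter = BottleneckAdapter(hidden_dim=512, bottleneck_dim=64)
adapted = adapter(hidden)         # same shape as `hidden`
```

Varying `bottleneck_dim` changes the adapter's capacity, which is the axis along which the paper compares BAL sizes; the external-data experiments, by contrast, simply concatenate external and existing training examples into one training set rather than changing the architecture.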

Check out this link for more information on our paper.