Is Machine Translation Solved?
The quality of machine translation (MT) has improved radically over the last five years thanks to neural sequence-to-sequence modeling and large-scale training. Some works have even claimed that MT has reached human parity and is no longer a difficult problem. This talk will examine what is true and what is false about MT performance across various use cases: what is solved and what is not. By walking through the history of MT and its recent advances, we will summarize the lessons learned and investigate the current challenges of MT in four aspects: multilinguality, context, human-in-the-loop, and speech.
Yunsu Kim is an assistant professor at POSTECH. After graduating from POSTECH with a B.Sc. in computer science and engineering, he studied machine translation for his M.Sc. and Ph.D. degrees at RWTH Aachen University, advised by Prof. Hermann Ney. He conducted research on cross-lingual, semi-supervised, and unsupervised learning for neural machine translation and led multiple research projects in cooperation with eBay. He built numerous production-level translation models for enterprise clients while working at AppTek and Lilt, and has also ranked first in the supervised, unsupervised, and data filtering tracks at WMT.