Multimodal machine translation through visuals and speech
File(s)
1911.12798v1.pdf (1.6 MB)
Author(s)
Type
Working Paper
Abstract
Multimodal machine translation involves drawing information from more than
one modality, based on the assumption that the additional modalities will
contain useful alternative views of the input data. The most prominent tasks in
this area are spoken language translation, image-guided translation, and
video-guided translation, which exploit the audio modality in the first case
and the visual modality in the latter two. These tasks are distinguished from their monolingual counterparts
of speech recognition, image captioning, and video captioning by the
requirement of models to generate outputs in a different language. This survey
reviews the major data resources for these tasks, the evaluation campaigns
concentrated around them, the state of the art in end-to-end and pipeline
approaches, and also the challenges in performance evaluation. The paper
concludes with a discussion of directions for future research in these areas:
the need for more expansive and challenging datasets, for targeted evaluations
of model performance, and for multimodality in both the input and output space.
Date Issued
2019-11-28
Citation
2019
Publisher
arXiv
Copyright Statement
© 2019 The Author(s)
Sponsor
Commission of the European Communities
British Council (Turkey)
Identifier
http://arxiv.org/abs/1911.12798v1
Grant Number
678017
352343575 - 154082
Subjects
cs.CL
Notes
34 pages, 4 tables, 8 figures. Submitted (Nov 2019) to the Machine Translation journal (Springer)
Publication Status
Published