Hypergraph Attention Networks for Multimodal Learning

CVPR (2020)

Abstract

One of the fundamental problems that arise in multimodal learning tasks is the disparity of information levels between different modalities. To resolve this problem, we propose Hypergraph Attention Networks (HANs), which define a common semantic space among the modalities with symbolic graphs and extract a joint representation of the modalities based on a co-attention map constructed in the semantic space. HANs follow a four-step process: constructing the common semantic space with symbolic graphs of each modality, matching the semantics between sub-structures of the symbolic graphs, constructing co-attention maps between the graphs in the semantic space, and integrating the multimodal inputs using the co-attention maps to obtain the final joint representation. From a qualitative analysis with two Visual Question Answering datasets, we find that 1) the alignment of the information levels between the modalities is important, and 2) symbolic graphs are a powerful way to represent the information of the low-level signals in alignment. Moreover, HANs quantitatively improve the state-of-the-art accuracy on the GQA dataset from 54.6% to 61.88% using only the symbolic information.
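To make the co-attention step concrete, below is a minimal NumPy sketch, not the paper's implementation. It assumes each modality's symbolic graph has already been encoded into node-embedding matrices living in a shared semantic space; the hypergraph construction and the actual HAN layers are omitted, and the names `co_attention_joint`, `V`, and `Q` are hypothetical.

```python
# A minimal sketch of co-attention between two graphs' node embeddings.
# Assumption (not from the paper): V and Q are node-embedding matrices of
# the visual and question symbolic graphs, already in a shared d-dim space.
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def co_attention_joint(V, Q):
    """Build co-attention maps between two graphs and pool a joint vector.

    V: (n_v, d) node embeddings of the visual symbolic graph.
    Q: (n_q, d) node embeddings of the question symbolic graph.
    """
    d = V.shape[1]
    # Pairwise semantic affinities between sub-structures of the two graphs.
    affinity = V @ Q.T / np.sqrt(d)        # (n_v, n_q)
    # Co-attention maps: each modality attends over the other's nodes.
    attn_v = softmax(affinity, axis=1)     # visual nodes -> question nodes
    attn_q = softmax(affinity.T, axis=1)   # question nodes -> visual nodes
    # Cross-modal context for every node, then pool into one joint vector.
    V_ctx = attn_v @ Q                     # (n_v, d)
    Q_ctx = attn_q @ V                     # (n_q, d)
    joint = np.concatenate([(V + V_ctx).mean(0), (Q + Q_ctx).mean(0)])
    return joint                           # (2d,)

# Toy usage: 5 visual nodes and 7 question nodes in a 16-dim semantic space.
rng = np.random.default_rng(0)
print(co_attention_joint(rng.normal(size=(5, 16)),
                         rng.normal(size=(7, 16))).shape)  # (32,)
```

The scaled dot-product affinity and mean pooling here are generic placeholder choices; the paper's contribution is in how the symbolic graphs and their hypergraph structure define the shared space in which this attention is computed.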

Authors

Eun-Sol Kim (Kakao Brain), Woo Young Kang (Kakao Brain), Yu-Jung Heo (Seoul National University), Kyoung-Woon On (Seoul National University), Byoung-Tak Zhang (Seoul National University)

Keywords

vision, core ML/DL

Publication Date

2020.02.27