Visualizing Semantic Structures of Sequential Data by Learning Temporal Dependencies

AAAI Workshop on Network Interpretability for Deep Learning (2019)

Abstract

While conventional methods for sequential learning focus on interactions between consecutive inputs, we propose a new method that captures composite semantic flows with variable-length dependencies. In addition, the semantic structures within given sequential data can be interpreted by visualizing the temporal dependencies learned by the method. The proposed method, called Temporal Dependency Network (TDN), represents a video as a temporal graph whose nodes correspond to frames of the video and whose edges represent temporal dependencies between two frames at a variable distance. The temporal dependency structure of the video semantics is discovered by learning the parameterized kernels of graph convolution operations. We evaluate the proposed method on the large-scale video dataset YouTube-8M. By visualizing the learned temporal dependency structures in our experimental results, we show that the proposed method can discover the temporal dependency structures of video semantics.
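As a rough illustration of the idea described above, the sketch below treats the frames of one video as graph nodes, scores pairwise dependencies between frames at arbitrary distances with a parameterized kernel to form a soft adjacency matrix, and uses that matrix for a graph-convolution-style update of the frame features. This is a minimal PyTorch sketch, not the authors' implementation; the class name, the dot-product kernel, and the dimension choices are illustrative assumptions.

# Minimal sketch: frames as graph nodes, learned soft adjacency as temporal dependencies,
# graph-convolution-style feature update. Names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalDependencySketch(nn.Module):
    def __init__(self, feat_dim: int, hidden_dim: int):
        super().__init__()
        self.query = nn.Linear(feat_dim, hidden_dim)   # kernel parameters for the "source" frame
        self.key = nn.Linear(feat_dim, hidden_dim)     # kernel parameters for the "target" frame
        self.update = nn.Linear(feat_dim, hidden_dim)  # graph-convolution weight

    def forward(self, frames: torch.Tensor):
        # frames: (num_frames, feat_dim) features of one video
        q, k = self.query(frames), self.key(frames)
        # Pairwise dependency scores between frames at arbitrary distances,
        # normalized into a soft adjacency matrix of shape (num_frames, num_frames).
        adjacency = F.softmax(q @ k.t() / k.size(-1) ** 0.5, dim=-1)
        # Aggregate each frame's neighbors weighted by the learned adjacency.
        updated = F.relu(adjacency @ self.update(frames))
        return updated, adjacency  # adjacency can be visualized as the dependency structure

if __name__ == "__main__":
    video = torch.randn(300, 1024)  # e.g., 300 frames of 1024-d features, roughly YouTube-8M scale
    model = TemporalDependencySketch(feat_dim=1024, hidden_dim=256)
    features, dependencies = model(video)
    print(features.shape, dependencies.shape)  # (300, 256) and (300, 300)

Visualizing the (num_frames x num_frames) adjacency matrix, e.g. as a heatmap, corresponds to the kind of temporal dependency structure the abstract refers to.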

Authors

Kyoung-Woon On (Seoul National University), Eun-Sol Kim (Kakao Brain), Yu-Jung Heo (Seoul National University), Byoung-Tak Zhang (Seoul National University)

Keywords

vision, video, structural learning

Publication Date

2019.01.20