Learning Loss for Test-Time Augmentation

NeurIPS (2020)

Abstract

Data augmentation has been actively studied for building robust neural networks. Most recent data augmentation methods focus on augmenting datasets during the training phase, while at test time simple transformations are still widely used for test-time augmentation. In this paper, we propose a novel instance-level test-time augmentation that efficiently selects suitable transformations for a test input. Our proposed method involves an auxiliary module that predicts the loss of each possible transformation given the input. The transformations with the lowest predicted losses are then applied to the input, and the final prediction is obtained by averaging the network's outputs over the augmented inputs. Experimental results on several image classification benchmarks show that the proposed instance-aware test-time augmentation improves the model's robustness against various corruptions. The source code will be released.
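The selection step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' released implementation: the transformation list, the `loss_predictor` interface (one predicted loss per candidate transformation), and the function names are all assumptions for demonstration.

```python
import numpy as np

# Illustrative candidate transformations (the paper's actual transformation
# pool is not specified here).
def identity(x):
    return x

def hflip(x):
    return x[:, ::-1]

def rot90(x):
    return np.rot90(x)

TRANSFORMS = [identity, hflip, rot90]

def tta_predict(x, model, loss_predictor, k=2):
    """Instance-aware test-time augmentation sketch:
    1. predict a loss for every candidate transformation of input x,
    2. keep the k transformations with the lowest predicted losses,
    3. average the model's predictions over the selected augmented inputs."""
    predicted_losses = loss_predictor(x)          # shape: (len(TRANSFORMS),)
    selected = np.argsort(predicted_losses)[:k]   # lowest predicted loss first
    outputs = [model(TRANSFORMS[i](x)) for i in selected]
    return np.mean(outputs, axis=0)
```

A toy run with stand-in `model` and `loss_predictor` callables shows the mechanics: the two transformations with the lowest predicted losses are applied, and their outputs are averaged into a single prediction.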

Authors

Ildoo Kim (Kakao Brain), Younghoon Kim (Sungshin Women's University), Sungwoong Kim (Kakao Brain)

Keywords

Vision Core ML/DL

Publication Date

2020.12.06