Soft Labeling Affects Out-of-Distribution Detection of Deep Neural Networks

ICML Workshop on Uncertainty & Robustness in Deep Learning (2020)

Abstract

Soft labeling has become a common output regularization technique for the generalization and model compression of deep neural networks. However, its effect on out-of-distribution (OOD) detection, an important topic in machine learning safety, has not been explored. In this study, we show that soft labeling can determine OOD detection performance: specifically, how the outputs of incorrect classes are regularized by soft labeling can either degrade or improve OOD detection. Based on these empirical results, we propose a direction for future work on OOD-robust DNNs: proper output regularization via soft labeling can produce OOD-robust DNNs without additional training on OOD samples or modifications to the model, while also improving classification accuracy.
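To make the setting concrete, below is a minimal sketch of the standard label-smoothing form of soft labeling, where the probability mass assigned to incorrect classes is the quantity being regularized. This is an illustrative assumption of the common formulation, not the paper's specific regularizer; the function name and parameters are hypothetical.

```python
def smooth_labels(num_classes, true_class, epsilon=0.1):
    """Build a soft-label target distribution.

    The true class receives probability 1 - epsilon, and the remaining
    epsilon mass is spread uniformly over the incorrect classes. How this
    off-target mass is assigned is the aspect of soft labeling the paper
    argues can degrade or improve OOD detection.
    """
    off_value = epsilon / (num_classes - 1)  # mass per incorrect class
    labels = [off_value] * num_classes
    labels[true_class] = 1.0 - epsilon
    return labels

# Example: 5 classes, true class index 2.
soft = smooth_labels(5, 2, epsilon=0.1)
```

With `epsilon=0.1` and 5 classes, the target becomes roughly `[0.025, 0.025, 0.9, 0.025, 0.025]` instead of the hard one-hot `[0, 0, 1, 0, 0]`.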

Authors

Doyup Lee (POSTECH), Yeongjae Cheon (Kakao Brain)

Keywords

Vision, Core ML/DL, Robustness

Publication Date

2020.04.10