Stochastic coordinate descent for 01 loss and its sensitivity to adversarial attacks

Abstract: The 01 loss, while hard to optimize, is less sensitive to outliers than its continuous, differentiable counterparts, namely the hinge and logistic losses. Here we propose a stochastic coordinate descent heuristic for linear 01 loss classification. We implement and study our heuristic on real datasets from the UCI machine learning archive and find our method to be comparable to the support vector machine in accuracy and tractable in training time. We conjecture that the 01 loss may be harder to attack in a black box setting due to its non-continuity and infinite solution space. We train our linear classifier in a one-vs-one multi-class strategy on the CIFAR10 and STL10 image benchmark datasets. In both cases we find our classifier to have the same accuracy as the linear support vector machine but to be more resilient to black box attacks: on CIFAR10 the linear support vector machine drops to 0% accuracy on adversarial examples while the 01 loss classifier hovers around 10%, and on STL10 the linear support vector machine likewise falls to 0% accuracy whereas the 01 loss classifier stays at about 10%. Our work suggests that 01 loss may be more resilient to adversarial attacks than the hinge loss; further work is required to confirm this.
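To make the objective concrete, below is a minimal illustrative sketch (in Python with NumPy) of linear 01 loss classification optimized by a generic stochastic coordinate descent heuristic: pick a random coordinate of the weight vector (or the bias), try small perturbations, and keep a move only if the count of misclassified points drops. This is not the authors' implementation (available upon request); labels in {-1, +1}, the step sizes, iteration count, and random initialization are assumptions for illustration only.

import numpy as np

def zero_one_loss(w, b, X, y):
    # Number of points misclassified by the linear classifier sign(Xw + b).
    return int(np.sum(np.sign(X @ w + b) != y))

def stochastic_coordinate_descent(X, y, iters=1000, steps=(1.0, 0.1, 0.01), seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = rng.standard_normal(d)   # random initial hyperplane (assumed)
    b = 0.0
    best = zero_one_loss(w, b, X, y)
    for _ in range(iters):
        j = rng.integers(d + 1)  # random coordinate; index d means the bias term
        for s in steps:
            for delta in (s, -s):
                # Perturb one coordinate, keep the move only if 01 loss improves.
                if j < d:
                    w[j] += delta
                else:
                    b += delta
                loss = zero_one_loss(w, b, X, y)
                if loss < best:
                    best = loss
                else:
                    # Undo the move.
                    if j < d:
                        w[j] -= delta
                    else:
                        b -= delta
    return w, b, best

if __name__ == "__main__":
    # Toy linearly separable data with labels in {-1, +1}.
    rng = np.random.default_rng(1)
    X = rng.standard_normal((200, 5))
    y = np.sign(X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.2)
    w, b, errors = stochastic_coordinate_descent(X, y)
    print(f"training 01 loss: {errors}/{len(y)}")

Because the 01 loss is piecewise constant, gradient-based updates give no signal; a search heuristic of this kind only accepts moves that strictly reduce the misclassification count, which is the behavior the paper's coordinate descent approach relies on.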

Contact: usman@njit.edu

Programs and data are available upon request.

Citation: Meiyan Xie, Yunzhe Xue, and Usman Roshan, Stochastic coordinate descent for 0/1 loss and its sensitivity to adversarial attacks (accepted to IEEE ICMLA 2019)