Securing Distributed Systems Against Adversarial Attacks

Lili Su
University of Illinois at Urbana-Champaign


Abstract

Distributed systems are ubiquitous in both industry and daily life. For example, we use clusters and networked workstations to analyze large amounts of data, the World Wide Web for information and resource sharing, and the Internet of Things (IoT) to access an even wider variety of resources. In distributed systems, individual components are more vulnerable to adversarial attacks than in centralized systems. In this talk, we model distributed systems as multi-agent networks and consider the most general attack model, the Byzantine fault model. In particular, this talk focuses on the problem of distributed learning over multi-agent networks, where agents repeatedly collect partially informative observations (samples) about an unknown state of the world and try to collaboratively learn the true state. We study the impact of Byzantine agents on the performance of consensus-based non-Bayesian learning. Our goal is to design algorithms that enable the non-faulty agents to collaboratively learn the true state through local communication. At the end of the talk, I will also briefly mention our exploration of tolerating adversarial attacks in multi-agent optimization problems.