Statistics and Learning Theory
L.6.3
Lecture
Gradient-Based Distributed Learning

On Byzantine-Resilient High-Dimensional Stochastic Gradient Descent

Deepesh Data, Suhas Diggavi

Abstract

We study stochastic gradient descent (SGD) in the master-worker architecture under Byzantine attacks. Building on recent advances in algorithmic high-dimensional robust statistics, in each SGD iteration the master employs a non-trivial decoding procedure to estimate the true gradient from the unbiased stochastic gradients received from the workers, some of which may be corrupt. We provide convergence analyses for both strongly-convex and non-convex smooth objectives under standard SGD assumptions. In both settings, the approximation error of our solution is controlled by the mini-batch size of the stochastic gradients, and it can be made as small as desired, provided that the workers use a sufficiently large mini-batch. Our algorithm can tolerate a fraction of Byzantine workers less than $\frac{1}{3}$. It approximately finds the optimal parameters in the strongly-convex setting {\em exponentially fast}, and it reaches an approximate stationary point in the non-convex setting at a {\em linear speed}, i.e., at a rate of $\frac{1}{T}$, thus matching the convergence rates of vanilla SGD in the Byzantine-free setting.
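The following is a minimal Python sketch of the master-worker loop the abstract describes, on a toy strongly-convex objective. It is not the authors' algorithm: the coordinate-wise median below is only a stand-in for the paper's high-dimensional robust decoding step, and all names (stochastic_gradient, robust_aggregate, the worker counts, step size, etc.) are illustrative assumptions.

```python
import numpy as np

W_STAR = np.ones(10)             # optimum of the toy objective f(w) = 0.5*||w - W_STAR||^2

def stochastic_gradient(w, rng, batch_size):
    """Unbiased stochastic gradient; noise variance shrinks with mini-batch size."""
    noise = rng.normal(scale=1.0 / np.sqrt(batch_size), size=w.shape)
    return (w - W_STAR) + noise

def byzantine_gradient(w, rng):
    """A Byzantine worker may report an arbitrary (here, large random) vector."""
    return rng.normal(scale=50.0, size=w.shape)

def robust_aggregate(grads):
    """Stand-in decoder: coordinate-wise median of the received gradients.
    The paper instead uses a high-dimensional robust mean estimator."""
    return np.median(np.stack(grads), axis=0)

n_workers, n_byzantine = 15, 4   # fewer than a 1/3 fraction are Byzantine
byzantine_ids = set(range(n_byzantine))
rng = np.random.default_rng(0)

w = np.zeros_like(W_STAR)
lr, batch_size = 0.1, 64
for t in range(200):
    grads = [byzantine_gradient(w, rng) if i in byzantine_ids
             else stochastic_gradient(w, rng, batch_size)
             for i in range(n_workers)]
    w -= lr * robust_aggregate(grads)   # master decodes, then takes an SGD step

print("distance to optimum:", np.linalg.norm(w - W_STAR))
```

On this strongly-convex toy problem the iterates contract toward the optimum geometrically, up to a floor set by the mini-batch size, mirroring the trade-off stated in the abstract.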


Presenters

Deepesh Data

University of California, Los Angeles

Suhas Diggavi

University of California, Los Angeles

Session Chair

Chao Tian

Texas A&M University