Q-GADMM: Quantized group ADMM for communication efficient decentralized machine learning
Version 2 2024-06-05, 07:15
Version 1 2020-09-29, 12:08
conference contribution
posted on 2024-06-05, 07:15, authored by A Elgabli, Jihong Park, AS Bedi, M Bennis, V Aggarwal
In this paper, we propose a communication-efficient decentralized machine learning (ML) algorithm, coined quantized group ADMM (Q-GADMM). Every worker in Q-GADMM communicates only with two neighbors and updates its model via the group alternating direction method of multipliers (GADMM), thereby ensuring fast convergence while reducing the number of communication rounds. Furthermore, each worker quantizes its model updates before transmission, thereby decreasing the communication payload size. We prove that Q-GADMM converges to the optimal solution for convex loss functions, and numerically show that Q-GADMM yields 7x less communication cost while achieving almost the same accuracy and convergence speed as GADMM without quantization.
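To make the quantization step concrete, below is a minimal sketch of unbiased stochastic quantization applied to a model update, in the spirit of the scheme the abstract describes. The function name, the choice of a uniform grid over the update's dynamic range, and the bit width `n_bits` are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def stochastic_quantize(theta, theta_prev_q, n_bits=2):
    """Quantize the update (theta - theta_prev_q) to n_bits per coordinate.

    Illustrative sketch: returns the dequantized model a worker could
    broadcast to its two neighbors in place of the full-precision model.
    """
    diff = theta - theta_prev_q
    r = np.max(np.abs(diff))          # quantization range for this round
    if r == 0.0:                      # model unchanged; nothing to encode
        return theta_prev_q.copy()
    levels = 2 ** n_bits - 1
    delta = 2 * r / levels            # step size of the uniform grid on [-r, r]
    # Map each coordinate of diff to a fractional grid index in [0, levels].
    idx = (diff + r) / delta
    lower = np.floor(idx)
    # Round up with probability equal to the fractional part, so the
    # quantized update is unbiased in expectation.
    idx_q = lower + (np.random.rand(*idx.shape) < (idx - lower))
    # Neighbors can reconstruct this from (r, integer indices) alone.
    return theta_prev_q - r + idx_q * delta
```

Under these assumptions, a worker transmits one scalar range plus `n_bits` per model dimension instead of a full-precision vector, which is the source of the payload reduction reported above.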