How to Democratise and Protect AI: Fair and Differentially Private Decentralised Deep Learning
Version 2 2024-06-06, 10:41
Version 1 2020-07-16, 18:15
journal contribution
posted on 2024-06-06, 10:41, authored by L. Lyu, Y. Li, K. Nandakumar, J. Yu, X. Ma
This paper considers the research problem of fairness in collaborative deep learning while ensuring privacy. We examine the weaknesses of current server-based deep learning frameworks and address the single point of failure by using blockchain to realise decentralisation. To address fairness and privacy, we propose a novel reputation system based on digital tokens and local credibility to ensure fairness, combined with differential privacy to guarantee privacy. In particular, we build a fair and differentially private decentralised deep learning framework, FDPDDL, which enables parties to derive more accurate local models in a fair and private manner through a two-stage scheme: during the initialisation stage, artificial samples generated by a Differentially Private Generative Adversarial Network (DPGAN) are used to mutually benchmark each party's local credibility and to generate initial tokens; during the update stage, Differentially Private SGD (DPSGD) is used for collaborative privacy-preserving deep learning, and each party's local credibility and tokens are updated according to the quality and quantity of the gradients it releases. Experimental results on benchmark datasets under three realistic settings demonstrate that FDPDDL achieves high fairness, yields accuracy comparable to the centralised and distributed frameworks, and delivers better accuracy than the standalone framework.
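As a rough illustration of the update stage described in the abstract, the Python sketch below combines a DPSGD-style gradient release (per-example clipping plus Gaussian noise) with a simple credibility/token update driven by the quality and quantity of released gradients. All names (`dp_sgd_gradient`, `update_credibility`), constants, and the update rules are illustrative assumptions for a toy logistic-regression model; they are not the paper's exact FDPDDL algorithm.

```python
# Hypothetical sketch of one update-stage round; names, constants, and the
# credibility/token rules are illustrative assumptions, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_gradient(X, y, w, clip_norm=1.0, noise_multiplier=1.1):
    """Differentially private gradient for logistic regression:
    per-example clipping followed by Gaussian noise (the DPSGD recipe)."""
    n = len(y)
    probs = 1.0 / (1.0 + np.exp(-(X @ w)))
    per_example = (probs - y)[:, None] * X                      # shape (n, d)
    norms = np.linalg.norm(per_example, axis=1, keepdims=True)
    clipped = per_example / np.maximum(1.0, norms / clip_norm)  # clip each row
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=w.shape)
    return (clipped.sum(axis=0) + noise) / n

def update_credibility(credibility, tokens, party, quality, n_gradients,
                       lr=0.1, tokens_per_gradient=1):
    """Illustrative reputation bookkeeping: local credibility moves toward the
    measured gradient quality; tokens grow with the quantity released."""
    credibility[party] = (1 - lr) * credibility[party] + lr * quality
    tokens[party] += tokens_per_gradient * n_gradients
    return credibility, tokens

# Toy usage: one party releases a DP gradient and its reputation is updated.
d = 5
X = rng.normal(size=(64, d))
y = (rng.random(64) > 0.5).astype(float)
w = np.zeros(d)
w -= 0.5 * dp_sgd_gradient(X, y, w)

credibility, tokens = {"party_A": 0.5}, {"party_A": 0}
# In FDPDDL the quality score would come from benchmarking the released
# gradient (e.g. against DPGAN-generated samples); here it is a placeholder.
credibility, tokens = update_credibility(credibility, tokens, "party_A",
                                         quality=0.8, n_gradients=1)
print(w, credibility, tokens)
```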
History
Journal
IEEE Transactions on Dependable and Secure Computing