
Perturbation-enabled Deep Federated Learning for Preserving Internet of Things-based Social Networks

journal contribution
posted on 2022-09-30, 00:15 authored by Sara Salim, Nour Moustafa, Benjamin Turnbull, Imran RazzakImran Razzak
Federated Learning (FL), as an emerging form of distributed machine learning, can protect participants’ private data from being substantially disclosed to cyber adversaries. It has potential uses in many large-scale, data-rich environments, such as the Internet of Things (IoT), Industrial IoT, Social Media, and the emerging Social Media 3.0 (SM 3.0). However, FL is susceptible to some forms of data leakage through model inversion attacks, which analyze participants’ uploaded model updates. Model inversion attacks can reveal private data and potentially undermine some of the critical reasons for employing FL paradigms. This paper proposes a novel differential privacy (DP)-based deep federated learning framework. We theoretically prove that our framework can fulfill DP’s requirements under distinct privacy levels by appropriately adjusting the scaled variances of Gaussian noise. We then develop a Differentially Private Data-Level Perturbation (DP-DLP) mechanism to conceal any single data point’s impact on the training phase. Experiments on real-world datasets, specifically the Social Media 3.0, Iris, and Human Activity Recognition (HAR) datasets, demonstrate that the proposed mechanism can offer high privacy, enhanced utility, and elevated efficiency. Consequently, it simplifies the development of various DP-based FL models with different trade-off preferences on data utility and privacy levels.
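The abstract does not reproduce the mechanism itself, so as an illustration only, the sketch below shows a generic Gaussian-mechanism perturbation of a client-side update, in which the noise scale is calibrated to a chosen (epsilon, delta) budget. It is a minimal Python sketch under standard DP assumptions, not the paper's DP-DLP implementation; all function names, the clipping step, and the parameter values are hypothetical.

```python
import numpy as np

def gaussian_noise_std(sensitivity: float, epsilon: float, delta: float) -> float:
    """Classic Gaussian-mechanism calibration (valid for epsilon < 1):
    sigma >= sqrt(2 ln(1.25/delta)) * sensitivity / epsilon.
    A larger sigma gives stronger privacy at the cost of utility."""
    return np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / epsilon

def perturb_update(update: np.ndarray, clip_norm: float, epsilon: float,
                   delta: float, rng: np.random.Generator) -> np.ndarray:
    """Clip a participant's update to bound its L2 sensitivity, then add
    calibrated Gaussian noise before the update leaves the device."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    sigma = gaussian_noise_std(clip_norm, epsilon, delta)
    return clipped + rng.normal(0.0, sigma, size=clipped.shape)

# Hypothetical usage: one client's gradient perturbed under a (0.5, 1e-5) budget.
rng = np.random.default_rng(0)
grad = rng.standard_normal(128)
noisy_grad = perturb_update(grad, clip_norm=1.0, epsilon=0.5, delta=1e-5, rng=rng)
```

In such a scheme, tightening epsilon raises sigma, which is the privacy–utility trade-off the abstract refers to when it mentions adjusting the scaled variances of the Gaussian noise.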

History

Journal

ACM Transactions on Multimedia Computing, Communications, and Applications

Publisher

Association for Computing Machinery (ACM)

ISSN

1551-6857

eISSN

1551-6865

Language

en