
Using adversarial noises to protect privacy in deep learning era

Conference contribution
Posted on 2024-06-03, 11:49, authored by B Liu, M Ding, T Zhu, Yong Xiang, W Zhou
The unprecedented accuracy of deep learning methods has established them as the foundation of new AI-based services on the Internet. At the same time, this accuracy presents obvious privacy issues: deep-learning-aided privacy attacks can dig out sensitive personal information not only from text but also from unstructured data such as images and videos. In this paper, we propose a framework to protect image privacy against deep learning tools. We also propose two new metrics to measure image privacy, along with two image privacy protection schemes, one per metric, that utilize the adversarial example idea. The performance of our schemes is validated by simulation on a large-scale dataset. Our study shows that image privacy can be protected by adding a small amount of noise, while the added noise has a humanly imperceptible impact on image quality.
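Since the full text is embargoed, the paper's concrete schemes and metrics are not reproduced here. The abstract's core idea, adding a small adversarial perturbation so that deep learning classifiers can no longer reliably extract private information while humans perceive no change, follows the well-known adversarial-example construction. Below is a minimal, hypothetical sketch of that generic idea as an FGSM-style step (Goodfellow et al., 2015) in PyTorch; it is not the paper's actual scheme, and the function name and epsilon value are illustrative assumptions.

```python
# Illustrative sketch only, not the scheme proposed in this paper.
# Shows the generic adversarial-example idea: add a small gradient-sign
# perturbation that disrupts a deep classifier's prediction while
# remaining visually imperceptible.
import torch
import torch.nn.functional as F

def fgsm_privacy_noise(model, image, label, epsilon=0.01):
    """Return a perturbed copy of `image` that degrades `model`'s prediction.

    model   : any differentiable classifier (e.g. a torchvision CNN)
    image   : tensor of shape (1, C, H, W), values in [0, 1]
    label   : class index the attacker's model currently assigns
    epsilon : noise magnitude; small values keep the change imperceptible
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([label]))
    loss.backward()
    # Step in the direction that increases the classifier's loss, then
    # clip back into the valid pixel range.
    noisy = image + epsilon * image.grad.sign()
    return noisy.clamp(0.0, 1.0).detach()
```

In this generic construction, epsilon controls the privacy/quality trade-off the abstract alludes to: larger values disrupt the attacking model more strongly but eventually become visible in the image.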

History

Pagination

1-6

Location

Abu Dhabi, United Arab Emirates

Start date

2018-12-09

End date

2018-12-13

ISBN-13

9781538647271

Language

eng

Publication classification

E1 Full written paper - refereed

Copyright notice

2018, IEEE

Editor/Contributor(s)

[Unknown]

Title of proceedings

GLOBECOM 2018: Proceedings of the 2018 IEEE Global Communications Conference

Event

IEEE Communications Society. Conference (2018 : Abu Dhabi, United Arab Emirates)

Publisher

Institute of Electrical and Electronics Engineers

Place of publication

Piscataway, N.J.

Series

IEEE Communications Society Conference
