File(s) under permanent embargo
Using adversarial noises to protect privacy in deep learning era
conference contribution
posted on 2018-01-01, 00:00, authored by Bo Liu, M Ding, T Zhu, Yong Xiang, Wanlei Zhou

The unprecedented accuracy of deep learning methods has established them as the foundation of new AI-based services on the Internet. At the same time, it raises obvious privacy issues. Deep-learning-aided privacy attacks can extract sensitive personal information not only from text but also from unstructured data such as images and videos. In this paper, we propose a framework to protect image privacy against deep learning tools. We also propose two new metrics to measure image privacy. Moreover, we propose two different image privacy protection schemes based on the two metrics, utilizing the adversarial example idea. The performance of our schemes is validated by simulation on a large-scale dataset. Our study shows that we can protect image privacy by adding a small amount of noise, while the added noise has a humanly imperceptible impact on image quality.
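For readers unfamiliar with the adversarial example idea the abstract refers to, the sketch below illustrates one common instantiation: an FGSM-style perturbation that adds small, sign-of-gradient noise so a pretrained recognizer misclassifies the image. This is a minimal illustration under assumed choices (PyTorch, a ResNet-18 recognizer, a fixed epsilon), not the specific schemes or privacy metrics proposed in the paper.

import torch
import torchvision.models as models

def adversarial_noise(image, true_label, epsilon=0.01):
    """Return an FGSM-style perturbed image that degrades a recognizer's output.

    image: tensor of shape (1, 3, H, W), values in [0, 1]
    true_label: tensor holding the class index the recognizer would predict
    epsilon: noise magnitude; small values keep the change visually imperceptible
    """
    # Assumed stand-in for the attacker's deep learning tool.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
    image = image.clone().requires_grad_(True)

    # Loss of the correct prediction; we want to increase it.
    loss = torch.nn.functional.cross_entropy(model(image), true_label)
    loss.backward()

    # Step in the direction that increases the recognizer's loss,
    # then clamp back to valid pixel range.
    noisy = image + epsilon * image.grad.sign()
    return noisy.clamp(0.0, 1.0).detach()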
History
Event: IEEE Communications Society. Conference (2018 : Abu Dhabi, United Arab Emirates)
Series: IEEE Communications Society Conference
Pagination: 1 - 6
Publisher: Institute of Electrical and Electronics Engineers
Location: Abu Dhabi, United Arab Emirates
Place of publication: Piscataway, N.J.
Publisher DOI:
Start date: 2018-12-09
End date: 2018-12-13
ISBN-13: 9781538647271
Language: eng
Publication classification: E1 Full written paper - refereed
Copyright notice: 2018, IEEE
Editor/Contributor(s): [Unknown]
Title of proceedings: GLOBECOM 2018 : Proceedings of the 2018 IEEE Global Communications Conference