Deakin University

File(s) not publicly available

FakeFilter: a cross-distribution deepfake detection system with domain adaptation

journal contribution
posted on 2021-01-01, 00:00 authored by Jianguo Jiang, Boquan Li, Baole Wei, Gang Li, Chao Liu, Weiqing Huang, Meimei Li, Min Yu
The abuse of face-swap techniques poses serious threats to the integrity and authenticity of digital visual media. More alarmingly, fake images and videos created with deep learning technologies, also known as Deepfakes, are realistic, high-quality, and reveal few tampering traces, which has attracted great attention in digital multimedia forensics research. To address the threats posed by Deepfakes, previous work attempted to classify real and fake faces using discriminative visual features, an approach that is subject to various objective conditions such as the angle or posture of a face. In contrast, other research devises deep neural networks to discriminate Deepfakes at the microscopic-level semantics of images, which achieves promising results. Nevertheless, such methods show limited success when encountering unseen Deepfakes created with methods different from those in the training sets. Therefore, we propose a novel Deepfake detection system, named FakeFilter, in which we formulate the challenge of unseen Deepfake detection as a problem of cross-distribution data classification and address it with a domain adaptation strategy. By mapping different distributions of Deepfakes into similar features in a common space, the detection system achieves comparable performance on both seen and unseen Deepfakes. Further evaluation and comparison results indicate that FakeFilter successfully addresses this challenge.
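The abstract does not detail FakeFilter's architecture. As one way to illustrate the domain-adaptation idea it describes (mapping differently distributed Deepfakes to similar features so that seen and unseen forgeries are classified alike), the sketch below uses a domain-adversarial setup with a gradient reversal layer in PyTorch. All names and dimensions here (DomainAdaptiveDetector, feat_dim, num_domains) are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    # Identity on the forward pass; reverses (and scales) gradients on the backward pass.
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DomainAdaptiveDetector(nn.Module):
    # Hypothetical detector: a shared encoder feeds a real/fake classifier and, through
    # gradient reversal, a domain head that tries to identify the generation method.
    # The reversed gradients push the encoder towards method-invariant features.
    def __init__(self, feat_dim=128, num_domains=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.label_head = nn.Linear(feat_dim, 2)              # real vs. fake
        self.domain_head = nn.Linear(feat_dim, num_domains)   # generation method

    def forward(self, x, lambd=1.0):
        feat = self.encoder(x)
        label_logits = self.label_head(feat)
        domain_logits = self.domain_head(GradReverse.apply(feat, lambd))
        return label_logits, domain_logits

# Usage sketch: minimise the real/fake loss while the reversed gradients discourage
# features that reveal which Deepfake generation method produced the sample.
if __name__ == "__main__":
    model = DomainAdaptiveDetector()
    x = torch.randn(4, 3, 64, 64)
    y_label = torch.randint(0, 2, (4,))
    y_domain = torch.randint(0, 2, (4,))
    label_logits, domain_logits = model(x, lambd=0.5)
    loss = nn.functional.cross_entropy(label_logits, y_label) \
         + nn.functional.cross_entropy(domain_logits, y_domain)
    loss.backward()

The design intuition is that if the domain head cannot tell which generation method produced a sample, the encoder's features generalise to unseen Deepfake distributions; this is one standard way to realise the cross-distribution classification framing mentioned in the abstract.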

History

Journal

Journal of Computer Security

Volume

29

Issue

4

Pagination

403-421

Publisher

IOS Press

Location

Amsterdam, The Netherlands

ISSN

0926-227X

eISSN

1875-8924

Language

English

Publication classification

C1 Refereed article in a scholarly journal