Abstract
Generative Adversarial Networks (GANs) keep breaking their own records for the quality of synthesized images, which is often high enough that human eyes cannot distinguish generated images from real ones. This poses a threat to security- and privacy-sensitive applications, so it is important to be able to tell whether an image was generated by a GAN, and better yet, by which GAN. The task is in a sense similar to digital image forensics for establishing image authenticity, but the literature offers inconclusive reports as to whether GANs leave unique fingerprints in the images they generate. In this paper, we attempt to develop a comprehensive understanding towards answering this question. We propose a model to extract fingerprints that are largely GAN-specific, and we identify a few key components that contribute to defining the fingerprint of a generated image. In experiments with state-of-the-art GAN models and different datasets, we evaluate the performance of our model and verify the major conclusions of our analysis.
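The abstract does not describe the paper's extraction method, but the attribution task it poses (deciding which GAN produced an image) can be illustrated with a deliberately simplified sketch: estimate a per-source fingerprint as the average high-frequency residual over that source's images, then attribute a new image to the source whose fingerprint its own residual correlates with best. Everything below (the box-blur residual, the correlation matching, the function names) is a hypothetical illustration under these assumptions, not the model proposed in the paper.

```python
import numpy as np

def residual(img, k=3):
    """High-pass residual: the image minus a k-by-k box blur.

    A crude stand-in for the learned denoisers used in forensics work;
    chosen here only to keep the sketch dependency-free.
    """
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    blur = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            blur += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    blur /= k * k
    return img - blur

def fingerprint(images):
    """Average residual over a set of images from one source (one GAN)."""
    return np.mean([residual(im) for im in images], axis=0)

def attribute(img, fingerprints):
    """Return the source name whose fingerprint best matches the image residual."""
    r = residual(img).ravel()
    r = r - r.mean()

    def corr(f):
        fv = f.ravel()
        fv = fv - fv.mean()
        return float(np.dot(r, fv) /
                     (np.linalg.norm(r) * np.linalg.norm(fv) + 1e-12))

    return max(fingerprints, key=lambda name: corr(fingerprints[name]))
```

Correlating against an averaged residual is only sensible if each source really does leave a consistent pattern that survives averaging, which is exactly the question the paper investigates.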
Original language | English (US)
---|---
State | Published - 2021
Event | 32nd British Machine Vision Conference, BMVC 2021 - Virtual, Online. Duration: Nov 22 2021 → Nov 25 2021
Conference

Conference | 32nd British Machine Vision Conference, BMVC 2021
---|---
City | Virtual, Online
Period | 11/22/21 → 11/25/21
ASJC Scopus subject areas
- Artificial Intelligence
- Computer Vision and Pattern Recognition