ImagiNet: A Multi-Content Dataset for Generalizable Synthetic Image Detection via Contrastive Learning
Abstract
Generative models, such as diffusion models (DMs), variational autoencoders (VAEs), and generative adversarial networks (GANs), produce images with a level of authenticity that makes them nearly indistinguishable from real photos and artwork. While this capability is beneficial for many industries, the difficulty of identifying synthetic images leaves online media platforms vulnerable to impersonation and misinformation attempts. To support the development of defensive methods, we introduce ImagiNet, a high-resolution and balanced dataset for synthetic image detection, designed to mitigate potential biases in existing resources. It contains 200K examples, spanning four content categories: photos, paintings, faces, and uncategorized. Synthetic images are produced with open-source and proprietary generators, whereas real counterparts of the same content type are collected from public datasets. The structure of ImagiNet allows for a two-track evaluation system: i) classification as real or synthetic and ii) identification of the generative model. To establish a baseline, we train a ResNet-50 model using a self-supervised contrastive objective (SelfCon) for each track. The model demonstrates state-of-the-art performance and high inference speed across established benchmarks, achieving an AUC of up to 0.99 and balanced accuracy ranging from 86% to 95%, even under social network conditions that involve compression and resizing. Our data and code are available at https://github.com/delyan-boychev/imaginet.
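The exact training recipe is in the linked repository; for context, below is a minimal PyTorch sketch of training a ResNet-50 with a supervised-contrastive objective of the kind SelfCon builds on. The projection dimension, temperature, and head design here are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch: ResNet-50 backbone + projection head trained with a
# supervised-contrastive loss. Hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50

class ContrastiveDetector(nn.Module):
    def __init__(self, proj_dim=128):  # proj_dim is an assumed value
        super().__init__()
        backbone = resnet50(weights=None)
        feat_dim = backbone.fc.in_features   # 2048 for ResNet-50
        backbone.fc = nn.Identity()          # keep features, drop the classifier
        self.backbone = backbone
        self.proj = nn.Sequential(           # projection head for the contrastive loss
            nn.Linear(feat_dim, feat_dim), nn.ReLU(), nn.Linear(feat_dim, proj_dim)
        )

    def forward(self, x):
        return F.normalize(self.proj(self.backbone(x)), dim=1)

def supcon_loss(z, labels, temperature=0.1):
    """Supervised contrastive loss: pulls together embeddings that share a
    label (e.g. real vs. synthetic, or the same source generator)."""
    sim = z @ z.T / temperature                          # pairwise similarities
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))      # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    # Average log-probability over positives; samples with no positive in
    # the batch contribute zero loss.
    mean_log_prob_pos = log_prob.masked_fill(~pos, 0.0).sum(1) / pos.sum(1).clamp(min=1)
    return -mean_log_prob_pos.mean()
```

Per the abstract, one such model is trained per track, so the same objective can use binary labels (real vs. synthetic) or generator identities as `labels`; at inference the learned embeddings feed a classification head for the corresponding track.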
Community
ImagiNet is a high-resolution, balanced dataset for synthetic image detection.
It comes with a strong baseline trained with a self-contrastive (SelfCon) objective, which achieves state-of-the-art results on multiple benchmarks.
Some details:
- Includes 200K high-resolution images spanning four content types: photos, paintings, faces, and uncategorized.
- Synthetic images come from both open-source and proprietary generators (GANs, diffusion models, Midjourney, DALL-E).
- Our contrastive baseline runs at almost twice the inference speed of previous state-of-the-art models.
- The diverse content types make detectors robust even under the compression and resizing typical of social networks (see the sketch after this list).
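As a hedged illustration of such social-network conditions, and not the authors' exact evaluation protocol, they can be simulated by JPEG re-compression followed by resizing; the quality factor and target size below are assumed values.

```python
# Sketch: mimic a social-network upload pipeline with JPEG re-compression
# and resizing. Quality and size are illustrative assumptions.
import io
from PIL import Image

def simulate_social_network(img: Image.Image, quality=75, size=(512, 512)) -> Image.Image:
    """Re-encode an image as JPEG at the given quality, then resize it."""
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).resize(size, Image.BILINEAR)
```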
The following similar papers were recommended by the Semantic Scholar API:
- Mixing Natural and Synthetic Images for Robust Self-Supervised Representations (2024)
- Improving Interpretability and Robustness for the Detection of AI-Generated Images (2024)
- FakeInversion: Learning to Detect Images from Unseen Text-to-Image Models by Inverting Stable Diffusion (2024)
- A Sanity Check for AI-generated Image Detection (2024)
- DataDream: Few-shot Guided Dataset Generation (2024)