

Poster in Workshop: The 2nd Workshop on Reliable and Responsible Foundation Models

Visual Language Models as Zero-Shot Deepfake Detectors

Viacheslav Pirogov

Keywords: [ Visual Language Models ] [ Deepfake Detection ] [ Zero-Shot Learning ]


Abstract:

The contemporary phenomenon of deepfakes, which utilise GAN or diffusion models for face swapping, poses a substantial and evolving threat to digital media, identity verification, and a multitude of other systems. The majority of existing methods for detecting deepfakes rely on training specialised classifiers to distinguish between genuine and manipulated images, focusing solely on the image domain without incorporating any auxiliary tasks that could enhance robustness. In this paper, inspired by the zero-shot capabilities of Vision-Language Models (VLMs), we propose a novel approach that uses VLMs to identify deepfakes. We introduce a new, high-quality deepfake dataset comprising 60,000 images, on which zero-shot VLMs demonstrate performance superior to almost all existing methods. Subsequently, we compare the best-performing VLM, InstructBLIP, against traditional methods on the popular deepfake dataset DFDC-P in two scenarios: zero-shot and in-domain fine-tuning. Our results demonstrate the superiority of VLMs over traditional classifiers.
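
The zero-shot setting described above amounts to prompting an off-the-shelf VLM with a face image and a binary question, then mapping its free-text answer to a real/fake label. Below is a minimal sketch of that pipeline using InstructBLIP through the HuggingFace transformers library; the checkpoint name, prompt wording, and answer-parsing rule are illustrative assumptions, not the exact configuration reported in the paper.

    import torch
    from PIL import Image
    from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration

    # Hypothetical checkpoint choice; the paper does not specify which
    # InstructBLIP variant was used.
    CHECKPOINT = "Salesforce/instructblip-vicuna-7b"

    device = "cuda" if torch.cuda.is_available() else "cpu"
    processor = InstructBlipProcessor.from_pretrained(CHECKPOINT)
    model = InstructBlipForConditionalGeneration.from_pretrained(
        CHECKPOINT, torch_dtype=torch.float16 if device == "cuda" else torch.float32
    ).to(device)

    def classify_face(image_path: str) -> str:
        """Zero-shot deepfake call: ask the VLM a yes/no question about one image."""
        image = Image.open(image_path).convert("RGB")
        # Illustrative prompt, not the paper's exact wording.
        prompt = "Is this photograph of a face real or an AI-generated deepfake? Answer in one word."
        inputs = processor(images=image, text=prompt, return_tensors="pt").to(
            device, model.dtype
        )
        output_ids = model.generate(**inputs, max_new_tokens=10)
        answer = processor.batch_decode(output_ids, skip_special_tokens=True)[0].strip().lower()
        # Simple keyword rule to map free text to a label (an assumption).
        return "fake" if ("fake" in answer or "generated" in answer) else "real"

    print(classify_face("face.jpg"))

Note that no gradient updates are involved in this zero-shot path; the in-domain fine-tuning scenario mentioned in the abstract would instead update the model on labelled deepfake data before querying it.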
