

Poster

Impossible Videos

Zechen Bai · Hai Ci · Mike Zheng Shou

West Exhibition Hall B2-B3 #W-211
Wed 16 Jul 11 a.m. PDT — 1:30 p.m. PDT

Abstract:

Synthetic videos are now widely used to complement the scarcity and limited diversity of real-world video data. Current synthetic datasets primarily replicate real-world scenarios, leaving impossible, counterfactual, and anti-reality video concepts underexplored. This work aims to answer two questions: 1) Can today's video generation models effectively follow prompts to create impossible video content? 2) Are today's video understanding models good enough to understand impossible videos? To this end, we introduce IPV-Bench, a novel benchmark designed to evaluate and foster progress in video understanding and generation. IPV-Bench is underpinned by a comprehensive taxonomy encompassing 4 domains and 14 categories. It features diverse scenes that defy physical, biological, geographical, or social laws. Based on this taxonomy, a prompt suite is constructed to evaluate video generation models, challenging their prompt-following and creativity capabilities. In addition, a video benchmark is curated to assess Video-LLMs on their ability to understand impossible videos, which particularly requires reasoning over temporal dynamics and world knowledge. Comprehensive evaluations reveal limitations of current video models and offer insights for future directions, paving the way for next-generation video models.
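For readers unfamiliar with this style of evaluation, the sketch below illustrates what scoring a Video-LLM on a question-answer benchmark of impossible videos could look like. It is a minimal illustration under stated assumptions, not the benchmark's actual interface: the file name `ipv_bench_understanding.json`, the record fields (`video`, `question`, `answer`), the exact-match metric, and the `video_llm` callable are hypothetical placeholders.

```python
import json

def load_items(path):
    # Hypothetical record layout: each item pairs a video with a question
    # probing whether the model can identify the impossible event
    # (e.g. an object moving on its own) and a reference answer.
    with open(path) as f:
        return json.load(f)

def evaluate(video_llm, items):
    """Score a Video-LLM by exact match against reference answers."""
    correct = 0
    for item in items:
        # `video_llm` is a placeholder callable: (video_path, question) -> answer string.
        prediction = video_llm(item["video"], item["question"])
        correct += int(prediction.strip().lower() == item["answer"].strip().lower())
    return correct / len(items)

if __name__ == "__main__":
    # Stub model that always answers "yes", purely to show the calling convention.
    items = load_items("ipv_bench_understanding.json")  # hypothetical file name
    accuracy = evaluate(lambda video, question: "yes", items)
    print(f"accuracy: {accuracy:.3f}")
```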

Lay Summary:

We introduce a benchmark of "Impossible Videos" that defy physical or commonsense laws—like snow in the tropics or objects moving on their own. Current AI models struggle with these cases. Our work reveals their limitations and encourages the development of video models with stronger reasoning and world knowledge.
