Poster
Explanatory Instructions: Towards Unified Vision Tasks Understanding and Zero-shot Generalization
Yang Shen · Xiu-Shen Wei · Yifan Sun · YuXin Song · Tao Yuan · Jian Jin · He-Yang Xu · Yazhou Yao · Errui Ding
East Exhibition Hall A-B #E-3307
Large vision and vision-language models excel at specific vision tasks such as recognizing objects, but they struggle to generalize these skills to new, unseen tasks, whereas humans adapt quickly. This gap exists because current models rely on rigid, predefined task definitions (e.g., "segment the image") rather than understanding the underlying objectives.

To bridge this gap, we introduce Explanatory Instructions, which describe vision tasks in natural language (e.g., "highlight the river in blue and mark the rocks in red"). We built a dataset of 12 million image-instruction-output examples and trained a model to follow these instructions. This approach allows the model to generalize to new tasks without additional training, achieving zero-shot capability on both familiar and novel vision tasks.

Our work moves toward more flexible, human-like computer vision systems, enabling models to tackle diverse tasks simply by understanding descriptive instructions, much as humans do.
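To make the data format concrete, below is a minimal illustrative sketch of what one image-instruction-output example could look like. The field names, paths, and dataclass are assumptions for illustration only, not the released dataset schema or training code.

```python
# Hypothetical sketch: one plausible representation of a single
# image-instruction-output training example; not the authors' actual schema.
from dataclasses import dataclass


@dataclass
class ExplanatoryExample:
    image_path: str    # input image
    instruction: str   # natural-language description of the task objective
    output_path: str   # target image (e.g., a colored annotation of the scene)


example = ExplanatoryExample(
    image_path="inputs/river_scene.jpg",
    instruction="Highlight the river in blue and mark the rocks in red.",
    output_path="targets/river_scene_annotated.png",
)

if __name__ == "__main__":
    print(example.instruction)
```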