Poster
OpenworldAUC: Towards Unified Evaluation and Optimization for Open-world Prompt Tuning
Cong Hua · Qianqian Xu · Zhiyong Yang · Zitai Wang · Shilong Bao · Qingming Huang
East Exhibition Hall A-B #E-1903
When adapting large vision-language models such as CLIP to real-world applications, it is not enough to perform well on known categories: the model must also handle inputs whose classes may be familiar or entirely new, without being told in advance which case it faces. This problem is called open-world prompt tuning.

Existing evaluation protocols usually split the problem into two separate parts: detecting whether an input comes from a known or unknown class, and then classifying it. In practice, however, these two steps are tightly coupled, and traditional metrics fail to capture their interaction.

To address this, we introduce OpenworldAUC, a new evaluation metric that jointly assesses detection and classification while remaining insensitive to the ratio of known to unknown examples. We also propose GMoP, a method that learns separate prompts for different domains and uses a gating mechanism to decide how to combine them. Our approach behaves reliably under realistic conditions and achieves strong performance across 15 benchmark datasets.
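
As a rough illustration of what a pairwise metric coupling detection and classification might look like, the Python sketch below scores base/new sample pairs. The function name `openworld_auc_sketch`, its inputs, and the exact success condition (the detector ranks the base sample above the new one, and both samples are classified correctly within their own label spaces) are illustrative assumptions, not the paper's formal definition of OpenworldAUC.

```python
import numpy as np

def openworld_auc_sketch(base_det, base_cls_correct, new_det, new_cls_correct):
    """Toy pairwise score coupling detection and classification (assumed form).

    base_det / new_det: detector scores for known-class (base) and
    unknown-class (new) test samples (higher = more likely base).
    base_cls_correct / new_cls_correct: 0/1 indicators that each sample is
    classified correctly within its own label space.

    A base/new pair counts as a success only if the detector ranks the base
    sample above the new one AND both samples are classified correctly.
    Averaging over all pairs keeps the score independent of the base/new
    sample ratio.
    """
    base_det = np.asarray(base_det, dtype=float)
    new_det = np.asarray(new_det, dtype=float)
    base_ok = np.asarray(base_cls_correct, dtype=float)
    new_ok = np.asarray(new_cls_correct, dtype=float)

    # Pairwise ranking indicator: detector scores base above new.
    ranked = (base_det[:, None] > new_det[None, :]).astype(float)
    # Gate each pair by classification correctness on both sides.
    success = ranked * base_ok[:, None] * new_ok[None, :]
    return success.mean()


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    score = openworld_auc_sketch(
        base_det=rng.uniform(0.5, 1.0, 100),
        base_cls_correct=rng.integers(0, 2, 100),
        new_det=rng.uniform(0.0, 0.6, 50),
        new_cls_correct=rng.integers(0, 2, 50),
    )
    print(f"OpenworldAUC-style score: {score:.3f}")
```

Because every pair requires both a correct ranking and correct predictions on both sides, a model cannot score well by excelling at detection alone or at classification alone, which is the coupling the metric is designed to capture.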