

Poster in Workshop: Methods and Opportunities at Small Scale (MOSS)

Is Visual Prompting the Right Setup for Knowledge Transfer in new Foundation Models?

Niclas Hergenröther · Antonio Orvieto

Keywords: [ Transfer Learning ] [ Adversarial Reprogramming ] [ Visual Prompting (VP) ]


Abstract:

Visual Prompting (VP) has emerged as a promising technique for efficient knowledge transfer. As new foundation model families (such as Mamba) are introduced and VP pipelines such as AutoVP reach greater maturity, we find a growing need for a systematic evaluation of current approaches. In this work, we assess the performance of the latest models, comparing them to earlier architectures and alternative fine-tuning methods, to better understand the progress, challenges, and opportunities in the field of efficient fine-tuning under resource limitations. Toward this goal, this paper provides a concise empirical overview of the interactions between foundation model families (Attention-, Convolution-, and Mamba-based) and transfer paradigms: VP, Linear Probing (LP), and Full Finetuning (FFT). Our work builds on previous findings by broadening the selection of evaluated models, tuned hyperparameters, and techniques. In the interest of delivering practical guidelines for the user, we also explore the application of prevalent regularization techniques to boost performance in the context of VP.
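To make the transfer paradigm concrete: in the most common VP setup, a small set of learnable pixels (often a border "pad prompt") is added to every input image while the pretrained backbone stays frozen. The sketch below illustrates this idea in NumPy; the image size, pad width, and initialization scale are illustrative assumptions, not values from the paper.

```python
import numpy as np

def make_pad_prompt(image_size=224, pad=16, channels=3, seed=0):
    """Build a border mask and a randomly initialized prompt tensor.

    Only the border pixels selected by `mask` carry learnable
    parameters; the interior of the image is left untouched.
    Sizes are illustrative (a 224x224 RGB input, 16-pixel border).
    """
    rng = np.random.default_rng(seed)
    mask = np.zeros((channels, image_size, image_size), dtype=np.float32)
    mask[:, :pad, :] = 1.0    # top border
    mask[:, -pad:, :] = 1.0   # bottom border
    mask[:, :, :pad] = 1.0    # left border
    mask[:, :, -pad:] = 1.0   # right border
    prompt = rng.normal(0.0, 0.03, size=mask.shape).astype(np.float32)
    return mask, prompt

def apply_prompt(x, mask, prompt):
    """Add the prompt on the border only; in training, gradients flow
    solely into `prompt` while the backbone weights stay frozen."""
    return x + mask * prompt

mask, prompt = make_pad_prompt()
x = np.zeros((3, 224, 224), dtype=np.float32)
x_prompted = apply_prompt(x, mask, prompt)
```

This contrasts with LP, which trains only a new output head, and FFT, which updates all backbone weights; VP instead adapts the model purely through its input space.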
