Poster in Affinity Workshop: New In ML
An Empirical Analysis of Model Capacity, Task Complexity, and Regularization in Few-Shot Learning
Hongkai Zhang
The effectiveness of a Few-Shot Learning (FSL) method is often deeply contextual. This paper investigates that context systematically by deconstructing the interplay among three key factors: model capacity, task complexity, and regularization strategy. We conduct a 2×2 study crossing a low-capacity CNN and a high-capacity ResNet-12 with a simple task (Omniglot) and a complex task (mini-ImageNet), comparing regularization strategies within each cell. Our findings reveal a nuanced landscape: on Omniglot, the low-capacity model thrives with sophisticated adaptive regularization, while the high-capacity ResNet-12, prone to overfitting, benefits more from simple, strong, fixed regularization. On the more challenging mini-ImageNet, the low-capacity model's performance collapses, while the ResNet-12's success becomes contingent not only on the choice of regularization but also on a sufficiently rigorous training regimen. Our work demonstrates that there is no "silver bullet" FSL method; the optimal approach is a function of the specific model-task pairing, providing crucial insights for designing and evaluating FSL solutions.
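To make the factorial design concrete, the sketch below enumerates the model-task-regularization grid the abstract describes. All identifiers (`conv4`, `run_experiment`, the regularizer labels) are illustrative assumptions, not the authors' actual code or hyperparameters.

```python
from itertools import product

# Hypothetical sketch of the 2x2 (model x task) study, with the
# regularization strategy varied within each cell. Names are assumptions.
MODELS = ["conv4", "resnet12"]               # low- vs. high-capacity backbone
TASKS = ["omniglot", "mini_imagenet"]        # simple vs. complex benchmark
REGULARIZERS = ["strong_fixed", "adaptive"]  # strategies compared

def run_experiment(model: str, task: str, regularizer: str) -> float:
    """Placeholder for one episodic training + N-way K-shot evaluation run.

    A real implementation would train the backbone on few-shot episodes
    and return mean test accuracy over held-out tasks.
    """
    raise NotImplementedError  # training/evaluation omitted in this sketch

if __name__ == "__main__":
    # Enumerate every model-task pairing under each regularization strategy.
    for model, task, reg in product(MODELS, TASKS, REGULARIZERS):
        print(f"model={model:9s} task={task:13s} regularizer={reg}")
        # accuracy = run_experiment(model, task, reg)
```

Enumerating the full grid this way makes the paper's central claim testable: if no single regularizer wins in every cell, the optimal choice is indeed a function of the model-task pairing.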