Poster
On Understanding Attention-Based In-Context Learning for Categorical Data
Aaron Wang · William Convertino · Xiang Cheng · Ricardo Henao · Lawrence Carin
East Exhibition Hall A-B #E-3303
In-context learning based on attention models is examined for data with categorical outcomes, with inference in such models viewed from the perspective of functional gradient descent (GD). We develop a network composed of attention blocks, each employing a self-attention layer followed by a cross-attention layer, with associated skip connections. This model can exactly implement multi-step functional GD for in-context inference with categorical observations. We perform a theoretical analysis of this setup, generalizing many of the assumptions made in prior work along this line, including the class of attention mechanisms for which the construction is appropriate. We demonstrate the framework empirically on synthetic data, image classification, and language generation.
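As a rough illustration of the architecture described above, the sketch below stacks blocks in which the in-context examples first attend to each other (self-attention) and the query then attends to the updated examples (cross-attention), each with a skip connection. This is only a minimal sketch, not the authors' released code: the use of `nn.MultiheadAttention`, the dimensions, the number of blocks, and the tensor layout are all illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's implementation) of one attention
# block: self-attention over the in-context examples, then cross-attention
# from the query to those examples, each followed by a skip connection.
import torch
import torch.nn as nn


class AttentionBlock(nn.Module):
    def __init__(self, dim: int, num_heads: int = 1):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, context: torch.Tensor, query: torch.Tensor):
        # context: (batch, n_examples, dim) -- embedded in-context (x, y) pairs
        # query:   (batch, 1, dim)          -- the point to be labeled
        # Self-attention over the context, with a skip connection.
        ctx_update, _ = self.self_attn(context, context, context)
        context = context + ctx_update
        # Cross-attention: the query attends to the updated context, again
        # with a skip connection; stacking such blocks is meant to emulate
        # successive functional GD steps on the in-context loss.
        q_update, _ = self.cross_attn(query, context, context)
        query = query + q_update
        return context, query


# Example usage with dummy shapes (all sizes are arbitrary).
blocks = nn.ModuleList([AttentionBlock(dim=64) for _ in range(3)])
context = torch.randn(8, 16, 64)   # 16 in-context examples per batch element
query = torch.randn(8, 1, 64)      # one query point per batch element
for blk in blocks:
    context, query = blk(context, query)
```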
The Transformer is widely used as a generative model in virtually all language models deployed in practice today. In spite of the success of such models, little is known about how they work. This paper has sought to provide insight into the mechanisms by which Transformers respond and adapt to the prompt provided as input. A key advance of this paper concerns the form of the observed data, which is categorical. By this we mean that the observations are drawn from a finite (but large) discrete set, corresponding to the vocabulary of tokens used in language models. We have shown that the Transformer can learn to perform prompt-dependent inference based on a widely studied mathematical framework called gradient descent. This insight suggests ways in which the Transformer adapts to prompts in language applications.
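To make "gradient descent" concrete for categorical data, the following is a hedged sketch of one functional GD step on an in-context cross-entropy loss; the notation (kernel k induced by attention, step size \eta, one-hot labels e_{y_i}) is assumed for illustration and is not taken verbatim from the paper.

```latex
% Hedged sketch: one functional GD step on an in-context loss for categorical
% outcomes, with prompt examples (x_i, y_i), y_i in a vocabulary of size V.
\[
  \mathcal{L}(f) \;=\; -\sum_{i=1}^{n}
  \log \big[\mathrm{softmax}\big(f(x_i)\big)\big]_{y_i},
\]
\[
  f_{t+1}(x) \;=\; f_t(x) \;+\; \eta \sum_{i=1}^{n} k(x, x_i)\,
  \Big( e_{y_i} - \mathrm{softmax}\big(f_t(x_i)\big) \Big),
\]
% where f_t maps inputs to logits, e_{y_i} is the one-hot label vector,
% k is a kernel (here assumed to be induced by the attention mechanism),
% and \eta is a step size; each attention block would emulate one such step.
```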