

Spotlight Poster

TabFlex: Scaling Tabular Learning to Millions with Linear Attention

Yuchen Zeng · Tuan Dinh · Wonjun Kang · Andreas Mueller

East Exhibition Hall A-B #E-2509
Thu 17 Jul 11 a.m. PDT — 1:30 p.m. PDT

Abstract:

Leveraging the in-context learning (ICL) capability of Large Language Models (LLMs) for tabular classification has gained significant attention for its training-free adaptability across diverse datasets. Recent advancements, like TabPFN, excel on small-scale tabular datasets but struggle to scale to large and complex ones. Our work enhances the efficiency and scalability of TabPFN for larger datasets by incorporating linear attention mechanisms as a scalable alternative to quadratic-complexity self-attention. Our model, TabFlex, efficiently handles tabular datasets with thousands of features and hundreds of classes, scaling seamlessly to millions of samples. For instance, TabFlex processes the poker-hand dataset with over a million samples in just 5 seconds. Our extensive evaluations demonstrate that TabFlex can achieve over a 2× speedup compared to TabPFN and a 1.5× speedup over XGBoost, outperforming 25 tested baselines in terms of efficiency across a diverse range of datasets. Furthermore, TabFlex remains highly effective on large-scale datasets, delivering strong performance with significantly reduced computational costs, especially when combined with data-efficient techniques such as dimensionality reduction and data sampling.
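
As a rough illustration of the core idea, the sketch below contrasts standard softmax self-attention, whose cost grows quadratically with the number of in-context samples, with a kernelized linear-attention variant that reassociates the matrix products. This is a minimal, generic sketch (the elu-plus-one feature map and the normalization here are illustrative assumptions), not the actual TabFlex architecture.

```python
import torch
import torch.nn.functional as F

def softmax_attention(q, k, v):
    # Standard self-attention: materializes an n x n score matrix,
    # so time and memory grow quadratically with the context length n.
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    return torch.softmax(scores, dim=-1) @ v

def linear_attention(q, k, v, eps=1e-6):
    # Kernelized (linear) attention: apply a positive feature map to q and k,
    # then reassociate the products so cost is O(n * d^2) instead of O(n^2 * d).
    q, k = F.elu(q) + 1, F.elu(k) + 1
    kv = k.transpose(-2, -1) @ v                                          # (d, d_v) summary of keys/values
    normalizer = q @ k.sum(dim=0, keepdim=True).transpose(-2, -1) + eps   # (n, 1)
    return (q @ kv) / normalizer

# Toy usage: n plays the role of the number of in-context table rows.
n, d = 4096, 64
q, k, v = (torch.randn(n, d) for _ in range(3))
out = linear_attention(q, k, v)  # shape (n, d); scales linearly in n
```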

Lay Summary:

Recently, a new way of using large language models (LLMs) has gained attention: giving them a few examples to help make predictions on table-based tasks, where the data looks like a spreadsheet (e.g., a CSV file) and the goal is to predict one column from the others. This method is fast and does not require training the model. However, it only works well with a small number of examples: too many slow things down and consume a lot of memory, because of how the attention mechanism inside LLMs scales. In this paper, we explore different model designs, find a better solution, and introduce a new model that handles more data with much less memory and faster processing, without losing accuracy.
