

Poster in Affinity Workshop: New In ML

Light-Weight Benchmarks Reveal the Hidden Hardware Cost of Zero-Shot Tabular Foundation Models


Abstract:

Zero-shot foundation models (FMs) promise "train-free" prediction on tabular data, yet their hardware footprint remains loosely characterised. We present a reproducible benchmark that pairs test accuracy with wall-clock latency, peak CPU RAM, and peak GPU VRAM on four public tables—Adult-Income, Higgs-100k, Wine-Quality, and California-Housing. Two open FMs (TabPFN-1.0, TabICL-base) are evaluated against tuned XGBoost, LightGBM, and Random-Forest baselines on a single NVIDIA T4. The tree ensembles equal or surpass FM accuracy on three of the four datasets while completing inference on the full test set in ≤ 0.40 s and ≤ 150 MB RAM with zero VRAM. TabICL gains +0.8 pp on Higgs but pays ≈ 40 000× higher latency (960 s) and 9 GB VRAM; TabPFN matches tree-ensemble accuracy on Wine and Housing yet peaks at 4 GB VRAM and cannot process the full 100 k-row Higgs table. These findings quantify a large hardware–accuracy trade-off and deliver an open baseline for future efficiency-oriented research in tabular FMs.
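The measurement protocol described above (pairing each prediction run with wall-clock latency and peak memory) can be sketched roughly as follows. This is a minimal illustrative harness, not the authors' benchmark code: `benchmark_predict`, the dummy model, and the use of `tracemalloc` for Python-heap peaks are assumptions, and a real harness would additionally record process RSS (e.g. via `psutil`) and peak GPU VRAM (e.g. via `torch.cuda.max_memory_allocated` or NVML) for GPU-resident models.

```python
import time
import tracemalloc


def benchmark_predict(predict_fn, X):
    """Run one full-test-set prediction and report (predictions,
    wall-clock latency in seconds, peak Python-heap bytes).

    Note: tracemalloc only sees Python-heap allocations; native
    buffers (NumPy, XGBoost, CUDA) need separate instrumentation.
    """
    tracemalloc.start()
    t0 = time.perf_counter()
    preds = predict_fn(X)
    latency_s = time.perf_counter() - t0
    _, peak_bytes = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return preds, latency_s, peak_bytes


if __name__ == "__main__":
    # Hypothetical stand-in model: predicts the majority class for
    # every row, just to exercise the harness.
    X_test = [[0.1, 0.2], [0.3, 0.4]] * 5_000  # 10 000 "rows"
    preds, latency, peak = benchmark_predict(lambda X: [0] * len(X), X_test)
    print(f"latency: {latency:.4f} s, peak heap: {peak / 1e6:.2f} MB")
```

Reporting peak memory alongside latency matters here because the abstract's headline contrast (≤ 150 MB RAM for tree ensembles vs. multiple GB of VRAM for the FMs) is a memory ceiling, not a speed difference.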
