

Poster

COGNATE: Acceleration of Sparse Tensor Programs on Emerging Hardware using Transfer Learning

Chamika Sudusinghe · Gerasimos Gerogiannis · Damitha Lenadora · Charles Block · Josep Torrellas · Charith Mendis

West Exhibition Hall B2-B3 #W-513
Wed 16 Jul 4:30 p.m. PDT — 7 p.m. PDT

Abstract:

Sparse tensor programs are essential in deep learning and graph analytics, driving the need for optimized processing. To meet this demand, specialized hardware accelerators are being developed. Optimizing these programs for accelerators is challenging for two reasons: program performance is highly sensitive to variations in sparse inputs, and early-stage accelerators rely on expensive simulators. Therefore, ML-based cost models used for optimizing such programs on general-purpose hardware are often ineffective for early-stage accelerators, as they require large datasets for proper training. To address this, we introduce COGNATE, a novel framework that leverages inexpensive data samples from general-purpose hardware (e.g., CPUs) to train cost models, followed by few-shot fine-tuning on emerging hardware. COGNATE exploits the homogeneity of input features across hardware platforms while effectively mitigating heterogeneity, enabling cost model training with just 5% of the data samples needed by accelerator-specific models to achieve comparable performance. We conduct extensive experiments to demonstrate that COGNATE outperforms existing techniques, achieving average speedups of 1.47× (up to 5.46×) for SpMM and 1.39× (up to 4.22×) for SDDMM.
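The pretrain-then-fine-tune recipe the abstract describes can be illustrated with a toy sketch. This is not COGNATE's actual model: the linear cost model, the synthetic "CPU" and "accelerator" latency data, and the `fit` helper are all hypothetical stand-ins, used only to show why warm-starting from plentiful general-purpose-hardware data lets a handful of accelerator samples suffice.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit(X, y, w=None, lr=0.1, epochs=500):
    """Fit a linear cost model y ~ X @ w by gradient descent.

    Passing a pre-trained w warm-starts the optimization (fine-tuning);
    w=None trains from scratch.
    """
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Synthetic stand-in data (hypothetical): shared sparse-input features
# (e.g., density, rows, nonzeros per row) drive cost on both platforms,
# but with related, platform-specific weights.
n_feat = 4
w_cpu_true = rng.normal(size=n_feat)
w_acc_true = w_cpu_true + 0.3 * rng.normal(size=n_feat)  # similar, not identical

X_cpu = rng.normal(size=(1000, n_feat))                  # cheap, plentiful
y_cpu = X_cpu @ w_cpu_true + 0.01 * rng.normal(size=1000)

X_acc = rng.normal(size=(50, n_feat))                    # expensive, scarce
y_acc = X_acc @ w_acc_true + 0.01 * rng.normal(size=50)

# 1) Pre-train on the large CPU dataset.
w_pre = fit(X_cpu, y_cpu)

# 2) Few-shot fine-tune on the small accelerator dataset, warm-started.
w_ft = fit(X_acc, y_acc, w=w_pre.copy())

# Evaluate on held-out accelerator inputs.
X_test = rng.normal(size=(200, n_feat))
y_test = X_test @ w_acc_true
err_cpu_only = np.mean((X_test @ w_pre - y_test) ** 2)   # no fine-tuning
err_ft = np.mean((X_test @ w_ft - y_test) ** 2)          # after fine-tuning
print(f"CPU-only MSE: {err_cpu_only:.4f}  fine-tuned MSE: {err_ft:.4f}")
```

In this toy setting, fine-tuning on the small accelerator sample corrects the platform-specific shift that the CPU-only model cannot capture, which mirrors the intuition behind exploiting feature homogeneity while mitigating hardware heterogeneity.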

Lay Summary:

Many advanced technologies in AI rely on a special type of program called sparse tensor programs. To run these programs faster, researchers are building special computer chips known as hardware accelerators. But there's a big challenge: these programs behave very differently depending on the input data, and testing them during the early design stages of new chips is slow and expensive. Existing tools that help optimize these programs require large amounts of data to work well, which isn't practical during the chip design phase. In this work, we introduce COGNATE, a smarter way to optimize these programs for new chips. Instead of collecting large amounts of expensive data from simulators of new hardware, COGNATE starts by learning from data collected on inexpensive, widely available devices such as regular computer CPUs. It then fine-tunes its knowledge using just a small amount of data from the new hardware. This approach works because COGNATE can recognize what's similar and what's different between data samples from existing hardware (such as CPUs) and new hardware. COGNATE can optimize these programs using only 5% of the data normally required, saving both time and resources. This makes it easier and more cost-effective to design AI chips that run efficiently, helping accelerate future technological development.
