

Poster

Distributed Parallel Gradient Stacking (DPGS): Solving Whole Slide Image Stacking Challenge in Multi-Instance Learning

Boyuan Wu · wang · Xianwei Lin · Jiachun Xu · Jikai Yu · Zhou Shicheng · Hongda Chen · Lianxin Hu

West Exhibition Hall B2-B3 #W-315
Thu 17 Jul 4:30 p.m. PDT — 7 p.m. PDT

Abstract:

Whole Slide Image (WSI) analysis is framed as a Multiple Instance Learning (MIL) problem, but existing methods struggle with non-stackable data due to inconsistent instance lengths, which degrades performance and efficiency. We propose a Distributed Parallel Gradient Stacking (DPGS) framework with Deep Model-Gradient Compression (DMGC) to address this. DPGS enables lossless MIL data stacking for the first time, while DMGC accelerates distributed training via joint gradient-model compression. Experiments on Camelyon16 and TCGA-Lung datasets demonstrate up to 31× faster training, up to a 99.2% reduction in model communication size at convergence, and up to a 9.3% improvement in accuracy compared to the baseline. To our knowledge, this is the first work to solve non-stackable data in MIL while improving both speed and accuracy.
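The core obstacle the abstract names is that bags with inconsistent instance counts cannot be stacked into one dense batch tensor. A minimal sketch of the underlying idea, assuming a hypothetical linear mean-pooling scorer and squared-error loss (this is an illustration of gradient-level stacking in general, not the authors' DPGS implementation):

```python
# Illustrative sketch, NOT the paper's DPGS code: bags with different numbers
# of instances cannot be stacked as raw data, but their per-bag gradients all
# share the shape of the model parameters, so gradients CAN be stacked and
# averaged into one batch update. Model, loss, and names here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Three bags with inconsistent instance lengths (e.g. 5, 12, 3 patches),
# each instance a 4-dim feature vector -- np.stack would fail on the raw bags.
bags = [rng.normal(size=(n, 4)) for n in (5, 12, 3)]
labels = np.array([1.0, 0.0, 1.0])
w = np.zeros(4)

def bag_gradient(w, X, y):
    """Gradient of (mean(X @ w) - y)^2 w.r.t. w for a single bag."""
    pooled = X.mean(axis=0)          # mean-pool instances -> bag-level feature
    pred = pooled @ w
    return 2.0 * (pred - y) * pooled

# Per-bag gradients all have w's shape, so they stack cleanly even though
# the bags themselves do not; in a distributed setting each worker could
# compute its bag gradients in parallel before the average.
grads = np.stack([bag_gradient(w, X, y) for X, y in zip(bags, labels)])
avg_grad = grads.mean(axis=0)        # one batch update from uneven bags
w = w - 0.1 * avg_grad
```

The point of the sketch is only the shape argument: data-level stacking fails on uneven bags, while gradient-level stacking is lossless because no padding or truncation of instances is needed.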

Lay Summary:

Whole Slide Images (WSIs) are large medical images used in cancer diagnosis. They are split into many patches and analyzed using Multiple Instance Learning (MIL). But since each image has a different number of patches, current methods can’t train in batches, making them slow and less accurate. We propose DPGS, a method that enables fast, parallel training on uneven data. We also introduce DMGC, which cuts communication costs by over 99%. Tested on cancer datasets, our method sped up training by up to 31× and improved accuracy by 9.3%, making AI tools for pathology faster and more reliable.
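The lay summary's "cuts communication costs by over 99%" refers to compressing what each worker transmits during distributed training. As a hedged illustration of how that class of savings arises, here is a generic top-k gradient sparsification sketch (a standard compression technique, not the paper's actual DMGC algorithm; all function names are hypothetical):

```python
# Generic top-k gradient sparsification sketch -- an example of the kind of
# compression joint gradient-model schemes build on, NOT the paper's DMGC.
# Each worker sends only the k largest-magnitude gradient entries plus their
# indices, shrinking the communicated payload.
import numpy as np

def compress_topk(grad, k):
    """Keep the k largest-|value| entries; return (indices, values)."""
    idx = np.argsort(np.abs(grad))[-k:]
    return idx, grad[idx]

def decompress(idx, vals, size):
    """Rebuild a dense gradient with zeros everywhere else."""
    dense = np.zeros(size)
    dense[idx] = vals
    return dense

rng = np.random.default_rng(1)
grad = rng.normal(size=1000)
idx, vals = compress_topk(grad, k=10)    # transmit 10 of 1000 entries
restored = decompress(idx, vals, grad.size)
ratio = vals.size / grad.size            # 0.01 -> 99% fewer values sent
```

In practice such schemes pair sparsification with error feedback (accumulating the dropped residual locally) so that convergence is preserved despite the lossy transmission.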
