Poster

Schwarz–Schur Involution: Lightspeed Differentiable Sparse Linear Solvers

Yu Wang · Mazdak Abulnaga · Yaël Balbastre · Bruce Fischl

West Exhibition Hall B2-B3 #W-509
[ Project Page ]
Tue 15 Jul 11 a.m. PDT — 1:30 p.m. PDT

Abstract:

Sparse linear solvers are fundamental to science and engineering, applied in partial differential equations (PDEs), scientific computing, computer vision, and beyond. Indirect solvers have characteristics that make them undesirable as stable differentiable modules; existing direct solvers, though reliable, are too expensive to adopt in neural architectures. We accelerate direct sparse solvers, or equivalently generalized deconvolution, by up to three orders of magnitude, overturning the common assumption that direct solvers are too slow. We "condense" a sparse Laplacian matrix into a dense tensor, a compact data structure that stores the Dirichlet-to-Neumann matrices batch-wise, reducing the sparse solve to recursively merging pairs of much smaller dense matrices. The batched small dense systems are sliced and inverted in parallel to take advantage of dense GPU BLAS kernels, which have been highly optimized in the era of deep learning. Our method is efficient, serving as a strong zero-shot baseline for AI-based PDE solving and as a reliable differentiable module that integrates into machine learning pipelines.
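To make the core step concrete, the sketch below shows one way the batched Dirichlet-to-Neumann (Schur complement) computation the abstract describes could look in PyTorch: many small dense subdomain blocks are reduced with a single batched dense solve, so GPU BLAS kernels do the work and gradients flow through automatically. This is a minimal, hypothetical illustration; the block partitioning, shapes, and function name are assumptions, not the authors' implementation.

    # Hypothetical sketch of the batched Schur-complement step, not the paper's code.
    import torch

    def batched_schur_complement(A_ii, A_ib, A_bi, A_bb):
        """Compute S = A_bb - A_bi @ A_ii^{-1} @ A_ib for a batch of subdomains.

        A_ii: (B, n_i, n_i) interior-interior blocks
        A_ib: (B, n_i, n_b) interior-boundary blocks
        A_bi: (B, n_b, n_i) boundary-interior blocks
        A_bb: (B, n_b, n_b) boundary-boundary blocks
        Returns the (B, n_b, n_b) Dirichlet-to-Neumann matrices.
        """
        # One batched dense solve replaces many independent small sparse solves.
        X = torch.linalg.solve(A_ii, A_ib)   # (B, n_i, n_b)
        return A_bb - A_bi @ X               # (B, n_b, n_b)

    # Toy usage: 64 subdomains, each with 32 interior and 8 boundary nodes.
    B, n_i, n_b = 64, 32, 8
    A_ii = torch.randn(B, n_i, n_i) + n_i * torch.eye(n_i)  # diagonally dominant
    A_ib = torch.randn(B, n_i, n_b)
    A_bi = A_ib.transpose(1, 2).clone()                      # symmetric blocks
    A_bb = torch.randn(B, n_b, n_b) + n_b * torch.eye(n_b)
    S = batched_schur_complement(A_ii, A_ib, A_bi, A_bb)     # differentiable end to end

In this picture, the recursive merging mentioned in the abstract would repeatedly pair up such Schur complements and eliminate their shared interface nodes, so the dense blocks stay small while the batch dimension keeps the GPU saturated.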

Lay Summary:

We propose a very fast method for solving sparse linear systems, qualifying sparse solvers as efficient, reliable differentiable modules in neural architectures. It can be used to solve partial differential equations or to perform generalized deconvolution.
