Poster
Fast Tensor Completion via Approximate Richardson Iteration
Mehrdad Ghadiri · Matthew Fahrbach · Yunbum Kook · Ali Jadbabaie
West Exhibition Hall B2-B3 #W-520
We study tensor completion (TC) through the lens of low-rank tensor decomposition (TD). Many TD algorithms use fast alternating minimization methods to solve highly structured linear regression problems at each step (e.g., for CP, Tucker, and tensor-train decompositions). However, this algebraic structure is often lost in TC regression problems, making direct extensions unclear. This work proposes a novel lifting method for approximately solving TC regression problems using structured TD regression algorithms as black-box subroutines, enabling sublinear-time methods. We analyze the convergence rate of our algorithm, which is based on approximate Richardson iteration, and our empirical study shows that it can be 100x faster than direct methods for CP completion on real-world tensors.
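To make the lifting step concrete, consider the simplest masked least-squares instance of a TC regression problem, \(\min_x \|S(Ax - b)\|_2\), where \(S\) selects the observed entries; the notation here is illustrative and not taken from the paper. Lifting imputes the unobserved entries of the target with the current prediction and hands the resulting fully observed problem to the fast structured solver:

\[
x_{k+1} = \operatorname*{argmin}_x \|Ax - t_k\|_2,
\qquad
t_k = S^\top S\, b + (I - S^\top S)\, A x_k,
\]

which, after expanding the normal equations, is exactly a preconditioned Richardson step

\[
x_{k+1} = x_k + (A^\top A)^{-1} A^\top S^\top S\, (b - A x_k)
\]

for the masked system \(A^\top S^\top S A\, x = A^\top S^\top S\, b\). Because \(S^\top S \preceq I\), the iteration contracts whenever the masked problem is well posed, and each iterate only requires a solve against the fully structured operator \(A\).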
We tackle the challenge of filling in missing entries of large, multi‑dimensional data arrays, such as predicting unknown pixels in a video or missing ratings in a recommendation system, by leveraging techniques that decompose the data into small components. Extremely fast and efficient algorithms exist for fully observed data because they exploit its inherent algebraic structure, but that structure is lost when some entries are missing. Our work introduces a “lifting” trick that recovers this structure, enabling us to use these fast algorithms as black‑box subroutines. In this way, we decompose the data into small building blocks and accurately recover the missing entries. On real‑world datasets, our approach runs up to 100x faster than standard techniques, potentially accelerating applications from medical imaging to machine‑learning pipelines.
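As an illustration only, the following NumPy sketch instantiates this lifting loop for a generic masked least-squares problem, standing in np.linalg.lstsq for the fast structured TD regression solver that the paper uses as its black box; the function name, synthetic data, and iteration count are hypothetical, not the paper's implementation.

    import numpy as np

    def lifted_richardson(A, b, observed, num_iters=50):
        """Approximately solve min_x ||(A x - b)[observed]||_2 by lifting.

        Each step fills the unobserved target entries with the current
        prediction A x, then calls a fully observed least-squares solver
        as a black box (np.linalg.lstsq here; in the TD setting this would
        be a fast structured regression routine).
        """
        x = np.zeros(A.shape[1])
        for _ in range(num_iters):
            t = A @ x                  # impute every entry with the prediction
            t[observed] = b[observed]  # keep the real data where observed
            x, *_ = np.linalg.lstsq(A, t, rcond=None)  # black-box full solve
        return x

    # Hypothetical usage on synthetic data with roughly half the entries observed.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 10))
    b = A @ rng.standard_normal(10)
    observed = rng.random(200) < 0.5
    x_hat = lifted_richardson(A, b, observed)
    print(np.linalg.norm(A[observed] @ x_hat - b[observed]))  # near zero

The imputation form and the Richardson form above are the same update written two ways; the point is that every per-iteration solve touches only the fully observed, structured operator, which is the property the black-box reduction described in the abstract exploits.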