Poster

Low-Rank Tensor Transitions (LoRT) for Transferable Tensor Regression

Andong Wang · Yuning Qiu · Zhong Jin · Guoxu Zhou · Qibin Zhao

East Exhibition Hall A-B #E-1409
Wed 16 Jul 11 a.m. PDT — 1:30 p.m. PDT

Abstract:

Tensor regression is a powerful tool for analyzing complex multi-dimensional data in fields such as neuroimaging and spatiotemporal analysis, but its effectiveness is often hindered by insufficient sample sizes. To overcome this limitation, we adopt a transfer learning strategy that leverages knowledge from related source tasks to improve performance in data-scarce target tasks. This approach, however, introduces additional challenges, including model shifts, covariate shifts, and decentralized data management. We propose the Low-Rank Tensor Transitions (LoRT) framework, which incorporates a novel fusion regularizer and a two-step refinement to enable robust adaptation while preserving low-tubal-rank structure. To support decentralized scenarios, we extend LoRT to D-LoRT, a distributed variant that maintains statistical efficiency with minimal communication overhead. Theoretical analysis and experiments on tensor regression tasks, including tensor compressed sensing and tensor completion, validate the robustness and versatility of the proposed methods. These findings highlight the potential of LoRT for tensor regression in settings with limited data and complex distributional structures.
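To make the two-step idea concrete, below is a minimal NumPy sketch of a transfer procedure in the spirit of LoRT, assuming a scalar-response model y = <X, B> + noise with a third-order coefficient tensor B of low tubal rank (enforced via the t-SVD). The function names, the squared-distance stand-in for the fusion regularizer, the step sizes, and the synthetic data are all illustrative assumptions, not the paper's exact formulation.

import numpy as np

def tubal_rank_truncate(B, r):
    # Project a third-order tensor onto tubal rank <= r via the t-SVD:
    # FFT along mode 3, truncated SVD of each frontal slice, inverse FFT.
    Bf = np.fft.fft(B, axis=2)
    out = np.empty_like(Bf)
    for k in range(B.shape[2]):
        U, s, Vh = np.linalg.svd(Bf[:, :, k], full_matrices=False)
        out[:, :, k] = (U[:, :r] * s[:r]) @ Vh[:r, :]
    return np.real(np.fft.ifft(out, axis=2))

def fit_low_tubal_rank(X, y, r, B0, anchor=None, lam=0.0, lr=0.05, iters=300):
    # Projected gradient descent on 0.5 * mean squared error. If an anchor
    # tensor is given, a quadratic penalty lam/2 * ||B - anchor||^2 pulls the
    # estimate toward it -- an illustrative stand-in for the fusion regularizer.
    B = B0.copy()
    n = len(y)
    for _ in range(iters):
        resid = np.tensordot(X, B, axes=3) - y          # <X_i, B> - y_i
        grad = np.tensordot(resid, X, axes=(0, 0)) / n  # gradient of the loss
        if anchor is not None:
            grad += lam * (B - anchor)
        B = tubal_rank_truncate(B - lr * grad, r)       # keep low tubal rank
    return B

# Synthetic example: a large source task and a data-scarce target task
# sharing the same low-tubal-rank coefficient tensor.
rng = np.random.default_rng(0)
shape, r = (6, 6, 4), 2
B_true = tubal_rank_truncate(rng.normal(size=shape), r)
X_src = rng.normal(size=(400, *shape))
y_src = np.tensordot(X_src, B_true, axes=3) + 0.01 * rng.normal(size=400)
X_tgt = rng.normal(size=(30, *shape))
y_tgt = np.tensordot(X_tgt, B_true, axes=3) + 0.01 * rng.normal(size=30)

# Step 1: estimate a shared low-tubal-rank tensor from the larger source task.
B_shared = fit_low_tubal_rank(X_src, y_src, r, np.zeros(shape))
# Step 2: refine on the scarce target data while staying near the shared estimate.
B_hat = fit_low_tubal_rank(X_tgt, y_tgt, r, B_shared, anchor=B_shared, lam=0.5)
print(np.linalg.norm(B_hat - B_true) / np.linalg.norm(B_true))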

Lay Summary:

When data are organized in complex formats like videos or medical scans, they naturally form a structure called a tensor, a multi-dimensional array. Making predictions from such data usually requires many labeled examples, which are often expensive or hard to collect. In this work, we explore whether information from other related datasets (called source tasks) can help improve learning on a new dataset (the target task) that has very limited data.

We propose a method called Low-Rank Tensor Transitions (LoRT), which tries to find shared structure across tasks using low-rank assumptions, a way of summarizing complex data with a few key components. LoRT works in two stages: it first finds patterns shared between tasks, then it adjusts the result to better fit the new task. We also develop a version for decentralized settings, where raw data cannot be shared and only model parameters are communicated; a brief sketch of this idea follows below.

This is an early step toward making tensor-based learning more practical in data-scarce environments. While many challenges remain, such as reducing computational costs and extending to more complex settings, we hope this work provides a foundation for future exploration.
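To illustrate the decentralized variant, here is a short sketch that reuses the helpers from the code above. Each source site fits its own low-tubal-rank estimate locally, and only the fitted coefficient tensors (never the raw data) are communicated; the one-shot averaging rule used here is an assumption for illustration, not the paper's exact D-LoRT protocol.

def d_lort_sketch(source_sites, target_site, r, shape, lam=0.5):
    # source_sites: list of (X, y) pairs held at separate sites.
    # target_site: the (X, y) pair at the data-scarce target site.
    # Only fitted coefficient tensors cross site boundaries, never raw data.
    local = [fit_low_tubal_rank(X, y, r, np.zeros(shape)) for X, y in source_sites]
    B_shared = tubal_rank_truncate(np.mean(local, axis=0), r)  # aggregate parameters
    X_t, y_t = target_site
    return fit_low_tubal_rank(X_t, y_t, r, B_shared, anchor=B_shared, lam=lam)

# Usage with the synthetic data above:
# B_hat = d_lort_sketch([(X_src, y_src)], (X_tgt, y_tgt), r, shape)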
