

Poster

Unbiased Recommender Learning from Implicit Feedback via Weakly Supervised Learning

Eric Wang · Zhichao Chen · Haotian Wang · Yanchao Tan · Licheng Pan · Tianqiao Liu · Xu Chen · Haoxuan Li · Zhouchen Lin

West Exhibition Hall B2-B3 #W-403
Thu 17 Jul 11 a.m. PDT — 1:30 p.m. PDT

Abstract:

Implicit feedback recommendation is challenged by the absence of the negative feedback essential for effective model training. Existing methods often resort to negative sampling, which treats unlabeled interactions as negative samples. This assumption risks misclassifying potential positive samples within the unlabeled data, thereby undermining model performance. To address this issue, we introduce PURL, a model-agnostic framework that reframes implicit feedback recommendation as a weakly supervised learning task, eliminating the need for negative samples. However, its unbiasedness hinges on accurate estimation of the class prior. To meet this challenge, we propose Progressive Proximal Transport (PPT), which estimates the class prior by minimizing the proximal transport cost between positive and unlabeled samples. Experiments on three real-world datasets validate the efficacy of PURL in terms of improved recommendation quality. Code is available at https://github.com/HowardZJU/weakrec.
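The abstract's claim that unbiasedness hinges on the class prior can be made concrete with a standard positive-unlabeled (PU) risk estimator, the general family of weakly supervised objectives the paper builds on. The sketch below is illustrative only and is not taken from the PURL codebase: the function names (`unbiased_pu_risk`, `sigmoid_loss`) and the Gaussian toy data are assumptions for demonstration. It shows how a risk over positives and unlabeled samples, weighted by the class prior, recovers the ordinary positive-negative risk in expectation without ever labeling negatives.

```python
import numpy as np

def sigmoid_loss(scores, y):
    # Smooth surrogate loss: l(g(x), y) = sigmoid(-y * g(x)).
    return 1.0 / (1.0 + np.exp(y * scores))

def unbiased_pu_risk(scores_pos, scores_unl, class_prior, loss=sigmoid_loss):
    """Classic unbiased PU risk estimate (illustrative, not PURL itself).

    R(g) = pi * E_p[l(g,+1)] - pi * E_p[l(g,-1)] + E_u[l(g,-1)],
    where pi is the class prior P(y=+1). The unlabeled term stands in for
    the missing negative data; its bias is cancelled by the -pi * E_p[l(g,-1)]
    correction, which is why a wrong pi breaks unbiasedness.
    """
    pi = class_prior
    return (pi * loss(scores_pos, +1).mean()
            - pi * loss(scores_pos, -1).mean()
            + loss(scores_unl, -1).mean())

if __name__ == "__main__":
    # Toy check: unlabeled data is a pi-mixture of the two classes,
    # so the PU estimate should match the supervised PN risk closely.
    rng = np.random.default_rng(0)
    pi, n = 0.4, 200_000
    pos = rng.normal(1.0, 1.0, n)       # scores g(x) on positive samples
    neg = rng.normal(-1.0, 1.0, n)      # scores on (hidden) negative samples
    mask = rng.random(n) < pi
    unl = np.where(mask, rng.normal(1.0, 1.0, n), rng.normal(-1.0, 1.0, n))

    pn_risk = pi * sigmoid_loss(pos, +1).mean() + (1 - pi) * sigmoid_loss(neg, -1).mean()
    pu_risk = unbiased_pu_risk(pos, unl, pi)
    print(pn_risk, pu_risk)
```

Plugging in a misspecified prior shifts the unlabeled correction term and biases the estimate, which is precisely the failure mode the paper's PPT prior estimator is designed to avoid.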

Lay Summary:

This paper formulates implicit feedback recommendation as a weakly supervised learning problem, obtaining an unbiased positive-negative recommender without the need for negative feedback.
