

Poster
in
Workshop: CODEML: Championing Open-source DEvelopment in Machine Learning

Vulnerability of Text-Matching in ML/AI Conference Reviewer Assignments to Collusions

Jhih-Yi Hsieh · Aditi Raghunathan · Nihar Shah

Fri 18 Jul 2:15 p.m. PDT — 3 p.m. PDT

Abstract:

OpenReview is an open-source conference-management platform that supports many aspects of peer review and is widely used by top-tier AI/ML conferences. These conferences use automated algorithms on OpenReview to assign reviewers to paper submissions based on two factors: (1) reviewers' interests, indicated by their paper bids, and (2) domain expertise, inferred from the similarity between the text of their prior publications and the submitted manuscripts. A major threat to this process is collusion rings, in which groups of researchers manipulate the assignment process to review each other's papers positively, regardless of the papers' actual quality. Most existing countermeasures target bid manipulation, implicitly assuming the text-similarity component is secure. We demonstrate that, even without bidding, colluding authors and reviewers can exploit the text-matching component on OpenReview to get assigned to their target papers. Our results reveal specific vulnerabilities in the reviewer assignment system, and we offer suggestions to enhance its robustness.
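To make the attack surface concrete, the sketch below shows a toy version of the text-similarity affinity score the abstract refers to: a TF-IDF cosine similarity between a reviewer's prior publications and a submission. This is purely illustrative and is not the method OpenReview actually uses (production systems rely on trained language-model embeddings); all names here are hypothetical.

```python
# Toy illustration (NOT OpenReview's actual algorithm): TF-IDF cosine
# similarity between a reviewer's past-paper text and submission text.
import math
from collections import Counter


def tfidf_vectors(docs):
    """Map each tokenized document to a sparse {term: tf-idf weight} dict."""
    n = len(docs)
    df = Counter()                      # document frequency of each term
    for doc in docs:
        df.update(set(doc))
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        # Smoothed idf so terms appearing in every document still get weight.
        vecs.append({t: tf[t] * math.log(1 + n / df[t]) for t in tf})
    return vecs


def cosine(u, v):
    """Cosine similarity between two sparse vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0


# Hypothetical example: one reviewer profile and two submissions.
reviewer = "adversarial robustness deep learning".split()
paper_a = "adversarial robustness neural networks".split()
paper_b = "graph theory combinatorics".split()

vec_r, vec_a, vec_b = tfidf_vectors([reviewer, paper_a, paper_b])
affinity_a = cosine(vec_r, vec_a)   # shares terms with the reviewer -> positive
affinity_b = cosine(vec_r, vec_b)   # no shared terms -> zero affinity
```

Because the affinity is driven entirely by word overlap in this toy model, a colluding author who knows (or can guess) a target reviewer's vocabulary can inflate their affinity by seeding the submission with matching terms, which is the kind of manipulation the paper studies against the real, more sophisticated matching models.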
