

Poster

Position: The Artificial Intelligence and Machine Learning Community Should Adopt a More Transparent and Regulated Peer Review Process

Jing Yang

East Exhibition Hall A-B #E-501
[ Project Page ]
Thu 17 Jul 11 a.m. PDT — 1:30 p.m. PDT

Abstract:

The rapid growth of submissions to top-tier Artificial Intelligence (AI) and Machine Learning (ML) conferences has prompted many venues to transition from closed to open review platforms. Some have fully embraced open peer review, allowing public visibility throughout the process, while others adopt hybrid approaches, such as releasing reviews only after final decisions or keeping reviews private despite using open peer review systems. In this work, we analyze the strengths and limitations of these models, highlighting the growing community interest in transparent peer review. To support this discussion, we examine insights from Paper Copilot (papercopilot.com), a website launched two years ago to aggregate and analyze AI/ML conference data while engaging a global audience. The site has attracted over 200,000 early-career researchers, particularly those aged 18–34, from 177 countries, many of whom are actively engaged in the peer review process. Drawing on our findings, this position paper advocates for a more transparent, open, and well-regulated peer review process, aiming to foster greater community involvement and propel advancements in the field.

Lay Summary:

The way research papers are reviewed at top AI and machine learning conferences has a big impact on what ideas are shared and recognized. But the current review system is facing serious problems: there are too many papers, reviews are often hidden from public view, and decisions can feel inconsistent or unfair, especially to early-career researchers.

To better understand and improve this process, we created Paper Copilot, a website that tracks and visualizes how papers are reviewed across conferences. Since its launch, over 200,000 people from 177 countries, mostly young researchers, have used it. We found that conferences with more open and transparent review systems attract much more community interest and trust. These systems also lead to more thoughtful discussions between reviewers and authors.

Our research shows that making the review process more open, while still protecting privacy, could lead to fairer, more rigorous evaluations. We argue that the AI research community should adopt more consistent and transparent peer review practices, not just to improve fairness, but to better serve the global community pushing this field forward.
