Position: The AI Conference Peer Review Crisis Demands Author Feedback and Reviewer Rewards

Abstract
The peer review process at major artificial intelligence (AI) conferences faces unprecedented challenges with the surge of paper submissions (exceeding 10,000 submissions per venue), accompanied by growing concerns over review quality and reviewer responsibility. This position paper argues for transforming the traditional one-way review system into a bi-directional feedback loop in which authors evaluate review quality and reviewers earn formal accreditation, creating an accountability framework that promotes a sustainable, high-quality peer review system. The current review system can be viewed as an interaction among three parties: the authors, the reviewers, and the system (i.e., the conference), and we posit that all three parties share responsibility for the current problems. However, issues with authors can only be addressed through policy enforcement and detection tools, and ethical concerns can only be corrected through self-reflection. As such, this paper focuses on reforming reviewer accountability with systematic rewards through two key mechanisms: (1) a two-stage bi-directional review system that allows authors to evaluate reviews while minimizing retaliatory behavior, and (2) a systematic reviewer reward system that incentivizes quality reviewing. We ask for the community's strong interest in these problems and in the reforms needed to enhance the peer review process.
Publisher
International Machine Learning Society (IMLS)
Issue Date
2025-07-15
Language
English
Citation
Forty-Second International Conference on Machine Learning (ICML 2025)
URI
http://hdl.handle.net/10203/339155
Appears in Collection
EE-Conference Papers (Conference Papers)
Files in This Item
kim25am.pdf (2.59 MB)