The ACM SIGSOFT Improving Paper and Peer Review Quality Initiative

In May 2019, ACM SIGSOFT launched an initiative to improve the quality of research papers and peer reviews at software engineering venues. The initiative is co-chaired by Paul Ralph (Dalhousie University) and Romain Robbes (Free University of Bozen-Bolzano). It has two main components: method-specific reviewing standards and recommendations for improving review processes.

Reviewing Standards

Most journals and conferences provide general guidelines to authors and reviewers. For example, the ICSE 2020 reviewing guidelines emphasize soundness, significance, verifiability, and presentation. While general guidelines are a step in the right direction, a "one-size-fits-all" approach does not account for the wide variety of methodologies used in software engineering research, some of which may be unfamiliar to reviewers.

The initiative is developing methodology-specific reviewing standards: brief, public documents that describe expectations for a certain kind of study. A reviewing standard for controlled experiments might say something like "if reporting statistical significance, also report measures of effect size, if available."
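As a concrete (hypothetical) illustration of what that expectation asks of authors, the following Python sketch reports a t-test's p-value alongside Cohen's d as a measure of effect size. The data, variable names, and the use of scipy are assumptions for illustration only; they are not part of any actual standard.

    # Hypothetical example: report effect size (Cohen's d) alongside
    # statistical significance, as a controlled-experiment standard might ask.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    treatment = rng.normal(loc=12.0, scale=3.0, size=30)  # e.g., task times with a new tool
    control = rng.normal(loc=14.0, scale=3.0, size=30)    # e.g., task times without it

    t_stat, p_value = stats.ttest_ind(treatment, control)

    # Cohen's d using a pooled standard deviation (equal group sizes).
    pooled_sd = np.sqrt((treatment.var(ddof=1) + control.var(ddof=1)) / 2)
    cohens_d = (treatment.mean() - control.mean()) / pooled_sd

    print(f"t = {t_stat:.2f}, p = {p_value:.3f}, Cohen's d = {cohens_d:.2f}")

A reviewer applying such a standard would simply check that the paper reports both numbers, rather than the p-value alone.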

The main benefits of reviewing standards are:

  1. focusing reviewing on research methods
  2. improving review quality
  3. improving agreement among reviewers
  4. communicating a venue's expectations to authors
  5. making the review process fairer and more transparent to authors
  6. separating philosophical debates from the evaluation of a specific paper

Reviewing standards have been successful in other research communities (e.g., the British Journal of Pharmacology, ACM SIGPLAN). They do not micromanage style, and they are phrased to discourage an inflexible, box-ticking style of reviewing; justified deviations from a standard are often beneficial.

Drafts of the reviewing standards are in development. When they are ready, they will be released for public consultation.

Review Process Recommendations

The goal here is to improve the review processes of conferences and journals. Many venues have already tried different review processes. Gathering evidence of the impact of these variations, clearly highlighting their pros and cons, and making recommendations would be valuable for steering committees and editorial boards.

Some of the challenges the initiative is considering are:

  • Low acceptance rates
  • Low-quality reviews
  • Long review times (especially at journals)
  • High reviewer workloads
  • The difficulty of finding appropriate reviewers, and matching reviewers with papers
  • Dealing with unethical or ethically controversial research
  • Publication bias

Some of the review process variations the initiative is investigating are:

  • Masking: single blind, double blind, triple blind, no blind, reverse blind (authors anonymous, reviewers known), etc.
  • Phases: one-phase, two-phase, multi-phase, rolling deadlines, etc.
  • Open review: submissions, reviews and discussions are public
  • Mechanisms for enforcing review standards
  • Discussion, moderation, and appeal mechanisms
  • Roles: reviewer, moderator, discussant, associate editor, managing editor, etc.
  • Acceptance models: voting, champion, consensus, etc.

Summary

The initiative seeks to improve the quality of papers and peer reviews by clarifying what constitutes methodologically sound research and by making review processes more effective and efficient. Everyone in the software engineering research community will have an opportunity to read and respond to the initiative's recommendations. However, if you would like to take a more active role in shaping the reviewing standards or process recommendations, please contact the co-chairs.