Peer review lies at the core of modern academic quality control – and yet reviewing itself remains a rather informal practice that varies greatly across fields, research communities and reviewer demographics. The lack of quality assurance, standardization and training in peer review jeopardizes quality control, causing publication delays and the dissemination of spurious results. The recent trend towards openness in scientific publishing and evaluation – manifested in the growing popularity of preprint servers, open access journals and public discussion platforms – makes the need for high-quality peer reviewing even more pronounced.
Despite advances in scientific communication and the availability of new digital modes of interaction, peer reviewing has changed little over the past decades: a classic peer review is an unstructured essay of arbitrary length and thoroughness, sometimes accompanied by a numerical score. We combine existing best practices from peer reviewing, discourse theory and annotation-based collaboration to advance the peer reviewing of scientific manuscripts – and develop a dedicated writing assistance tool: PEER.
Unlike traditional essay-style reviewing, PEER builds upon the informal annotation (1) and commenting (2) that accompany the reading of scientific manuscripts, and guides reviewers towards authoring comprehensive yet concise review reports based on the annotations they make and on the reviewing schemata provided by the organizers of the reviewing campaign (4). To make manuscript assessment more efficient, we introduce assistance models (3) that use natural language processing to help users perform routine reviewing operations without biasing their evaluation.
- Prof. Dr. Iryna Gurevych (Principal Investigator)
- Soumya Sarkar
- Ilia Kuznetsov
This project is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation).