This article has Open Peer Review reports available.
Recruitment of reviewers is becoming harder at some journals: a test of the influence of reviewer fatigue at six journals in ecology and evolution
© The Author(s) 2017
Received: 9 December 2016
Accepted: 4 February 2017
Published: 8 March 2017
The original article was published in Research Integrity and Peer Review 2016 1:14
It is commonly reported by editors that it has become harder to recruit reviewers for peer review and that this is because individuals are being asked to review too often and are experiencing reviewer fatigue. However, evidence supporting these arguments is largely anecdotal.
We examine responses of individuals to review invitations for six journals in ecology and evolution. The proportion of invitations that lead to a submitted review has been decreasing steadily over 13 years (2003–2015) for four of the six journals examined, with a cumulative effect that has been quite substantial (average decline from 56% of review invitations generating a review in 2003 to just 37% in 2015). The likelihood that an invitee agrees to review declines significantly with the number of invitations they receive in a year. However, the average number of invitations being sent to prospective reviewers and the proportion of individuals being invited more than once per year has not changed much over these 13 years, despite substantial increases in the total number of review invitations being sent by these journals—the reviewer base has expanded concomitant with this growth in review requests.
The proportion of review invitations that lead to a review being submitted has been declining steadily for four of the six journals examined here, but reviewer fatigue is not likely the primary explanation for this decline.
Keywords: Peer review, Reviewers, Reviewer fatigue, Scholarly journals
The process of peer review serves two primary purposes—reviewers advise editors on which papers to include in their journal, and they provide constructive feedback to help authors improve the quality of their research and papers. The success of the peer review system relies on the willingness of the research community to review manuscripts, work that is usually unpaid. The research community depends on individuals who volunteer their time for peer review, yet few direct rewards exist at the individual level to encourage reviewing. Given the tremendous growth in submissions that many journals are experiencing, it is unsurprising that many editors report that it is getting harder to recruit reviewers for manuscripts. There is a common perception that reviewers are increasingly being asked to review too often (certainly more than in the past) and are thus experiencing reviewer fatigue ([4, 5]; but see [6]), but there is little published evidence to support this.
In a recent analysis of peer review at five ecology journals, Albert et al. examined how often invitations sent to prospective reviewers lead to a submitted review and tested whether the number of review invitations prospective reviewers receive had been increasing over a 7–8-year period. They found that the proportion of review requests leading to a completed review declined over this period for four of the journals, though not substantially, and that there was no evidence of a decline at the fifth journal. They also found that the number of review requests sent to an average reviewer had not increased substantially over the period of their study.
Here, we extend the analyses of Albert et al. to additional years (13 years instead of 8) and to two additional high-impact-factor journals (Evolution and Methods in Ecology and Evolution; 2015 impact factors are >4.0 for all six journals). Also, unknown to Albert et al., the dataset available online for the four journals of the British Ecological Society [8, 9] contains errors that influenced some of their results (but not their main conclusions). We thus analyze a newly compiled dataset for these journals and present updated and corrected figures.
The decline in reviews received per invitation sent for these four journals (Functional Ecology, Journal of Animal Ecology, Journal of Applied Ecology, and Journal of Ecology) is driven primarily by a decline in the proportion of invitees who agree to review when they respond to the invitation (blue line in Fig. 1); this decline was significant for all journals except Evolution (Fig. 1; χ²₁ > 3.9, P < 0.05 for all except Evolution, for which χ²₁ = 0.0, P = 0.99). For the four journals with the steepest declines, the proportion of respondents agreeing to review dropped from 66% in 2003 to just 46% in 2015 (averaged across journals). Also contributing is a small decline in the proportion of invitees who responded to the invitation email (red line in Fig. 1), though this too varied among journals: the proportion of invitees who responded declined significantly over time for J Animal Ecology, J Applied Ecology, and J Ecology (χ²₁ > 29.2, P < 0.001) but not for the others (χ²₁ < 2.1, P > 0.15); response rates actually increased slightly but significantly over the few years for which we have data for Methods in Ecology and Evolution.
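To make the reported test statistics concrete, the following sketch works through a Pearson chi-square test of independence on a 2×2 table by hand. The counts are hypothetical, scaled to match the quoted averages (66% of respondents agreeing in 2003 versus 46% in 2015); they are not the journals' actual data, and the function name is our own.

```python
# Pearson chi-square test for a 2x2 contingency table, written out by hand
# so the arithmetic behind a reported chi-square(1) statistic is visible.

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (1 df, no continuity correction)
    for the table [[a, b], [c, d]]."""
    n = a + b + c + d
    # Shortcut form: (ad - bc)^2 * n / (product of row and column totals)
    return (a * d - b * c) ** 2 * n / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: agreed / declined among 1000 respondents per year,
# scaled to the averages quoted in the text (66% in 2003, 46% in 2015).
stat = chi2_2x2(660, 340,   # 2003: 66% of respondents agree
                460, 540)   # 2015: 46% of respondents agree

print(f"chi-square(1) = {stat:.1f}")  # far above the 0.05 critical value, 3.84
```

With proportions this far apart and samples this large, the statistic greatly exceeds the 3.84 critical value; the per-journal statistics reported above are smaller because they come from year-by-year trend tests on the real counts.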
However, the means and the patterns in Fig. 2b are difficult to interpret because most individuals are invited just once in any given year (i.e., the median invitations per individual is just 1 for all journals in all years). We thus examined the proportion of individuals invited more than once in any given year. As in the above analysis, there was a general decline over time for J Animal Ecology and J Applied Ecology, but no change over time for the remaining journals. This is likely because, despite the need to invite substantially increasing numbers of individuals over time at each of these journals, the journals are broadening their reviewer populations rather than increasing the burden per individual reviewer. J Applied Ecology and Functional Ecology have the most diverse reviewer pools (inviting only 14.4 and 16.3% of individuals more than once per year, averaged over years), and J Ecology and Evolution have the least diverse reviewer pools (28.3 and 27.5% of their invitees are invited more than once). On average, individual journals invite only 21% of individuals more than once per year, 5.5% more than twice, 1.2% more than three times, and 0.6% more than four times. Across the five journals of the British Ecological Society, which share a common reviewer database (all journals presented here except Evolution), most reviewers are invited only one time (across all journals) in any given year—only 32% of reviewers are invited more than once within any calendar year, 12.8% more than twice, 5.7% more than three times, and 2.6% more than four times.
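The bookkeeping behind these percentages is simple: tally invitations per (year, reviewer) pair and count how many invitees appear more than once. A minimal sketch, using invented records and assumed field layout (the journals' actual databases are not structured this way):

```python
# Count how often each reviewer is invited within a calendar year and
# report the share of invitees contacted more than once. The invitation
# records below are invented for illustration.
from collections import Counter

invitations = [
    (2015, "r1"), (2015, "r1"),               # r1 invited twice
    (2015, "r2"),
    (2015, "r3"), (2015, "r3"), (2015, "r3"),  # r3 invited three times
    (2015, "r4"), (2015, "r5"),
]

per_reviewer = Counter(invitations)  # (year, reviewer_id) -> invitation count
invited = len(per_reviewer)          # distinct invitees that year
repeat = sum(1 for n in per_reviewer.values() if n > 1)

print(f"{repeat}/{invited} invitees invited more than once")  # 2/5
```

Note that because the median count is 1 in every journal and year, means are dominated by a small tail of heavily invited individuals, which is why the proportion invited more than once is the more interpretable summary.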
Lastly, Albert et al. highlight “discrepancies” that they cannot reconcile for reviewer responses at Functional Ecology between their re-analysis of data from Fox et al. [10, 11] (available at Dryad, datadryad.com) and their re-analysis of data from Petchey et al. [8, 9] (also available at Dryad). They find good agreement between the two datasets from 2007 to 2010 but not prior to 2007 (see Figure 4 in Albert et al.). There are at least three factors producing the observed discrepancies. (1) The Fox et al. [10, 11] data include only standard research papers, whereas the Petchey et al. [8, 9] data include editorials, reviews, and other non-standard papers. (2) The Fox et al. [10, 11] study treated invitation responses for which the invitee did not respond to the review request as missing data (because “no response” was not consistently recorded until 2007). This did not impact the results of Fox et al. because that analysis examined each step of the reviewer recruitment process separately and excluded pre-2007 data from analysis of variables that could be affected by the missing data. Accounting for reviewer non-responses (as done here in Fig. 1) leads to substantially improved agreement between the two analyses. (3) Unbeknown to Albert et al., the dataset of Petchey et al. [8, 9] double counts review invitations for some journals and some years if a revision of the manuscript was submitted (at least in part because ScholarOne Manuscripts [previously Manuscript Central] automatically listed individuals who reviewed an original version as reviewers on a submitted revision and counted them as invited in the report of invited reviewers, whether or not they were actually invited). This double counting had little effect on most results presented in Albert et al., but it did inflate the estimated number of invitations sent to each reviewer in the early years of their dataset for journals that had enabled this automatic reviewer selection on revisions (see Fig. 2 for corrected numbers without the double counting). Importantly, none of these dataset problems influence the main conclusions of Albert et al., though analysis of the expanded dataset presented in this commentary shows that editor success in recruiting reviewers has declined more substantially, at least at four of the journals examined here, than Albert et al. estimated.
In summary, we find that the proportion of invitees who submit a review has been decreasing slowly but steadily for four of the six journals examined here and that the cumulative effect over 13 years has been quite substantial for these journals. Why two of these journals (Evolution and Methods in Ecology and Evolution), plus a third journal examined by Albert et al. (Molecular Ecology), have not experienced a similar decline is unclear. It could be due to differences in editorial practices at these journals: although editors select the reviewers to be invited, three of the journals with the most significant declines in the proportion of reviewers agreeing to review—Functional Ecology, Journal of Ecology, and Journal of Applied Ecology (but not Journal of Animal Ecology)—have editorial assistants who contact prospective reviewers on behalf of editors, whereas editors themselves send the reviewer invitations at both journals that showed no significant decline, Methods in Ecology and Evolution and Evolution. Alternatively, it could reflect differences in the communities these journals serve: those with consistent reviewer response rates over time publish more evolutionarily and genetically focused research, whereas those with substantial declines are more ecological in scope. We also find, like Vines et al. and Albert et al., that the average number of invitations sent to prospective reviewers has not changed much over the 13 years we examine, at least within these journals, suggesting that reviewer fatigue is not the primary reason for the decline in the proportion of invitees who agree to review.
We do see evidence that reviewer fatigue may occur at the per-individual level; individuals who receive the most invitations are the most likely to decline the invitation, but too few individuals receive enough invitations (at least within journals) for this to be a primary explanation for the declining proportion of individuals who agree to review.
Taken together, the data presented here and in Albert et al. suggest that whatever is driving the decrease in reviewer agreement rate is external to the peer review system: even though submissions have increased, review requests per person have not (Fig. 2), at least not within journals. It may be that the rising number of journals or the increase in journal rejection rates (at least at top-tier journals, which causes papers to cascade among journals) is increasing per-individual reviewing workload, but these effects should be functionally similar to individual journals receiving more submissions. Moreover, recent modelling work by Kovanis et al. showed that there is normally sufficient capacity within the reviewer pool to cope with increased submissions. One potential cause of the decline in agreement rate is growing demands on researchers’ time from other areas, such as administration and grant writing; this is supported by the steady decline in application success rates at, for example, the National Science Foundation [https://www.nsf.gov/nsb/publications/2014/nsb1432.pdf]. Another possible cause is apparent growing dissatisfaction with commercial publishers. It would therefore be interesting to repeat this analysis for a non-profit open access publisher such as the Public Library of Science (PLoS).
The British Ecological Society and the Society for the Study of Evolution provided permission to access their databases for this peer review analysis. Katie Simmons assisted with extracting the reviewer database for Evolution, and Emilie Aimé, Christopher Grieves, Kate Harrison, Simon Hoggart, Jennifer Meyer, Erika Newton, Alice Plane, James Ross, and Leila Walker extracted the reviewer databases for the BES journals. Emilie Aimé, Jacqueline Dillard, Daphne Fairbairn, Allyssa Kilanowski, Jennifer Meyer, Josiah Ritchey, and Boris Sauterey provided helpful comments on earlier drafts of this commentary.
This project was funded in part by the Kentucky Agricultural Experiment Station, the British Ecological Society and the Society for the Study of Evolution (all to CF).
Availability of data and materials
The datasets analyzed during the current study are available from the corresponding author on reasonable request and at Dryad (DataDryad.org), doi:10.5061/dryad.2jr8p.
CF acquired, proofed and analyzed the data, produced figures, and wrote most of the paper. AA and TV critically commented on the ideas and results presented here and wrote parts of the manuscript. All authors read and approved the final manuscript.
CF is an Executive Editor of one of the journals (Functional Ecology) evaluated here. TV is employed by Axios Review, an external peer review provider.
Ethics approval
This work was approved by the University of Kentucky’s Institutional Review Board (IRB 15–0890).
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
- Hochberg ME, Chase JM, Gotelli NJ, Hastings A, Naeem S. The tragedy of the reviewer commons. Ecol Lett. 2009;12(1):2–4. http://dx.doi.org/10.1111/j.1461-0248.2008.01276.x.
- Fox CW, Burns CS. The relationship between manuscript title structure and success: editorial decisions and citation performance for an ecological journal. Ecol Evol. 2015;5(10):1970–80. http://dx.doi.org/10.1002/ece3.1480.
- Baveye PC, Trevors JT. How can we encourage peer-reviewing? Water Air Soil Pollut. 2011;214:1–3. http://dx.doi.org/10.1007/s11270-010-0355-7.
- Fox J, Petchey OL. Pubcreds: fixing the peer review process by “privatizing” the reviewer commons. Bull Ecol Soc Am. 2010;91:325–33. http://dx.doi.org/10.1890/0012-9623-91.3.325.
- Breuning M, Backstrom J, Brannon J, Gross BI, Widmeier M. Reviewer fatigue? Why scholars decline to review their peers’ work. PS: Political Science and Politics. 2015;48:595–600.
- Vines T, Rieseberg L, Smith H. No crisis in supply of peer reviewers. Nature. 2010;468:1041. http://dx.doi.org/10.1038/4681041a.
- Albert AY, Gow JL, Cobra A, Vines TH. Is it becoming harder to secure reviewers for peer review? A test with data from five ecology journals. Res Integrity Peer Rev. 2016;1(1):14. http://dx.doi.org/10.1186/s41073-016-0022-7.
- Petchey OL, Fox JW, Haddon L. Imbalance in individual researcher’s peer review activities quantified for four British Ecological Society journals, 2003–2010. PLoS One. 2014;9:e92896. http://dx.doi.org/10.1371/journal.pone.0092896.
- Petchey OL, Fox JW, Haddon L. Data from: Imbalance in individual researcher’s peer review activities quantified for four British Ecological Society journals, 2003–2010. Dryad Digital Repository. 2014. http://dx.doi.org/10.5061/dryad.36r69.
- Fox CW, Burns CS, Meyer JA. Editor and reviewer gender influence the peer review process but not peer review outcomes at an ecology journal. Funct Ecol. 2016;30:140–53. http://dx.doi.org/10.1111/1365-2435.12529.
- Fox CW, Burns CS, Meyer JA. Data from: Editor and reviewer gender influence the peer review process but not peer review outcomes at an ecology journal. Dryad Digital Repository. 2016. http://dx.doi.org/10.5061/dryad.5090r.
- Kovanis M, Porcher R, Ravaud P, Trinquart L. The global burden of journal peer review in the biomedical literature: strong imbalance in the collective enterprise. PLoS ONE. 2016;11(11):e0166387. http://dx.doi.org/10.1371/journal.pone.0166387.