
A billion-dollar donation: estimating the cost of researchers’ time spent on peer review



The amount and value of researchers’ peer review work is critical for academia and journal publishing. However, this labor is under-recognized, its magnitude is unknown, and alternative ways of organizing peer review labor are rarely considered.


Using publicly available data, we provide an estimate of researchers’ time and the salary-based contribution to the journal peer review system.


We found that the total time reviewers globally worked on peer reviews was over 130 million hours in 2020, equivalent to almost 15 thousand years. The estimated monetary value of the time US-based reviewers spent on reviews was over 1.5 billion USD in 2020. For China-based reviewers, the estimate is over 600 million USD, and for UK-based reviewers, close to 400 million USD.


By design, our results are very likely to be under-estimates as they reflect only a portion of the total number of journals worldwide. The numbers highlight the enormous amount of work and time that researchers provide to the publication system, and the importance of considering alternative ways of structuring, and paying for, peer review. We foster this process by discussing some alternative models that aim to boost the benefits of peer review, thus improving its cost-benefit ratio.



One of the main products of the academic publication system, the journal article, is a co-production of researchers and publishers. Researchers provide value not only by doing the research and writing up the results as a manuscript, but also by serving as peer reviewers. Publishers provide services of selection, screening, and dissemination of articles, including ensuring (proper) meta-data indexing in databases. Although several careful estimates are available regarding the cost of academic publishing e.g., [1], one aspect these estimates often neglect is the cost of peer reviews [2]. Our aim was to provide a timely estimation of reviewers’ contribution to the publication system in terms of time and financial value and discuss the implications.

In their peer reviewer role, scientists and other researchers provide comments to improve other researchers’ manuscripts and judge their quality. They offer their time and highly specialized knowledge to provide a detailed evaluation and suggestions for improvement of manuscripts. On average, a reviewer completes 4.73 reviews per year,Footnote 1 yet, according to Publons,Footnote 2 certain reviewers complete over a thousand reviews a year. This contribution takes considerable time from other academic work. In the biomedical domain alone, the time devoted to peer review in 2015 was estimated to be 63.4 M hours [3].

A manuscript typically receives multiple rounds of reviews before acceptance, and each round typically involves two or more researchers as peer reviewers. Peer review work is rarely formally recognized or directly financially compensated in the journal system (exceptions include some medical journals that pay for statistical reviewers and some finance journals that pay for quick referee reports). Most universities seem to expect academics to do review work as part of their research or scholarly service mission, although we know of none with an explicit policy about how much time they should spend on it.

While peer review work is a critical element of academic publishing, we found only a single estimate of its financial value, which was from 2007. Then, when the global number of published articles was not even half of the present volume, rough estimates indicated that if reviewers were paid for their time, the bill would be on the order of £1.9bn [4].

As a facet of the research process that currently requires labor by multiple human experts, reviewing contributes to a cost disease situation for science. “Cost disease” [5] refers to the observation that while the cost of many products and services has steadily decreased over the last two hundred years, this has not happened for those whose amount of labor time per unit has not changed. This can make some products and services increasingly expensive relative to everything else in society, as has occurred, for example, for live classical music concerts. This may also be the fate of scholarly publication, unless reviewing is made more efficient.

The fairness and efficiency of the traditional peer review system has recently become a highly-debated topic [6, 7]. In this paper, we extend this discussion by providing an update on the estimate of researchers’ time and the salary-based contribution to the peer-review system. We used publicly available data for our calculations. Our approximation is almost certainly an underestimate because not only do we choose conservative values of parameters, but for the total number of academic articles, we rely on a database (Dimensions) that does not purport to include every journal in the world. We discuss the implications of our estimates and identify a number of alternative models for better utilizing research time in peer review.

Methods and results

To estimate the time and the salary-based monetary value of the peer review conducted for journals in a single year, we had to estimate the number of peer reviews per year, the average time spent per review, and the hourly labor cost of academics. In case of uncertainty, we used conservative estimates for our parameters, therefore, the true values are likely to be higher.


The total number of articles is obviously a critical input for our calculation. Unfortunately, there appears to be no database available that includes all the academic articles published in the entire world. Ulrich’s Periodicals Database may list the largest number of journals: querying their database for “journals” or “conference proceedings” and “Refereed / Peer-reviewed” yielded 99,753 entries. However, Ulrich’s does not indicate the number of articles that these entities publish. Out of the available databases that do report the number of articles, we chose to use Dimensions’ dataset, which collects articles from 87,000 scholarly journals, many more than Scopus (~ 20,000) or Web of Science (~ 14,000) [8].

Number of peer reviews per year

Only estimates exist for how many peer reviews associated with journals occur each year. Publons [9] estimated that the 2.9 million articles indexed in the Web of Science in 2016 required 13.7 million reviews. To calculate the number of reviews relevant to 2020, we used the formula used by Publons [9], shown below. In that formula, a review is what one researcher does in one round of a review process.Footnote 3 For submissions that are ultimately accepted by the journal submitted to, the Publons formula assumes that on average there are two reviews in the first round and one in the second round; for submissions that are ultimately rejected (excluding desk rejections), the formula assumes an average of two reviews, both in the first round. Publons’ assumptions are based on their general knowledge of the industry but no specific data. Note, however, that these are most likely underestimates, as not all peer reviews are included in our estimation. For example, the review work done by some editors when handling a manuscript is not usually indexed in Publons, and a single written review report may be signed by several researchers.

Publons estimated the acceptance rate for peer-reviewed submissions to be 55%. That is, 45% of manuscripts that are not desk rejected are, after one or more rounds of review, ultimately rejected. Before including Publons’ estimates in our calculations, we evaluated them based on other available information. The Thomson Reuters publishing company reported numbers regarding the submissions, acceptances, and rejections that occurred at their ScholarOne journal management system for the period 2005–2010 [10]. In agreement with other sources [11, 12], it showed that mean acceptance rates have apparently declined [10]: the proportion of submissions that were eventually accepted by the journal the manuscript was submitted to was 0.40 in 2005, 0.37 in 2010, and 0.35 in 2011 [11, 12].

We did not find estimates of acceptance rates for the last several years, but if we assume that the decline described by Thomson Reuters [10] continued to some extent and that the present mean acceptance rate at journals is 0.30, then we can arrive at Publons’ figures. However, for the final numbers, we also need to estimate the rate of desk rejections. Although the rate of desk rejections likely varies substantially across journals (e.g., 22–26% at PLOS ONEFootnote 4), referenced values [13, 14] and journal publisher estimatesFootnote 5 lead us to estimate this value at around 0.45.

The above estimates imply that, on average, every 100 submissions to a journal comprise 30 that are accepted after one or more rounds of peer review, 45 that are desk rejected, and 25 that are rejected after review. Thus, among submissions sent out for review, 55% (30 / (30 + 25)) are ultimately accepted. That is, the articles published represent 55% of all reviewed submissions, while the remaining 45% of reviewed submissions are rejected. These values are undoubtedly speculative, but they are consistent with Publons’ estimates.

Therefore, to estimate the number of peer reviews per year, we used Publons’ [9] formula:

$$ Nr\ of\ {submissions}_{accepted}\times Average\ Nr\ of\ {reviews}_{accepted}+ Nr\ of\ {submissions}_{rejected}\times Average\ Nr\ of\ {reviews}_{rejected} $$

To obtain these values, we had to estimate the number of peer reviews performed for articles in 2020. For that, we used the numbers provided by the Dimensions portal. Both the free and the subscription versions of Dimensions currently provide separate numbers for articles, chapters, proceedings, preprints, monographs, and edited books. For the sake of simplicity, our estimate is confined to articles.

The total number of articles published in 2020 according to the Dimensions database is 4,701,988. Assuming that this sum reflects the 55% acceptance rate of reviewed submissions, the number of reviewed but rejected submissions (the remaining 45% of all reviewed submissions) is estimated globally to be 4,701,988 / 55 × 45 = 3,847,081. Based on these calculations, the total number of peer reviews for submitted articles in 2020 is 4,701,988 × 3 + 3,847,081 × 2 = 21,800,126.
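As a sanity check, the estimate above can be reproduced in a few lines; the input figures are those quoted in the text, and the variable names are ours.

```python
# Re-computation of the Publons-style estimate of journal peer reviews in 2020,
# using the figures quoted in the text.
articles_2020 = 4_701_988        # articles indexed by Dimensions for 2020
reviews_per_accepted = 3         # two first-round reviews + one second-round review
reviews_per_rejected = 2         # two first-round reviews

# Reviewed-but-rejected submissions are the remaining 45% of reviewed submissions.
rejected_after_review = round(articles_2020 / 55 * 45)

total_reviews = (articles_2020 * reviews_per_accepted
                 + rejected_after_review * reviews_per_rejected)
print(f"{rejected_after_review:,}")  # 3,847,081
print(f"{total_reviews:,}")          # 21,800,126
```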

Time spent on reviews

Several reports exist for the average time a reviewer spends when reviewing a manuscript. All of these are unfortunately based on subjective reports by reviewers rather than an objective measure. The only thing resembling an objective indication we found was in the Publons dashboard, which as of 6 Aug 2021 indicated that the average length of reviews in their database across all fields is approximately 390 words. This suggests that the average review has substantive content beyond a yes/no verdict, but it cannot be converted to a time estimate. A 2009 survey responded to by 3597 randomly selected reviewers indicated that the reported average time spent on the last review was 6 h [15], and a 2016 survey reported a median reviewing time of 5 h [9]. Another survey, in 2008, found that the average reported time spent reviewing was 8.5 h [16]. To be noted, it is likely that second rounds of review do not take as long as the first. To be conservative (and considering the tendency of people to overestimate how much time they work), we will use 6 h as the average time reviewers spend on each review.

Based on our estimate of the number of reviews and hours spent on a review, we estimate that in 2020 reviewers spent 21,800,126 × 6 h = 130,800,756 h on reviewing. This is equivalent to 14,932 years (at 365 days a year and 24 h of labor per day) (Fig. 1).
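The conversion from reviews to hours and years is a direct multiplication; a minimal re-computation:

```python
total_reviews = 21_800_126        # estimated peer reviews in 2020 (see above)
hours_per_review = 6              # conservative average time per review

total_hours = total_reviews * hours_per_review
years = total_hours / (24 * 365)  # continuous 24-hours-a-day labor-years

print(f"{total_hours:,}")  # 130,800,756
print(round(years))        # 14,932
```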

Fig. 1

Overview of our calculation estimates of time spent on reviewing for scholarly articles in 2020. The number of published articles was obtained from the Dimensions.ai database; all other numbers are assumptions informed by previous literature

Hourly wage of reviewers

To estimate the monetary value of the time reviewers spend on reviews, we multiplied reviewers’ average hourly wage by the time they spend reviewing. Note that some scholars consider their reviewing work to be volunteer work rather than part of their professional duties [5], but here we use their wages as an estimate of the value of this time. No data seem to have been reported about the wages of journal reviewers; therefore, we require some further assumptions. We assumed that the distribution of the countries in which reviewers work is similar to the distribution of countries in the production of articles. In other words, researchers in countries that produce more articles also perform more reviews, while countries that produce few articles also do proportionally few reviews. Given the English-language and geographically Anglophone-centered concentration of scientific journals, we suspect that people in English-speaking countries are called on as reviewers at a rate perhaps even higher than their proportion as authors [17]. Because such countries have higher wages than most others, our assumption of reviewer countries being proportional to author countries is conservative for total cost. Accordingly, we calculated the country contributions to the global article production by summing the total number of publications for all countries as listed in the Dimensions database and computing the proportion of articles produced by each country.

Based on the results of the Peer Review Survey [15], and to keep the model simple and conservative, we assumed that reviewing is conducted almost entirely by people employed at academic workplaces such as universities and research institutes, and that junior and senior researchers participate in reviewing in a ratio of 1:1. Therefore, to calculate the hourly reviewer wage in a given country we used:

$$ \frac{average\ annual\ postdoc\ salary+ average\ annual\ full\ professor\ salary}{2\times annual\ labor\ hours} $$

This yields a figure of $69.25 per hour for the U.S., $57.21 for the UK, and $33.26 for China (Table 1).
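The wage formula above can be sketched as a function; the salary and labor-hour inputs in the example call are placeholders for illustration only, not the actual Table 1 values.

```python
def hourly_reviewer_wage(postdoc_salary, professor_salary, annual_labor_hours):
    """Average hourly wage assuming a 1:1 junior-to-senior reviewer ratio."""
    return (postdoc_salary + professor_salary) / (2 * annual_labor_hours)

# Illustrative (hypothetical) inputs only:
print(round(hourly_reviewer_wage(50_000, 150_000, 1_800), 2))  # 55.56
```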

Table 1 Estimating the Value of Review Labor for the US, China, and the UK for 2020

Value of reviewing labor

We estimated the value of reviewing by multiplying the calculated hourly reviewer wage in a country by the number of estimated reviews in that country and the time spent preparing one review. We calculated each country’s share of the global number of reviews by using the country’s proportional contribution to the global production of articles. In this calculation, each article produced by an international collaboration counts as one for each contributing country. This yielded the following monetary values of reviewing labor for the three countries that contributed the most articles in 2020: 1.5 billion USD for the U.S., 626 million USD for China, and 391 million USD for the UK (Table 1). An Excel file including the formula used for the estimation in the present paper with interchangeable parameters is available at the OSF page of the project
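The per-country calculation reduces to a single product; a hedged sketch, where the share and wage arguments are placeholders for the Table 1 inputs rather than actual data:

```python
def review_labor_value(article_share, hourly_wage,
                       total_reviews=21_800_126, hours_per_review=6):
    """Monetary value of a country's review labor, assuming its share of
    global reviews equals its share of global article output."""
    return article_share * total_reviews * hours_per_review * hourly_wage

# Illustrative call with hypothetical inputs (10% article share, $50/h):
print(review_labor_value(0.10, 50.0))  # approx. 654 million USD
```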


The high price of scientific publishing receives a lot of attention, but the focus is usually on journal subscription fees, article processing charges, and associated publisher costs such as typesetting, indexing, and manuscript tracking systems, e.g., [1]. The cost of peer review is typically not included. Here, we found that the total time reviewers worked on peer reviews was over 130 million hours in 2020, equivalent to almost 15 thousand years. The estimated monetary value of the time US-based reviewers spent on writing reviews was over 1.5 billion USD in 2020. For China-based reviewers, the estimate is over 600 million USD, and for UK-based reviewers, close to 400 million USD. These are only rough estimates, but they help our understanding of the enormous amount of work and time that researchers provide to the publication system. While reviewers predominantly do not get paid to conduct reviews, their time is likely paid for by universities and research institutes.

Without major reforms, it seems unlikely that reviewing will become more economical, relative to other costs associated with publishing. One reason is that while technology improvements may automate or partially automate some aspects of publishing, peer review likely cannot be automated as easily. However, checking some of the details that reviewers currently verify might soon be automated.

A second issue is that while there is much discussion of how to reduce other costs associated with publishing, little attention has been devoted to reducing the cost of peer review, even though it would likely be the costliest component of the system if reviewers were paid for reviews rather than conducting them on salaried time. After a long period of above-inflation subscription journal price increases, funders have attempted to put downward pressure on prices through initiatives such as Plan S [18] and through funding separate publishing infrastructures, e.g., Wellcome Open Research and Gates Open Research [19, 20]. However, because publishers do not have to pay for peer review, putting pressure on publishers may have no effect on review labor costs. Peer review labor stands out as a large cost that is not being addressed systematically by publishers. In another domain, research funders have worked on reducing the cost of paid grant review, for example by shortening proposals or reducing the need for consensus meetings after individual assessments [21].

Here we will discuss two reforms to reduce the cost of peer review. The first would decrease the amount of labor needed per published article by reducing redundancy in reviews. The second would make better use of less-trained reviewers. Finally, we will briefly mention a few other reforms that may not reduce cost per review but would boost the benefits of peer review, thus improving the cost-benefit ratio.

Reducing redundancy in peer review

Many manuscripts get reviewed at multiple journals, which is a major inefficiency, e.g., [22]. Because this is a multiplicative factor, it exacerbates the effect of the rising global number of submissions. While improvements in the manuscript between submissions mean that the reviewing process is not entirely redundant, typically at least some of the assessment being done is duplicated. Based on survey data [23], we conservatively estimated that, on average, a manuscript is submitted to two journals before acceptance (including the accepting journal). In other words, each accepted article has one rejection and resubmission behind it. Should the reviews of a previous submission be available to the journal of the new submission, reviewing time could be substantially reduced (presuming that the quality of review does not differ between journals – and it very likely does), but unfortunately this is not common practice. If we assume that the “passed on” or open reviews would reduce the requirements by one review per manuscript, then approx. 28 M hours (of our 130 M hour total estimate) could be saved annually. In the US alone, it would mean a savings of approx. 297 M USD of work.Footnote 6
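Under the assumptions stated above (one 6-hour review saved per accepted article, and footnote 6’s US figures), the savings estimate can be reproduced as follows; the variable names are ours.

```python
accepted_articles_2020 = 4_701_988   # Dimensions article count for 2020
hours_per_review = 6

# One review saved per accepted article:
hours_saved = accepted_articles_2020 * hours_per_review
print(f"{hours_saved / 1e6:.1f} M hours")  # 28.2 M hours

# US portion (footnote 6): 715,645 US-authored accepted articles at $69.25/h.
us_accepted_articles = 715_645
us_hourly_wage = 69.25
us_savings = us_accepted_articles * hours_per_review * us_hourly_wage
print(f"{us_savings / 1e6:.0f} M USD")     # 297 M USD
```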

Some savings of this kind have already begun. Several publishers or journals share reviews across their own journals (PLOS, Nature [24]), which is sometimes known as “cascading peer review” [25]. Some journals openly publish the reviews they solicit (e.g., eLife; Meta-psychology; PLOS; Research Integrity and Peer Review; for a recent review see [26]), although typically not when the manuscript is rejected (Meta-psychology is an exception, and eLife will publish the reviews after a rejected manuscript is accepted somewhere else). The Review Commons initiative allows authors to have their preprint reviewed, with those reviews used by journal publishers including EMBO and PLoS [27]. Similarly, Peer Community In solicits reviews of preprints that can then be used by journals, including over 70 that have indicated they will consider such reviews.

A decline in the amount of research conducted, or the number of manuscripts this research results in, would reduce the amount of peer review labor needed. The number of articles being published has been growing rapidly for many decades [28, 29]. Some of this may be due to salami slicing (publishing thinner papers, but more of them), but this is not necessarily the case; one study found that researchers’ individual publication rate has not increased [30] when normalized by the number of authors per paper, suggesting that authors are collaborating more to boost their publication count rather than publishing thinner papers. Hence, the increase in publication volume may be more a result of the steady growth of the global economy and, with it, support for researchers. Quality rather than publication quantity has, however, recently begun to be emphasized more by some funders and national evaluation schemes, and this may moderate the rate of growth in the number of publications and potentially the peer review burden [31].

Improving the allocation of review labor

Broadening and deepening the reviewer pool

Journal editors disproportionately request reviews from senior researchers, whose time is arguably the most valuable. One reason for this is that senior researchers on average show up more often in literature searches; editors also favor people they are familiar with, and younger researchers have had less time to become familiar to editors [32]. With the same individuals tapped more and more, the proportion of requests that they can agree to falls [33], which is likely one reason that editors have to issue increasing numbers of requests to review (a contributor to increasing costs which we did not calculate). Journal peer review, therefore, takes longer and longer because the system fails to keep up with academia’s changing demographics [3]. Today, more women and minorities are doing academic research, and the contributions from countries such as China are growing rapidly. But many of these researchers don’t show up on the radar of the senior researchers, located disproportionately in North America and Europe, who edit journals. This can be addressed by various initiatives, such as appointing more diverse editors and encouraging junior researchers to sign up to databases that editors consult when they seek reviewers [34, 35].

A more substantial increase in efficiency might come from soliciting contributions to peer review from individuals with less expertise than traditionally has been expected. Journal editors traditionally look for world experts on a topic, whose labor is particularly costly in addition to being in short supply and in high demand. But perhaps contributions to peer review shouldn’t be confined only to those highly expert in a field. Evaluating a manuscript means considering multiple dimensions of the work and how it is presented. For some research areas, detailed checklists have been developed regarding all the information that should be reported in a manuscript. This provides a way to divide up the reviewing labor so that even students, after some training, can vet certain aspects of manuscripts. Thus, we are hopeful that after more meta-research on what is desired from peer review for particular research areas, parts of peer review can be done by people who are not experts in the very specific topic of a manuscript but can nonetheless be very capable at evaluating particular aspects of it (and, as mentioned above, automation can help with some tasks).

This process could also lead to greater specialization in peer review. For example, for manuscripts that report clinical trials, some people could be trained in evaluating the blinding protocol and resulting degree of success of blinding [36], and if they had the opportunity to evaluate that particular portion of many manuscripts, they would grow better at it and thus could evaluate more in a shorter time, reducing the number of hours of labor that need be paid for. To some extent, this specialization in peer review has already begun. As reporting standards for particular kinds of research have become more widespread (e.g., Consolidated Standards of Reporting Trials (CONSORT) for clinical trials, Animal Research: Reporting of In Vivo Experiments (ARRIVE) for animal research, and Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) for systematic reviews of randomized trialsFootnote 7), professional staff at some publishers have begun performing some checks for compliance with these standards. For example, staff at PLOS check all manuscripts on human subject research for a statement regarding compliance with the Declaration of Helsinki, and clinical trials research for a CONSORT statement. These staff presumably can do this job more efficiently, and do so for a lower salary, than an academic charged with peer reviewing every word of an entire manuscript. There are also some services (e.g., RIPETA,Footnote 8 PUBSUREFootnote 9) that automatically screen to-be-submitted manuscripts and provide reports on potential errors and instant feedback to the authors, while other products (e.g., AuthorONEFootnote 10) support publishers with automatic manuscript screening, including technical readiness checks, plagiarism checks, and checks for ethics statements.

Unlocking the value of reviews

Some reforms to peer review would not reduce the cost per review, but would increase the benefits per review, improving the cost-benefit ratio. One such reform is making reviews public instead of confidential. Under the currently dominant system of confidential peer review, only the authors, other reviewers, and editor of the manuscript have the opportunity to benefit from the content of the review.

When reviews are published openly, the expert judgments and information within them can benefit others. Reviews often provide reasons for caution about certain interpretations, connections to other literature, points about the weaknesses of the study design, and assessments of what the study means from the reviewer’s particular perspective. While those comments influence the revision of the manuscript, often they either don’t come through as discrete points or the revisions are made to avoid difficult issues, so that the issues need not be mentioned.

It is not uncommon for some of the points made in a review to also be applicable to other manuscripts. Some topics of research have common misconceptions that lead to certain mistakes or unfortunate choices in study design. Some of the experienced researchers that are typically called upon to do peer review can rapidly detect these issues, and pass on the “tips and tricks” that make for a rigorous study of a particular topic or that uses a particular technique. But because peer reviews are traditionally available only to the editor and authors of the reviewed study, this dissemination of knowledge happens only very slowly, much like the traditional apprenticeship system required for professions before the invention of the printing press. How much more productive would the scientific enterprise be if the information in peer reviews were unlocked? We should soon be able to get a better sense of this, as this is already being done by the journals that have begun publishing at least some of their peer reviews (e.g., Meta-psychology, eLife, the PLOS journals, F1000Research, Royal Society Open Science, Annals of Anatomy, Nature Communications, PeerJ [20]). It will be very difficult, however, to put a financial value on the benefits. Fortunately, there are also other reasons that suggest that such policies should be adopted, such as providing more information about the quality of published papers.

In some cases, performing a peer review can actually benefit the reviewer. In Publons’ 2018 reviewer survey, 33% of respondents indicated that one reason (they could choose two from a list of nine) they agreed to review manuscripts was to “Keep up-to-date with the latest research trends in my field” (p. 12 of [9]). If more of such people can be matched with a manuscript, reviewing becomes more of a “win-win”, with greater benefits accruing to the reviewer than may be typical in the current system. Better matching, then, would mean an increased return on the portion of an employer’s payment of a researcher’s salary that pays for peer review. The initiatives that broaden the reviewer pool beyond the usual senior researchers that editors are most likely to think of may have this effect.


A limitation of the present study is that it does not quantify academic editors’ labor, which is typically funded by universities, research institutes, or publishers and is integral to the peer review process. At prestige journals with high rejection rates, a substantial proportion of (associate) editors’ time is spent desk-rejecting articles, which could be considered wasteful, as rejected articles are eventually published somewhere else; this also requires additional work from authors to prepare the manuscripts and navigate different submission systems.

Additionally, our study’s limitations stem from the scarcity of available data. For example, today, no available database covers all scholarly journals and their articles. The rates of acceptance and rejection we used are approximate estimates. The average time spent on reviews likely depends strongly on the field and the length of the manuscript, and we do not know how representative the number we used is of all academia. We could not calculate the cost of review for journal articles and conference papers separately, although they might differ in this regard. The nationality and salary of reviewers are not published either; therefore, our calculations need to be treated with caution, as they rely on broad assumptions. Nevertheless, the aim of this study was to estimate only the magnitude of the cost of peer review, without the ambition to arrive at precise figures. We encourage publishers and other stakeholders to explore and openly share more information about peer review activities to foster a fairer and more efficient academic world.

Availability of data and materials

The public dataset supporting the conclusions of this article is available from the webpage.

An Excel file including the formula used for the estimation in the present paper with interchangeable parameters is available at the OSF page of the project


  1. Based on personal communication with the Publons team.


  3. Note that there are cases in which a single submitted review is prepared by more than one individual, but the formula used does not differentiate these cases from those in which a review is prepared by only one individual.



  6. (715,645 articles × 6 h) × $69.25

  7. For their collection, see





  1. Grossmann A, Brembs B. Assessing the size of the affordability problem in scholarly publishing [internet]. PeerJ preprints; 2019. Available from:

  2. Horbach SP, Halffman W. Innovating editorial practices: academic publishers at work. Res Integr Peer Rev. 2020;5(1):1–5.


  3. Kovanis M, Porcher R, Ravaud P, Trinquart L. The global burden of journal peer review in the biomedical literature: strong imbalance in the collective enterprise. PLoS One. 2016;11(11):e0166387.


  4. RIN. Activities, costs and funding flows in the scholarly communications system in the UK. Res Inf Netw [Internet]. 2008; Available from:

  5. Baumol WJ, Bowen WG. Performing arts-the economic dilemma: a study of problems common to theater, opera, music and dance. Gregg Revivals; 1993.

  6. Brainard J. The $450 question: should journals pay peer reviewers? Science. 2021.

  7. Smith R. Peer reviewers—time for mass rebellion? [Internet]. The BMJ. 2021 [cited 2021 Mar 17]. Available from:

  8. Singh VK, Singh P, Karmakar M, Leta J, Mayr P. The journal coverage of Web of Science, Scopus and Dimensions: a comparative analysis. Scientometrics. 2021;126(6):5113–42.


  9. Publons. 2018 Global State of Peer Review [Internet]. 2018 [cited 2020 Sep 8]. Available from:

  10. Thomson Reuters. Global publishing: changes in submission trends and the impact on scholarly publishers. White paper. Thomson Reuters; 2012.

  11. Björk B-C. Acceptance rates of scholarly peer-reviewed journals: a literature survey. Prof Inf [Internet]. 2019 Jul 27 [cited 2021 Mar 9];28(4). Available from:

  12. Sugimoto CR, Larivière V, Ni C, Cronin B. Journal acceptance rates: a cross-disciplinary analysis of variability and relationships with journal measures. J Informetr. 2013;7(4):897–906.


  13. Liguori EW, Tarabishy AE, Passerini K. Publishing entrepreneurship research: strategies for success distilled from a review of over 3,500 submissions. J Small Bus Manag. 2021;59(1):1–12.


  14. Shalvi S, Baas M, Handgraaf MJJ, De Dreu CKW. Write when hot — submit when not: seasonal bias in peer review or acceptance? Learn Publ. 2010;23(2):117–23.


  15. Sense About Science. Peer review survey 2009: Full report. 2009 [cited 2020 Sep 9]; Available from:

  16. Ware M. Peer review: benefits, perceptions and alternatives. London: Publishing Research Consortium; 2008.


  17. Vesper I. Peer reviewers unmasked: largest global survey reveals trends. Nature [Internet]. 2018 Sep 7 [cited 2021 Aug 4]; Available from:

  18. Wallace N. Open-access science funders announce price transparency rules for publishers. Science. 2020.

  19. Butler D. Wellcome Trust launches open-access publishing venture. Nat News [Internet]. 2016 [cited 2021 Mar 17]; Available from:

  20. Butler D. Gates Foundation announces open-access publishing venture. Nat News. 2017;543(7647):599.


  21. Shepherd J, Frampton GK, Pickett K, Wyatt JC. Peer review of health research funding proposals: a systematic map and systematic review of innovations for effectiveness and efficiency. PLoS One. 2018;13(5):e0196914.


  22. Schriger DL, Sinha R, Schroter S, Liu PY, Altman DG. From submission to publication: a retrospective review of the tables and figures in a cohort of randomized controlled trials submitted to the British Medical Journal. Ann Emerg Med. 2006;48(6):750–756.e21.


  23. Jiang Y, Lerrigo R, Ullah A, Alagappan M, Asch SM, Goodman SN, et al. The high resource impact of reformatting requirements for scientific papers. PLoS One. 2019;14(10):e0223976.


  24. Maunsell J. Neuroscience peer review consortium. J Neurosci. 2008;28(4):787.

  25. Barroga EF. Cascading peer review for open-access publishing. Eur Sci Ed. 2013;39(4):90–1.


  26. Wolfram D, Wang P, Park H. Open peer review: the current landscape and emerging models. 2019.

  27. New Policies on Preprints and Extended Scooping Protection [Internet]. Review Commons. [cited 2021 Aug 6]. Available from:

  28. Larsen P, Von Ins M. The rate of growth in scientific publication and the decline in coverage provided by science citation index. Scientometrics. 2010;84(3):575–603.


  29. de Solla Price DJ, Page T. Science since Babylon. Am J Phys. 1961;29(12):863–4.


  30. Fanelli D, Larivière V. Researchers’ individual publication rate has not increased in a century. PLoS One. 2016;11(3):e0149504.


  31. de Rijcke S, Wouters PF, Rushforth AD, Franssen TP, Hammarfelt B. Evaluation practices and effects of indicator use—a literature review. Res Eval. 2016;25(2):161–9.


  32. Garisto D. Diversifying peer review by adding junior scientists. Nat Index. 2020:777–84.

  33. Breuning M, Backstrom J, Brannon J, Gross BI, Widmeier M. Reviewer fatigue? Why scholars decline to review their peers’ work. PS Polit Sci Polit. 2015;48(4):595–600.


  34. Kirman CR, Simon TW, Hays SM. Science peer review for the 21st century: assessing scientific consensus for decision-making while managing conflict of interests, reviewer and process bias. Regul Toxicol Pharmacol. 2019;103:73–85.


  35. Heinemann MK, Gottardi R, Henning PT. “Select crowd review”: a new, innovative review modality for The Thoracic and Cardiovascular Surgeon. Thorac Cardiovasc Surg. 2021;

  36. Bang H, Flaherty SP, Kolahi J, Park J. Blinding assessment in clinical trials: a review of statistical methods and a proposal of blinding assessment protocol. Clin Res Regul Aff. 2010;27(2):42–51.




Acknowledgements

We are thankful to James Heathers and three anonymous reviewers for providing valuable feedback on an earlier version of the manuscript.


Funding

This study was not funded.

Author information




Contributions

Conceptualization: BA and BS. Formal analysis: BA and BS. Methodology: BA and BS. Writing - original draft preparation: BA, BS, and AOH. The author(s) read and approved the final manuscript.

Corresponding authors

Correspondence to Balazs Aczel or Barnabas Szaszi.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Aczel, B., Szaszi, B. & Holcombe, A.O. A billion-dollar donation: estimating the cost of researchers’ time spent on peer review. Res Integr Peer Rev 6, 14 (2021).
