Identifying Patterns and Motivations of ‘Mega’ Peer-Reviewers

Background. The demand for peer reviewers is disproportionate to the supply and availability of reviewers. Identifying the factors associated with peer review behaviour can allow for the development of solutions to manage the growing demand for peer reviewers. The objective of this research was to identify factors associated with completing a large number of peer reviews in a given year. Methods. A case-control study design was used to examine factors associated with individuals completing at least 100 peer reviews ('mega peer reviewers') from January 2018 to December 2018 as compared to a control group of peer reviewers completing between 1 and 18 peer reviews over the same time period. Data were provided by Publons, which offers a repository of peer reviewer activities in addition to tracking peer reviewer publications and research metrics. A series of independent sample t-tests and chi-square tests were conducted comparing characteristics (e.g., number of publications, number of citations, word count of peer review) of mega peer reviewers to the control group of reviewers. Results. A total of 1596 peer reviewers had data provided by Publons. A total of 396 mega peer reviewers and a random sample of 1200 control group reviewers were included. Both groups were composed of a greater number of males than females (mega peer reviewers = 92.4% male, control reviewers = 70.0% male). Mega peer reviewers demonstrated a significantly greater average number of total publications, citations, receipt of Publons awards, and a higher average h index as compared to the control group of reviewers (all p < .001). Alternatively, the control group had a significantly greater average number of words in their peer reviews (mean = 332.48, standard deviation [SD] = 346.30) as compared to mega peer reviewers (mean = 272.50, SD = 219.99). Conclusions. There is a sub-set of highly active peer reviewers that complete a large proportion of peer review activities.
These individuals demonstrate significantly different characteristics than reviewers completing a more typical number of peer reviews over a one-year period. Additional research that considers motivations associated with peer review behaviour should be conducted to help inform peer reviewing.


Introduction
Peer review involves a manuscript undergoing evaluation and scrutiny by experts in the same field of research. 1 The peer review system has been integrated into the scientific community for hundreds of years with the goal of validating academic work and improving the quality of published research. 2 When peer reviewers are aware of and in agreement with the expectations and responsibilities of reviewing an article, and editors incorporate feedback in a timely manner, peer review has the potential to result in valuable feedback for the authors and improve the quality and usability of research findings. 3

The sustainability of peer review relies on the availability and expertise of peer reviewers. Obtaining peer reviews that are high quality (i.e., useful for authors) is difficult for many journal editors. An inaugural Global State of Peer Review report, developed by Publons in collaboration with the Web of Science group (both owned by Clarivate Analytics), reported on (1) characteristics of peer reviewers, (2) efficiencies of the peer-review process, (3) quality of peer review, and (4) future considerations for peer review. Importantly, this report found an increasing demand for peer reviewers, disproportionate to the supply. 3 Certain characteristics were found to be associated with the completion of peer review activities. For example, variability between regions for peer reviewer activity exists, with individuals from the USA and China contributing the greatest number of peer reviews. 3 Regional variability in the incentives structure has been suggested as one factor that may partially account for these differences. The Publons' 2018 Global Review Survey included over 11,800 researchers and found overwhelming agreement (84.8% of participants) that greater recognition and formalized incentives for peer review would increase willingness to serve as a peer reviewer and would positively impact the efficiency of the peer-review process. 3 Traditional rewards exist, such as journal subscriptions, discounts for open-access publishing, and acknowledgements through public "thank you" lists. However, these rewards are inconsistently applied among journals and do not meet the preferred rewards and incentives (e.g., waiver of publication fees) and recognition of peer review (e.g., incorporated as part of the evaluation criteria for funding applications) being sought by researchers. 3-6 Findings from the Global State of Peer Review report include a summary of trends found among a large sample of peer reviewers and the current strain on the peer review system.
There remain substantial gaps in the understanding of the characteristics of individuals that agree to serve as peer reviewers and the quality of peer reviews. 3 Publons offers a repository where peer reviewers can document their peer review activities in addition to tracking publications and research metrics. 7 Anecdotally, we have observed some researchers that are highly active in peer reviewing (i.e., individuals completing at least 100 peer reviews annually; we refer to these individuals as 'mega peer reviewers') on the Publons website. A recent study also highlighted the unequal distribution of peer-reviewing tasks among small groups of researchers. 8 Understanding the factors that relate to mega peer reviewing behaviour and the quality of these peer reviews can inform the design of strategies and interventions to facilitate system- and individual-level changes. Identifying the factors associated with peer review behaviour can provide a basis for keeping pace with the growing demand for peer reviewers. As such, the objective of this research was to identify the factors associated with individuals who were highly active peer reviewers in a given year as compared to a control group of peer reviewers. Given the lack of research on mega peer reviewers, this was an exploratory project, and we did not form hypotheses.

Methods
The protocol for this study was registered within the Open Science Framework database (https://osf.io/vxdhf/?view_only=313fd05399664b94bc7a9042aa225be3) before data collection began.
This was a case-control study to retrospectively examine factors associated with mega peer reviewers as compared to a control group of peer reviewers. Mega reviewers were defined as individuals that completed peer reviews for 100 or more unique articles from January 2018 to December 2018. All aspects of this study were reported in accordance with the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guidelines to facilitate the complete and transparent reporting of this work. 9

Participants

We gathered information from the Publons database. Publons tracks and publicizes peer-reviewer activities for individuals that create an account and connect their research activities to their profile. Individuals can download their peer-review, author, and editor metrics, and this information can also be made public. Using the Publons database, two groups of individuals were of interest for this study: (1) mega peer reviewers, all individuals that completed peer reviews for 100 or more unique articles from January 2018 to December 2018, inclusive (case group) (i.e., individuals completing approximately two peer reviews every week), and (2) a control group of individuals completing at least one and fewer than 18 peer reviews over the same time period (i.e., individuals completing up to one peer review every three weeks). A random sample of controls was selected from the Publons database. Collected variables included reviewer characteristics [e.g., Publons awards (such as top reviewer in research areas or top-quality reviews based on editor-rated evaluations)] and review characteristics based on Publons data (i.e., number of unique manuscripts peer-reviewed in 2018, number of unique manuscripts reviewed each month, average number of words per review, and average number of words per review at the reviewer's institution). Sex was not available on Publons. As such, sex was estimated using the Genderize database (https://genderize.io/), which uses name data collected across countries to estimate the probability of a sex being associated with a given name.
Any sex that could not be estimated with more than 80% certainty was marked as missing data.
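The 80% rule amounts to a simple threshold over the probability field that Genderize returns for each name. A minimal sketch, assuming a Genderize-style response dictionary (the function name and the example responses below are illustrative, not taken from the study's data):

```python
def classify_sex(response, threshold=0.8):
    """Apply the study's certainty rule to a Genderize-style response.

    `response` mirrors the JSON shape returned by https://genderize.io/
    (fields: name, gender, probability, count). A predicted sex is kept
    only when its probability exceeds the threshold; otherwise the
    record is treated as missing (None), as in the study.
    """
    gender = response.get("gender")
    probability = response.get("probability", 0.0)
    if gender is None or probability <= threshold:
        return None  # marked as missing data
    return gender

# Hypothetical responses for illustration:
print(classify_sex({"name": "maria", "gender": "female", "probability": 0.98, "count": 400000}))
print(classify_sex({"name": "sasha", "gender": "male", "probability": 0.51, "count": 20000}))
```

The first call returns "female"; the second returns None, since 51% certainty does not clear the 80% bar.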

Sample Size Calculation
The mega peer-reviewer sample size was based on the number of peer reviewers on the Publons website that met our inclusion criteria (i.e., at least 100 peer reviews in 2018). For the control group, a sample size calculation was conducted based on the total number of reviewers that met the control group requirements (i.e., completing at least one and fewer than 18 peer reviews in 2018), using the standard deviation of the average word count estimated from preliminary Publons data. The sample size calculation was conducted in R (pwr package) for a two-sample t-test comparing mega peer reviewers and the control group. The pooled standard deviation was calculated, and a minimum sample size of 1167 was estimated (see Appendix 1). A 1:1 random sample of control group reviewers meeting this minimum was then selected.
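The same style of calculation can be approximated without the R pwr package using the standard normal approximation to the two-sample t-test. The sketch below is illustrative only: the effect size shown is a placeholder, not the study's actual effect size, which was derived from preliminary Publons word-count data and yielded the reported minimum of 1167 (pwr solves the exact noncentral-t version, so its answers differ slightly):

```python
import math
from statistics import NormalDist


def two_sample_n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sample t-test.

    `effect_size` is Cohen's d: the expected difference in mean review
    word count divided by the pooled standard deviation. Uses the
    normal-approximation formula n = 2 * ((z_{1-a/2} + z_{power}) / d)^2.
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)


# Illustrative: detecting a small standardized difference (d = 0.2)
# at 80% power requires roughly 400 reviewers per group.
print(two_sample_n_per_group(0.2))
```

Larger effect sizes drive the required n down sharply (d = 0.5 needs only about 63 per group), which is why the preliminary estimate of the word-count standard deviation mattered for fixing the control sample size.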

Data Analysis
Primary data analysis calculated descriptive characteristics of both samples of reviewers. The secondary data analysis involved conducting a logistic regression comparing mega peer reviewer characteristics to those of the control group, treating mega-reviewing as a binary outcome, with peer reviewer characteristics (i.e., sex, country of institution, and [...]) as predictors; given the exploratory nature of this study, no hypotheses were pre-specified.

Results

[...] and November (mean = 0.5, SD = 0.7). Mega peer reviewers (n = 396) completed a total of 54,953 peer reviews in 2018, as compared to the control group (n = 1200), which completed a total of 4862 peer reviews in 2018. Characteristics of mega peer reviewers and the control group can be found in Table 1.

Independent Samples T-Tests and Chi-Square
A series of independent sample t-tests were conducted comparing mega peer reviewers to the control group of reviewers (see Table 2). Mega peer reviewers had a significantly greater average number of total publications, publications in 2018, total citations, and citations in 2018, and a significantly higher average h index as compared to the control group of reviewers (all p < .05). The control group had a significantly greater average number of words in their peer reviews, as did reviewers at the control group's academic institutions, as compared to mega peer reviewers (all p < .05) (see Table 2).
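As a rough check, the word-count comparison can be reconstructed from the summary statistics reported in the abstract. The sketch below computes Welch's (unequal-variance) t statistic; the paper does not state whether a pooled or unequal-variance test was used, so this is an approximation for illustration, not a reproduction of the authors' exact analysis:

```python
import math


def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic for two independent groups from summary stats."""
    standard_error = math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)
    return (m1 - m2) / standard_error


# Word counts as reported in the abstract:
#   control reviewers: mean 332.48, SD 346.30, n = 1200
#   mega reviewers:    mean 272.50, SD 219.99, n = 396
t = welch_t(332.48, 346.30, 1200, 272.50, 219.99, 396)
print(round(t, 2))  # ≈ 4.02
```

A t statistic of about 4 with samples this large is consistent with the significant difference (p < .05) the paper reports for review word count.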
The continent of reviewers significantly differed. The majority of mega peer reviewers were from Asia (33.0%), Europe (36.8%), and North America (18.9%). A similar pattern was found among the control group of peer reviewers, with 41.1% from Europe, 25.8% from North America, and 21.3% from Asia. The remaining reviewers were from Australia (mega peer reviewers = 4.3%; control peer reviewers = 6.0%), South America (mega peer reviewers = 0.8%; control peer reviewers = 3.1%), and Africa (mega peer reviewers = 6.2%; control peer reviewers = 2.7%). Publons awards were significantly more prevalent among mega peer reviewers, with 88.1% of mega peer reviewers having received an award from Publons as compared to less than one percent of the control group reviewers (see Table 3).

Discussion
In the Publons database, 396 individuals peer reviewed at least 100 papers within a 12-month time frame. This represents a substantial time commitment among mega peer reviewers in completing a task that is often perceived as burdensome. 8 In addition to active peer reviewing, mega peer reviewers demonstrated a significantly greater average number of publications overall and within the one-year time frame reviewed. Mega peer reviewers also had significantly more citations overall and within the one-year time frame reviewed, greater receipt of Publons awards, and a higher average h index as compared to the control group of reviewers. Alternatively, the control group had a significantly greater average number of words in their peer reviews as compared to mega peer reviewers. Notably, mega peer reviewers were overwhelmingly male. This may align with previous research findings that demonstrate a gender bias among editors, whereby editors invite fewer female peer reviewers as compared to males. 11 It may also reflect female academics managing multiple responsibilities at work and at home, resulting in little extra bandwidth to peer review articles. 12 Mega peer reviewers' altruism and dedication to peer reviewing should be acknowledged. These peer reviewing activities substantially impact research, as the total number of articles reviewed in 2018 by mega peer reviewers was over 54,000. These articles were peer reviewed by 396 individuals, representing 11 times more peer reviews than the 1200 individuals in the control group completed. When considering the number of peer reviews completed by mega reviewers, it is possible that the level of detail provided to authors is less comprehensive compared to control peer reviewers. The word counts of reviews submitted by mega peer reviewers were significantly lower than those of the control group of reviewers and the average word count of colleagues at the same institution.
On the other hand, it is possible that mega peer reviewers are more direct in their comments and are able to provide a review that is equally helpful to authors as those of the control group of reviewers. Assessing the comments qualitatively and asking authors for feedback about the usefulness of reviewer comments will be important to better understand the impact of completing a substantial number of peer reviewing activities.
Both categories of peer reviewers provide approximately two thirds of a page of peer reviewer feedback per article, with mega peer reviewers using fewer words. The average word count of other reviewers from the same institutions as the control reviewers was approximately one and a half pages of text. Both groups of reviewers in this study provide less than a page of text for a review, which may be inadequate for providing constructive feedback on an entire manuscript. Assuming a brief opening paragraph to precis the research paper under peer review (i.e., providing the authors with a measure of face validity about the peer reviewer's understanding of the research report), followed by optimal reviewing with the help of a reporting guideline, 13 along with any specific journal reviewing guidance, it is not clear that all of this information can be conveyed in so few words. Additionally, peer reviews are most helpful to authors when they are evidence based, 14 which often necessitates citations and makes reviews even longer.
Publons Academy modules provide relevant knowledge to peer reviewers who are trainees, which could influence the quality and completeness of peer reviews. The need to prioritize depth over the number of completed peer reviews may need to be further emphasized in these modules when training reviewers. This emphasis on depth may also be relevant for incentive structures. Peer reviewing is often absent from incentive structures within academia; 8 however, certain institutions have started to incorporate peer reviewing activities into career advancement. 6 It is currently unknown whether mega peer reviewers are rewarded at their institutions for peer reviewing and whether other incentives contribute to mega peer reviewer behaviour. Future research that identifies qualitative and quantitative barriers and enablers associated with peer-reviewing behaviour can provide a basis for keeping pace with the growing demand for peer reviewers. It can also identify facilitators and barriers to producing high-quality peer reviews. Conducting a survey with mega reviewers and control reviewers to better understand the current study findings was a planned part of this work but has not yet been completed. To inform change, surveying editors and associate editors would also provide a more thorough understanding of how mega peer reviewers receive ongoing journal requests for peer reviews and why editors may frequently invite specific reviewers.
There are limitations that should be considered when interpreting our results. First, the data used were collected from researchers that have an account in the Publons database, which may result in selection bias. Second, the data were not collected for the purpose of this study, limiting the variables that were available. For example, career stage of peer reviewers is not collected within the Publons database, although h index may provide a proxy for this variable. Third, assessing the quality of peer reviews was also not possible based on the available data. Finally, consideration of geographical variability in peer reviewing activities was limited, due to few reviewers in either group being located in Australia, South America, or Africa.

Supplementary Files
This is a list of supplementary files associated with this preprint: Appendix1.docx