Biomedical journal speed and efficiency: a cross-sectional pilot survey of author experiences

Abstract

Background

Although the peer review process is believed to ensure scientific rigor, enhance research quality, and improve manuscript clarity, many investigators are concerned that the process is too slow, too expensive, too unreliable, and too static. In this feasibility study, we sought to survey corresponding authors of recently published clinical research studies on the speed and efficiency of the publication process.

Methods

We conducted a web-based survey of corresponding authors of a 20% random sample of clinical research studies published in MEDLINE-indexed journals with Ovid MEDLINE entry dates between December 1 and 15, 2016. The survey addressed perceived manuscript importance before first submission, approximate first submission and final acceptance dates, and the total numbers of journal submissions, external peer reviews, external peer reviewers, and revisions requested, as well as whether authors would have considered publicly sharing their manuscript on an online platform instead of submitting to a peer-reviewed journal.

Results

Of 1780 surveys distributed, 27 corresponding authors opted out or requested that we stop emailing them and 149 emails failed (e.g., emails that bounced, n = 64; returned with an out-of-office message, n = 70; or were changed/incorrect, n = 15), leaving 1604 potential respondents, of whom 337 completed the survey (21.0%). Respondents and non-respondents were similar with respect to study type and publication journals’ impact factor, although non-respondent authors had more publications (p = 0.03). Among respondents, the median impact factor of the publications’ journals was 2.7 (interquartile range (IQR), 2.0–3.6), and corresponding authors’ median h-index and number of publications were 9 (IQR, 3–20) and 27 (IQR, 10–77), respectively. The median times from first submission to journal acceptance and to publication were 5 months (IQR, 3–8) and 7 months (IQR, 5–12), respectively. Most respondents (62.0%, n = 209) rated the importance of their research as a 4 or 5 (on a 5-point scale) prior to submission. The median number of journal submissions was 1 (IQR, 1–2), of external peer reviews 1 (IQR, 1–2), of external peer reviewers 3 (IQR, 2–4), and of revisions requested 1 (IQR, 1–1). Sharing manuscripts on a public online platform, instead of submitting to a peer-reviewed journal, would have been considered by 55.2% (n = 186) of respondents.

Conclusion

Corresponding authors rated the potential importance of their research highly and reported requiring few manuscript submissions prior to journal acceptance, most commonly by lower impact factor journals.

Background

In the scientific community, there is enormous pressure to publish as often and as quickly as possible, preferably in the highest impact peer-reviewed journals. The publication of a scientific manuscript is often considered the final stage of the research process, initiated once a project is complete and ready for peer judgment; it involves manuscript preparation, peer review, revision, and in many cases, re-submission to multiple journals before acceptance. Although the peer review process is believed to ensure scientific rigor, enhance research quality, and improve manuscript clarity, many investigators are concerned that the process is too slow [1, 2], too expensive, too unreliable, and too static [3,4,5,6].

Although there is a growing interest among the scientific community in increasing value and reducing waste in biomedical research [7], little empirical data exist on author experiences when it comes to the speed and efficiency of the publication process. Previous evaluations have focused on author perspectives and experiences submitting their work to one particular journal [4, 6]. While it is possible to collect information on the peer review process from individual journals, including the average number of days from submission of a manuscript to first decision, these data do not reflect scientists’ perceptions of the broader publication process, which may involve submissions to multiple different journals and long perceived delays from completion of the work to final dissemination to the peer community of researchers.

In this feasibility study, our objective was to pilot a survey of corresponding authors of recently published clinical research studies in order to quantify the speed and efficiency of the broader publication process, including the history of prior submissions and reviews and the time from first submission to final publication. We were specifically interested in whether a diverse set of corresponding authors of clinical research studies would respond to an online survey in sufficient numbers.

Methods

We conducted a cross-sectional survey of a sample of investigators who published clinical research studies in MEDLINE-indexed biomedical journals with Ovid MEDLINE entry dates between December 1, 2016 and December 15, 2016. These entry dates were selected to help minimize any recall bias and to ensure that the author contact information provided in the manuscript was up-to-date. MEDLINE searches were conducted through Ovid (Ovid Technologies, New York). After mapping the tree for the term “Clinical Study,” we selected the terms “Clinical Trial,” “Observational Study,” and “Meta-Analysis.” Additional custom limits and specific search terms for observational studies are included in Additional file 1.

Study sample and design

We randomly selected a 20% sample of the 10,631 articles identified through the Ovid MEDLINE search. Two investigators (JDW, ACE) reviewed the titles and abstracts of all potential articles to exclude all non-clinical and non-research publications (e.g., letters, viewpoints).
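
As an illustration of this sampling step, the 20% random selection could be reproduced along the following lines in R; the data frame and column names below are hypothetical placeholders for the exported Ovid search results, not the authors’ actual code:

```r
# Minimal sketch of the 20% random sampling step (hypothetical data).
# In the study, the sampling frame was the 10,631 articles returned by
# the Ovid MEDLINE search; a toy frame of article IDs stands in for it.
set.seed(2016)                                      # for reproducibility
articles <- data.frame(id = sprintf("A%05d", 1:10631))
n_sample <- round(0.20 * nrow(articles))            # 20% of the frame (2126 articles)
sampled  <- articles[sample(nrow(articles), n_sample), , drop = FALSE]
nrow(sampled)                                       # articles passed to title/abstract screening
```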

Data extraction

Information cataloged routinely by Ovid MEDLINE was used to determine the journal name, author names, DOI, date and country of publication, and title. Based on the abstract and/or introduction, each eligible article was then classified as an observational or other study, clinical trial, or systematic review and/or meta-analysis (Table 1). The first corresponding author named in each article was identified for participation. When the same corresponding author was listed in multiple articles, we randomly selected one publication for inclusion. When full-text articles did not provide an email address for the corresponding author, additional articles listed in the Scopus database for the same corresponding author were reviewed. To determine the 2015 impact factor of each publication’s journal, four investigators (JDW, ACE, ADG, NS) independently searched the journal names for the eligible articles in InCites™ Journal Citation Reports. No information was recorded for journals without a 2015 impact factor. We then entered article titles into the Scopus database search to locate the author profile for each corresponding author. When articles were not indexed in Scopus, corresponding authors’ first and last names were entered into the Scopus author search. For each corresponding author, we then identified the h-index and total number of publications. For each included article, we used the full text to determine the date of first publication (online or print). All uncertainties were discussed in detail, and a third reviewer (JSR) resolved any remaining discrepancies.

Table 1 Study design categories

In January 2017, we sent all potential survey respondents an email describing the purpose of the study, requesting their participation, and providing a link to the survey. Authors were informed that the survey should take less than 5 min. Four follow-up requests were sent by email over the course of February 2017. Although the invitation to participate did not reference any specific study hypotheses, authors were informed that participation may provide information about the scientific process. Participation was completely voluntary, and participants were informed of an opportunity to win one of five $100 Amazon gift certificates. All internet-based responses were collected using a web-based survey platform (Qualtrics Labs, Provo, Utah, USA) [8].

Survey instrument development

The design of our eight-item survey was informed by previously published surveys [6, 8], a review of the literature on post-submission experiences, and discussion among the authors (JDW, ACE, HMK, JSR). The instrument was pretested by researchers independent of the study team and modified iteratively to improve clarity, face validity, and content validity.

Survey domains and dissemination

Perception of potential importance of manuscript

We asked authors to rate the potential importance (significance) of their manuscript as a contribution to the biomedical literature (on a 5-point scale).

Submission history

We asked authors to select the approximate dates (month and year) when they first submitted the manuscript to any journal and when the manuscript was accepted for publication. We asked authors to specify (approximately): the total number of (1) journal submissions prior to acceptance, (2) journals that sent the manuscript for external peer review, (3) external peer reviewers that reviewed the manuscript across all submissions to all journals, and (4) journals that requested revisions to the manuscript.

Pre-print perceptions

We used a yes/no question to ascertain whether authors would have considered disseminating their research on a public online platform instead of submitting to a peer-reviewed journal for publication, assuming a situation in which sharing the manuscript online would provide the same amount of academic credit as a publication in a peer-reviewed journal.

Statistical analysis

To compare characteristics of survey respondents and non-respondents, we used chi-squared tests for categorical variables and Wilcoxon-Mann-Whitney tests for continuous variables, using two-sided tests with a type I error level of 0.05. We then conducted descriptive analyses of the main respondent characteristics, journal details, and survey domains. Considering that the corresponding authors were asked to report the approximate month and year of first submission and acceptance, we recorded all dates as the first day of the corresponding month. After approximating the times from first submission to journal acceptance and publication, we removed all observations where there was evidence of potentially incorrect reporting (e.g., the time from first submission to journal acceptance was longer than the time from first submission to publication). Data were analyzed using SAS version 9.4 (SAS Institute, Cary, North Carolina, USA) and R (version 3.4.0; The R Project for Statistical Computing).
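
A minimal sketch of these analyses in R is shown below, run on a small synthetic dataset; all object and variable names are hypothetical and do not reproduce the authors’ SAS/R code:

```r
# Sketch of the comparisons and descriptive analyses described above,
# using a small synthetic dataset (all names are hypothetical).
set.seed(1)
n <- 200
authors <- data.frame(
  respondent    = sample(c("yes", "no"), n, replace = TRUE, prob = c(0.21, 0.79)),
  study_type    = sample(c("observational", "trial", "review"), n, replace = TRUE),
  impact_factor = round(rlnorm(n, meanlog = 1, sdlog = 0.5), 1),
  submit_month  = sample(1:12, n, replace = TRUE),
  submit_year   = 2015L,
  accept_month  = sample(1:12, n, replace = TRUE),
  accept_year   = 2016L,
  pub_month     = sample(1:12, n, replace = TRUE),
  pub_year      = 2016L
)

# Respondent vs. non-respondent comparisons (two-sided, alpha = 0.05)
chisq.test(table(authors$respondent, authors$study_type))   # categorical variable
wilcox.test(impact_factor ~ respondent, data = authors)     # continuous variable

# Record month/year answers as the first day of the corresponding month
resp <- subset(authors, respondent == "yes")
resp$submitted <- as.Date(sprintf("%d-%02d-01", resp$submit_year, resp$submit_month))
resp$accepted  <- as.Date(sprintf("%d-%02d-01", resp$accept_year, resp$accept_month))
resp$published <- as.Date(sprintf("%d-%02d-01", resp$pub_year, resp$pub_month))

# Approximate elapsed times in months
resp$sub_to_accept <- as.numeric(resp$accepted - resp$submitted) / 30.44
resp$sub_to_pub    <- as.numeric(resp$published - resp$submitted) / 30.44

# Remove observations with evidence of incorrect reporting
# (time to acceptance longer than time to publication)
resp <- subset(resp, sub_to_accept <= sub_to_pub)

# Medians and interquartile ranges
quantile(resp$sub_to_accept, c(0.25, 0.50, 0.75))
quantile(resp$sub_to_pub, c(0.25, 0.50, 0.75))
```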

Results

There were 1780 eligible corresponding authors who had published a clinical research article with Ovid MEDLINE entry dates between 1 December 2016 and 15 December 2016. Of the 1780 surveys distributed, 27 corresponding authors opted out or requested that we stop emailing them and 149 emails failed (e.g., emails that bounced, n = 64; returned with an out-of-office message, n = 70; or were changed/incorrect, n = 15), leaving 1604 eligible respondents, of whom 337 submitted a survey (response rate of 21.0%) (Fig. 1). All of the responding authors completed all of the survey questions (completion rate of 100.0%). Similar response rates were observed when we limited the sample to authors of US articles or authors with educational institution email addresses (e.g., universities or medical schools).

Fig. 1 Flow chart showing identification and selection of potential articles

Survey respondents did not differ from non-respondents with respect to study type, publication journals’ impact factor, or journals’ country of publication. However, non-respondent corresponding authors had more publications (p = 0.03) and slightly higher h-indices (p = 0.07), and were more likely to have non-institutional or Gmail/Yahoo/Hotmail/AOL email addresses (p = 0.003) (Table 2).

Table 2 Comparison of characteristics of survey respondents and non-respondents

Article and corresponding author characteristics

Most respondents published observational studies (n = 247, 73.3%) in biomedical journals with a median impact factor of 2.7 (interquartile range (IQR), 2.0–3.6; Fig. 2a). Corresponding authors’ median h-index and number of publications were 9 (IQR, 3–20; Fig. 2b) and 27 (IQR, 10–77), respectively. Author and journal characteristics across study design categories are provided in Table 3. A majority of respondents (62.0%, n = 209) rated the importance of their research as a 4 or 5 (on a 5-point scale) prior to first submission of the manuscript to any journal for consideration for publication.

Fig. 2 Histograms of (a) journal impact factor, (b) corresponding author h-indices, (c) submissions necessary prior to acceptance, and (d) time from first submission to publication (months)

Table 3 Characteristics of survey respondents across study design categories

Journal and peer review experience

Authors reported that anywhere between 1 and 10 submissions were necessary prior to acceptance (median 1 (IQR, 1–2); Fig. 2c). The median total number of journals that sent the manuscripts out for external peer review was 1 (IQR, 1–2), and the median total number of external peer reviewers that reviewed the manuscripts across all submissions to all journals was 3 (IQR, 2–4). The median times from first submission to journal acceptance and to publication, among the 212 authors who were able to provide approximate months and years, were 5 months (IQR, 3–8; Fig. 3) and 7 months (IQR, 5–12; Fig. 2d), respectively. The median times from first submission to journal acceptance for clinical trials, observational or other studies, and systematic reviews and/or meta-analyses were 6 (IQR, 3–11), 5 (IQR, 3–8), and 4 (IQR, 2–6) months, respectively. The median times from first submission to publication for clinical trials, observational or other studies, and systematic reviews and/or meta-analyses were 7 (IQR, 6–12), 7 (IQR, 5–12), and 6 (IQR, 5–9) months, respectively. Sharing manuscripts on a public online platform, instead of submitting to a peer-reviewed journal, would have been considered by 55.2% of the corresponding authors (186 of 337).

Fig. 3 Time from first submission to acceptance (months) versus journal impact factor

Discussion

In this pilot survey of corresponding authors who had recently published clinical research, we found that authors rate the potential importance of their research highly. Authors reported requiring few manuscript submissions prior to journal acceptance, most commonly by lower impact factor journals. These findings suggest that the majority of clinical research studies in this sample were published within a year of first submission. Although our data come from a pilot study, our results may contradict prior perceptions that the biomedical publication process is too slow. Our results also suggest mixed perceptions about non-traditional publication practices.

In this feasibility study, we found that only 21% of potential participants completed our survey. This may suggest that an insufficient number of diverse corresponding authors of clinical research studies would respond to a large-scale online survey about the speed and efficiency of the broader publication process. Our response rate also does not compare favorably with other surveys of authors, physicians, and clinical trial investigators [1, 4, 6, 8]. Future cross-sectional surveys on this topic may benefit from focusing on specific groups of corresponding authors (e.g., those submitting from English-speaking countries or certain specialty groups). The results from this study are also likely subject to recall bias, including an inability of corresponding authors to remember specific details about a manuscript’s submission history. Considering that non-respondents had more publications than respondents, it is possible that non-respondents were unable to recall the submission history for one of their published articles. Furthermore, the automated email requests could have been delivered directly to the corresponding authors’ spam folders, and language barriers may have limited the number of eligible respondents.

One possible explanation for our survey findings is that authors publishing clinical research articles are preferentially submitting their manuscripts to lower impact journals, possibly to ensure that articles do not require multiple submissions to multiple higher impact journals prior to eventual acceptance. Evidence suggests that the manuscripts selected for peer review in the highest impact journals can have extensive review times [9]. According to an empirical evaluation of articles submitted to the Annals of Internal Medicine, the majority of rejected manuscripts were published in lower impact factor specialty journals after an average of 18 months [10]. Our findings may reflect the fact that authors are willing to submit to just one lower impact journal in order to prevent publication delays that could result from multiple rejections and revisions at higher impact journals.

Technological advances could also be responsible for faster production times. According to an analysis of journal articles published in PubMed from 1960 to 2015, the median time that manuscripts spend in production is half of what it was in the early 2000s [9, 11]. Over the last few years, many journals have also started to embrace certain open science practices, including the sharing of in press, ahead of print, or online first articles, which could result in faster publication times. Journals may also be working to cut down on review times by increasing the number of papers that go through only one round of revision.

We found mixed perceptions when it comes to non-traditional publication practices. With the growing demand for faster dissemination of knowledge, journals as we know them may be “approaching their final act” [5]. However, publishing in journals remains the de facto currency for many academic investigators. Even though the corresponding authors were asked to imagine a situation where publicly sharing the manuscript online would provide the same amount of academic credit as a publication in a peer-reviewed journal, authors may find this a difficult scenario to envision, given the publish-or-perish atmosphere that exists at many academic institutions. Furthermore, clinical researchers may be fearful that competitive biomedical journals will not want to accept a manuscript that has already been made public.

Limitations

First, only 21% of potential participants completed our survey. Second, the manuscript submission process and selection of a corresponding author may differ by institution, country, and research field. For example, a graduate student who worked on a project and submitted the manuscript would likely know more about the submission history than a senior investigator listed as the corresponding author. Based on the relatively low median h-index and number of publications, it is also possible that the majority of the respondents were early career researchers or less prolific investigators. As a result, our sample may not have sufficient diversity to draw generalizable conclusions. Furthermore, we found differences between respondents and non-respondents that suggest some response bias. In particular, non-respondents were more likely to have non-institutional email addresses and had more publications. These differences suggest that the two groups may differ from one another in more ways than they are similar, which is a limitation of our study. Subsequent studies should explore these possibilities, ideally with larger survey samples that would permit multivariable analyses to account for potential differences between respondents and non-respondents. Lastly, we used a non-validated survey instrument. However, we attempted to maximize ease of completion and limited the scope of the survey to reduce response burden.

Conclusions

We found that corresponding authors rated the importance of their research highly and reported requiring few manuscript submissions prior to journal acceptance, most commonly by lower impact factor journals. Furthermore, the majority of clinical research studies were published within a year of first submission, which suggests that the publication of clinical research studies is less delayed than expected. These trends may explain why corresponding authors have mixed perceptions when it comes to non-traditional publication practices.

Abbreviations

CI: Confidence interval

IQR: Interquartile range

References

  1. Tite L, Schroter S. Why do peer reviewers decline to review? A survey. J Epidemiol Community Health. 2007;61(1):9–12.

  2. Kassirer JP, Campion EW. Peer review. Crude and understudied, but indispensable. JAMA. 1994;272(2):96–7.

  3. Rennie D, Knoll E. Investigating peer review. Ann Intern Med. 1988;109(3):181.

  4. Weber EJ, Katz PP, Waeckerle JF, Callaham ML. Author perception of peer review: impact of review quality and acceptance on satisfaction. JAMA. 2002;287(21):2790–3.

  5. Krumholz HM. The end of journals. Circ Cardiovasc Qual Outcomes. 2015;8(6):533–4.

  6. Sweitzer BJ, Cullen DJ. How well does a journal’s peer review process function? A survey of authors’ opinions. JAMA. 1994;272(2):152–3.

  7. Ioannidis JP, Greenland S, Hlatky MA, Khoury MJ, Macleod MR, Moher D, Schulz KF, Tibshirani R. Increasing value and reducing waste in research design, conduct, and analysis. Lancet. 2014;383(9912):166–75.

  8. Ross JS, Ritchie JD, Finn E, Desai NR, Lehman RL, Krumholz HM, Gross CP. Data sharing through an NIH central database repository: a cross-sectional survey of BioLINCC users. BMJ Open. 2016;6(9):e012769.

  9. Powell K. Does it take too long to publish research? Nature. 2016;530(7589):148–51.

  10. Ray J, Berkwits M, Davidoff F. The fate of manuscripts rejected by a general medical journal. Am J Med. 2000;109(2):131–5.

  11. Himmelstein DS, Powell K. Analysis for “the history of publishing delays” blog post v1.0. Zenodo. 2016. https://doi.org/10.5281/zenodo.45516.

Acknowledgements

The authors would like to acknowledge Drs. Nihar Desai and Margaret McCarthy and Jeanie Kim for providing feedback during survey development.

Funding

This project was conducted as part of the Collaboration for Research Integrity and Transparency (CRIT) at Yale, funded by the Laura and John Arnold Foundation, which supports Mr. Egilman and Drs. Wallach and Ross. The Laura and John Arnold Foundation played no role in the design of the study, analysis or interpretation of findings, or drafting the manuscript and did not review or approve the manuscript prior to submission.

Availability of data and materials

The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.

Author information

Contributions

JDW had full access to all data in the study. JDW takes responsibility for the integrity of the data and the accuracy of the data analysis. JDW and JSR conceived the concept and design of the study. JDW, ACE, HMK, and JSR formulated the survey questions. JDW, ACE, NS, and ADG were responsible for acquisition of data. JDW analyzed the data. JDW and JSR were responsible for the interpretation of the data. JDW drafted the manuscript. All authors were responsible for critical revision of the manuscript. JSR supervised the study. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Joshua D. Wallach.

Ethics declarations

Ethics approval and consent to participate

Ethics approval was requested from the Yale University School of Medicine Human Research Protection Program prior to study conduct, and the need for approval was waived. Consent was considered to be implied when participants completed the online survey.

Consent for publication

Not applicable.

Competing interests

In the past 36 months, JDW, ACE, and JSR received research support through Yale from the Laura and John Arnold Foundation to support the Collaboration for Research Integrity and Transparency at Yale; HMK and JSR received research support through Yale from Johnson and Johnson to develop methods of clinical trial data sharing, from Medtronic, Inc. and the Food and Drug Administration (FDA) to develop methods for postmarket surveillance of medical devices, and from the Centers for Medicare and Medicaid Services (CMS) to develop and maintain performance measures that are used for public reporting; HMK received compensation as a member of the Scientific Advisory Board for United Healthcare; and JSR received research support through Yale from the FDA to establish a Center for Excellence in Regulatory Science and Innovation (CERSI) at Yale University and the Mayo Clinic, from the Blue Cross Blue Shield Association to better understand medical technology evaluation, and from the Agency for Healthcare Research and Quality (R01HS022882).

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional file

Additional file 1:

I. Additional custom limits and specific search terms, II. Survey email, and III. Survey questions. (DOCX 129 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Wallach, J.D., Egilman, A.C., Gopal, A.D. et al. Biomedical journal speed and efficiency: a cross-sectional pilot survey of author experiences. Res Integr Peer Rev 3, 1 (2018). https://doi.org/10.1186/s41073-017-0045-8
