Simple decision-tree tool to facilitate author identification of reporting guidelines during submission: a before–after study
Research Integrity and Peer Review, volume 2, article number 20 (2017)
There is evidence that direct journal endorsement of reporting guidelines can lead to important improvements in the quality and reliability of the published research. However, over the last 20 years, there has been a proliferation of reporting guidelines for different study designs, making it impractical for a journal to explicitly endorse them all. The objective of this study was to investigate whether a decision tree tool made available during the submission process facilitates author identification of the relevant reporting guideline.
This was a prospective 14-week before–after study across four speciality medical research journals. During the submission process, authors were prompted to follow the relevant reporting guideline from the EQUATOR Network and asked to confirm that they followed the guideline (‘before’). After 7 weeks, this prompt was updated to include a direct link to the decision-tree tool and an additional prompt for those authors who stated that ‘no guidelines were applicable’ (‘after’). For each article submitted, the authors’ response, what guideline they followed (if any) and what reporting guideline they should have followed (including none relevant) were recorded.
Overall, 590 manuscripts were included in this analysis—300 in the before cohort and 290 in the after. There were relevant reporting guidelines for 75% of manuscripts in each group; STROBE was the most commonly applicable reporting guideline, relevant for 35% (n = 106) and 37% (n = 106) of manuscripts, respectively. Use of the tool was associated with an 8.4% improvement in the number of authors correctly identifying the relevant reporting guideline for their study (p < 0.0001), a 14% reduction in the number of authors incorrectly stating that there were no relevant reporting guidelines (p < 0.0001), and a 1.7% reduction in the number of authors choosing an inapplicable guideline (p = 0.10). However, the ‘after’ cohort also saw a significant increase in the number of authors stating that there were relevant reporting guidelines for their study, but not specifying which (34 vs 29%; p = 0.04).
This study suggests that use of a decision-tree tool during submission of a manuscript is associated with improved author identification of the relevant reporting guidelines for their study type; however, the majority of authors still failed to correctly identify the relevant guidelines.
Communicating what has been done and observed is a key aspect of the scientific process; however, there is substantial evidence that much of published biomedical research is poorly reported.
In the early 1980s, DerSimonian and colleagues suggested a solution to this, stating that ‘editors could greatly improve the reporting of clinical trials by providing authors with a list of items that they expected to be strictly reported’. This eventually led to the development of the CONSORT Statement—a common set of recommendations for the essential items that should be included in any report of a randomised controlled trial [3, 4].
Since its publication, the CONSORT Statement has been widely shared and supported, reflected both by the number of citations received and its endorsement by major editorial organizations, for example, the International Committee of Medical Journal Editors, Committee on Publication Ethics and World Association of Medical Editors.
Despite the visibility of the CONSORT Statement, recent reviews have demonstrated that reporting of essential information continues to be generally inadequate in trial reports across all areas of medicine [5,6,7,8,9]. Research has also suggested that peer review—the mechanism traditionally used to ensure the integrity of the scientific literature—fails to detect important deficiencies in reporting of the methods and results of randomised trials.
However, there is evidence that direct journal endorsement of the CONSORT Statement can lead to important improvements in the quality and reliability of published research. In their study, Turner et al. defined endorsement as a statement that implies that the CONSORT Statement is incorporated into the editorial process for the journal.
Since publication of the CONSORT Statement, there has been a proliferation of reporting guidelines for other types of research, including observational studies, systematic reviews and meta-analyses, and even case reports. There are now over 350 research reporting statements available from the EQUATOR Network (http://www.equator-network.org/).
It is impractical for a journal to explicitly endorse all of these guidelines in a single statement, and doing so would make it difficult for authors to find the guideline relevant to their study, further weakening endorsement as an intervention to improve research reporting. There is therefore a need to identify, and specifically endorse, the reporting guideline relevant to an individual author’s study.
The hypothesis of this study was that a simple decision-tree tool (the Penelope EQUATOR Wizard), which gathers yes–no answers about a study from the author(s) during the submission process to determine the study type and link them to the relevant reporting guideline, would improve author identification of the relevant reporting guideline without the journal needing to explicitly endorse it in its ‘Instructions for Authors’.
This was a prospective before–after study to investigate the impact of a decision-tree tool to support authors in identifying the relevant reporting guidelines for their study.
The study took place across four speciality medical research journals—BMC Family Practice, BMC Gastroenterology, BMC Musculoskeletal Disorders and BMC Nephrology.
On 15 February 2016, a question was introduced into the submission system for each of the journals (Table 1a). It linked to the EQUATOR Network website, prompted authors to follow the relevant reporting guidelines for their study type, and asked them to confirm either that they had done so or that there were no relevant guidelines for their study type. A similar statement was also added to each journal’s submission guidelines for research articles.
After 7 weeks, on 4 April 2016, this question was updated to include a link to the decision-tree tool, together with a second prompt for those authors who stated that there were no relevant guidelines for their study type (Table 1b, c). The question was removed after another 7 weeks, on 23 May 2016.
Manuscripts were assigned to the ‘before’ or ‘after’ group depending on which question they answered during submission.
The intervention was the Penelope EQUATOR Wizard (http://www.peneloperesearch.com/equatorwizard/). This automated decision tree asks authors yes–no questions about their study to determine the study type and the relevant reporting guideline. The decision tree for the tool can be seen in Additional file 1; it includes 11 commonly used guidelines:
Animal Research: Reporting of In Vivo Experiments (ARRIVE) 
CAse REport (CARE) guidelines 
Consolidated Standards of Reporting Trials (CONSORT) 
ENhancing Transparency in REporting the synthesis of Qualitative research (ENTREQ) 
Meta-analysis Of Observational Studies in Epidemiology (MOOSE) 
Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 
REporting recommendations for tumour MARKer prognostic studies (REMARK) 
Standards for Reporting Qualitative Research (SRQR) 
Standards for Reporting of Diagnostic Accuracy (STARD) 
Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) 
Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis (TRIPOD).
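The branching logic of such a wizard can be expressed as a chain of yes–no checks that map answers to a guideline. The sketch below is purely illustrative: the actual questions, wording and order used by the Penelope EQUATOR Wizard are defined by the tool itself, and the keys and branch order here are assumptions for demonstration.

```python
# Illustrative sketch of a reporting-guideline decision tree.
# The question keys and their order are hypothetical, NOT the
# actual logic of the Penelope EQUATOR Wizard.

def recommend_guideline(answers: dict) -> str:
    """Walk a series of yes-no questions and suggest a reporting guideline."""
    if answers.get("animal_study"):
        return "ARRIVE"
    if answers.get("case_report"):
        return "CARE"
    if answers.get("systematic_review"):
        if answers.get("qualitative_synthesis"):
            return "ENTREQ"
        if answers.get("observational_studies_only"):
            return "MOOSE"
        return "PRISMA"
    if answers.get("randomised_trial"):
        return "CONSORT"
    if answers.get("diagnostic_accuracy"):
        return "STARD"
    if answers.get("prediction_model"):
        return "TRIPOD"
    if answers.get("tumour_marker_prognostic"):
        return "REMARK"
    if answers.get("qualitative_study"):
        return "SRQR"
    if answers.get("observational_study"):
        return "STROBE"
    return "No guideline matched"

print(recommend_guideline({"randomised_trial": True}))   # CONSORT
print(recommend_guideline({"systematic_review": True}))  # PRISMA
```

A flat chain of checks like this is easy to audit against the published tree, which matters more here than algorithmic elegance.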
The primary outcome for this study was the percentage of authors who identified the correct reporting guidelines for their study type.
For each article submitted, we recorded the authors’ response to the submission question, what guideline they followed (if any) and what reporting guideline they should have followed (including none relevant). Study protocols and economic evaluations, which are listed by the EQUATOR Network as ‘main study types’ but whose reporting guidelines were not included in the decision tree, were excluded from the analysis. Studies that had explicitly and appropriately followed reporting guidelines not included in the decision tree were also excluded.
The reporting guidelines followed by the authors were identified from the full text of the manuscript—this included direct references or mentions of the guidelines followed, submission of a reporting checklist or inclusion of the relevant flow diagram. Author adherence to the reporting guidelines and completeness of reporting were not evaluated as a part of this study.
Each submitted manuscript was independently evaluated by two of the investigators (DRS, DM or ILS) to identify the study type and what reporting guideline should have been followed. In the event of a disagreement between the two investigators, a consensus was reached within the group.
Each manuscript was classified as one of six possible outcomes:
Authors identified the correct reporting guideline;
Authors correctly stated that no reporting guidelines were relevant to their study type;
Authors correctly identified that there were reporting guidelines relevant to their study type, but provided no information as to which;
Authors followed a reporting guideline that was inappropriate for their study type;
Authors incorrectly stated that no reporting guidelines were relevant for their study type;
Authors incorrectly stated that there were relevant reporting guidelines for their study type.
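This six-way classification can be captured directly in code when tallying results. The sketch below is a hypothetical illustration (the labels paraphrase the list above, and the example counts are invented, not the study’s data):

```python
from enum import Enum
from collections import Counter

# The six manuscript outcomes from the classification above, as an enum.
class Outcome(Enum):
    CORRECT_GUIDELINE = "identified the correct guideline"
    CORRECT_NONE = "correctly stated none were relevant"
    RELEVANT_UNSPECIFIED = "said guidelines were relevant but not which"
    WRONG_GUIDELINE = "followed an inappropriate guideline"
    INCORRECT_NONE = "incorrectly stated none were relevant"
    INCORRECT_SOME = "incorrectly stated guidelines were relevant"

def tally(classifications):
    """Return the percentage of manuscripts falling into each outcome."""
    counts = Counter(classifications)
    total = len(classifications)
    return {o: 100 * counts[o] / total for o in Outcome}

# Invented example: 3 correct identifications, 1 incorrect 'none relevant'.
sample = [Outcome.CORRECT_GUIDELINE] * 3 + [Outcome.INCORRECT_NONE]
pct = tally(sample)
print(pct[Outcome.CORRECT_GUIDELINE])  # 75.0
```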
Data were analysed using Microsoft Excel 2010. Analyses were conducted for each outcome separately; the percentage of manuscripts in each outcome category was recorded both before and after the introduction of the decision-tree tool. A one-tailed Student’s t test for proportions was used to evaluate differences between the proportions, with α = 0.05 and the null hypothesis (H0) that there was no difference between the proportions.
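For large samples such as these, the one-tailed test of two proportions is conventionally computed as a pooled z-test. The sketch below shows that calculation; the counts in the example are hypothetical, not the study’s actual data.

```python
from math import sqrt, erf

# Sketch of a one-tailed two-proportion z-test (the large-sample form of a
# 'test for proportions'). Example counts below are invented for illustration.

def one_tailed_prop_test(x1, n1, x2, n2):
    """Return the z statistic and one-tailed p-value for H0: p1 == p2,
    against the alternative p2 > p1."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                  # pooled proportion under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))      # upper-tail normal probability
    return z, p_value

# Hypothetical example: 60 of 300 manuscripts before vs 85 of 290 after.
z, p = one_tailed_prop_test(60, 300, 85, 290)
print(round(z, 2), round(p, 4))
```

A one-tailed test is appropriate only when the direction of the effect is specified in advance, as it was for this study’s hypothesis.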
Overall, 611 manuscripts were submitted during the study period, with 590 included in the analysis—300 in the before cohort and 290 in the after; 10 and 11 manuscripts, respectively, were excluded because they concerned study types with well-established reporting guidelines that were not included in the decision-tree tool. There were no significant differences between the two cohorts at baseline (Table 2).
There were relevant reporting guidelines for 75% of manuscripts in each group (n = 224 in the before cohort, n = 217 in the after cohort). The most commonly applicable reporting guideline was STROBE, which was relevant for 35% (n = 106) of manuscripts submitted in the before cohort, and 37% (n = 106) of manuscripts in the after cohort (Table 2).
Overall, use of the tool was associated with a statistically significant 8.4% improvement in the number of authors correctly identifying the relevant reporting guideline for their study (Table 3; p < 0.0001). Similarly, there was a reduction in the number of authors incorrectly stating that there were no relevant reporting guidelines for their study (37% before vs 23% after; p < 0.0001), and a reduction in the number of authors choosing a reporting guideline that was not applicable to their study on submission, although the latter was not statistically significant (3.1 vs 1.4%; p = 0.10).
Overall, the number of authors who correctly stated that there were no relevant reporting guidelines for their study was comparable between the two groups (66 vs 64%; p = 0.65), as was the proportion of authors incorrectly stating that there were relevant reporting guidelines for their study, but not indicating which (34 vs 36%; p = 0.48).
Combining those authors who correctly identified the relevant reporting guidelines and those who correctly stated that there were no relevant reporting guidelines for their study type shows an increase of 6% in the after cohort (40 vs 46%).
A large systematic review involving 50 studies and reports of more than 16,000 randomised trials demonstrated that journal endorsement of the CONSORT checklist was associated with an improvement in the completeness of reporting for 22 of 25 CONSORT checklist items. However, endorsement as an intervention is poorly defined. A recent review by Shamseer et al. of high-impact-factor medical journals demonstrated that 63% (106/168) of the included journals mentioned CONSORT in their ‘Instructions to Authors’, 42% (n = 44) explicitly stated that authors ‘must’ use CONSORT to prepare their trial manuscript, and 38% required an accompanying completed CONSORT checklist as a condition of submission.
Inadequate reporting is also a major problem in study types other than randomised trials, including systematic reviews, diagnostic studies, animal studies, observational studies, clinical prediction studies, qualitative studies and surveys [29, 30], contributing to an estimated $85 billion wasted annually.
To our knowledge, no other studies have evaluated an intervention of this kind to improve author identification of reporting guidelines. This study suggests that use of a simple decision-tree tool during manuscript submission facilitated author identification of the relevant reporting guidelines for their study type. However, even with use of the tool, the majority of authors failed to identify the correct reporting guideline for their study.
One possible explanation could be that prompting authors regarding reporting requirements at the point of submission is too late in the publication process, as they will already have written their manuscript. This is supported by two observations following introduction of the tool: an increase in the number of authors stating that they had followed the relevant reporting guideline without presenting any evidence to support this, and a significant decrease in the number of authors incorrectly stating that there were no relevant reporting guidelines for their study.
This could suggest that the change in the question asked influenced authors’ behaviour during article submission, with their ‘default’ answer changing depending on the formulation of the question, rather than the tool influencing how they reported their study. Furthermore, due to the submission system used (Editorial Manager), it was not possible to track which authors actually used the tool during the submission process, which prevents us from strongly linking use of the tool to an improvement in author identification of reporting guidelines.
As this analysis only concerned the identification of the relevant reporting guideline, not the completeness of reporting of the manuscript, it is not possible to evaluate the impact of the tool on the completeness of the literature. However, the association demonstrated between endorsement of the CONSORT Statement and the completeness of published clinical trials suggests that it could have such an effect, although further research would be needed to confirm this and to establish whether it is meaningful.
This before–after study suggests that use of a decision-tree tool during submission of a manuscript is associated with improved author identification of the relevant reporting guidelines for their study type; however, the majority of authors still failed to correctly identify the relevant guidelines.
Simera I, Altman DG, Moher D, Schulz KF, Hoey J. Guidelines for reporting health research: the EQUATOR network’s survey of guideline authors. PLoS Med. 2008;5(6):e139.
DerSimonian R, Charette LJ, McPeek B, Mosteller F. Reporting on methods in clinical trials. N Engl J Med. 1982;306(22):1332–7.
Begg C. Improving the quality of reporting of randomized controlled trials. JAMA. 1996;276(8):637.
Schulz KF, Altman DG, Moher D, CONSORT Group. CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. BMC Med. 2010;8:18.
Glasziou P, Altman DG, Bossuyt P, Boutron I, Clarke M, Julious S, et al. Reducing waste from incomplete or unusable reports of biomedical research. Lancet. 2014;383(9913):267–76.
Hoffmann TC, Thomas ST, Shin PNH, Glasziou PP. Cross-sectional analysis of the reporting of continuous outcome measures and clinical significance of results in randomized trials of non-pharmacological interventions. Trials. 2014;15(1):362.
Yurdakul S, Mustafa BN, Fresko I, Seyahi E, Yazici H. Brief report: inadequate description and discussion of enrolled patient characteristics and potential inter-study site differences in reports of randomized controlled trials: a systematic survey in six rheumatology journals. Arthritis Rheumatol. 2014;66(5):1395–9.
Hopewell S, Dutton S, Yu L-M, Chan A-W, Altman DG. The quality of reports of randomised trials in 2000 and 2006: comparative study of articles indexed in PubMed. BMJ. 2010;340:c723.
Dechartres A, Trinquart L, Atal I, Moher D, Dickersin K, Boutron I, et al. Evolution of poor reporting and inadequate methods over time in 20 920 randomised controlled trials included in Cochrane reviews: research on research study. BMJ. 2017;357:j2490.
Hopewell S, Collins GS, Boutron I, Yu L-M, Cook J, Shanyinde M, et al. Impact of peer review on reports of randomised trials published in open peer review journals: retrospective before and after study. BMJ. 2014;349:g4145.
Turner L, Shamseer L, Altman DG, Schulz KF, Moher D. Does use of the CONSORT statement impact the completeness of reporting of randomised controlled trials published in medical journals? A Cochrane review. Syst Rev. 2012;1:60.
Von Elm E, Altman D, Gøtzsche P, Vandenbroucke J. The strengthening the reporting of observational studies in epidemiology (STROBE) statement: guidelines for reporting observational studies. PLoS Med. 2007;4:296.
Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6(7):e1000097.
Gagnier JJ, Kienle G, Altman DG, Moher D, Sox H, Riley D, et al. The CARE guidelines: consensus-based clinical case reporting guideline development. J Med Case Rep. 2013;7:223.
Kilkenny C, Browne WJ, Cuthill IC, Emerson M, Altman DG. Improving bioscience research reporting: the ARRIVE guidelines for reporting animal research. PLoS Biol. 2010;8(6):e1000412.
Tong A, Flemming K, McInnes E, Oliver S, Craig J. Enhancing transparency in reporting the synthesis of qualitative research: ENTREQ. BMC Med Res Methodol. 2012;12:181.
Stroup DF. Meta-analysis of observational studies in Epidemiology: a proposal for reporting. JAMA. 2000;283(15):2008.
McShane LM, Altman DG, Sauerbrei W, Taube SE, Gion M, Clark GM, et al. REporting recommendations for tumour MARKer prognostic studies (REMARK). Br J Cancer. 2005;93(4):387–91.
O’Brien BC, Harris IB, Beckman TJ, Reed DA, Cook DA. Standards for reporting qualitative research: a synthesis of recommendations. Acad Med. 2014;89(9):1245–51.
Bossuyt PM, Reitsma JB, Bruns DE, Gatsonis CA, Glasziou PP, Irwig LM, et al. Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative. BMJ. 2003;326(7379):41–4.
Collins GS, Reitsma JB, Altman DG, Moons KGM. Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): the TRIPOD statement. BMC Med. 2015;13(1):1.
Shamseer L, Hopewell S, Altman DG, Moher D, Schulz KF. Update on the endorsement of CONSORT by high impact factor journals: a survey of journal “instructions to authors” in 2014. Trials. 2016;17(1):301.
Moher D, Tetzlaff J, Tricco AC, Sampson M, Altman DG. Epidemiology and reporting characteristics of systematic reviews. PLoS Med. 2007;4(3):e78.
Fontela PS, Pant Pai N, Schiller I, Dendukuri N, Ramsay A, Pai M. Quality and reporting of diagnostic accuracy studies in TB, HIV and malaria: evaluation using QUADAS and STARD standards. PLoS One. 2009;4(11):e7753.
Kilkenny C, Parsons N, Kadyszewski E, Festing MFW, Cuthill IC, Fry D, et al. Survey of the quality of experimental design, statistical analysis and reporting of research using animals. PLoS One. 2009;4(11):e7824.
Groenwold RHH, Van Deursen AMM, Hoes AW, Hak E. Poor quality of reporting confounding bias in observational intervention studies: a systematic review. Ann Epidemiol. 2008;18(10):746–51.
Bouwmeester W, Zuithoff NPA, Mallett S, Geerlings MI, Vergouwe Y, Steyerberg EW, et al. Reporting and methods in clinical prediction research: a systematic review. PLoS Med. 2012;9(5):1–12.
Lewin S, Glenton C, Oxman AD. Use of qualitative methods alongside randomised controlled trials of complex healthcare interventions: methodological study. BMJ. 2009;339:b3496.
Li AH-T, Thomas SM, Farag A, Duffett M, Garg AX, Naylor KL. Quality of survey reporting in nephrology journals: a methodologic review. Clin J Am Soc Nephrol. 2014;9(12):2089–94.
Bennett C, Khangura S, Brehaut JC, Graham ID, Moher D, Potter BK, et al. Reporting guidelines for survey research: an analysis of published guidance and reporting practices. PLoS Med. 2010;8(8):e1001069.
Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet. 2009;374(9683):86–9.
We would like to thank the EQUATOR Network and Penelope Research for their help and support.
No funding was sought or provided for this research.
Availability of data and materials
The data and analysis for this article are available from figshare: https://doi.org/10.6084/m9.figshare.5281276.v3
Ethics approval and consent to participate
No ethics approval was required, and no direct consent was sought, as all identifiable details of the manuscripts were kept confidential. Please see https://www.biomedcentral.com/getpublished/editorial-policies#confidentiality for more information.
Competing interests
DRS and DMM were employed full-time by BioMed Central Ltd. during the study and analysis. ILS is a full-time employee of BioMed Central Ltd., which is part of Springer Nature.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Additional file 1: EQUATOR reporting guideline decision tree. (PDF 234 kb)