Table 2 Summary characteristics of included studies

From: The role of geographic bias in knowledge diffusion: a systematic review and narrative synthesis

Title: Do physicians judge a study by its cover? An investigation of journal attribution bias

Author and year: Christakis, 2000

Journal: Journal of Clinical Epidemiology

Study question(s): Does attribution of an article to a "high-prestige" journal versus a "low-prestige" journal affect readers' impressions of the quality of the article, and does formal training in epidemiology and biostatistics mitigate these effects?

Sample size: 264 physicians who listed internal medicine as their primary specialty, recruited from the American Medical Association's master list of licensed physicians.

Study design: Randomized, single-blind. It is unclear from the article how randomization was achieved.

Intervention: Participants were asked to read an article and abstract from either the SMJ or the NEJM, presented either attributed or unattributed. After each article or abstract, respondents rated the quality of the study, the appropriateness of the methodology employed, the significance of the findings, and the likely effect on their practice. Ratings were on a Likert scale, and responses were used to generate an aggregate 'Impression Score' ranging from 5 to 25.

Outcome measures: Difference in 'Impression Score' between reviewers who read correctly attributed abstracts or articles and those who read unattributed abstracts or articles.

Results:

Title: Explicit bias toward high-income country research: a randomized, blinded, crossover experiment of English clinicians

Author and year: Harris, 2017

Journal: Health Affairs

Study question(s): Assessed the within-individual change in evaluation of research abstracts when the source was experimentally altered, in this case between high- and low-income countries.

Sample size: 347 clinicians, of any speciality, living and practicing in England.

Study design: Randomized, controlled, blinded crossover experiment. The survey platform carried out simple randomization in real time as respondents entered the survey.

Intervention: Participants rated the same abstracts on two separate occasions, one month apart, with the source of the abstracts changed, without their knowledge, between high- and low-income countries. Participants rated each abstract on strength of evidence, relevance to their practice, and likelihood of recommending the paper to a colleague, with scores in each category assigned on a scale of 0–100.

Outcome measures: Difference in review scores between the two rounds of reviewing, comparing review scores for HIC-attributed abstracts with review scores for LIC-attributed abstracts.

Results: The overall mean difference in rating of strength between abstracts from an HIC source and an LIC source was 1.35 [95% CI (−0.06 to 2.76)]. The overall mean differences in ratings of relevance and of likelihood of recommendation to a peer between HIC- and LIC-attributed abstracts were 4.50 [95% CI (3.16 to 5.83)] and 3.05 [95% CI (1.77 to 4.33)], respectively.

Title: Reviewer bias in single- versus double-blind peer review

Author and year: Tomkins, 2017

Journal: Proceedings of the National Academy of Sciences

Study question(s): Investigated bias resulting from the fame or quality of the authors' institution(s).

Sample size: 1,957 review committee members at the Web Search and Data Mining (WSDM 2017) conference.

Study design: Randomized, double- and single-blind. The authors do not specify how reviewers were randomized into their respective groups.

Intervention: Four committee members reviewed each paper. Two of the four reviewers were given access to author information (single-blind); the other two were not (double-blind). Reviewer behavior was studied in two settings: reviewing papers, and a preliminary "bidding" stage in which reviewers expressed interest in papers to review.

Outcome measures: A "blinded paper quality score" (bpqs; the average quality score of the double-blind reviews for a paper) was used as a proxy measure for the intrinsic quality of the paper, and was used to calculate the odds of acceptance among single- versus double-blind reviewers.

Results: The predicted odds for review score prediction for "Top universities" were 1.58 [95% CI (1.09 to 2.29)]. The predicted odds for "Paper from the U.S." were 1.01 [95% CI (0.66 to 1.55)]. The predicted odds for "Same country as reviewer" were 1.15 [95% CI (0.71 to 1.86)].