
Table 1 Contexts-Mechanisms-Outcomes Configurations of past stakeholder interventions in peer review and decision-making for research funding

From: What works for peer review and decision-making in research funding: a realist synthesis

Column key. Each numbered entry below corresponds to one row of the table, with its cells labelled as follows (the original columns are grouped under Contexts, Mechanisms and Outcomes):

Context (numbered heading) – Common issues in peer review and decision-making: what were the drivers for change?
High-level change (mechanism) – High-level changes made to the peer review process by stakeholders: what particular area needed addressing to solve the issues?
Intervention (mechanism) – Specific interventions implemented by stakeholders (n = 50)
Outcome – Intervention outcomes (long- and short-term): what happened as a result, and what were stakeholder reactions?
Stakeholders – Stakeholders involved in/affected by interventions: for whom?
Unintended consequence – Did the changes create burden/benefit elsewhere in the peer review process?
Publications – Reference numbers

1. Scientific, economic and social benefit of research

High-level change: Promoting collaboration between academic research and public sponsors of research
Intervention: A government-led ‘audition system’ for matching individual research groups to relevant sponsors of social priorities and industries
Outcome: Increased emphasis on the social relevance and impact of funded research across academia and industry

High-level change: Enhancing the use of metrics to assess research impact
Intervention: Incorporation of ‘altmetrics’ into decision-making and facilitation of international collaboration to achieve open-access infrastructure for researcher metrics
Outcome: Repositories (e.g., the Lattes database, Researchfish) serve as examples of open, national-level tracking of research performance and impact to inform funding decisions

Stakeholders: General public; research sectors; funders
Publications: [29–30]

2. Career stability of researchers

High-level change: Considering how changes to funding policy affect the stability and progression of researcher careers
Intervention: The University and College Union’s open letter to Research Councils UK recommending interventions that focus on benefiting researchers, followed by nation-wide campaigns to abolish casual contracts and promote contract continuity
Outcome: The University and College Union campaign documented resulting changes to contract culture (e.g., more permanent contracts), and career stability has gained momentum in wider research-community conversations

Stakeholders: Higher and further education sectors in the UK
Publications: [32–33]

3. Funding and support for innovative research

High-level change: Minimising emphasis on researcher track record and promoting reviewer autonomy in decision-making
Intervention: Masking applicants’ institutional affiliations/track record from reviewers and allocating each reviewer a ‘golden ticket’ to fund one proposal of their choice
Outcome: Anonymity of applications allowed reviewers to focus on research ideas that would otherwise not have been funded and encouraged early-career researchers to propose innovative ideas

High-level change: Creating dedicated funding streams
Intervention: Funding of high-risk, high-reward research from early-career researchers
Outcome: The New Innovator and Early Independence awards at the National Institutes of Health

High-level change: New approaches to ‘balance’ funding decisions
Intervention: Use of the Delphi method to promote innovation in niche areas of research (see the sketch after this entry)
Outcome: Removing ‘group think’ from decision-making encouraged more funding of innovative ideas, and assembling the Delphi panel of experts saved administrative time

Stakeholders: Early-career researchers; funders; reviewers
Publications: [34–39]
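
For readers unfamiliar with the Delphi method: it aggregates anonymous expert ratings over repeated rounds, feeding summary statistics back to the panel until opinions stabilise. A minimal sketch of one round’s aggregation follows; the data structure, the 1–9 scale and the consensus threshold are chosen purely for illustration and are not taken from the cited studies.

```python
from statistics import median, quantiles

def delphi_round(ratings):
    """Summarise one anonymous Delphi round.

    ratings: {topic: [expert scores on a 1-9 scale]} (illustrative
    structure). Returns per-topic median and IQR -- the feedback a
    panel would typically see before re-rating in the next round.
    """
    summary = {}
    for topic, scores in ratings.items():
        q1, _, q3 = quantiles(scores, n=4)      # quartile cut points
        summary[topic] = {
            "median": median(scores),
            "iqr": q3 - q1,
            "consensus": q3 - q1 <= 1,          # illustrative threshold
        }
    return summary

# Three experts rate two niche research areas for funding priority
print(delphi_round({"topic_A": [7, 8, 8], "topic_B": [2, 5, 9]}))
```

Feeding the per-topic medians and IQRs back to the panel and re-rating until the consensus flag stabilises is the iterative core of the method.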

4. Selection and accountability of reviewers

High-level change: Creating reviewer registries and diversifying existing reviewer pools
Intervention: Use bibliometric data to assess existing, or create new, registries of multidisciplinary and scientifically active reviewers
Outcome: Bibliometric data helped reveal gaps in expertise, prompting recruitment of more experts who remain active in research

High-level change: Using bibliometrics to automate the reviewer selection process
Intervention: A semi-automatic tool for selecting experts based on their PubMed IDs (see the sketch after this entry)
Outcome: More targeted selection of review candidates and rejection of unsuitable reviewers
Unintended consequence: Using PubMed IDs as proof of reviewer expertise may encourage ‘performance bias’

High-level change: Enhancing decision-making processes informed by peer review
Intervention: Periodic audit of research charities to ensure their funding practices align with ‘core principles’ of peer review
Outcome: Stronger conflict-of-interest policies, independent review monitoring, reviewer rotation, and transparency of review practices and reviewer identities

Stakeholders: General public; funders; academic researchers; reviewers
Publications: [41–45]
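
Bibliometric screening of this kind can be approximated with NCBI’s public E-utilities API. The sketch below checks whether a candidate has recent PubMed-indexed output; the endpoint and parameters are real E-utilities ones, but the activity window, the names and the ‘zero recent papers’ rule are illustrative assumptions, and author-name queries (unlike stable author IDs) can conflate researchers who share a name.

```python
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def recent_pub_count(author, mindate="2019", maxdate="2024"):
    """Count PubMed records for an author within a publication-date
    window -- a rough proxy for being 'scientifically active'."""
    params = {
        "db": "pubmed",
        "term": f"{author}[Author]",
        "datetype": "pdat",          # filter on publication date
        "mindate": mindate,
        "maxdate": maxdate,
        "retmode": "json",
    }
    resp = requests.get(ESEARCH, params=params, timeout=10)
    resp.raise_for_status()
    return int(resp.json()["esearchresult"]["count"])

# Hypothetical reviewer-registry check
for candidate in ["Doe J", "Roe R"]:
    if recent_pub_count(candidate) == 0:
        print(f"{candidate}: no recent PubMed records; screen manually")
```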

5. Allocation of reviewers to proposals

High-level change: Enhancing reviewer-proposal matching based on expertise and funder guidelines
Intervention: A semi-automatic tool for matching reviewers to proposals based on bibliometric data
Outcome: Effective matching of reviewers to all proposals and improved accessibility for reviewers and programme officers
Intervention: A network-flow algorithm that applies funder guidelines, such as the number of reviewers needed per proposal, to reviewer selection (see the sketch after this entry)
Outcome: Successful and balanced matching of reviewers to proposals without conflicts of interest

High-level change: Increasing the number of reviewers assigned to proposals
Intervention: Assign more than two reviewers to each proposal and more than one proposal to each reviewer (see the reliability note after this entry)
Outcome: Increased interrater reliability
Unintended consequence: Assigning more proposals to each reviewer can lead to ‘reviewer fatigue’, and assigning more than two reviewers to each proposal may not be realistic for smaller funders

High-level change: Involving applicants in the review process
Intervention: Applicants review each other’s proposals, with the incentive of achieving higher scores for honesty and reliability
Outcome: High-quality, reliable review and a ‘highly motivated’ reviewer pool that required less administrative effort from programme officers
Unintended consequence: Requiring group consensus to achieve interrater reliability may discourage applicants from proposing innovative ideas

Stakeholders: Funders; reviewers
Publications: [42, 46–50]
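
Two items in this entry bear brief elaboration. First, the reliability gain from adding reviewers is conventionally estimated with the Spearman-Brown prophecy formula (a standard psychometric result, not taken from the cited studies): if a single reviewer’s scores have reliability r, the average of k reviewers has reliability r_k = k·r / (1 + (k − 1)·r). For example, moving from one reviewer with r = 0.4 to three reviewers gives 1.2/1.8 ≈ 0.67, a gain that diminishes as k grows, consistent with the fatigue trade-off noted above.

Second, the network-flow allocation can be sketched as a min-cost flow problem. The names, costs and capacities below are illustrative assumptions, not the cited tool’s implementation, and the sketch assumes the networkx package:

```python
import networkx as nx

def assign_reviewers(match_cost, per_reviewer=3, per_proposal=3, coi=()):
    """Match reviewers to proposals via min-cost flow.

    match_cost: {(reviewer, proposal): cost}, lower = better expertise
    fit (e.g., 1 - bibliometric similarity). Pairs listed in `coi`
    (conflicts of interest) get no edge, so no flow can use them.
    Capacities encode funder guidelines: each proposal needs up to
    `per_proposal` reviewers; each reviewer takes at most `per_reviewer`.
    """
    G = nx.DiGraph()
    for (rev, prop), cost in match_cost.items():
        if (rev, prop) in coi:
            continue                 # never route an assignment via a COI
        G.add_edge("src", rev, capacity=per_reviewer, weight=0)
        G.add_edge(rev, prop, capacity=1, weight=cost)
        G.add_edge(prop, "sink", capacity=per_proposal, weight=0)
    flow = nx.max_flow_min_cost(G, "src", "sink")
    return [(r, p) for (r, p) in match_cost if flow.get(r, {}).get(p, 0)]

# Toy run: two reviewers, two proposals, one slot each
costs = {("rev1", "propA"): 2, ("rev1", "propB"): 5,
         ("rev2", "propA"): 4, ("rev2", "propB"): 1}
print(assign_reviewers(costs, per_reviewer=1, per_proposal=1))
# -> [('rev1', 'propA'), ('rev2', 'propB')]
```

Handling conflicts of interest by omitting edges entirely means the optimiser can never route an assignment through them, mirroring the ‘without conflicts of interest’ outcome above.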

6. Quality and reliability of peer review

High-level change: Training reviewers to improve interrater reliability
Intervention: Self-taught training material on peer review in publication or research funding (explaining review criteria and the scoring system) for novice and experienced reviewers
Outcome: Overall significant improvement in review reliability: identification of errors and recommending manuscripts for rejection (for publication review), and understanding of the scoring system and time spent studying review criteria (for grant review)

High-level change: Employing two independent review panels to assess proposals
Intervention: Using interpanel agreement and the impact of decisions to determine the reliability of review
Outcome: Increased interrater agreement of funding decisions
Outcome: Reduced reviewer emphasis on applicants’ track record

High-level change: Simplifying scoring systems
Intervention: Use dichotomous (yes/no, 0/1) review scores rather than ‘scale-based’ scores (‘not at all’/‘somewhat’/‘definitely’, 0/1/2) (see the sketch after this entry)
Outcome: An equally reliable but simpler scoring system for reviewing full proposals

Stakeholders: Funders; reviewers; applicants
Publications: [22, 53–55, 96]
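
A natural way to test the ‘equally reliable but simpler’ claim is to compute chance-corrected agreement between two reviewers under each scoring scheme. The sketch below uses Cohen’s kappa, a standard agreement statistic; the scores are invented for illustration and are not data from the cited studies.

```python
from collections import Counter

def cohen_kappa(a, b):
    """Cohen's kappa: agreement between two raters, corrected for the
    agreement expected by chance from each rater's score distribution."""
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    p_chance = sum(ca[k] * cb[k] for k in ca) / n ** 2
    return (p_obs - p_chance) / (1 - p_chance)

# Two reviewers score the same eight proposals two ways (invented data)
binary_r1 = [1, 0, 1, 1, 0, 1, 0, 1]     # dichotomous: fund yes/no
binary_r2 = [1, 0, 1, 0, 0, 1, 0, 1]
scale_r1 = [2, 0, 2, 1, 0, 2, 1, 2]      # 0/1/2 scale
scale_r2 = [2, 1, 2, 0, 0, 2, 0, 2]

print(f"kappa, dichotomous: {cohen_kappa(binary_r1, binary_r2):.2f}")
print(f"kappa, 0/1/2 scale: {cohen_kappa(scale_r1, scale_r2):.2f}")
```

Fewer categories leave less room for near-miss disagreement, which is the kind of effect behind a finding that a dichotomous scheme is as reliable as a scale-based one.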

7. Patient and public involvement in funding decisions

High-level change: Promoting community engagement of research through applicant and reviewer training
Intervention: Collaboration of public members, academic experts, patient representatives and reviewers to analyse barriers to funding more community-based research
Outcome: A community engagement framework at the National Institutes of Health (a definition of ‘community engagement’, strategies for researcher training, and guidance for reviewers)

High-level change: Involving public members, patients and health advocates in decision-making
Intervention: Re-review of expert-reviewed proposals by community representatives trained in peer review, assessing the relevance of proposals to community needs (i.e., a two-tier review system)
Outcome: Funding of proposals that meet both scientific and community criteria, and success of the resulting research in the form of external grants and peer-reviewed publications
Intervention: A two-tier review system involving expert-led review and review by a ‘wider stakeholder’ group (patients, public members)
Outcome: Increased translational and social relevance of funded research and inclusion of wider stakeholders in further funding calls
Intervention: A two-tier review system involving review of the scientific merit and community engagement of research based on criteria of ‘involvement, priority, and benefit’
Outcome: Increased emphasis of funding decisions on scientific merit and community engagement criteria, rather than on the research budget
Intervention: A two-tier review system led by experts and research ‘consumers’ (survivors or patient advocates), who were also involved in decision-making
Outcome: Overall, the intervention received positive feedback from all stakeholders
Unintended consequence: Involving consumers in peer review and decision-making requires ensuring that the scientific merit of funded research is not compromised

Stakeholders: Funders; applicants; end-users (the community)
Publications: [57–60]

8. Unnecessary burden on applicants

High-level change: Shortening applications and limiting technical detail
Intervention: Reduce the word limit of applications and focus them on the research question, methodology, budget, healthcare partnerships and potential impact of research
Outcome: A 1200-word proposal took applicants on average only seven days to prepare, and more applications were shortlisted/received invitations to interview
Intervention: Shortening the research plan of large project proposals from 25 to 12 pages
Outcome: A shorter R01 application format at the National Institutes of Health
Intervention: A two-page synopsis of the ‘central research idea’
Outcome: Implementation of the synopsis format in subsequent funding calls at the National Science Foundation, followed by a request from reviewers for a four-page format
Intervention: A two-page summary of anonymised proposals
Outcome: Increased funding of innovative proposals at the Villum Foundation (CMOC 3)

High-level change: Improving feedback for applicants
Intervention: Using decision software to generate a feedback summary
Outcome: The ProGrid decision software provided ‘meaningful feedback’, which was well received by applicants
Intervention: Providing applicants with multiple rounds of online peer feedback
Outcome: Proposal quality improved significantly when feedback was given early in the application process
Intervention: Providing applicants with feedback after application triage, before peer review
Outcome: A RAND Europe recommendation following a consultation exercise

High-level change: Streamlining the funding process
Intervention: Shortening the review window, convening funding committees earlier and extending resubmission deadlines
Outcome: A streamlined funding cycle was implemented across the National Institutes of Health, as it gave applicants more time to address panel feedback and submit resubmissions without having to wait for the next cycle

High-level change: Open access to peer review
Intervention: Make reviewers’ comments on proposals accessible to other reviewers
Outcome: Cross-community critique gives reviewers the opportunity to modify their comments, promoting transparency and accountability of peer review

High-level change: Improving applicants’ grant writing skills
Intervention: Funder outreach in the form of talks and grant-writing workshops at universities
Outcome: Helping researchers write better proposals aligned with the funder’s mission achieved high success rates
Intervention: Publicising outcomes of funding cycles, discussing submission policies, mentoring and networking, explaining the funding process, and helping applicants write stronger proposals
Outcome: The National Science Foundation made a long-term plan to increase outreach activities across higher education institutions
Outcome: Publicising institutional submission/success rates helps create a culture of open research

High-level change: Educating applicants on the funding process
Intervention: An internal student-run funding programme
Outcome: Educating PhD students and allowing them to engage in peer review gave them valuable experience of how research funding works
Intervention: An internal programme to improve the quality of education provided by faculty staff and reduce their grant-writing burden
Outcome: The programme led to internal and external funding of research, research publication and dissemination, and an increase in external investment in education
Intervention: An educational research methods course
Outcome: Educating applicants in research methods contributed to improving the quality of proposals

High-level change: Promoting funding of new investigators
Intervention: A funding scheme for new investigators
Outcome: The ‘New Investigator Award’ at the National Institutes of Health equalised success rates for new and established investigators in the pilot round, leading to its implementation and the addition of the ‘Early Stage Investigator Award’ and ‘Pathway to Independence Award’

Stakeholders: Applicants; early-career researchers; faculty staff; reviewers
Publications: [34, 63–74]

9. Unnecessary burden for reviewers

High-level change: Optimising review structures
Intervention: Larger study sections at the National Institutes of Health (e.g., covering both clinical and basic research), new monitoring systems, and shorter proposals
Outcome: Less pressure on review panels

High-level change: Virtual panels and rotating reviewers
Intervention: Replacing in-person review with virtual panels, breaks from study sections, asking long-serving reviewers to temporarily step down, and employing ‘ad hoc’ reviewers
Outcome: Reduced administrative cost to funders, reduced reviewer fatigue, reliable peer review, and fresh insight into the review process from ad hoc reviewers
Unintended consequence: Costs of investment in conducting virtual funding panels
Unintended consequence: Difficulties in adopting the practice because of high reviewer rejection rates
Intervention: Funder investment in virtual technology to standardise the practice of remote review, and adding more reviewers to panels
Outcome: The National Science Foundation published these plans as part of its mission to reduce reviewer fatigue; the practice also reduced funder costs without consequence to interrater reliability or the quality of discussions at Marie Sklodowska-Curie Actions and the American Institute of Biological Sciences

High-level change: Application triage
Intervention: Incorporate, where possible, application triage into peer review to manage application demand against available funding
Outcome: Recommendations from RAND to enhance peer review by filtering out low-chance applications in triage and improving feedback for applicants were aimed at all research funders

Stakeholders: Reviewers; funders
Publications: [44, 64, 67, 68, 75, 76, 81]

10. Unnecessary burden for funders

High-level change: Controlling application demand
Intervention: Limit re-submissions per applicant per cycle and for the weakest proposals
Outcome: Limiting R01 re-submissions was part of the long-term reorganisation of review structures at the National Institutes of Health
Intervention: Place quotas on new (preliminary) proposals, make full proposals invitation-only, limit invitations from each institution and encourage internal peer review
Outcome: Submission quotas were implemented long-term by the National Science Foundation and further tightened in response to rising demand
Unintended consequence: Submission quotas created a need for internal review at higher education institutions, which was seen as a ‘shifting of burden’ from funder to researcher that would increase workload for, and competition among, applicants
Intervention: Limit submissions using a cooling-off period between rounds
Outcome: This approach may be more effective than a ‘multiple proposals strategy’ in increasing success rates

High-level change: Introducing internal peer review
Intervention: Require applications to be internally reviewed and scored prior to submission to funders
Outcome: An increase in publication output

High-level change: Virtual technology and automation
Intervention: Standardise the practice of virtual review, invest in virtual technology and automate application processing
Outcome: Reduced cost of review and more administrative capacity for reviewer management
Unintended consequence: Additional cost to the funder of investment in virtual technology

High-level change: Enhancing the reviewer pool
Intervention: Increasing the number of reviewers per panel, allocating more reviewers per proposal and using group consensus to score applications
Outcome: Potential reduction in funder burden if demand is also reduced (e.g., with submission quotas)

High-level change: Decision models
Intervention: Use software (e.g., ProGrid, Teamworker) to evaluate applications based on a ‘performance matrix’ of researcher/proposal variables
Outcome: Enhanced funding decisions, simplified discussions, and shorter meetings (ProGrid); fairer proposal discussions (Teamworker)
Intervention: Correlate review scores with applicant CV data to predict the likelihood of research success (see the sketch after this table)
Outcome: Identification of promising candidates based on research productivity, as well as on factors that are unrelated to it but may create bias (e.g., age, gender)

High-level change: Streamlining the funding process
Intervention: Shorter applications and simplified scoring
Outcome: Reduced financial and administrative burden for the funder, and a faster process for applicants
Outcome: Reduced emphasis on the applicant’s track record

Stakeholders: Funders
Publications: [63, 65, 68, 77–83]
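
Correlating past review scores and CV variables with later success, as in the ‘decision models’ entry above, is essentially a supervised-learning problem. The sketch below fits a logistic regression on invented data; the feature choice, and the deliberate exclusion of attributes such as age and gender that the table flags as potential sources of bias, are illustrative assumptions rather than the cited approach.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented historical records: [mean review score, publications in
# the last 5 years]. Age and gender are deliberately left out: the
# table notes they may create bias rather than carry signal.
X = np.array([[4.2, 12], [2.1, 3], [3.8, 8], [1.9, 1],
              [4.7, 15], [2.8, 5], [3.5, 9], [2.2, 2]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])   # 1 = later research success

model = LogisticRegression().fit(X, y)

# Predicted probability of success for a hypothetical new applicant
print(f"P(success) = {model.predict_proba([[3.9, 7]])[0, 1]:.2f}")
```

Any real model of this kind would need auditing precisely for the proxy effects the table warns about, since excluded attributes can leak back in through correlated features.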