Designing grant-review panels for better funding decisions: Lessons from an empirically calibrated simulation model

  • Thomas Feliciani
  • Michael Morreau
  • Junwen Luo
  • Pablo Lucas
  • Kalpana Shankar

Research output: Contribution to journal › Article › peer-review

Abstract

Objectives: To explore how factors relating to grades and grading affect the correctness of choices that grant-review panels make among submitted proposals, and to identify interventions in panel design that may be expected to increase the correctness of those choices.

Method: Experimentation with an empirically calibrated computer simulation model of panel review. Model parameters are set in accordance with procedures at a national science funding agency. Correctness of choices among research proposals is operationalized as agreement with the choices of an elite panel.

Conclusions: The simulation model generates several hypotheses to guide further research. Increasing the number of grades used by panel members increases the correctness of simulated choices among submitted proposals. Collective decision procedures giving panels a greater capacity for discriminating among proposals also increase correctness. Surprisingly, differences in grading standards among panel members do not appreciably decrease correctness.
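The core mechanism behind the first hypothesis, that coarser grading scales discard information about proposal quality, can be illustrated with a toy Monte Carlo sketch. This is not the authors' calibrated model; all parameter values (panel size, noise level, funding line) are hypothetical, and "correctness" here is simplified to overlap with the noise-free top-k ranking rather than agreement with a simulated elite panel:

```python
import random

def simulate_panel(n_proposals=20, n_reviewers=5, n_grades=5,
                   noise=0.3, k=5, rng=None):
    """One funding round: each reviewer grades every proposal on an
    n_grades-point scale; the panel funds the k proposals with the
    highest mean grade. Returns the fraction of funded proposals that
    are in the 'true' top k (a stand-in for the elite-panel benchmark).
    """
    rng = rng or random.Random()
    # Latent merit of each proposal, uniform on [0, 1] (hypothetical).
    quality = [rng.random() for _ in range(n_proposals)]
    true_top = set(sorted(range(n_proposals),
                          key=lambda i: quality[i], reverse=True)[:k])
    mean_grades = []
    for i in range(n_proposals):
        grades = []
        for _ in range(n_reviewers):
            # Each reviewer perceives merit with Gaussian error...
            perceived = min(max(quality[i] + rng.gauss(0, noise), 0.0), 1.0)
            # ...then rounds it onto a discrete scale with n_grades categories.
            grades.append(round(perceived * (n_grades - 1)))
        mean_grades.append(sum(grades) / n_reviewers)
    chosen = set(sorted(range(n_proposals),
                        key=lambda i: mean_grades[i], reverse=True)[:k])
    return len(chosen & true_top) / k

def mean_agreement(n_grades, reps=500, seed=1):
    """Average correctness over many simulated rounds."""
    rng = random.Random(seed)
    return sum(simulate_panel(n_grades=n_grades, rng=rng)
               for _ in range(reps)) / reps
```

Under these assumptions, comparing `mean_agreement(n_grades=2)` with `mean_agreement(n_grades=10)` shows finer scales recovering a larger share of the genuinely top proposals, because a binary scale collapses distinct perceived merits into the same grade and forces arbitrary tie-breaking at the funding line.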

Original language: English
Article number: 104467
Journal: Research Policy
Volume: 51
Issue number: 4
DOIs
Publication status: Published - May 2022
Externally published: Yes

Keywords

  • Inter-rater reliability
  • Peer review
  • Research evaluation
  • Scoring
  • Social simulation
