In another article, we discussed peer review as a historically established mechanism for ensuring the quality and integrity of scientific research in the selection of research projects for funding and of scientific articles for publication.
Our primary concern was to present the benefits and biases associated with peer review specifically in the context of funding agencies, as well as alternative approaches to evaluation. Although we briefly mentioned more radical options that deviate from peer review, we did not delve into them in detail.
We believe it is crucial to expand the debate on these alternatives, which have been the subject of discussion in specialized literature and are being explored in the management practices of funding agencies globally. This communication aims to further the conversation by discussing some advancements from our ongoing project, “Research on Research and Innovation: Indicators, Methods, and Impact Evidence”.
We have chosen to focus on three specific alternatives: the direct distribution of agency resources to proposal submitters based on equity criteria; the use of bibliometric tools to inform selection decisions; and the implementation of lotteries.
Let us begin with the concept of developing new resource distribution models that challenge the competitive nature of individual project selection. In the literature, we found a model, not yet applied in practice, that proposes replacing peer review in funding agencies with a redistributive resource allocation system (Bollen et al., 2019). In this model, initial criteria are established for program participation, and a group of researchers receives resources equitably. Within agency-defined cycles, each researcher is required to redistribute a fixed percentage of their funding to another researcher within the same group (the agency may also allocate additional resources each cycle). According to the authors, this approach would offer increased transparency, greater potential to identify biases, reduced peer review costs, and a guarantee that all researchers receive some level of funding. The model assumes that researchers, rather than projects, should be funded, and it empowers the research community to allocate resources autonomously, without the mediation of reviewers. In addition to distributing resources, the funding agency would be responsible for establishing governance mechanisms, ensuring transparency, and maintaining financial integrity.
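To make the mechanics concrete, here is a minimal simulation sketch of such a redistributive scheme. The parameter names (give_fraction, agency_topup) and the uniformly random choice of recipient are our own simplifying assumptions for illustration; they are not part of the Bollen et al. (2019) proposal, in which researchers choose whom to fund.

```python
import random

def simulate_redistribution(n_researchers=50, start_fund=100_000.0,
                            give_fraction=0.5, cycles=10, agency_topup=0.0,
                            seed=42):
    """Toy simulation in the spirit of Bollen et al. (2019): everyone starts
    with the same amount and must pass on a fixed fraction of their funds to
    one colleague each cycle. The recipient is chosen uniformly at random here,
    a simplifying assumption; in the actual proposal researchers decide."""
    rng = random.Random(seed)
    funds = [start_fund] * n_researchers
    for _ in range(cycles):
        transfers = [0.0] * n_researchers
        for giver in range(n_researchers):
            amount = funds[giver] * give_fraction
            funds[giver] -= amount
            recipient = rng.choice([r for r in range(n_researchers) if r != giver])
            transfers[recipient] += amount
        # Everyone keeps what they did not give away, plus what they received
        # from colleagues and any per-cycle top-up from the agency.
        funds = [f + t + agency_topup for f, t in zip(funds, transfers)]
    return funds

if __name__ == "__main__":
    final = simulate_redistribution()
    print(f"min={min(final):,.0f}  max={max(final):,.0f}  mean={sum(final)/len(final):,.0f}")
```

Even this toy version illustrates two properties claimed for the model: no researcher ends up with zero funding, and the spread of final budgets depends strongly on how recipients are chosen.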
The authors acknowledge potential issues with this proposal, such as stimulating alternative forms of competition among researchers and creating internal tensions and conflicts. They also note that this model may not be suitable for all types of funding programs, especially large long-term projects that require stable resources.
Another alternative involves using bibliometrics as a foundation for selection decisions in research funding agencies. Bibliometrics entails quantitative analysis of researchers’ scientific production, considering metrics such as the number of publications and citations received, to evaluate the impact and relevance of their work. Utilizing bibliometrics in the selection process offers benefits such as objectivity and reduced costs associated with peer review. Moreover, bibliometric analysis can be performed quickly and automatically, enabling efficient evaluation of numerous research proposals.
Johnson (2020) examined the correlation between peer review outcomes and bibliometric performance indicators, specifically the h-index[1], for approximately 600 researchers from the South African National Research Foundation (NRF) across various biological science disciplines. The study suggested that the h-index can be a useful tool for identifying researchers with greater impact and productivity, but it should not serve as the sole evaluation criterion. Lovegrove and Johnson (2008) also explored the relationship between peer review ratings used by the NRF and several bibliometric measures, including the h-index, m-index[2], g-index[3], total number of citations, and average citations per article. The findings, based on a sample of 163 botany and zoology researchers, indicated that although peer review ratings correlated with bibliometric measures, they explained less than 40% of the variation in the indices. The authors attributed much of this variation to limitations in both the peer review system and bibliometric indices.
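For readers unfamiliar with these indicators, the sketch below shows how the h-index, g-index, and m-index (defined in the footnotes) can be computed from a list of per-publication citation counts. The sample citation counts and the twelve-year career length are invented for illustration.

```python
def h_index(citations):
    """Largest h such that h publications have at least h citations each."""
    cites = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(cites, start=1) if c >= rank)

def g_index(citations):
    """Largest g such that the g most-cited publications have at least g^2 citations in total."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(cites, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

def m_index(citations, years_since_first_publication):
    """h-index divided by career length in years (the m-quotient)."""
    return h_index(citations) / years_since_first_publication

# Hypothetical researcher: 8 papers, 12 years of publishing activity.
cites = [45, 30, 22, 10, 8, 5, 2, 0]
print(h_index(cites), g_index(cites), round(m_index(cites, 12), 2))  # 5 8 0.42
```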
Therefore, the use of bibliometrics in research funding agency selection processes must be approached cautiously. It is crucial to emphasize that bibliometric metrics do not comprehensively capture the quality and innovation of research, and they may be influenced by subject or author biases. Additionally, an exclusive focus on bibliometrics can lead to a reductionist perspective on research, overlooking important qualitative aspects. A more comprehensive approach would involve combining bibliometric analysis with peer reviews and other evaluation criteria, striking a balance between the objectivity of bibliometric metrics and qualitative assessments of scientific merit and proposal originality.
Roumbanis (2019) explores the potential of lotteries as an alternative to traditional peer review for selecting research projects, suggesting that this method could offer a fairer and less costly selection process. The author emphasizes that lotteries should be preceded by an initial screening phase to identify projects meeting the requirements outlined in the calls for proposals. Random selection would then occur from a set of projects already pre-selected based on specific criteria. Roumbanis argues that, apart from the positive effects on the scientific community, lotteries could foster greater diversity in funded projects and lead to the selection of more innovative proposals.
Woods and Wilsdon (2021) share experiences of partial lottery implementation at research funding agencies around the world, gathered through the partnership network of the Research on Research Institute (RoRI) in the UK. These experiments aim to test whether randomization reduces biases and the workload associated with peer review, both for researchers submitting proposals and for the agencies themselves. They also explore perceptions and attitudes within the scientific community toward these alternative approaches and assess the impacts on funded research. The experiments are motivated by the recognition that peer review itself already operates like a “lottery” due to its inherent subjectivity (Fang & Casadevall, 2016).
One notable pilot experiment described by the authors is the Volkswagen Foundation’s Experiment! funding line in Germany, which focuses on high-risk research. Although the line was established in 2012, partial randomization was only introduced in 2017. In this scheme, randomization occurs after an initial screening by the agency and a second assessment by an expert panel, which singles out a few highly rated proposals and rules out the weakest ones. Randomization is applied within the remaining “gray area” of proposals that neither clearly excel nor clearly fail to meet the excellence criteria. Woods and Wilsdon (2021) also report similar partial randomization approaches for equally qualified proposals in the Swiss National Science Foundation’s postdoctoral mobility program and the Austrian Science Fund’s 1000 Ideas program.
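As an illustration of this two-stage logic, the following sketch funds top-rated proposals outright and draws the remaining awards at random from the “gray area”. The score thresholds and the proposal pool are hypothetical and are not taken from any of the programs mentioned above.

```python
import random

def partial_randomisation(proposals, n_awards, direct_cutoff=8.5, floor=6.0, seed=0):
    """Toy two-stage selection: proposals scoring at or above `direct_cutoff`
    are funded outright; remaining awards are drawn at random from the 'gray
    area' of proposals that meet the quality floor but are not clear top picks.
    `proposals` is a list of (proposal_id, panel_score) pairs; the thresholds
    are illustrative, not the values used by any real funder."""
    rng = random.Random(seed)
    direct = [p for p, s in proposals if s >= direct_cutoff]
    gray = [p for p, s in proposals if floor <= s < direct_cutoff]
    remaining = max(0, n_awards - len(direct))
    drawn = rng.sample(gray, min(remaining, len(gray)))
    return direct[:n_awards], drawn

if __name__ == "__main__":
    pool = [("P1", 9.1), ("P2", 8.7), ("P3", 7.9), ("P4", 7.2), ("P5", 6.4), ("P6", 5.1)]
    funded_directly, funded_by_lot = partial_randomisation(pool, n_awards=4)
    print("direct:", funded_directly, "| by lot:", funded_by_lot)
```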
Stafford et al. (2022) discuss the potential of embedding randomized controlled trials (RCTs) within lottery experiments at research funding agencies. RCTs offer a powerful means to obtain robust evidence on the benefits of partial randomization in funding allocation. The authors highlight a distinct advantage of RCTs: the study design can estimate treatment effects on a wide range of outcome measures, such as fairness, time efficiency, diversity, support for high-risk/high-reward projects, and exceptional scientific advances. However, the use of RCTs requires meticulous operational planning and study design to produce reliable results that can establish the benefits or drawbacks of lotteries.
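As a minimal illustration of such a design, the sketch below randomly assigns eligible proposals to a conventional panel arm or a partial-lottery arm. The arm labels, the 50/50 split, and the outcome measures named in the comment are our own illustrative assumptions, not the design discussed by Stafford et al. (2022).

```python
import random

def assign_to_arms(eligible_ids, seed=1):
    """Toy randomised assignment of eligible proposals to two funding-allocation
    arms ('panel' vs 'partial_lottery'). Outcome measures (e.g. reviewer hours,
    applicant diversity, later impact) would be collected per arm after the
    funding cycle; the arm names and 50/50 split are illustrative assumptions."""
    rng = random.Random(seed)
    ids = list(eligible_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"panel": ids[:half], "partial_lottery": ids[half:]}

print(assign_to_arms([f"P{i:03d}" for i in range(10)]))
```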
While the arguments favoring lotteries may point toward less biased procedures in research funding, there is no consensus on the potential and limitations of this selection method. Bedessem (2019), for instance, criticizes the use of randomness on epistemological grounds, arguing that other selection mechanisms are better able to identify innovative, high-quality, and high-impact projects.
Two points deserve attention regarding the presented alternatives. First, it is essential to recognize the potential biases introduced by these practices, as discussed earlier. Each option has its advantages and challenges, and the choice of the most suitable approach will depend on the specific characteristics and goals of each funding agency and program. Second, the legitimacy of these practices, particularly their acceptance within the scientific community, is crucial, considering the longstanding reliance on peer review despite its criticisms.
In conclusion, further research is needed to explore these alternatives, particularly in terms of biases and impacts, through real-world experiences or simulations. Convincing funding agencies and the research community to embrace these changes is a challenging task due to the emerging nature of these practices and their potential consequences for the research system and community.
However, funding agencies must take a leading role in promoting inclusive and participatory discussions on this topic, engaging researchers and stakeholders, to find solutions that ensure quality, equity, and efficiency in research resource allocation. This is a crucial step in adapting evaluation and selection systems to the evolving landscape of scientific research and innovation.
[1] The h-index, also known as the Hirsch index, is a metric used to assess the impact and relevance of a researcher’s publications. It is the largest number h such that h of the researcher’s publications have each received at least h citations, thus combining the number of publications with the citations they receive.
[2] The m-index (or m-quotient) is calculated by dividing the h-index by the researcher’s “scientific age,” that is, the number of years elapsed since their first publication.
[3] The g-index is similar to the h-index but gives more weight to highly cited works: it is the largest number g such that the researcher’s g most-cited publications have together received at least g² citations.
Authors:
Adriana Bin: Faculty member at the Faculty of Applied Sciences (FCA) at Unicamp.
Ana Carolina Spatti: Ph.D. in Science and Technology Policy and researcher at the Laboratory for the Study of Research and Innovation at Unicamp (Lab-GEOPI).
Evandro Cristofoletti Coggo: Researcher at the Department of Science and Technology Policy (DPCT) at Unicamp.
Larissa Aparecida Prevato Lopes: Master’s student in Applied Human and Social Sciences at the Faculty of Applied Sciences (UNICAMP). Fellow at the Laboratory for the Study of Research and Innovation at Unicamp (Lab-GEOPI).
Emily Maciel Campgnolli: Undergraduate student in Architecture and Urbanism at the Faculty of Civil Engineering, Architecture, and Urbanism at the State University of Campinas (FECFAU/UNICAMP).
Raíssa Demattê: Undergraduate student in Architecture and Urbanism at the Faculty of Civil Engineering, Architecture, and Urbanism at the State University of Campinas (FECFAU/UNICAMP). Member of the Laboratory for the Study of Research and Innovation (Lab-GEOPI).
References
Bedessem, B. (2019). Should we fund research randomly? An epistemological criticism of the lottery model as an alternative to peer-review for the funding of science. Research Evaluation, 29(2): 150-157.
Bollen, J.; Carpenter, S.R.; Lubchenco, J.; Scheffer, M. (2019). Rethinking resource allocation in science. Ecology and Society, 24(3): 29.
Fang, F.C.; Casadevall, A. (2016). Research funding: the case for a modified lottery. mBio, 7(2): e00422-16.
Johnson, S.D. (2020). Peer review versus the h-index for evaluation of individual researchers in the biological sciences. South African Journal of Science, 116(9-10): 1-5.
Lovegrove, B.G.; Johnson, S.D. (2008). Assessment of Research Performance in Biology: How Well Do Peer Review and Bibliometry Correlate? BioScience, 58(2): 160-164.
Roumbanis, L. (2019). Peer review or lottery? A critical analysis of two different forms of decision-making mechanisms for allocation of research grants. Science, Technology, & Human Values, 44(6): 994-1019.
Stafford, T.; Rombach, I.; Hind, D.; Mateen, B.; Woods, H.B.; Dimario, M.; Wilsdon, J. (2022). Where next for partial randomisation of research funding? The feasibility of RCTs and alternatives (RoRI Working Paper No. 9). Research on Research Institute. Report.
Woods, H.B.; Wilsdon, J. (2021). Experiments with randomisation in research funding: scoping and workshop report (RoRI Working Paper No. 4). Research on Research Institute. Report.