By Giorgia Guglielmi

Nature, January 26, 2018

Female scientists suffer when their research proposals are judged primarily on the strength of their CVs.

Women lose out when reviewers are asked to assess the researcher, rather than the research, on a grant application, according to a study on gender bias. Training reviewers to recognize unconscious biases seems to correct this imbalance, despite previous work suggesting that such training can instead increase bias.

The findings were posted last month on the bioRxiv [1] preprint server and are currently in review at a journal. They came out of a 2014 decision by the Canadian Institutes of Health Research (CIHR) to phase out conventional grant programmes, in which reviewers evaluated both the science and the investigator. Instead, the CIHR started one programme that focused its evaluation on the applicants and another that focused mostly on their research. This created a natural experiment that allowed scientists to analyse the outcome of nearly 24,000 grant applications and to test whether funding differences were due to the quality of the applicants’ research or to biased assessments of their gender.

Past studies have looked at gender inequalities in grant funding, but most examined grant programmes that didn’t separate their application pool like the CIHR programmes. Some also didn’t consider other factors, such as whether research fields had different ratios of male to female scientists [2]. The new analysis, which took into account applicants’ research areas and age — a proxy for career stage — allowed the study authors to draw “more robust conclusions”, says Holly Witteman, a health-informatics researcher at Laval University in Quebec City, Canada, who led the study.

Witteman and her colleagues calculated that, of all the applications submitted to CIHR grant programmes between 2011 and 2016, 15.8% were successful. In the conventional grant programmes, the success rate for male applicants was 0.9 percentage points higher than the rate for female applicants. When the team analysed the CIHR grant programme that focused on the researchers’ science, the gap in success rates was the same as in the traditional programmes. But in the grant programme with an explicit focus on the applicants’ previous research and qualifications, the success rate for male applicants was 4 percentage points higher than for female applicants. “That’s a significant difference,” Witteman says.

A random act

However, Witteman warns that the study was not randomized, meaning that there may be differences between male and female applicants, such as their publication records, that might help to explain the different success rates. Her team was unable to account for those factors because it didn’t have access to the data.

“That’s a big problem,” says Beate Völker, a social scientist at the University of Amsterdam. In the CIHR experiments, “there would be a bias if two people publish the same papers and one is preferred to the other”, she says. It would be relatively easy to test this by looking at the number and quality of publications for each applicant. But until the researchers collect and analyse such data, the bias is “unproven”, Völker says.

Donna Ginther, an economist at the University of Kansas in Lawrence, who analysed racial bias in grant programmes at the US National Institutes of Health [3], echoes this concern. But she says it’s interesting that the gender differences in funding outcomes disappeared after the CIHR implemented new policies, which included asking reviewers to complete a training module about unconscious bias.

Previous work, Ginther notes, has suggested that such training can stir up biases and be counterproductive [4]. The effects of the new CIHR policies suggest the opposite: in the 2016–17 grant cycle, female scientists were as successful as men in both the science- and person-focused grant programmes. “It would be helpful to know what kind of training it was,” Ginther says.

The CIHR is committed to eliminating bias against women and minorities by educating and evaluating reviewers, says Robyn Tamblyn, an epidemiologist at McGill University in Montreal, Canada, and scientific director of the CIHR Institute of Health Services and Policy Research. “We’re just at the beginning,” she says.

Witteman now plans to look at the reviewer training module to see whether it might help to reduce biases.