Peer review and gender bias: a study on 145 scholarly journals
A common explanation for why women publish less than men is that peer review is biased against them. It is a plausible theory — and according to the largest study ever conducted on the question, it is wrong.
The scale
Published in Science Advances in 2021, the study analyzed 348,223 submissions to 145 scholarly journals between 2010 and 2016. It covered approximately 1.7 million authors and 745,693 referees who collectively wrote more than 740,000 reviews. The journals spanned biomedicine, health sciences, physical sciences, social sciences, and life sciences. Gender was assigned using a multi-step process that included the gender-guesser Python library, Gender API, and salutation analysis, successfully classifying 77% of authors and 82% of referees.
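A multi-step pipeline like the one described can be sketched in a few lines. The study chained the gender-guesser Python library, the Gender API service, and salutation analysis; the version below stubs each step with a tiny lookup table so the control flow is runnable. All names, tables, and helper functions here are illustrative, not the study's actual code.

```python
# Hypothetical stand-ins for the pipeline's data sources.
NAME_TABLE = {"maria": "female", "john": "male", "andrea": "unknown"}
SALUTATIONS = {"mr": "male", "ms": "female", "mrs": "female"}

def classify(first_name, salutation=None):
    """Try each step in order; stop at the first confident answer."""
    # Step 1: name dictionary (stands in for the gender-guesser library).
    guess = NAME_TABLE.get(first_name.lower(), "unknown")
    if guess != "unknown":
        return guess
    # Step 2: external lookup (stands in for Gender API) - omitted here.
    # Step 3: salutation from submission metadata.
    if salutation:
        return SALUTATIONS.get(salutation.lower().rstrip("."), "unknown")
    return "unknown"

print(classify("Maria"))          # female
print(classify("Andrea", "Mr."))  # male (ambiguous name, salutation decides)
print(classify("Xun"))            # unknown
```

Names that fall through every step stay unclassified, which is how the study ends up labeling 77% of authors and 82% of referees rather than 100%.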
The gender breakdown
The author pool was 75% male and 25% female. Referees were even more skewed: 79% male, 21% female. The imbalance varied by field: the social sciences had 38% women among authors, the physical sciences just 19%.
What they found
The headline finding ran against expectation. Manuscripts by women were not penalized in peer review. They were, if anything, treated more favorably.
In biomedicine and the health sciences, manuscripts by women had an acceptance probability about 5% higher than those by men. In the physical sciences the advantage was smaller, roughly 1.5%, but pointed in the same direction. Women referees also gave more favorable recommendations in most fields.
The researchers tested three possible channels for bias: whether editors assigned different referees to women's papers, whether referees scored women's papers differently, and whether editors overrode referee recommendations differently for women. None of the three showed systematic bias against women.
What it means
The study did not find that academic publishing is gender-neutral. Women are still dramatically underrepresented as authors, especially in senior positions. But the bottleneck does not appear to be peer review itself. The publication gap is driven by factors upstream — who enters the field, who stays, who gets funding, who gets mentored into positions that produce publishable research — not by what happens once a manuscript lands on a referee's desk.
This is an important distinction. If the problem were biased reviewers, the fix would be reviewer training or double-blind review. If the problem is structural — hiring, funding, retention — then fixing peer review alone changes nothing. The study's findings push the conversation toward the harder, more systemic interventions.
The counterintuitive angle
For Genderize.io as a product, this study matters because it demonstrates that name-based gender inference at massive scale can produce findings that challenge assumptions rather than confirming them. The value of the tool is not in telling people what they expect to hear — it is in letting the data speak.
Author
Flaminio Squazzoni et al.
Year
2021
Categories
Original article
https://www.science.org/doi/10.1126/sciadv.abd0299