Call for Transparency and Reproducibility in AI Research

Oct 16, 2020

International scientists are challenging their colleagues to make Artificial Intelligence (AI) research more transparent and reproducible to accelerate the impact of their findings for cancer patients.

In an article published in Nature on Oct. 14, scientists at the University of Toronto, Princess Margaret Cancer Centre, Stanford University, Johns Hopkins University, Harvard University School of Public Health, Massachusetts Institute of Technology, and others challenge scientific journals to hold computational researchers to higher standards of transparency, and call on their colleagues to share their code, models and computational environments in publications.

“Scientific progress depends on the ability of researchers to scrutinize the results of a study and reproduce the main findings to learn from them,” says Dr. Benjamin Haibe-Kains, who is jointly appointed as Associate Professor in Medical Biophysics at the University of Toronto and an affiliate at the Vector Institute for Artificial Intelligence.

“But in computational research, it’s not yet a widespread criterion for the details of an AI study to be fully accessible. This is detrimental to our progress.”

Haibe-Kains is also a Senior Scientist at Princess Margaret Cancer Centre and first author of the article.

The authors voiced their concern about the lack of transparency and reproducibility in AI research after a Google Health study by McKinney et al., published in a prominent scientific journal in January 2020, claimed an AI system could outperform human radiologists in both robustness and speed for breast cancer screening.

The study made waves in the scientific community and created a buzz with the public, with headlines appearing in BBC News, CBC and CNBC.

A closer examination raised some concerns: the study lacked a sufficient description of the methods used, including the code and models. This lack of transparency prevented researchers from learning exactly how the model works and how they could apply it at their own institutions.

“On paper and in theory, the McKinney et al. study is beautiful,” says Haibe-Kains, “but if we can't learn from it then it has little to no scientific value.”

According to Haibe-Kains, this is just one example of a problematic pattern in computational research.

“Researchers are more incentivized to publish their findings than to spend time and resources ensuring their study can be replicated,” explains Haibe-Kains. “Journals are vulnerable to the ‘hype’ of AI and may lower their standards by accepting papers that don’t include all the materials required to make the study reproducible – often in contradiction to their own guidelines.”

This can slow the translation of AI models into clinical settings, because researchers are unable to learn how a model works and replicate it in a thoughtful way. In some cases, it could lead to unwarranted clinical trials, because a model that works for one group of patients or at one institution may not be appropriate for another.

In the article, titled “Transparency and reproducibility in artificial intelligence,” the authors point to numerous frameworks and platforms that allow safe and effective sharing of data, computer code and predictive models – the three pillars of open science – to make AI research more transparent and reproducible.

“We have high hopes for the utility of AI for our cancer patients,” says Haibe-Kains. “Sharing and building upon our discoveries -- that’s real scientific impact.”

