Framework Stresses Responsible Machine Learning: “Healthcare Is Not Immune to Pernicious Bias”

Aug 21, 2019
Author: 
Gabrielle Giroday

Anna Goldenberg (Photo courtesy of U of T department of computer science)


In the first published guidelines for responsible machine learning in healthcare, experts from around the world – including faculty at U of T and the Vector Institute – are calling for interdisciplinary teams as a starting point.

“The majority of [machine learning] solutions are currently being developed in silos, away from the real-world clinical problems and settings that these [machine learning] models will actually impact,” says Anna Goldenberg, an assistant professor of computer science at U of T and associate research director of health at the Vector Institute.

“Our guidelines provide a framework within which many issues stemming from the complexity of adopting [machine learning] in health care in particular can be avoided.”

Goldenberg is senior author of Do no harm: a roadmap for responsible machine learning for health care, published in Nature Medicine this week. She is also senior scientist in genetics and genome biology at the Hospital for Sick Children, co-chair of Artificial Intelligence in Medicine for Kids at the hospital, and the Lebovic Fellow in CIFAR's Child & Brain Development program.

The paper recommends that deployment of machine learning in health care involve interdisciplinary teams: knowledge experts, such as clinicians and machine-learning researchers; decision-makers, such as hospital administrators and regulatory agencies; and users of machine learning, such as nurses, physicians, patients, and patients’ friends and families.

“Health care is not immune to pernicious bias. The health data on which algorithms are trained are likely to be influenced by many facets of social inequality, including bias toward those who contribute the most data,” the paper states.

Co-author Marzyeh Ghassemi, assistant professor in the departments of computer science and medicine, points out that machine learning work can be presented as if “a model on its own is a solution – and most problems in human health are not really solvable by a model.”

Marzyeh Ghassemi (Photo provided by Marzyeh Ghassemi)

To successfully identify a solution to a problem, researchers must recognize health care delivery is a process, and not a fixed point.

“You have to engage in the fact that health care is a process, it’s not a static data set that you can pull once, train a model on, and deploy,” says Ghassemi, who also holds the CIFAR AI Chair at the Vector Institute.

“It’s an ongoing process where labels and definitions of clinical conditions can and do change. Populations can shift, treatments and different locations for different groups can vary. I think there is a lot of careful thought that needs to go into deployable solutions, which is very separate from creating an interesting machine learning model.”

A machine learning model can be promising from a technical perspective, she says, but an ultimately successful solution must meet a wider set of objectives.

“We tried to focus on things you might not think about initially: choosing the right problems, making sure the solution is useful, really rigorously thinking through the ethical implications of deployment, and evaluation. Evaluation is particularly challenging because you have to thoughtfully report your results, and then think through the caveats for responsible deployments.”

Ghassemi says it’s important to think through the ethical impacts of machine learning that’s developed. A developer’s approach will vary, depending on their background.

“In those with a really strong technical background, what I often try to emphasize is the thoughtful reporting of results, and the ethical implications of a deployment,” she says. “In a technical setting, often we already emphasize really rigorous evaluation and choosing an appropriate problem.”

That approach shifts with a different audience.

“If somebody has a more clinical background and already lives and breathes the ethical implications of what they’re doing, I would emphasize the other facets,” she says. “Especially with the availability of downloadable models, the goal should be to ensure that the technical solution you come up with is useful across different patients, that it’s possible to generalize it to your setting and problem.”
