May 27, 2016

Start-up Sifts through Speech for Signs of Decline

[Photo: the Winterlight Labs team]
By Carolyn Morris

Research shows that subtle changes in speech are one of the first symptoms of cognitive decline. And yet, the assessments currently used to evaluate cognitive status may not pick up on these nuances. This is where computer algorithms can step in, according to Toronto Rehabilitation Institute scientist and computer science professor Frank Rudzicz. He spoke with Faculty of Medicine writer Carolyn Morris about his health-care-focused start-up, Winterlight Labs.

How can computers help detect signs of cognitive decline early?

When it comes to dementia, some of the earliest signs of cognitive decline are reflected in very subtle — but still measurable — differences in speech. Through algorithms that identify things like the grammatical structure of sentences, combined with speech-recognition software, we can analyze a small sample of speech and determine whether there is cognitive impairment. We’ve developed a tablet-based assessment in which people are asked to describe what’s happening in an image, speaking for anywhere from one to five minutes. We then extract over 400 variables from these recordings, quantify the results and determine cognitive status. We can do this based on population-level data, and we’re also looking to do it on a personalized level, analyzing change over time. Computers have this special ability to sort through reams of data and pull out very subtle differences in ways that humans just don’t have time for.
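To make the pipeline concrete, here is a minimal sketch in Python of the kind of analysis described above. It is not Winterlight's implementation: the four features, the toy labelled samples and the scikit-learn logistic-regression classifier are all illustrative assumptions standing in for the 400-plus variables and the real training data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def extract_features(transcript: str, duration_s: float, pause_count: int) -> np.ndarray:
    """A handful of the hundreds of variables an assessment might extract."""
    words = transcript.lower().split()
    n = max(len(words), 1)
    return np.array([
        len(words) / duration_s,         # speech rate (words per second)
        pause_count / n,                 # pause density
        len(set(words)) / n,             # type-token ratio (lexical diversity)
        sum(len(w) for w in words) / n,  # mean word length (a complexity proxy)
    ])

# Toy labelled samples (0 = control, 1 = impaired); for illustration only.
X_train = np.vstack([
    extract_features("there is a kid on a stool trying to steal cookies", 8.0, 1),
    extract_features("the kid he gets it um it falls", 12.0, 5),
])
y_train = np.array([0, 1])

clf = LogisticRegression().fit(X_train, y_train)

new_sample = extract_features("she is washing dishes and the sink overflows", 7.0, 2)
print(clf.predict_proba(new_sample.reshape(1, -1)))  # [P(control), P(impaired)]
```

In practice the classifier would be trained on feature vectors from many labelled speech samples, and the personalized, longitudinal version Rudzicz mentions would compare a person's vector against their own earlier recordings rather than against population norms.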

What sort of signs or variables are you measuring?

We measure things like pauses or hesitation in speech, the complexity of the words and grammar used, as well as inferences and levels of sophistication in the interpretation of the image. When it comes to word choice and grammar, for example, one of the signs of cognitive impairment would be the use of simple verbs rather than gerunds — so “the kid runs” instead of “there’s a kid running.” People with early signs of dementia will often use pronouns instead of more specific nouns: “she is washing dishes” instead of “the mother is washing dishes.” Then there’s the interpretation of the images. For example, in what we call the “cookie theft” image there’s a woman in a kitchen, an overflowing sink and kids reaching up to steal cookies. Someone with cognitive impairment might notice “a kid on a stool,” but not take the next step to point out that “the son is trying to steal cookies.” Or they’ll comment on the overflowing sink, but not on the woman failing to notice it.
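As a rough illustration of how a few of these markers could be counted with off-the-shelf NLP tools (again, not Winterlight's code), the sketch below uses spaCy's part-of-speech tags. It assumes spaCy and its small English model en_core_web_sm are installed, and the marker names are hypothetical.

```python
import spacy

# Assumes the small English model is installed:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def linguistic_markers(transcript: str) -> dict:
    """Count a few of the lexical and grammatical markers described above."""
    doc = nlp(transcript)
    words = [t for t in doc if not t.is_punct]
    n = max(len(words), 1)
    return {
        # pronouns standing in for specific nouns ("she" vs. "the mother")
        "pronoun_ratio": sum(t.pos_ == "PRON" for t in words) / n,
        "noun_ratio": sum(t.pos_ == "NOUN" for t in words) / n,
        # gerunds / present participles ("there's a kid running")
        "gerund_ratio": sum(t.tag_ == "VBG" for t in words) / n,
    }

print(linguistic_markers("She is washing dishes."))
print(linguistic_markers("The mother is washing dishes."))
```

Run on the two example sentences, the pronoun ratio falls to zero when “she” is replaced by “the mother”, mirroring the substitution described above.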

How will this tool improve health?

Over 47.5 million people have dementia globally, and that number’s expected to triple by 2050. Behavioural and psychiatric symptoms of dementia are prevalent in Alzheimer’s disease. Without a way to assess cognitive state in a continuous, quantifiable way in assisted-living settings, we risk being unable to plan appropriate care, and we risk putting an increased burden on caregivers. Our assessment is quick and repeatable, so we think it will both reduce the workload of healthcare providers and deepen the quantifiable analysis available to them. Ultimately, it will help millions of older adults maintain quality of life through more personalized care.

How did you bring your ideas from the realm of research to the start-up world?

I’ve been doing research in computational linguistics and artificial intelligence for many years as a scientist at the Toronto Rehabilitation Institute and as a status professor in Computer Science at U of T. For the past three years we’ve been focused on this particular idea: combining speech technology with artificial intelligence. I’ve been working with PhD student Kathleen Fraser, who researches the automatic detection of dementia and aphasia, and master’s student Maria Yancheva, who focuses on the longitudinal detection of dementia and on information content in speech. More recently, entrepreneur and software developer Liam Kaufman joined our team and is pushing us forward from a business perspective.

We’ve gotten a lot of support and guidance from Rotman’s Creative Destruction Lab, the Department of Computer Science Innovation Lab and the Health Innovation Hub. We’re also grateful to U of T’s Innovation and Partnerships Office and the team at the national research network AGE-WELL NCE. The network of hospitals connected to U of T is proving invaluable for clinical advice, for data and for its growing recognition of the importance of entrepreneurship. We’ve also drawn on technical expertise from Graeme Hirst in computer science, and on clinical expertise from Regina Jokel in speech-language pathology, Sandra Black in neurology and Jed Meltzer in psychology.

What’s next for Winterlight Labs?

We’re currently collecting additional data to further validate our method in some specific “corner cases,” which include the most subtle or complex diagnoses. We’re also working with community partners to ensure that access to the technology is maximally beneficial. We’re planning a funding round in the summer to expand the project, and we couldn’t be more excited.