A major advance in detecting COVID-19 from the way people cough could pave the way to a new generation of diagnostic mobile phone apps.
New research by computer scientists at RMIT University, Australia, reveals an AI model that can hear the effects of COVID in the sound of a forced cough, even when people are asymptomatic.
Study lead author Dr. Hao Xue said with further development, their algorithm could power a diagnostic mobile phone app.
“We’ve overcome a major hurdle in the development of a reliable, easily-accessible and contactless preliminary diagnosis tool for COVID-19,” said Xue, Research Fellow in RMIT’s School of Computing Technologies.
“This could have significant benefit in slowing the spread of the virus by those who have no obvious symptoms.
“A mobile app that can give you peace of mind during community outbreaks or prompt you to seek a COVID test—that’s the kind of innovative tool we need to better manage this pandemic.
“It could also make a significant difference in regions where medical supplies, testing experts and personal protective equipment are limited.”
Xue said the method they developed could also be extended for other respiratory diseases.
“With just a little tweaking and suitable data, we could use this to test for tuberculosis or other respiratory illnesses, or even design it as a combined multi-disease detection and classification system.”
A major advance in AI training
This is not the first COVID cough classification algorithm to be developed, but the RMIT model outperforms existing approaches and has another major advantage that makes it more practical to use across different regions—the way it learns.
Study co-author Professor Flora Salim said previous attempts to develop this type of technology, such as those at MIT and Cambridge, relied on huge amounts of meticulously labeled data to train the AI system.
“The annotation of respiratory sounds requires specific knowledge from experts, making it expensive and time-consuming, and involves handling sensitive health information,” she said.
“Using a narrowly targeted data set—such as cough samples from one hospital or one region—to train the algorithm also limits its performance outside that setting.”
Salim said it was this limitation that had proven a challenge for this technology’s practical application in the real world, until now.
“What’s most exciting about our work is we have overcome this problem by developing a method to train the algorithm using unlabelled cough sound data,” she said.
“This can be acquired relatively easily and at larger scale from different countries, genders and ages.”
During the pandemic, many crowdsourcing platforms have been set up to gather respiratory sound recordings from both healthy and COVID-19-positive groups for research purposes.
The team accessed datasets from two of these platforms—COVID-19 Sounds App and COSWARA—to train the algorithm using contrastive self-supervised learning, a method by which a system works independently to encode what makes two things similar or different.
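As a rough illustration of the contrastive idea (not the study's actual model), the sketch below pairs two augmented "views" of the same unlabeled recording and scores them with an NT-Xent (InfoNCE) style loss: views of the same sample are pulled together, everything else in the batch is pushed apart. The waveforms, the noise-jitter augmentation, and the two-statistic encoder here are simplified stand-ins invented for the example.

```python
import math
import random

def augment(signal, noise_scale=0.05, rng=random):
    """Toy augmentation: jitter a waveform with small random noise."""
    return [x + rng.uniform(-noise_scale, noise_scale) for x in signal]

def normalize(vec):
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def embed(signal):
    """Stand-in encoder: two summary statistics instead of a trained network."""
    n = len(signal)
    mean = sum(signal) / n
    energy = sum(x * x for x in signal) / n
    return normalize([mean, energy])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def nt_xent_loss(z_i, z_j, batch, temperature=0.5):
    """NT-Xent (InfoNCE) loss: reward similarity between the two views of
    one recording relative to z_i's similarity to the rest of the batch."""
    pos = math.exp(dot(z_i, z_j) / temperature)
    denom = sum(math.exp(dot(z_i, z) / temperature)
                for z in batch if z is not z_i)
    return -math.log(pos / denom)

# Two unlabeled "coughs" (synthetic waveforms), two augmented views of each.
rng = random.Random(0)
cough_a = [math.sin(0.1 * t) for t in range(100)]
cough_b = [math.sin(0.3 * t) for t in range(100)]
batch = [embed(augment(s, rng=rng)) for s in (cough_a, cough_a, cough_b, cough_b)]

# Views of the same cough should incur a lower loss than a mismatched pair.
matched = nt_xent_loss(batch[0], batch[1], batch)
mismatched = nt_xent_loss(batch[0], batch[2], batch)
```

In a real pipeline the encoder would be a neural network updated by gradient descent on this loss, so that useful representations emerge from unlabeled audio alone; labels are only needed later, for a small classifier on top.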
The team are open to collaborating with potential partners on developing the technology and expanding its application for a range of respiratory diagnostic tools.
“Exploring Self-Supervised Representation Ensembles for COVID-19 Cough Classification” is being presented at the data science conference KDD 2021 in Singapore, August 14–18.
Smart diagnostics: AI tech can hear COVID in a cough (2021, June 17)
retrieved 17 June 2021