Google and Stanford University researchers are working together to develop AI technology that can listen to conversations between doctors and patients and produce relevant notes from them.
The technology is based on the kind of speech and voice recognition already used in products like Google Home devices. The system being developed with Stanford University is being trained to recognize key medical terms as it ‘listens’ to patient-doctor conversations, and to produce the kinds of notes that would normally be kept by trained medical professionals. The aim is to automate the time-consuming note-taking that doctors already perform, important ancillary work that can be a drain on resources: Google’s Brain research team estimates that the average doctor spends six hours a day dealing with electronic health records.
Google isn’t the only company working to solve this problem. Nuance Communications, which launched a speech recognition system for recording patient stories back in the autumn of 2015, recently adapted its technology to a new virtual assistant tool in its Dragon Medical platform.
Google and Stanford’s researchers appear to be aiming for a particularly sophisticated solution that can not only record patient stories but also summarize them in note form.
At the moment, the researchers’ technology reportedly has an error rate of around 20 percent, but they continue to train the AI system with the aim of making it reliable enough for clinical use.
Sources: AndroidHeadlines, 9to5Google