
Speech recognition solutions could tackle the administrative burden of clinical documentation and help deliver better-informed patient treatment

The impact of artificial intelligence is being felt across almost every industry, from banking to agriculture, but healthcare is an area where we all hold a stake. Modern technologies are already transforming healthcare delivery and improving the productivity and efficiency of patient care, but a new generation of conversational AI solutions is now taking that transformation to the next level, redefining the future of healthcare as we know it and pulling us all – doctors and patients alike – into a new chapter.

It all starts with the voice

Over the last ten years, speech recognition solutions have come on in leaps and bounds – so much so that clinicians have turned to the technology to help them tackle the ever-rising administrative burden of clinical documentation, a key contributor to the growing problem of clinician burnout. Research released by Nuance earlier this year revealed that the average clinician spends 13.5 hours per week on clinical documentation, up more than 25% from 7 years ago. To make matters worse, 3.2 of those hours fall outside working hours, forcing healthcare professionals to give up their personal time.


Modern technologies, such as speech recognition, can be used to help relieve some of the administrative pressure on clinicians, enabling them to work more efficiently and intelligently. These technologies are designed to recognise speech passages and convert them into detailed clinical notes directly in the electronic patient record, no matter how fast they are delivered. By reducing duplication and supporting cross-departmental standardisation, they can also improve the accuracy and quality of patient records.
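To make the idea concrete, here is a minimal sketch of the kind of post-processing a dictation system performs: converting spoken punctuation commands in a transcript into formatted text and filing the result as a note in a patient record. All names here (`render_dictation`, `PatientRecord`) are illustrative, not part of any real product's API, and real systems such as Dragon Medical One do far more (acoustic modelling, medical vocabularies, EPR integration).

```python
from dataclasses import dataclass, field
from datetime import datetime

# Spoken punctuation commands are a genuine feature of medical dictation;
# this mapping is a toy subset for illustration only.
SPOKEN_COMMANDS = {
    " full stop": ".",
    " comma": ",",
    " new paragraph": "\n\n",
}

def render_dictation(raw: str) -> str:
    """Convert spoken punctuation commands in a transcript into text."""
    text = raw
    for spoken, symbol in SPOKEN_COMMANDS.items():
        text = text.replace(spoken, symbol)
    return text

@dataclass
class PatientRecord:
    """A stand-in for an entry in an electronic patient record."""
    patient_id: str
    notes: list = field(default_factory=list)

    def add_note(self, author: str, transcript: str) -> None:
        # File the rendered dictation directly against the record,
        # timestamped, as a dictation workflow would.
        self.notes.append({
            "author": author,
            "created": datetime.now().isoformat(timespec="seconds"),
            "text": render_dictation(transcript),
        })

record = PatientRecord("NHS-0001")
record.add_note(
    "Dr Example",
    "Patient reports mild chest pain full stop No shortness of breath full stop",
)
print(record.notes[-1]["text"])
# Patient reports mild chest pain. No shortness of breath.
```

The point of the sketch is the workflow, not the string handling: dictated speech becomes a structured, attributed, timestamped note in the record with no typing step in between.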

With so much increasingly complex clinical documentation to get through, the ability to create detailed, accurate documents using their voice alone has given clinicians back hours in their working day. Those hours quickly stack up over weeks and months, helping them see more patients and achieve a better work/life balance.

Improving documentation practices and reducing burnout

We’re living in the golden age of speech recognition, and many healthcare organisations are already taking advantage of it to improve documentation practices and reduce burnout. For example, Frimley Health NHS Foundation Trust is on a journey of digital transformation, with the ultimate goal of delivering exceptional patient care. It started that journey with more than 200 legacy systems and relied upon transcription services and handwritten reports for document and letter creation. Today, it’s well on its way to replacing them with a single system based on the Epic EPR.


With Nuance’s Dragon Medical One solution, healthcare professionals at the Trust are able to use their own smartphones, along with the PowerMic Mobile app, as a dictation tool to create clinical documentation and navigate the EPR. By listening in to relevant patient–clinician conversations, the solution can capture the necessary information automatically, so a doctor doesn’t have to. This lets the doctor turn away from the computer screen and give their entire focus to what the patient is saying, improving the quality of both patient care and the note-taking.

Diagnosis through voice

The most exciting thing is that this is only the beginning. Speech is a more complex system than we might first realise. When we talk, we use our lungs, vocal cords, tongue, lips, nasal passages and brain. In fact, there are more than 2,500 biomarkers in the sub-language of human speech, all of which offer diagnostic clues that can provide insight into various aspects of our health and well-being.

Data scientists, researchers, and AI developers are working on ways that we can use sensory and signal data in patient voice samples to detect key warning signs for disease, injury or mental health concerns. It could even help to identify the social determinants of health (SDoH) — the more subtle factors that feed into the health and wellness of an individual. This includes things like socioeconomic status, employment, food security, education, and community cohesion that can have a profound impact on healthcare outcomes.

Monitoring patient health for future practice

If future speech recognition technologies could capture SDoH insights from conversations, they could help to mitigate the effects of those factors on patient populations. With greater awareness of SDoH, clinicians and other stakeholders may be able to make better-informed decisions about patient treatment and offer better support. And if healthcare organisations can identify and incorporate SDoH into patient care plans to treat the whole person and not just a disease, patient healthcare outcomes could greatly improve.

There’s also an exciting opportunity to use these technologies to monitor patient health and provide treatment in the home. For example, gait analysis using a combination of wearable devices, cameras, depth sensors, radar, and acoustic sensors could enable the detection of Parkinson’s disease years sooner than current methods allow.

As the examples here show, modern technologies are transforming healthcare right now – and the future possibilities for them to reimagine the industry are both exciting and extensive. What’s more, by changing the experience of care for clinicians and patients alike, they can help both groups lead happier, healthier lives.

This piece was written by Dr Simon Wallace, Chief Clinical Information Officer at Nuance
