Microsoft recently purchased the speech recognition software company Nuance ("Microsoft completes acquisition of Nuance, ushering in a new era of outcomes-based AI") for $16 billion.
At first glance, this looks like a good thing for both healthcare and physicians. Nuance is one of the leaders in speech recognition software within healthcare, and with Microsoft behind them, there is a real chance this could improve EHR friendliness and performance.
However, Microsoft is one of the leading AI powerhouses in the world, and as such, it’s unlikely they will sit back and focus on simply making the EHR a more useful tool for physicians. It’s much more likely they have their sights on something much larger: using AI to analyze the emotional status of both patients and physicians. Keep in mind that healthcare organizations will have both the complete medical record of the patients and the complete employment history of the physicians. Why not combine these sets of information with the rich, lovely data sets contained within their newly acquired EHR-enabled voice data, play around a little bit, and see what pops out?
The answer to this question concerns me.
Why?
Because this will be done, and it will be done without the explicit consent of the patients or the physicians.
Look, it’s one thing for physicians, face to face with a patient, to decide to actively use their psychological assessment skills to help a specific patient. For example, I think it is well within the explicit and implicit contract between physician and patient, in a face-to-face interaction, for the physician to decide that this specific patient being seen for a sore throat needs an assessment for suicidal ideation. But it is a whole other thing for software developers in Redmond, Washington to decide, without explicit consent, that every patient with a sore throat (indeed, every patient) should be screened for suicidal ideation.
Also, there is the question of whether it’s ethical for an employer to assess the emotional status of physicians using this software. (Hint: it’s not).
Anyway, my prediction? When this voice AI emotional evaluation software comes out, there will be lots of chit-chat about privacy, technology, equity, and algorithms, but there will be no discussion about the ethics of not getting explicit informed consent from either the patients or the physicians. Big Data and AI’s data sources, such as physicians and patients, have few rights when it comes to being mined.