Developing deep technical expertise in artificial intelligence (AI) requires years of study and practice.

It requires a solid background in mathematics and experience with programming, and it requires practical experience (and time) applying these skills to large data sets on large systems, potentially spending a couple of hundred hours a month, month after month, year after year, climbing the expertise ladder, constantly narrowing oneself within this complex domain. Not exactly the optimum environment for extensive reading in ethics.

And guess what this means in practice? That to develop deep expertise in AI, you have to be fully invested in the AI/Big Data ecosystem.

So, you may ask, what’s wrong with that? Isn’t it necessary to have these folks on AI ethics committees explain the technology?

Perhaps, but I think their service on AI ethics committees is problematic, especially within healthcare institutions.

For example, consider one of the bigger, but rarely asked, ethical questions within Healthcare AI: Is it ethical to use patients' data if it has been de-identified?

The yet-to-be-realized health benefits of Electronic Health Record (EHR) analysis require extracting vast amounts of data from tens of thousands (if not hundreds of thousands) of patients' charts, all done on the assumption that it is ethical to use patients' data for these efforts if that data has been de-identified.
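To make the premise concrete: in practice, de-identification typically means stripping direct identifiers (names, record numbers, birth dates, contact details) from each chart before analysis. Here is a minimal illustrative sketch in Python; the record structure and field names are hypothetical, invented for illustration rather than drawn from any real EHR schema:

```python
# Minimal illustrative sketch of record de-identification.
# The field names below are hypothetical; real de-identification
# (e.g., the HIPAA Safe Harbor method) covers 18 identifier
# categories and is considerably more involved.

DIRECT_IDENTIFIERS = {
    "name", "address", "phone", "email",
    "ssn", "medical_record_number", "date_of_birth",
}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed.

    Note what survives: diagnoses, labs, medications, i.e., the
    clinically useful data the analysis (and the ethical question)
    is actually about.
    """
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

patient_chart = {
    "name": "Jane Doe",
    "medical_record_number": "MRN-0042",
    "date_of_birth": "1961-03-14",
    "diagnoses": ["type 2 diabetes"],
    "hba1c": 7.9,
}

print(deidentify(patient_chart))
# {'diagnoses': ['type 2 diabetes'], 'hba1c': 7.9}
```

Note that nothing in this process involves the patient: the chart is transformed, but no new consent is sought.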

But have the patients themselves given consent for this, or, more specifically, have they given informed consent?

Pretty unlikely, especially considering that when many of these patients were required to sign up for their EHRs, these AI efforts had yet to even be envisioned!

However, my more specific concern is this:

Can a technical expert in Healthcare AI, whose entire career, field, and institution are built on the premise that using de-identified data is ethically acceptable, really challenge that premise?

It’s difficult. I believe these folks are good people, with good intentions, but a serious ethical discussion by Healthcare AI technical experts challenging the use of de-identified patient data at institutions such as Google Health (or Amazon or Apple or Microsoft or IBM) is about as likely as a serious ethical discussion about the selling of tobacco products at Philip Morris or missiles at Raytheon.

It’s unlikely to happen.