By: David Shafiei

The twenty-first century has brought incredible technological change and has revolutionized the way society thinks about health and the natural sciences. As the world modernizes, so do science, technology, engineering, and mathematics (STEM) and the practices within those fields. One field in particular – medicine – has undergone a profound metamorphosis over the past few decades. Among the more sophisticated technological advancements in the field is medical imaging artificial intelligence (“medical imaging AI”).

Medical imaging AI can find, and sometimes even diagnose, “polyps, tumors or anomalies that may otherwise go undetected by the human eye.”[1] Implementing the technology on a wider scale could decrease wait times significantly by automating tedious work.[2] One relevant study implemented medical imaging AI in colonoscopy procedures and found that the AI technique discovered 61% more polyps than an older method that relied on the human eye alone.[3] These types of autonomous devices are highly sought after, especially during the coronavirus pandemic, because they place a barrier between the patient and the doctor, protecting both from infection.[4] However, this type of technology turns a lot of heads because most illnesses are diagnosed with a degree of subjectivity that varies from doctor to doctor.[5] That subjectivity could lead to a misdiagnosis or, even worse, an error that proves fatal to the patient.[6] Another obstacle to this kind of medical advancement is rejection by medical professionals; only 30% of radiologists currently use medical imaging AI in their practice.[7] This may be because many doctors see medical imaging AI as a threat to their jobs and will therefore characterize the AI as inconsistent compared to non-AI methods.[8]

As medicine and technology advance in tandem, so do the legal conundrums that arise from those advancements. At this moment, there appears to be very little legal precedent involving medical imaging AI or the risks medical professionals could face when AI is used to diagnose patients.[9] One such risk is a greater threat of medical malpractice liability for doctors who misdiagnose a patient while relying on an AI.[10] Malpractice law ordinarily judges doctors against a standard of human error, but when a doctor relies on medical technology, the scope of human error narrows considerably.[11] Thus, doctors take on a higher risk of general negligence when relying on medical imaging AI.[12] The obvious counter is that medical imaging AI may become so useful in the medical world that it would be negligent in itself not to use it.[13]

Another legal issue that may arise from medical imaging AI is the potential for HIPAA violations. For imaging AI to work, it needs to pull from large sets of health data to create a baseline for which medical anomalies are considered dangerous or abnormal.[14] Using one patient’s health information to diagnose other patients’ diseases raises legal and ethical concerns related to privacy.[15] HIPAA restricts what providers can do with a patient’s health information, balancing the confidentiality of a patient’s health records against the need to facilitate the flow of information for public health.[16] Thus, medical professionals would most likely need a patient’s authorization before using that patient’s health data to improve medical imaging AI.[17]


[1] Keerthi Vedantam, Venture Cash is Pouring Into AI That Can Diagnose Diseases. Doctors Aren’t Sure They Can Trust It, Dot.LA (Aug. 7, 2021, 10:48 AM), https://dot.la/medical-ai-venture-2654560192.html.

[2] Id.

[3] Id.

[4] Id.

[5] Id.

[6] Id.

[7] Marty Stempniak, Only 30% of Radiologists Currently Using Artificial Intelligence as Part of Their Practice, Radiology Business (Apr. 21, 2021), https://www.radiologybusiness.com/topics/artificial-intelligence/30-radiologists-artificial-intelligence-practice.

[8] Vedantam, supra note 1.

[9] Matt O’Connor, Who Will Be Liable in the Coming AI Age? 4 Things for Radiologists to Know, Health Imaging (Apr. 13, 2021), https://www.healthimaging.com/topics/ai-emerging-technologies/liable-ai-malpractice-4-things-radiologists.

[10] Id.

[11] Id.

[12] Id.

[13] Id.

[14] Joe Fornadel, Recent Developments in Artificial Intelligence and Accompanying Liability Risks, Diagnostic and Interventional Cardiology (Sept. 29, 2020), https://www.dicardiology.com/article/recent-developments-artificial-intelligence-and-accompanying-liability-risks.

[15] Id.

[16] Id.

[17] Id.
