Artificial Intelligence in Healthcare: Who is Responsible When AI Makes an Error?


The answer is that we do not know yet.

 

Artificial intelligence (AI) has the potential to revolutionize healthcare by improving patient outcomes, reducing costs, and increasing efficiency. However, as AI becomes more integrated into healthcare, questions arise about who bears liability for AI decisions. In this article, I will explore the liability issues raised by AI in healthcare and the factors that come into play.

 

One of the main challenges with AI in healthcare is the lack of clear guidelines and regulations. Responsibility for AI decisions can be unclear because the technology is often designed and implemented by multiple parties, including vendors, healthcare organizations, and government agencies. As a result, there is a risk of confusion and ambiguity in determining who is liable for any adverse outcomes resulting from AI decisions.

 

The legal system must establish clear guidelines for medical malpractice lawsuits involving AI. The traditional principles of negligence and medical malpractice may be insufficient to address the unique challenges posed by AI, because the technology does not make decisions in the same way as human healthcare providers. The lack of uniformity in tort law across the country adds to the challenge of developing a clear and consistent liability framework. Furthermore, AI algorithms can be opaque and difficult to understand, making it hard to determine the root cause of an adverse outcome.

 

Medical errors and injuries occur in healthcare every day, but patients and providers may react differently to an injury caused by a software error than to one caused by human error. The scope of this risk is also potentially enormous: while an error made by a single healthcare provider may harm one or a handful of patients, an underlying flaw in an AI system could influence hundreds or even thousands of medical decisions, resulting in widespread harm.

 

To address this issue, some experts have proposed “algorithmic accountability” frameworks that require transparency and explainability from AI algorithms used in healthcare. Such frameworks would enable healthcare providers and patients to understand how an AI system reaches its decisions and would facilitate accountability when adverse outcomes occur.
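To make the idea of explainability concrete, the sketch below shows one simple form of per-prediction transparency for a linear risk model. The feature names, synthetic data, and readmission-risk framing are hypothetical illustrations, not a real clinical system.

```python
# A minimal sketch of per-prediction transparency for a linear model.
# The features, data, and model here are hypothetical illustrations,
# not a real clinical system.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["age", "prior_admissions", "hba1c", "systolic_bp"]

# Synthetic training data standing in for historical patient records.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, len(features)))
y = (X @ np.array([0.8, 1.2, 0.5, 0.3]) + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# For a single patient, report how much each feature pushed the
# risk score up or down -- one concrete form of "explanation."
patient = X[0]
contributions = model.coef_[0] * patient
for name, value in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"{name:>18}: {value:+.3f}")
print(f"{'predicted risk':>18}: {model.predict_proba([patient])[0, 1]:.2f}")
```

An explanation like this lets a provider see which inputs drove a given recommendation, which is the kind of traceability an accountability framework would demand; more complex models would require correspondingly more sophisticated explanation methods.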

 

Moreover, liability for AI decisions is not solely a legal issue. Ethical considerations also come into play: AI algorithms can perpetuate biases and discrimination, leading to unfair or inequitable outcomes for some patient groups. As AI use in healthcare increases, these ethical considerations must be addressed and incorporated into AI development and implementation.
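As a sketch of what auditing for such bias might look like in practice, the snippet below compares false-negative rates between two hypothetical patient groups. The group labels, data, and error rates are simulated assumptions for illustration only.

```python
# A minimal sketch of a fairness audit: compare false-negative rates
# across patient groups. Group labels and data are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
group = rng.choice(["A", "B"], size=n)   # e.g., a demographic attribute
y_true = rng.integers(0, 2, size=n)      # is the condition actually present?

# Simulate a model that misses positive cases more often for group B.
miss_rate = np.where(group == "B", 0.30, 0.10)
y_pred = np.where((y_true == 1) & (rng.random(n) < miss_rate), 0, y_true)

for g in ("A", "B"):
    positives = (group == g) & (y_true == 1)
    fnr = np.mean(y_pred[positives] == 0)
    print(f"group {g}: false-negative rate = {fnr:.2%}")
```

A disparity like the one this audit surfaces, where one group's condition is missed far more often, is exactly the kind of inequitable outcome that ethical review of AI systems is meant to catch before deployment.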

 

In conclusion, the liability issue created by using AI in healthcare is complex and multifaceted. While legal frameworks are necessary for establishing liability, we must also consider ethical implications to ensure fair and equitable outcomes. It is crucial for healthcare organizations and government agencies to work together to establish clear guidelines and regulations for AI use in healthcare, promote transparency and accountability in AI decision-making, and ensure that we use the technology in a way that benefits patients and society as a whole.




Written by: Landon Tooke, MLS, CHC, CCEP, CPCO, CHCSP, CHSRAP

Twitter: @LandonNTooke

LinkedIn: Landon Tooke
