The incorporation of artificial intelligence (AI) into healthcare and medical devices offers many benefits. AI systems can enhance remote patient monitoring, accurately detect disease, expedite drug discovery, and create personalized treatment plans. In the medical device setting, AI has been incorporated into diagnostic tools, and it is seeing increasing use in other areas, such as cardiac monitoring and ophthalmological diagnosis.

As the medical device industry continues to evolve, the integration of AI presents both opportunities and challenges for medical device manufacturers, distributors, and suppliers. Understanding the legal intersection between AI and products liability is crucial for medical device companies, hospitals, and their insurers incorporating this new technology. This article provides insight into the current legal landscape for products liability in the context of AI-powered medical devices and offers strategies for mitigating the associated risks.

The Role and Risks of AI in Medical Devices

The use of AI to power medical devices is increasing rapidly. In October 2023, the U.S. Food and Drug Administration (FDA) published an updated list of AI-enabled medical devices. Compared with the 2022 list, it reflected a 33% increase in AI-enabled medical devices. An even greater increase is expected this year. Indeed, as of August 7, 2024, the list had already grown by nearly 38%.1

However, companies seeking to keep up with AI developments should be wary. Improper or hasty adoption of AI technology carries risks for medical device manufacturers, medical facilities, hospitals, distributors, insurers, and sellers, including:

  1. Regulatory Scrutiny: Several federal regulatory bodies, including the Federal Trade Commission (FTC) and the U.S. Department of Justice (DOJ), are increasingly focused on the safe, efficient, and transparent use of AI. Further, in 2024, the FDA has already recalled over 60 devices, including devices that incorporate AI. Thus, overestimating an AI-enabled product’s capabilities can lead to enforcement actions, product recalls, or fines.
  2. Litigation Exposure: As AI technologies become more prevalent, the potential for litigation increases if harmful defects are not timely discovered. Even a single product failure, or a lack of training on how to use the AI technology effectively, can cause serious injury and result in a lawsuit.
  3. Reputational Damage: Negative publicity stemming from medical device defects can tarnish a company’s reputation and impact future sales. When a device is recalled or litigation ensues, consumer trust suffers.

Ensuring that an AI-enabled device is reliable and safe becomes even more important as software grows more sophisticated. For complex AI algorithms, weaknesses may be difficult to uncover until the device is in public use. As AI develops, medical device companies must be prepared to contend with the risks that come with greater complexity.

Legal Background on Products Liability for Medical Devices

Products liability law is intended to hold designers, manufacturers, and distributors accountable while protecting consumers from unsafe or defective products. These theories of liability are particularly relevant as applied to medical devices, where defects can be disastrous. Each year, the FDA receives over two million medical device reports concerning suspected device-associated deaths, serious injuries, and malfunctions. Given the potential consequences of these malfunctions, an informed awareness of the relevant legal landscape is critical. 

Products liability law categorizes product defects according to the stage at which they occur in the product’s lifecycle. There are three main “defects” in medical devices that can expose manufacturers, distributors, hospitals, medical facilities, and suppliers to potential liability: 

  1. Design Defects: A design defect occurs when a product is inherently unsafe due to its design, regardless of how well it is manufactured. When evaluating a design defect, courts often assess whether a safer alternative design would perform as intended, taking cost differences into account.
  2. Manufacturing Defects: A manufacturing defect occurs when a product is improperly manufactured, resulting in a product that deviates from its intended design. In such cases, the focus is on the specific product rather than the design itself, because the product as designed is safe.
  3. Marketing Defects: Marketing defects, also known as “failure-to-warn defects,” arise when a manufacturer fails to provide adequate warnings or instructions regarding any latent dangers. For medical devices, this could involve insufficient information about potential side effects, contraindications, or proper usage.

Understanding the nuances of products liability law is essential when navigating the landscape of medical device regulation. For medical device companies, the stakes are especially high. A defect in any of these categories could not only jeopardize patient safety but also lead to significant legal and reputational repercussions. When manufacturers, distributors, suppliers, or designers release or market a defective product, they could be liable for negligence, breach of warranty, and/or strict liability for the harm it causes. Stated otherwise, they could be held legally responsible under strict liability even in the absence of fault or intent.

Litigation Risks: Design and Marketing Defects

The integration of AI algorithms into medical devices introduces unique challenges in identifying both design and marketing defects. Software weaknesses and “hallucinations” in AI systems can be difficult to uncover, even for the creators of the algorithms. The complexity of AI models, particularly those that utilize machine learning, means that their decision-making processes can be unclear, making it challenging to predict how they will perform in real-world scenarios.

For instance, consider an AI-driven diagnostic device that analyzes X-rays to detect tumors. During initial testing, the algorithm may demonstrate high accuracy. However, once deployed in a clinical setting, it may misinterpret certain images due to variations in lighting or patient positioning, leading to lower diagnostic accuracy. This delayed recognition of a design defect can have dire consequences, as companies may not be fully aware of the risks until a diagnosis has been missed.
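For readers on the technical side, the kind of gap described above can often be surfaced before release with a simple subgroup evaluation: rather than reporting one aggregate accuracy figure, performance is broken out by acquisition condition so that a weakness tied to, say, low lighting becomes visible on its own. The sketch below is purely illustrative; the data, condition labels, and function names are hypothetical and are not drawn from any FDA guidance or specific device.

```python
# Illustrative only: per-condition accuracy for a hypothetical AI diagnostic model.
# A condition-specific drop (e.g., under low lighting) signals a potential design
# weakness that an aggregate accuracy figure would hide.
from collections import defaultdict

def accuracy_by_condition(y_true, y_pred, conditions):
    """Return overall accuracy and accuracy per acquisition condition."""
    correct, total = defaultdict(int), defaultdict(int)
    for truth, pred, cond in zip(y_true, y_pred, conditions):
        total[cond] += 1
        correct[cond] += int(truth == pred)
    overall = sum(correct.values()) / sum(total.values())
    return overall, {c: correct[c] / total[c] for c in total}

if __name__ == "__main__":
    # Hypothetical labels for a small validation batch (1 = tumor present).
    y_true     = [1, 0, 1, 1, 0, 1, 0, 1]
    y_pred     = [1, 0, 1, 0, 0, 0, 0, 1]
    conditions = ["standard", "standard", "standard", "low_light",
                  "low_light", "low_light", "standard", "standard"]
    overall, per_condition = accuracy_by_condition(y_true, y_pred, conditions)
    print(f"Overall accuracy: {overall:.2f}")   # 0.75 looks acceptable in aggregate...
    for cond, acc in sorted(per_condition.items()):
        print(f"  {cond}: {acc:.2f}")           # ...but low_light comes in at 0.33
```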

AI-driven devices also have the potential to lead to liability for marketing defects. Marketing defects arise when manufacturers fail to provide adequate warnings or instructions about the safe use of their products. In the context of AI-powered medical devices, this could involve insufficient information about the limitations of the AI algorithms, or the potential risks associated with their use. For example, if a manufacturer of an AI-based diagnostic tool does not clearly communicate that the algorithm may struggle with certain types of images or patient demographics, healthcare providers may unknowingly rely on the technology inappropriately. This lack of transparency can result in patients receiving incorrect diagnoses or inappropriate treatments, leading to adverse health outcomes and legal liability.
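Engineering teams sometimes operationalize this kind of transparency by attaching a machine-readable statement of known limitations to every AI output, so the caveat travels with the result rather than living only in a user manual. The sketch below is a hypothetical illustration under that assumption; the field names and limitation text are invented and do not reflect any standard, guidance, or actual device labeling.

```python
# Hypothetical sketch: each AI finding carries a disclosure of known limitations
# so the clinician sees the constraints at the point of use.
from dataclasses import dataclass

@dataclass(frozen=True)
class LimitationsDisclosure:
    validated_populations: tuple   # demographics represented in validation data
    known_weak_conditions: tuple   # imaging conditions with reduced accuracy
    intended_use: str              # scope of use the device was validated for

@dataclass
class AIFinding:
    finding: str
    confidence: float
    disclosure: LimitationsDisclosure  # travels with every result

DISCLOSURE = LimitationsDisclosure(
    validated_populations=("adults 18-75",),
    known_weak_conditions=("low-contrast images", "non-standard patient positioning"),
    intended_use="Aid to, not a replacement for, radiologist review.",
)

result = AIFinding(
    finding="suspicious lesion, right lower lobe",
    confidence=0.87,
    disclosure=DISCLOSURE,
)
```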

Relevant guidance, including some case law, has suggested that the burden of uncovering an algorithm’s failures falls upon the company utilizing AI. Federal agencies have suggested that overestimating AI capabilities is tantamount to fraud.2 Further, in Holbrook v. Prodomax Automation Ltd., a Michigan district court split from other jurisdictions to hold that software itself is a product that can give rise to design defect liability. Holbrook v. Prodomax Automation Ltd., No. 1:17-CV-219, 2021 WL 4260622 (W.D. Mich. Sept. 20, 2021). This could represent a departure from previous regimes toward liability for all components of a device. In these instances, “blaming the algorithm” is not a proper defense. Because an algorithm is not legally considered a person, courts will hold companies, designers, and manufacturers liable when a defective algorithm causes harm. Overall, the message is clear: ensure your software is reliable, or risk recall and legal action.

Protecting Against Design Defect Liability: Strategies for Medical Device Companies

To mitigate the risks associated with design defect liability in AI-powered medical devices, manufacturers and sellers should consider the following strategies:

  1. Stay Informed on Federal Guidance: Regularly review and adhere to FDA guidelines related to AI and medical devices. For example, the FDA has published specific guidance on machine learning-enabled medical devices, which outlines expectations for safety and transparency.
  2. Thoroughly Test AI Capabilities: Implement comprehensive testing protocols to evaluate the performance and reliability of AI algorithms. This includes testing under various conditions and inputs to identify potential failures before the device reaches the market.
  3. Document Development Processes: Maintain detailed records of the design, testing, and validation processes for AI algorithms. This documentation can be used as evidence in the event of litigation or regulatory inquiries to demonstrate due diligence.
  4. Engage in Continuous Monitoring: Establish systems for ongoing monitoring of AI performance post-deployment (see the illustrative sketch following this list). A proactive, collaborative approach can identify and address issues before they result in harm.
  5. Consult Legal Experts: Work with legal professionals who specialize in medical device regulation and litigation to navigate the complexities of products liability and ensure compliance with applicable laws. Engaging specialized legal counsel early in the development process can help identify potential risks and establish best practices.
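As a concrete, deliberately simplified illustration of the continuous-monitoring point above, the sketch below assumes a hypothetical deployment in which the AI’s findings can later be compared against clinician-confirmed outcomes. The window size, alert threshold, logging approach, and names are assumptions chosen for illustration, not regulatory requirements.

```python
# Minimal post-market monitoring sketch (hypothetical thresholds and names).
# Tracks rolling agreement between AI outputs and clinician-confirmed outcomes,
# flags sustained drops for review, and leaves a timestamped log.
import logging
from collections import deque
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("post_market_monitor")

class PerformanceMonitor:
    def __init__(self, window_size=500, alert_threshold=0.90):
        # Each entry is 1 if the AI finding matched the confirmed outcome, else 0.
        self.results = deque(maxlen=window_size)
        self.alert_threshold = alert_threshold  # illustrative value, not a regulatory standard

    def record(self, ai_finding, confirmed_finding):
        self.results.append(int(ai_finding == confirmed_finding))
        rate = sum(self.results) / len(self.results)
        log.info("%s agreement=%.3f n=%d",
                 datetime.now(timezone.utc).isoformat(), rate, len(self.results))
        # Only alert once the window is full, to avoid noise from small samples.
        if len(self.results) == self.results.maxlen and rate < self.alert_threshold:
            log.warning("Agreement %.3f fell below %.2f; escalate for engineering "
                        "and regulatory review", rate, self.alert_threshold)
        return rate
```

A record like this can also serve the documentation purposes described in item 3 above.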

As medical device companies embrace the potential of AI technologies, understanding the legal contours of products liability is essential. By staying informed, implementing robust testing protocols, and engaging with legal experts, medical device companies can better protect themselves against the risks associated with AI-powered medical devices. In an industry where patient safety is vital, proactive measures are not just advisable; they are imperative.

At Buchanan, our multi-disciplinary team of attorneys and government relations professionals focused on products liability closely monitors the use of AI in product development and actively guides our clients through this shifting regulatory and legal landscape. We can help spot issues before they arise, assist with ensuring compliance with applicable laws and regulations, and offer guidance through any products liability claims.

  1. Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices, U.S. Food and Drug Administration, https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices (Aug. 7, 2024).
  2. See, e.g., Michael Atleson, Keep Your AI Claims in Check, Federal Trade Commission (Feb. 27, 2023), https://www.ftc.gov/business-guidance/blog/2023/02/keep-your-ai-claims-check.