Artificial intelligence (AI) is making waves across the life sciences industry, reshaping product development and manufacturing in innovative ways. This powerful technology enhances and accelerates the analysis of massive data sets, enabling much faster drug discovery. AI can also cross-reference scientific literature with clinical trial data and other alternative information sources to speed the research and development of new products.

However, introducing AI and machine learning also poses new risks, particularly related to product liability litigation. Here are seven notable product liability challenges that life sciences companies using AI face today.

1. Limited Regulatory Guidance and Legal Precedent

Regulations are still catching up to this fast-changing technology, with state and federal regulators yet to establish clear guidelines for AI usage by life sciences companies. A wide array of regulatory approaches is beginning to emerge, creating inconsistencies for companies in this space. Some states, like California, are ahead on the regulation front, while others are taking no action at all, further complicating nationwide operations.

Given the current lack of legal precedent, companies must closely monitor regulatory and judicial developments and adjust their practices and policies accordingly.

2. Algorithm Bias and Discrimination

Life sciences companies must ensure there is no bias or discrimination inherent in their AI models and algorithms. There have been numerous examples where the inputs to these algorithms bias the results, often along lines of race or gender, potentially resulting in legal consequences. Similarly, AI products that conflict with prevailing societal norms or are ethically questionable may face public, political, or legal pushback.
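
For illustration only, and assuming a binary classifier whose outputs can be grouped by a protected attribute, the sketch below shows one simple fairness check (a demographic-parity gap). The function names and toy data are hypothetical, and a meaningful bias review would go well beyond a single metric.

```python
# A minimal, hypothetical sketch of a demographic-parity check.
# Assumes `predictions` (0/1 model outputs) and `groups` (a protected
# attribute such as race or gender) are aligned, equal-length sequences.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Return the positive-prediction rate for each group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += int(pred == 1)
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def demographic_parity_gap(predictions, groups):
    """Difference between the highest and lowest group selection rates."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Example with toy data: a large gap may warrant further review.
preds = [1, 0, 1, 1, 0, 0, 1, 0]
grps  = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(selection_rates(preds, grps))         # {'A': 0.75, 'B': 0.25}
print(demographic_parity_gap(preds, grps))  # 0.5
```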

3. Input Inaccuracies and Errors

While it may seem obvious, life sciences companies must verify the accuracy of the information being used by their AI tools. Investigations, or even lawsuits, could ensue if life sciences companies using AI for product design or manufacturing fail to validate the data fed into their algorithms.
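
As a rough sketch of what such validation might look like, assuming tabular inputs in a pandas DataFrame, the example below checks for missing columns, null values, and out-of-range entries before data reaches a model. The column names and acceptable ranges are illustrative assumptions only.

```python
# A hypothetical sketch of pre-ingestion data validation using pandas.
# Column names and acceptable ranges are illustrative assumptions.
import pandas as pd

EXPECTED_COLUMNS = {"patient_age", "dose_mg", "assay_result"}
VALID_RANGES = {"patient_age": (0, 120), "dose_mg": (0, 1000)}

def validate_inputs(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable validation issues (empty if clean)."""
    issues = []
    missing = EXPECTED_COLUMNS - set(df.columns)
    if missing:
        issues.append(f"missing columns: {sorted(missing)}")
    for col in EXPECTED_COLUMNS & set(df.columns):
        n_null = int(df[col].isna().sum())
        if n_null:
            issues.append(f"{col}: {n_null} missing values")
    for col, (lo, hi) in VALID_RANGES.items():
        if col in df.columns:
            out_of_range = int((~df[col].between(lo, hi)).sum())
            if out_of_range:
                issues.append(f"{col}: {out_of_range} values outside [{lo}, {hi}]")
    return issues

# Example: surface problems before the data ever trains or drives a model.
df = pd.DataFrame({"patient_age": [34, 150], "dose_mg": [50, 20],
                   "assay_result": [0.8, None]})
for issue in validate_inputs(df):
    print("VALIDATION:", issue)
```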

4. Explainability and Accountability

If a product liability lawsuit is filed against a life sciences company that leveraged AI, the business must be able to explain how the AI was developed, what guidance it relied on, what practices it followed, and what laws it complied with.

The courts do not consider ignorance a viable excuse. Companies must therefore thoroughly document their product development process, which can help them comprehensively answer questions should a lawsuit arise.
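
One way such documentation might be captured, purely as a hypothetical sketch, is a structured, versioned record of how each model was built. Every field name and value below is a placeholder, not a prescribed format.

```python
# A hypothetical sketch of a structured development record that can be
# versioned alongside the model. Field names and values are placeholders.
import json
import hashlib
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ModelAuditRecord:
    model_name: str
    model_version: str
    training_data_sha256: str      # fingerprint of the exact dataset used
    guidance_followed: list[str]   # e.g., internal SOPs, applicable guidance
    validation_summary: str
    approved_by: str
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def fingerprint(path: str) -> str:
    """Hash a training-data file so the exact inputs can be shown later."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

record = ModelAuditRecord(
    model_name="<model name>",
    model_version="<version>",
    training_data_sha256="<output of fingerprint('<training data file>')>",
    guidance_followed=["<internal SOP id>", "<applicable regulatory guidance>"],
    validation_summary="<summary of validation and bias review>",
    approved_by="<responsible reviewer>",
)
print(json.dumps(asdict(record), indent=2))
```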

5. Lack of Transparency

Some AI systems and algorithms operate as a black box: it may not be easy to interpret, explain, or understand their decision-making processes. As a result, life sciences companies facing lawsuits may find themselves in a difficult position when they need to defend their algorithms and explain how their systems work. This becomes even more complicated when multiple AI systems work together and companies may lack full visibility into every aspect of the technology.
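
As a sketch of one commonly used way to shed light on a black-box model, and assuming scikit-learn is available, the example below computes permutation feature importance on synthetic stand-in data to show which inputs a model relies on most. It is an illustration of the general technique, not a complete explainability program.

```python
# A minimal sketch of post-hoc explainability via permutation importance,
# using scikit-learn and synthetic stand-in data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data; in practice this would be the product's own inputs.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Measure how much shuffling each feature degrades performance:
# larger drops indicate features the model relies on more heavily.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
```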

6. Security Concerns

Numerous cutting-edge medical devices today are connected to advanced systems and technologies, introducing cybersecurity vulnerabilities that malicious actors can exploit. These vulnerabilities not only pose significant health risks to patients, but also can lead to legal liability for device manufacturers.

Recent changes to U.S. Food & Drug Administration (FDA) policy now require manufacturers of connected medical devices to implement cybersecurity measures to protect patients from cyberattacks. As FDA approval is required before market entry for most new devices, this places the FDA at the forefront of cybersecurity for device manufacturers.

7. Privacy Violations

Many AI algorithms today leverage personal information to improve the system's functionality. While this can result in valuable insights, it also raises concerns about privacy and data protection. For life sciences companies using AI for product development, it is crucial to fully understand what types of personal data are being collected, used, shared, and retained and to verify that these practices comply with all relevant state and federal privacy laws.
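
As a hypothetical sketch of how such an inventory might be checked programmatically, the example below flags dataset columns that are missing from a personal-data inventory or that lack a documented purpose or retention period. The column names and policy fields are assumptions, and this kind of check supplements rather than replaces a legal review.

```python
# A hypothetical sketch of a personal-data inventory check. Column names,
# categories, and policy fields are illustrative assumptions only.
import pandas as pd

# Inventory: which columns hold personal data, and is each use documented?
DATA_INVENTORY = {
    "patient_id":   {"personal": True,  "purpose": "record linkage", "retention_days": 365},
    "zip_code":     {"personal": True,  "purpose": None,             "retention_days": None},
    "assay_result": {"personal": False, "purpose": "model feature",  "retention_days": 730},
}

def privacy_gaps(df: pd.DataFrame) -> list[str]:
    """Flag columns that are undocumented or lack a stated purpose/retention."""
    gaps = []
    for col in df.columns:
        entry = DATA_INVENTORY.get(col)
        if entry is None:
            gaps.append(f"{col}: not in the data inventory")
        elif entry["personal"] and (not entry["purpose"] or not entry["retention_days"]):
            gaps.append(f"{col}: personal data without documented purpose/retention")
    return gaps

# Example: an unknown column and an undocumented personal-data column are flagged.
df = pd.DataFrame(columns=["patient_id", "zip_code", "device_serial"])
for gap in privacy_gaps(df):
    print("PRIVACY:", gap)
```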

How Life Sciences Companies Can Protect Themselves

Life sciences companies that use AI for product development should be aware of product liability challenges, evolving regulatory frameworks, and emerging legal precedents. They should be fully cognizant of how they are using AI and be able to clearly explain and defend their decisions and policies.

Companies also need to maintain compliance with federal, state, and local laws and stay updated on changing government guidance. When working with vendors that also leverage AI, life sciences companies should ensure that contracts clearly assign responsibility for product liability claims arising from the use of the vendors' algorithms.

At Buchanan, our team of life sciences attorneys and government relations professionals closely monitors the use of AI in product development and actively guides our clients through this shifting regulatory and legal landscape. We can help spot issues before they arise and offer guidance through any product liability claims. As the adoption of AI grows, product liability will persist as an important issue that the life sciences industry needs to anticipate and address.