
By Dimitri Stamatiadis, PhD, MBA · 22 Apr 2026

Artificial intelligence: preparing for a more accountable future (3/3)

As AI increasingly influences activities with potential impact on patient safety and data integrity, regulators are establishing clearer expectations around its use. The new EU GMP Annex 22, part of the same revision package as Chapter 4 and Annex 11, introduces formal oversight of artificial intelligence for pharmaceutical organizations in Europe. In this article, we review some of the implications for clinical development activities.

In recent years, artificial intelligence has emerged as a promising technology for pharmaceutical organizations, offering new ways to process large volumes of data and support complex workflows. From data analysis to operational support, AI-driven tools are increasingly used to enhance efficiency and inform decision-making. As adoption grows, so do the regulatory expectations around their use.

This shift reflects a simple reality: when technology contributes to decisions in regulated environments, it must be subject to the same level of control and scrutiny as other critical systems. AI is no longer just a technological opportunity, but a capability that must be governed with clarity and accountability.

Why regulatory attention is increasing

AI introduces specific challenges. Because outputs depend on large and complex processing algorithms, decision pathways are difficult to trace. Furthermore, results can evolve over time as models are updated or exposed to new data.

For regulators, this raises important questions:

  • Can the outcome of an AI-supported process be trusted?
  • Is it possible to understand how a result was produced?
  • Who is accountable for decisions influenced by an AI-supported digital system?

These questions are particularly relevant in clinical development, where leaders rely on data-driven insights to guide key decisions.

A new expectation: AI as a governed component

With the forthcoming EU GMP Annex 22, AI is becoming a controlled component of regulated processes.

This implies that organizations must be able to:

  • define the purpose of each AI system
  • understand how outputs are generated at a meaningful level
  • assign clear responsibility for its use
  • monitor its performance over time

In this context, the challenge is no longer whether to use AI, but how to use it in a way that remains understandable, accountable, and acceptable in a regulated environment. “Understandable” does not mean simplifying complex algorithms. It means ensuring that outputs can be explained in a way that supports informed decision-making. Clinical leaders should be able to interpret results, question them when needed, and justify their use.

What organizations will need to demonstrate

As expectations become clearer, organizations will need to show that their use of AI is appropriately controlled. This will typically involve the following (a simple sketch follows the list):

  • Clarity of purpose
    Each AI system should have a clearly defined role and a well-understood scope of use.
  • Transparency of data
    The data used to develop and operate the system should be relevant, reliable, and appropriately managed.
  • Human oversight
    Qualified individuals must remain responsible for reviewing outputs and making final decisions.
  • Performance monitoring
    Organizations should ensure that AI systems continue to perform as expected over time.
  • Documentation
    The rationale behind the use of AI, as well as how it operates within a process, should be properly documented.
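
To make these expectations more concrete, the sketch below shows one hypothetical way such controls could be captured as a structured record accompanying each AI-assisted output. The AIUsageRecord class and its field names are illustrative assumptions, not a format prescribed by Annex 22.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIUsageRecord:
    """Hypothetical record documenting one AI-assisted output.

    Field names are illustrative only; Annex 22 does not prescribe a format.
    """
    system_name: str          # clarity of purpose: which AI system was used
    intended_purpose: str     # the system's defined role and scope of use
    data_sources: list        # transparency of data: inputs behind the output
    model_version: str        # supports performance monitoring over time
    output_summary: str       # what the system produced
    reviewed_by: str          # human oversight: the qualified reviewer
    review_decision: str      # e.g. "accepted", "challenged", "overridden"
    documentation_ref: str    # pointer to the SOP or rationale document
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Illustrative entry for an AI-supported triage step in endpoint adjudication.
record = AIUsageRecord(
    system_name="event-triage-model",
    intended_purpose="Prioritise cases for endpoint adjudication review",
    data_sources=["case narratives", "site-reported event forms"],
    model_version="2.3.1",
    output_summary="Flagged 14 of 120 cases as high priority",
    reviewed_by="Qualified clinical reviewer",
    review_decision="accepted",
    documentation_ref="SOP-AI-014",
)
print(record)
```

Whatever the exact format, the point is that purpose, data, human review, and rationale are captured together in a form that can be retrieved and explained during an inspection.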

What this means for clinical development leaders

For clinical development leaders, the growing regulatory oversight of AI should be understood as a reminder of their responsibilities, including — or perhaps especially — when delegating tasks to AI.

AI can support faster analysis, improved coordination, and more informed decision-making, but clinical development teams remain accountable for outcomes, even when those outcomes are supported by automated systems.

This means:

  • understanding, at a high level, how AI-supported tools contribute to decisions
  • ensuring that outputs can be reviewed and challenged when necessary (see the sketch after this list)
  • maintaining clear traceability of how decisions are made
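
As a simple illustration of reviewing and challenging AI outputs, the sketch below compares a model's suggestions with the final human decisions and flags the system for review when agreement drops below a threshold. The function name and the 85% threshold are assumptions made for the example, not values taken from any guideline.

```python
def agreement_rate(ai_suggestions, human_decisions):
    """Fraction of cases where the AI suggestion matched the documented human decision."""
    if not ai_suggestions or len(ai_suggestions) != len(human_decisions):
        raise ValueError("Expected two non-empty lists of equal length")
    matches = sum(a == h for a, h in zip(ai_suggestions, human_decisions))
    return matches / len(ai_suggestions)

# Illustrative periodic check against an assumed acceptance threshold of 85%.
rate = agreement_rate(
    ["high", "low", "high", "high"],   # AI triage suggestions
    ["high", "low", "high", "low"],    # final human adjudication decisions
)
if rate < 0.85:
    print(f"Agreement {rate:.0%} is below threshold: trigger a model review")
else:
    print(f"Agreement {rate:.0%}: performance within expectations")
```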

Implications for digital platforms

As expectations around the use of AI evolve, organizations will need systems that:

  • provide clear traceability of actions and decisions (a minimal sketch follows the list)
  • support structured workflows with appropriate review steps
  • allow documentation of how outputs are generated and used
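
As a minimal sketch of what such traceability could look like, the example below appends each AI suggestion and each human review step to a simple append-only log file. The AuditTrail class is a hypothetical illustration, not a description of any specific product's API.

```python
import json
from datetime import datetime, timezone

class AuditTrail:
    """Minimal append-only audit trail (illustrative sketch, not a product API)."""

    def __init__(self, path):
        self.path = path

    def log(self, actor, action, detail):
        """Record who did what, when, and on what basis, as one JSON line."""
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "actor": actor,        # human reviewer or AI component
            "action": action,      # e.g. "ai_suggestion", "human_review"
            "detail": detail,      # inputs, model version, decision rationale
        }
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")

# Illustrative sequence: an AI suggestion followed by the documented human decision.
trail = AuditTrail("ai_decision_trail.jsonl")
trail.log("triage-model v2.3.1", "ai_suggestion",
          {"case_id": "C-0042", "suggestion": "high priority", "confidence": 0.91})
trail.log("J. Smith (adjudicator)", "human_review",
          {"case_id": "C-0042", "decision": "accepted",
           "rationale": "consistent with the case narrative"})
```

A production system would add access controls and integrity protection, but even this simple structure makes it possible to reconstruct, after the fact, how a decision was reached and who was responsible for it.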

In conclusion, systems like Ethical's, which are designed with transparency and governance in mind, are well positioned to meet emerging regulatory expectations in Europe and the USA.

This article concludes our series of three dealing with the Revision of EU GMP Guidelines Chapter 4 and Annex 11, and New Annex 22. As digital technologies continue to reshape pharmaceutical operations, regulatory expectations are evolving in parallel. Organizations that understand these changes will be better prepared to adapt their systems and processes.

If you would like to discuss how emerging expectations around artificial intelligence may influence the digital platforms used in clinical development, the team at Ethical would be pleased to exchange perspectives. Please feel free to contact us.
