Comparing an Agent-Supported Process with Traditional Adjudication
In the previous articles of this series, we introduced AI agents, reviewed the endpoint adjudication process and the stakeholders involved, and examined how different agent types may support specific roles. A consistent conclusion has been that successful use of AI depends less on technical performance than on a clear understanding of processes and responsibilities.
This article applies those principles to a concrete case study. Using a typical clinical trial scenario, it compares a traditional, human-driven adjudication process with an AI agent–supported model, highlighting practical differences assuming full human oversight.
Use Case: Study Setup and Adjudication Framework
Clinical trial MEGATEST is preparing to begin patient recruitment in eight countries, with a target enrollment of 300 patients across 40 investigational sites. The protocol has been approved by health authorities and ethics committees, and a steering committee of medical experts oversees the study.
Independent clinical endpoint adjudication will be performed for both the primary and secondary composite outcomes, including cardiovascular death, myocardial infarction (NSTEMI, STEMI, types 1–5), and stroke (ischemic versus haemorrhagic). The adjudication committee consists of three expert cardiologists, one of whom acts as chairperson.
An adjudication charter defines the adjudication rules, timelines, and roles. All deaths, and all cases meeting predefined criteria for myocardial infarction or stroke, whether reported as such by the site or not, are adjudicated by two reviewers within 15 days of occurrence. Disagreements trigger a third review, and 5% of cases are re-adjudicated at random to monitor inter-reviewer bias.
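The charter's routing rules lend themselves to a compact description. As a minimal sketch (all function and variable names are hypothetical, not from any real adjudication system), the dual-review, tie-breaking, and random re-adjudication logic could look like this:

```python
import random

REREVIEW_RATE = 0.05  # charter: 5% of cases re-adjudicated at random


def adjudicate(case, reviewer_a, reviewer_b, reviewer_c):
    """Apply the charter's dual-review rule with third-review tie-breaking.

    Each reviewer is modelled as a callable returning an outcome category.
    """
    first = reviewer_a(case)
    second = reviewer_b(case)
    if first != second:
        # Disagreement between the two primary reviewers triggers a third review.
        return reviewer_c(case)
    return first


def select_for_reaudit(case_ids, rate=REREVIEW_RATE, seed=None):
    """Randomly sample closed cases for re-adjudication to monitor reviewer bias."""
    rng = random.Random(seed)
    return [cid for cid in case_ids if rng.random() < rate]
```

In practice the "reviewers" are humans entering assessments into an adjudication platform; the sketch only shows how the charter's decision rule composes their inputs.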
Triggers and Data Requirements
Adjudication packages must include blinded clinical data from the electronic case report form, together with supporting documentation such as pathology, imaging, ECG, syndrome description, and biomarker reports. Adjudication begins only once the package is assembled, redacted of personal or unblinding information, and transmitted to the adjudicators.
Case adjudication may be triggered by the site or by the sponsor. Deaths reported as serious adverse events automatically trigger adjudication. Myocardial infarction and stroke trigger adjudication when reported directly or when supporting clinical data suggest their possibility. Sponsor medical reviewers also examine incoming data to identify potentially missed events.
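These trigger rules are essentially condition-action pairs. A minimal sketch, assuming a simple event dictionary with hypothetical field names (`type`, `serious_adverse_event`, `site_reported`, `clinical_data_suggestive`), might express them as:

```python
def should_trigger_adjudication(event):
    """Illustrative trigger rules distilled from the charter description."""
    if event.get("type") == "death":
        # All deaths (reported as serious adverse events) trigger adjudication.
        return True
    if event.get("type") in {"myocardial_infarction", "stroke"}:
        # MI and stroke trigger when reported by the site, or when supporting
        # clinical data suggest the event may have occurred.
        return bool(event.get("site_reported") or event.get("clinical_data_suggestive"))
    return False
```

Real trigger logic would be validated against the protocol and charter; the point is that the rules are deterministic and auditable, which is what makes them suitable for a reflex-style agent.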
Traditional Human-Driven Adjudication
In the traditional process, sites enter clinical data into the electronic case report form and flag events requiring adjudication. Sponsor medical reviewers may identify missed cases and request that sites flag them accordingly.
After being contacted, sites collect and transmit the required documentation to the adjudication coordinator, who verifies completeness and redaction, requests additional information if needed, and assigns cases to adjudicators. The coordinator monitors timelines, issues reminders, manages disagreements, organises resolution meetings when required, and collects final assessments, transmitting them to the data manager for inclusion in the clinical database. The coordinator also regularly verifies case completeness, including the random re-adjudication sample.
This process relies heavily on manual coordination and follow-up, particularly for document collection, tracking, and workload management.
AI Agent–Supported Adjudication
In the agent-supported model, investigational sites use simple reflex AI agents to identify reportable events, collect required documentation, apply predefined redaction rules, assemble adjudication packages, and transmit them to the adjudication system. These agents may contact site personnel as needed to request missing information.
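The behaviour described here is characteristic of a simple reflex agent: fixed condition-action rules with no internal model. As a minimal sketch (the required-document list, field names, and redaction keys below are illustrative assumptions, not from any real system), the package-assembly step could look like:

```python
REQUIRED_DOCS = {"ecrf_extract", "ecg", "imaging", "biomarkers"}  # illustrative list


def redact(doc):
    """Placeholder for predefined redaction rules: strip personal and
    potentially unblinding fields before transmission."""
    return {k: v for k, v in doc.items() if k not in {"patient_name", "treatment_arm"}}


def build_package(case):
    """Condition-action sketch of the site reflex agent: check completeness,
    request missing documents from site personnel, else redact and assemble."""
    missing = REQUIRED_DOCS - set(case["documents"])
    if missing:
        return {"status": "incomplete", "request_from_site": sorted(missing)}
    redacted = {name: redact(doc) for name, doc in case["documents"].items()}
    return {"status": "ready", "package": redacted}
```

Because every rule is predefined, the agent's behaviour is fully traceable, which matters for validation and for keeping site personnel in control of what is transmitted.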
A learning AI agent supports adjudicators by reviewing adjudication packages and proposing assessments based on predefined outcome categories. The agent presents its suggested classification together with the reasoning behind it. Adjudicators remain fully responsible for accepting or rejecting the suggestion and may provide feedback to support the agent’s learning.
Another learning agent assists sponsor medical reviewers by analysing incoming clinical data to identify potentially missed adjudication cases. Identified cases are reviewed by a human, whose decisions are used to refine the agent’s performance.
A goal-based agent supports the adjudication coordinator by monitoring case progress, verifying completeness, sending reminders, and reporting status. The agent suggests actions to address delays, while final decisions remain under human control.
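A goal-based agent compares the current state of each case against a target (here, the charter's 15-day deadline) and proposes actions to close the gap. A minimal sketch, assuming hypothetical case fields (`id`, `event_date`, `status`) and action names:

```python
from datetime import date, timedelta

DEADLINE_DAYS = 15  # charter: adjudication within 15 days of event occurrence


def coordinator_actions(cases, today):
    """Goal-based sketch: flag cases at risk of missing the 15-day target.

    The agent only *proposes* actions; the human coordinator decides
    whether to send reminders or escalate.
    """
    actions = []
    for case in cases:
        if case["status"] == "done":
            continue
        deadline = case["event_date"] + timedelta(days=DEADLINE_DAYS)
        if today > deadline:
            actions.append((case["id"], "escalate_overdue"))
        elif (deadline - today).days <= 3:
            actions.append((case["id"], "send_reminder"))
    return actions
```

The separation is deliberate: the agent handles the repetitive monitoring, while escalation decisions, which may involve site relationships and study context, stay with the coordinator.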
Conclusion: A Comparative Perspective
This case study illustrates how AI agents can support endpoint adjudication by reinforcing existing processes rather than replacing human roles. Compared with a traditional approach, the agent-supported model reduces manual workload in data collection, tracking, and coordination, allowing human experts to focus on oversight and decision-making. AI agents are also expected to shorten adjudication turnaround and improve the consistency of assessments.
At the same time, the comparison highlights important constraints related to data access, governance, validation, and sponsor accountability. These factors must be addressed alongside any potential efficiency gains.
By Dimitri Stamatiadis, PhD, MBA
A consultant with extensive experience in clinical research in Europe and the USA, Dimitri has published numerous articles on drug development and the use of enabling technologies in the Pharma industry.
What Comes Next
The final article in this series will examine the investment required to implement AI agents in endpoint adjudication, assess expected benefits, and discuss practical limitations and mitigation strategies when transitioning toward a machine–human collaboration model.