The relationship between sponsors and contract organizations has evolved far beyond simple transactional exchanges. Digital infrastructure has become the cornerstone of trust, transparency, and operational excellence.
The trust equation is fundamentally changing as our supply chains come under unprecedented strain. Traditional quality agreements often functioned as static documents: comprehensive but disconnected from day-to-day operations. Today’s most successful partnerships are built on dynamic, digitally enabled frameworks that provide real-time visibility into performance, compliance, and risk management.
Regulatory agencies are increasingly scrutinizing the effectiveness of sponsor oversight programs. The FDA’s emphasis on data integrity, combined with EMA’s evolving computerized systems requirements, means that sponsors can no longer rely on periodic audits and static documentation to demonstrate control over their outsourced activities.
Quality Agreements as Digital Trust Frameworks
The modern quality agreement must evolve from a compliance document to a digital trust framework. This transformation requires reimagining three fundamental components:
Dynamic Risk Assessment Integration
Traditional quality agreements categorize suppliers into static risk tiers (for example, Category 1, 2, 2.5, or 3, based on material/service risk). Digital frameworks enable continuous risk profiling that adapts based on real-time performance data.
Integrate supplier performance metrics directly into your quality management system. When a Category 2 supplier’s on-time delivery drops below threshold or quality metrics deteriorate, the system should automatically trigger enhanced monitoring protocols without waiting for the next periodic review.
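A minimal sketch of how such a trigger might be wired into a quality management system, assuming hypothetical metric names and thresholds (the actual values belong in the quality agreement itself):

```python
from dataclasses import dataclass

# Illustrative thresholds only; real values are negotiated in the quality agreement.
ON_TIME_DELIVERY_THRESHOLD = 0.95
LOT_ACCEPTANCE_THRESHOLD = 0.98

@dataclass
class SupplierMetrics:
    supplier_id: str
    risk_category: int           # e.g., 1, 2, or 3
    on_time_delivery_rate: float
    lot_acceptance_rate: float

def requires_enhanced_monitoring(m: SupplierMetrics) -> bool:
    """Flag a supplier for enhanced monitoring as soon as performance
    deteriorates, rather than waiting for the next periodic review."""
    return (m.on_time_delivery_rate < ON_TIME_DELIVERY_THRESHOLD
            or m.lot_acceptance_rate < LOT_ACCEPTANCE_THRESHOLD)

# Example: a Category 2 supplier whose delivery performance has slipped
supplier = SupplierMetrics("SUP-0042", 2, 0.91, 0.99)
if requires_enhanced_monitoring(supplier):
    print(f"Trigger enhanced monitoring protocol for {supplier.supplier_id}")
```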
Automated Change Control Workflows
One of the most contentious areas in sponsor-CxO relationships involves change notifications and approvals. Digital infrastructure can transform this friction point into a competitive advantage.
The SMART approach to change control:
Standardized digital templates for change notifications
Machine-readable impact assessments
Automated routing based on change significance
Real-time status tracking for all stakeholders
Traceable decision logs with electronic signatures
Quality agreement language to include: “All change notifications shall be submitted through the designated digital platform within [X] business days of identification, with automated acknowledgment and preliminary impact assessment provided within [Y] hours.”
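To make the SMART elements above concrete, here is one possible shape for a machine-readable change notification with automated routing. The field names, significance tiers, and routing targets are illustrative assumptions, not an industry standard:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class Significance(Enum):
    MINOR = "minor"
    MAJOR = "major"
    CRITICAL = "critical"

@dataclass
class ChangeNotification:
    change_id: str
    description: str
    significance: Significance
    impact_assessment: dict          # machine-readable impact fields
    submitted_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    decision_log: list = field(default_factory=list)  # traceable, e-signed entries

def route(notification: ChangeNotification) -> str:
    """Automated routing based on change significance (hypothetical targets)."""
    return {
        Significance.MINOR: "site-quality-review",
        Significance.MAJOR: "sponsor-quality-review",
        Significance.CRITICAL: "joint-escalation-board",
    }[notification.significance]

cn = ChangeNotification(
    change_id="CN-2024-117",
    description="Alternate supplier for primary packaging film",
    significance=Significance.MAJOR,
    impact_assessment={"affected_products": ["PRD-01"], "regulatory_filing_impact": False},
)
print(route(cn))  # -> "sponsor-quality-review"
```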
Transparent Performance Dashboards
The most innovative CxOs are moving beyond quarterly business reviews to continuous performance visibility. Quality agreements should provide for real-time access to the key performance indicators (KPIs) that matter most to patient safety and product quality.
Essential KPIs for digital dashboards include:
Batch disposition times and approval rates
Deviation investigation cycle times
CAPA effectiveness metrics
Environmental monitoring excursions and response times
Supplier change notification compliance rates
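As a simple illustration of how two of these KPIs could be computed from system data, the sketch below uses hypothetical deviation records and an assumed 30-day closure target:

```python
from datetime import date

# Hypothetical deviation records: (opened, closed)
deviations = [
    (date(2024, 3, 1), date(2024, 3, 18)),
    (date(2024, 3, 5), date(2024, 4, 2)),
    (date(2024, 3, 9), date(2024, 3, 20)),
]

cycle_times = [(closed - opened).days for opened, closed in deviations]
avg_cycle_time = sum(cycle_times) / len(cycle_times)
within_target = sum(ct <= 30 for ct in cycle_times) / len(cycle_times)

print(f"Average investigation cycle time: {avg_cycle_time:.1f} days")
print(f"Closed within 30-day target: {within_target:.0%}")
```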
Communication Architecture for Transparency
Effective communication in pharmaceutical partnerships requires architectural thinking, not just protocol definition. The most successful CxO-sponsor relationships are built on what I call the “Three-Layer Communication Stack,” which establishes a deliberate rhythm of communication:
Layer 1: Operational Communication (Real-Time)
Purpose: Day-to-day coordination and issue resolution
Tools: Integrated messaging within quality management systems, automated alerts, mobile notifications
Quality agreement requirement: “Operational communications shall be conducted through validated, audit-trailed platforms with 24/7 availability and guaranteed delivery confirmation.”
Every quality agreement should include a subsidiary Communication Plan that addresses:
Stakeholder Matrix: Who needs what information, when, and in what format
Escalation Protocols: Clear triggers for moving issues up the communication stack
Performance Metrics: How communication effectiveness will be measured and improved
Technology Requirements: Specified platforms, security requirements, and access controls
Contingency Procedures: Alternative communication methods for system failures or emergencies
Include communication effectiveness as a measurable element in your supplier scorecards. Track metrics like response time to quality notifications, accuracy of status reporting, and proactive problem identification.
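A brief sketch of one such scorecard element, using hypothetical response times to quality notifications and an assumed 24-hour target:

```python
from statistics import median

# Hypothetical response times (hours) to quality notifications this quarter
response_times_h = [2.5, 4.0, 1.0, 26.0, 3.5, 6.0]
TARGET_HOURS = 24

scorecard = {
    "median_response_h": median(response_times_h),
    "pct_within_target": sum(t <= TARGET_HOURS for t in response_times_h)
                         / len(response_times_h),
}
print(scorecard)  # feeds the communication row of the supplier scorecard
```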
Data Governance as a Competitive Differentiator
Data integrity is more than just ensuring ALCOA+—it’s about creating a competitive moat through superior data governance. The organizations that master data sharing, analysis, and decision-making will dominate the next decade of pharmaceutical manufacturing and development.
The Modern Data Governance Framework
Data Architecture Definition
Your quality agreement must specify not just what data will be shared, but how it will be structured, validated, and integrated:
Master data management: Consistent product codes, batch numbering, and material identifiers across all systems
Data quality standards: Validation rules, completeness requirements, and accuracy thresholds
Integration protocols: APIs, data formats, and synchronization frequencies
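The sketch below shows what enforcing agreed data quality standards on an inbound batch-record payload might look like. The field names and identifier format are assumptions for illustration:

```python
import re

# Assumed master-data conventions agreed in the quality agreement
BATCH_ID_PATTERN = re.compile(r"^[A-Z]{3}-\d{6}$")
REQUIRED_FIELDS = {"batch_id", "material_code", "quantity_kg", "disposition"}

def validate_batch_payload(payload: dict) -> list[str]:
    """Apply agreed validation rules before accepting data into shared systems."""
    errors = []
    missing = REQUIRED_FIELDS - payload.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    if "batch_id" in payload and not BATCH_ID_PATTERN.match(payload["batch_id"]):
        errors.append("batch_id does not follow the agreed master-data format")
    if payload.get("quantity_kg", 0) <= 0:
        errors.append("quantity_kg must be positive")
    return errors

print(validate_batch_payload({"batch_id": "ABC-123456", "material_code": "MAT-01",
                              "quantity_kg": 250.0, "disposition": "released"}))  # []
```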
With increasing regulatory focus on cybersecurity, your data governance plan must address:
Role-based access controls: Granular permissions based on job function and business need
Data classification: Confidentiality levels and handling requirements
Audit logging: Comprehensive tracking of data access, modification, and sharing
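A minimal sketch of role-based access control paired with audit logging, assuming a simple role-to-permission mapping; a real deployment would source permissions from a validated identity and access management system:

```python
from datetime import datetime, timezone

# Assumed role-to-permission mapping for illustration only
ROLE_PERMISSIONS = {
    "qa_reviewer": {"read"},
    "qa_manager": {"read", "approve"},
    "system_admin": {"read", "configure"},
}
audit_log = []

def access(user: str, role: str, action: str, record_id: str) -> bool:
    """Grant or deny an action, logging every attempt either way."""
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user, "role": role, "action": action,
        "record": record_id, "allowed": allowed,
    })
    return allowed

access("jdoe", "qa_reviewer", "approve", "BR-2024-0031")  # denied, but logged
print(audit_log[-1]["allowed"])  # False
```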
Analytics and Intelligence
The real competitive advantage comes from turning shared data into actionable insights:
Predictive analytics: Early warning systems for quality trends and supply chain disruptions
Benchmark reporting: Anonymous industry comparisons to identify improvement opportunities
Root cause analysis: Automated correlation of events across multiple systems and suppliers
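As an illustration of the early-warning idea, the sketch below flags a sustained drift in a quality metric with a simple trending rule. This is illustrative only, not a validated statistical process control method:

```python
# Flag a sustained drift in a quality metric using a naive trending rule.
def sustained_drift(values: list[float], window: int = 5) -> bool:
    """True if the last `window` observations are strictly increasing."""
    recent = values[-window:]
    return len(recent) == window and all(a < b for a, b in zip(recent, recent[1:]))

dissolution_rsd = [1.1, 1.0, 1.2, 1.3, 1.5, 1.8, 2.1]  # hypothetical trend data
if sustained_drift(dissolution_rsd):
    print("Early warning: sustained drift detected, open a quality event for review")
```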
The Data Governance Subsidiary Agreement
Consider creating a separate Data Governance Agreement that complements your quality agreement with specific sections covering data sharing objectives, technical architecture, governance oversight, and compliance requirements.
Veeva Summit
Next week I’ll be discussing this topic at the Veeva Summit, where I will share organizational learnings on how embracing digital infrastructure as a trust-building mechanism can forge stronger partnerships, achieve superior quality outcomes, and ultimately deliver better patient experiences.
The draft revision of EU GMP Chapter 4 introduces what can only be described as a revolutionary framework for data governance systems. This isn’t merely an update to existing documentation requirements; it is a keystone document that cements the decade-long paradigm shift toward data governance as the cornerstone of modern pharmaceutical quality systems.
The Genesis of Systematic Data Governance
The most striking aspect of the draft Chapter 4 is the introduction of sections 4.10 through 4.18, which establish data governance systems as mandatory infrastructure within pharmaceutical quality systems. This comprehensive framework emerges from lessons learned during the past decade of data integrity enforcement actions and reflects the reality that modern pharmaceutical manufacturing operates in an increasingly digital environment where traditional documentation approaches are insufficient.
The requirement that regulated users “establish a data governance system integral to the pharmaceutical quality system” moves far beyond the current Chapter 4’s basic documentation requirements. This integration ensures that data governance isn’t treated as an IT afterthought or compliance checkbox, but rather as a fundamental component of how pharmaceutical companies ensure product quality and patient safety. The emphasis on integration with existing pharmaceutical quality systems builds on synergies that I’ve previously discussed in my analysis of how data governance, data quality, and data integrity work together as interconnected pillars.
The requirement for regular documentation and review of data governance arrangements establishes accountability and ensures continuous improvement. This aligns with my observations about risk-based thinking where effective quality systems must anticipate, monitor, respond, and learn from their operational environment.
Comprehensive Data Lifecycle Management
Section 4.12 represents perhaps the most technically sophisticated requirement in the draft, establishing a six-stage data lifecycle framework that covers creation, processing, verification, decision-making, retention, and controlled destruction. This approach acknowledges that data integrity cannot be ensured through point-in-time controls but requires systematic management throughout the entire data journey.
The specific requirement for “reconstruction of all data processing activities” for derived data establishes unprecedented expectations for data traceability and transparency. This requirement will fundamentally change how pharmaceutical companies design their data processing workflows, particularly in areas like process analytical technology (PAT), manufacturing execution systems (MES), and automated batch release systems where raw data undergoes significant transformation before supporting critical quality decisions.
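One way to support an expectation of reconstructing all data processing activities is to record every transformation step alongside the derived result. The sketch below assumes a simple in-memory ledger and hypothetical PAT readings:

```python
import hashlib
import json
from datetime import datetime, timezone

processing_ledger = []

def record_step(step: str, inputs, output, parameters: dict):
    """Append an entry sufficient to reconstruct how derived data was produced."""
    processing_ledger.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "step": step,
        "parameters": parameters,
        "input_hash": hashlib.sha256(json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output_hash": hashlib.sha256(json.dumps(output, sort_keys=True).encode()).hexdigest(),
    })

raw = [10.2, 10.4, 9.9, 10.1]               # hypothetical raw PAT readings
smoothed = [round(sum(raw) / len(raw), 2)]  # derived value
record_step("mean_smoothing", raw, smoothed, {"method": "arithmetic_mean"})
print(processing_ledger[0]["step"], processing_ledger[0]["output_hash"][:12])
```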
The lifecycle approach also creates direct connections to computerized system validation requirements under Annex 11, as noted in section 4.22. This integration ensures that data governance systems are not separate from, but deeply integrated with, the technical systems that create, process, and store pharmaceutical data. As I’ve discussed in my analysis of computer system validation frameworks, effective validation programs must consider the entire system ecosystem, not just individual software applications.
Risk-Based Data Criticality Assessment
The draft introduces a sophisticated two-dimensional risk assessment framework through section 4.13, requiring organizations to evaluate both data criticality and data risk. Data criticality focuses on the impact to decision-making and product quality, while data risk considers the opportunity for alteration or deletion and the likelihood of detection. This framework provides a scientific basis for prioritizing data protection efforts and designing appropriate controls.
This approach represents a significant evolution from current practices where data integrity controls are often applied uniformly regardless of the actual risk or impact of specific data elements. The risk-based framework allows organizations to focus their most intensive controls on the data that matters most while applying appropriate but proportionate controls to lower-risk information. This aligns with principles I’ve discussed regarding quality risk management under ICH Q9(R1), where structured, science-based approaches reduce subjectivity and improve decision-making.
The requirement to assess “likelihood of detection” introduces a crucial element often missing from traditional data integrity approaches. Organizations must evaluate not only how to prevent data integrity failures but also how quickly and reliably they can detect failures that occur despite preventive controls. This assessment drives requirements for monitoring systems, audit trail analysis capabilities, and incident detection procedures.
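A minimal sketch of how the two dimensions might be combined into a single prioritization score, assuming 1–5 scales and a weighting scheme of my own construction rather than anything prescribed by the draft:

```python
# Two-dimensional scoring per draft section 4.13: data criticality (impact on
# decisions and product quality) x data risk (opportunity for alteration,
# likelihood of detection). Scales and weights are assumptions for the sketch.
def data_risk_priority(criticality: int, alteration_opportunity: int,
                       detection_likelihood: int) -> int:
    """All inputs on a 1-5 scale; low detection likelihood raises the score."""
    data_risk = alteration_opportunity * (6 - detection_likelihood)
    return criticality * data_risk

# Chromatography result feeding batch release: highly critical, readily
# alterable, and failures unlikely to be detected without audit-trail review.
print(data_risk_priority(criticality=5, alteration_opportunity=4,
                         detection_likelihood=2))  # 80
```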
Service Provider Oversight and Accountability
Section 4.18 establishes specific requirements for overseeing service providers’ data management policies and risk control strategies. This requirement acknowledges the reality that modern pharmaceutical operations depend heavily on cloud services, SaaS platforms, contract manufacturing organizations, and other external providers whose data management practices directly impact pharmaceutical company compliance.
The risk-based frequency requirement for service provider reviews represents a practical approach that allows organizations to focus oversight efforts where they matter most while ensuring that all service providers receive appropriate attention. For more details on the evolving regulatory expectations around supplier management, see the post “draft Annex 11’s supplier oversight requirements”.
The service provider oversight requirement also creates accountability throughout the pharmaceutical supply chain, ensuring that data integrity expectations extend beyond the pharmaceutical company’s direct operations to encompass all entities that handle GMP-relevant data. This approach recognizes that regulatory accountability cannot be transferred to external providers, even when specific activities are outsourced.
Operational Implementation Challenges
The transition to mandatory data governance systems will present significant operational challenges for most pharmaceutical organizations. The requirement for “suitably designed systems, the use of technologies and data security measures, combined with specific expertise” in section 4.14 acknowledges that effective data governance requires both technological infrastructure and human expertise.
Organizations will need to invest in personnel with specialized data governance expertise, implement technology systems capable of supporting comprehensive data lifecycle management, and develop procedures for managing the complex interactions between data governance requirements and existing quality systems. This represents a substantial change management challenge that will require executive commitment and cross-functional collaboration.
The requirement for regular review of risk mitigation effectiveness in section 4.17 establishes data governance as a continuous improvement discipline rather than a one-time implementation project. Organizations must develop capabilities for monitoring the performance of their data governance systems and adjusting controls as risks evolve or new technologies are implemented.
The integration with quality risk management principles throughout sections 4.10-4.22 creates powerful synergies between traditional pharmaceutical quality systems and modern data management practices. This integration ensures that data governance supports rather than competes with existing quality initiatives while providing a systematic framework for managing the increasing complexity of pharmaceutical data environments.
The draft’s emphasis on data ownership throughout the lifecycle in section 4.15 establishes clear accountability that will help organizations avoid the diffusion of responsibility that often undermines data integrity initiatives. Clear ownership models provide the foundation for effective governance, accountability, and continuous improvement.
The pharmaceutical industry stands at an inflection point where artificial intelligence meets regulatory compliance, creating new paradigms for quality decision-making that neither fully automate nor abandon human expertise. The concept of the “missing middle,” first articulated by Paul Daugherty and H. James Wilson in their seminal work Human + Machine: Reimagining Work in the Age of AI, has found profound resonance in the pharmaceutical sector, particularly as regulators grapple with how to govern AI applications in Good Manufacturing Practice (GMP) environments.
The recent publication of EU GMP Annex 22 on Artificial Intelligence marks a watershed moment in this evolution, establishing the first dedicated regulatory framework for AI use in pharmaceutical manufacturing while explicitly mandating human oversight in critical decision-making processes. This convergence of the missing middle concept with regulatory reality creates unprecedented opportunities and challenges for pharmaceutical quality professionals, fundamentally reshaping how we approach GMP decision-making in an AI-augmented world.
Understanding the Missing Middle: Beyond the Binary of Human Versus Machine
The missing middle represents a fundamental departure from the simplistic narrative of AI replacing human workers. Instead, it describes the collaborative space where human expertise and artificial intelligence capabilities combine to create outcomes superior to what either could achieve independently. In Daugherty and Wilson’s framework, this space is characterized by fluid, adaptive work processes that can be modified in real-time—a stark contrast to the rigid, sequential workflows that have dominated traditional business operations.
Within the pharmaceutical context, the missing middle takes on heightened significance due to the industry’s unique requirements for safety, efficacy, and regulatory compliance. Unlike other sectors where AI can operate with relative autonomy, pharmaceutical manufacturing demands a level of human oversight that ensures patient safety while leveraging AI’s analytical capabilities. This creates what we might call a “regulated missing middle”—a space where human-machine collaboration must satisfy not only business objectives but also stringent regulatory requirements.
Traditional pharmaceutical quality relies heavily on human decision-making supported by deterministic systems and established procedures. However, the complexity of modern pharmaceutical manufacturing, coupled with the vast amounts of data generated throughout the production process, creates opportunities for AI to augment human capabilities in ways that were previously unimaginable. The challenge lies in harnessing these capabilities while maintaining the control, traceability, and accountability that GMP requires.
Annex 22: Codifying Human Oversight in AI-Driven GMP Environments
The draft EU GMP Annex 22, published for consultation in July 2025, represents the first comprehensive regulatory framework specifically addressing AI use in pharmaceutical manufacturing. The annex establishes clear boundaries around acceptable AI applications while mandating human oversight mechanisms that reflect the missing middle philosophy in practice.
Scope and Limitations: Defining the Regulatory Boundaries
Annex 22 applies exclusively to static, deterministic AI models—those that produce consistent outputs when given identical inputs. This deliberate limitation reflects regulators’ current understanding of AI risk and their preference for predictable, controllable systems in GMP environments. The annex explicitly excludes dynamic models that continuously learn during operation, generative AI systems, and large language models (LLMs) from critical GMP applications, recognizing that these technologies present challenges in terms of explainability, reproducibility, and risk control that current regulatory frameworks cannot adequately address.
This regulatory positioning creates a clear delineation between AI applications that can operate within established GMP principles and those that require different governance approaches. The exclusion of dynamic learning systems from critical applications reflects a risk-averse stance that prioritizes patient safety and regulatory compliance over technological capability—a decision that has sparked debate within the industry about the pace of AI adoption in regulated environments.
Human-in-the-Loop Requirements: Operationalizing the Missing Middle
Perhaps the most significant aspect of Annex 22 is its explicit requirement for human oversight in AI-driven processes. The guidance mandates that qualified personnel must be responsible for ensuring AI outputs are suitable for their intended use, particularly in processes that could impact patient safety, product quality, or data integrity. This requirement operationalizes the missing middle concept by ensuring that human judgment remains central to critical decision-making processes, even as AI capabilities expand.
The human-in-the-loop (HITL) framework outlined in Annex 22 goes beyond simple approval mechanisms. It requires that human operators understand the AI system’s capabilities and limitations, can interpret its outputs meaningfully, and possess the expertise necessary to intervene when circumstances warrant. This creates new skill requirements for pharmaceutical quality professionals, who must develop what Daugherty and Wilson term “fusion skills”—capabilities that enable effective collaboration with AI systems.
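A minimal sketch of a HITL gate, assuming the AI system exposes a prediction plus a confidence score; the threshold and routing logic are illustrative:

```python
# Anything flagged, or anything the model is unsure about, is routed to a
# qualified human reviewer. Threshold is an assumption for the sketch.
CONFIDENCE_THRESHOLD = 0.95

def disposition(prediction: str, confidence: float) -> str:
    if prediction == "defect" or confidence < CONFIDENCE_THRESHOLD:
        return "route_to_human_review"
    return "accept_ai_result"

print(disposition("no_defect", 0.97))  # accept_ai_result
print(disposition("no_defect", 0.80))  # route_to_human_review
print(disposition("defect", 0.99))     # route_to_human_review
```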
Validation and Performance Requirements: Ensuring Reliability in the Missing Middle
Annex 22 establishes rigorous validation requirements for AI systems used in GMP contexts, mandating that models undergo testing against predefined acceptance criteria that are at least as stringent as the processes they replace. This requirement ensures that AI augmentation does not compromise existing quality standards while providing a framework for demonstrating the value of human-machine collaboration.
The validation framework emphasizes explainability and confidence scoring, requiring AI systems to provide transparent justifications for their decisions. This transparency requirement enables human operators to understand AI recommendations and exercise appropriate judgment in their implementation—a key principle of effective missing middle operations. The focus on explainability also facilitates regulatory inspections and audits, ensuring that AI-driven decisions can be scrutinized and validated by external parties.
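The sketch below illustrates performance qualification against predefined acceptance criteria; the metric names and targets are assumptions, and in practice the criteria would be set at least as stringent as the human process the model supports:

```python
# Predefined acceptance criteria for a hypothetical visual inspection model
ACCEPTANCE_CRITERIA = {"sensitivity": 0.995, "specificity": 0.98}

def evaluate(tp: int, fn: int, tn: int, fp: int) -> dict:
    """Score a challenge set and check every criterion before release."""
    results = {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }
    results["pass"] = all(results[k] >= v for k, v in ACCEPTANCE_CRITERIA.items())
    return results

# Hypothetical challenge-set results
print(evaluate(tp=996, fn=4, tn=985, fp=15))
# {'sensitivity': 0.996, 'specificity': 0.985, 'pass': True}
```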
The Evolution of GMP Decision Making: From Human-Centric to Human-AI Collaborative
Traditional GMP decision-making has been characterized by hierarchical approval processes, extensive documentation requirements, and risk-averse approaches that prioritize compliance over innovation. While these characteristics have served the industry well in ensuring product safety and regulatory compliance, they have also created inefficiencies and limited opportunities for continuous improvement.
Traditional GMP Decision Paradigms
Conventional pharmaceutical quality assurance relies on trained personnel making decisions based on established procedures, historical data, and their professional judgment. Quality control laboratories generate data through standardized testing protocols, which trained analysts interpret according to predetermined specifications. Deviation investigations follow structured methodologies that emphasize root cause analysis and corrective action implementation. Manufacturing decisions are made through change control processes that require multiple levels of review and approval.
This approach has proven effective in maintaining product quality and regulatory compliance, but it also has significant limitations. Human decision-makers can be overwhelmed by the volume and complexity of data generated in modern pharmaceutical manufacturing. Cognitive biases can influence judgment, and the sequential nature of traditional decision-making processes can delay responses to emerging issues. Additionally, the reliance on historical precedent can inhibit innovation and limit opportunities for process optimization.
AI-Augmented Decision Making: Expanding Human Capabilities
The integration of AI into GMP decision-making processes offers opportunities to address many limitations of traditional approaches while maintaining the human oversight that regulations require. AI systems can process vast amounts of data rapidly, identify patterns that might escape human observation, and provide data-driven recommendations that complement human judgment.
In quality control laboratories, AI-powered image recognition systems can analyze visual inspections with greater speed and consistency than human inspectors, while still requiring human validation of critical decisions. Predictive analytics can identify potential quality issues before they manifest, enabling proactive interventions that prevent problems rather than merely responding to them. Real-time monitoring systems can continuously assess process parameters and alert human operators to deviations that require attention.
The transformation of deviation management exemplifies the potential of AI-augmented decision-making. Traditional deviation investigations can be time-consuming and resource-intensive, often requiring weeks or months to complete. AI systems can rapidly analyze historical data to identify potential root causes, suggest relevant corrective actions based on similar past events, and even predict the likelihood of recurrence. However, the final decisions about root cause determination and corrective action implementation remain with qualified human personnel, ensuring that professional judgment and regulatory accountability are preserved.
Maintaining Human Accountability in AI-Augmented Processes
The integration of AI into GMP decision-making raises important questions about accountability and responsibility. Annex 22 addresses these concerns by maintaining clear lines of human accountability while enabling AI augmentation. The guidance requires that qualified personnel remain responsible for all decisions that could impact patient safety, product quality, or data integrity, regardless of the level of AI involvement in the decision-making process.
This approach reflects the missing middle philosophy by recognizing that AI augmentation should enhance rather than replace human judgment. Human operators must understand the AI system’s recommendations, evaluate them in the context of their broader knowledge and experience, and take responsibility for the final decisions. This creates a collaborative dynamic where AI provides analytical capabilities that exceed human limitations while humans provide contextual understanding, ethical judgment, and regulatory accountability that AI systems cannot replicate.
Fusion Skills for Pharmaceutical Quality Professionals: Navigating the AI-Augmented Landscape
The successful implementation of AI in GMP environments requires pharmaceutical quality professionals to develop new capabilities that enable effective collaboration with AI systems. Daugherty and Wilson identify eight “fusion skills” that are essential for thriving in the missing middle. These skills take on particular significance in the highly regulated pharmaceutical environment, where the consequences of poor decision-making can directly impact patient safety.
Intelligent Interrogation: Asking the Right Questions of AI Systems
Intelligent interrogation involves knowing how to effectively query AI systems to obtain meaningful insights. In pharmaceutical quality contexts, this skill enables professionals to leverage AI analytical capabilities while maintaining critical thinking about the results. For example, when investigating a deviation, a quality professional might use AI to analyze historical data for similar events, but must know how to frame queries that yield relevant and actionable insights.
The development of intelligent interrogation skills requires understanding both the capabilities and limitations of specific AI systems. Quality professionals must learn to ask questions that align with the AI system’s training and design while recognizing when human judgment is necessary to interpret or validate the results. This skill is particularly important in GMP environments, where the accuracy and completeness of information can have significant regulatory and safety implications.
Judgment Integration: Combining AI Insights with Human Wisdom
Judgment integration involves combining AI-generated insights with human expertise to make informed decisions. This skill is critical in pharmaceutical quality, where decisions often require consideration of factors that may not be captured in historical data or AI training sets. For instance, an AI system might recommend a particular corrective action based on statistical analysis, but a human professional might recognize unique circumstances that warrant a different approach.
Effective judgment integration requires professionals to maintain a critical perspective on AI recommendations while remaining open to insights that challenge conventional thinking. In GMP contexts, this balance is particularly important because regulatory compliance demands both adherence to established procedures and responsiveness to unique circumstances. Quality professionals must develop the ability to synthesize AI insights with their understanding of regulatory requirements, product characteristics, and manufacturing constraints.
Reciprocal Apprenticing: Mutual Learning Between Humans and AI
Reciprocal apprenticing describes the process by which humans and AI systems learn from each other to improve performance over time. In pharmaceutical quality applications, this might involve humans providing feedback on AI recommendations that helps the system improve its future performance, while simultaneously learning from AI insights to enhance their own decision-making capabilities.
This bidirectional learning process is particularly valuable in GMP environments, where continuous improvement is both a regulatory expectation and a business imperative. Quality professionals can help AI systems become more effective by providing context about why certain recommendations were or were not appropriate in specific situations. Simultaneously, they can learn from AI analysis to identify patterns or relationships that might inform future decision-making.
Additional Fusion Skills: Building Comprehensive AI Collaboration Capabilities
Beyond the three core skills highlighted by Daugherty and Wilson for generative AI applications, their broader framework includes additional capabilities that are relevant to pharmaceutical quality professionals. Responsible normalizing involves shaping the perception and purpose of human-machine interaction in ways that align with organizational values and regulatory requirements. In pharmaceutical contexts, this skill helps ensure that AI implementation supports rather than undermines the industry’s commitment to patient safety and product quality.
Re-humanizing time involves using AI to free up human capacity for distinctly human activities such as creative problem-solving, relationship building, and ethical decision-making. For pharmaceutical quality professionals, this might mean using AI to automate routine data analysis tasks, creating more time for strategic thinking about quality improvements and regulatory strategy.
Bot-based empowerment and holistic melding involve developing mental models of AI capabilities that enable more effective collaboration. These skills help quality professionals understand how to leverage AI systems most effectively while maintaining appropriate skepticism about their limitations.
Real-World Applications: The Missing Middle in Pharmaceutical Manufacturing
The theoretical concepts of the missing middle and human-AI collaboration are increasingly being translated into practical applications within pharmaceutical manufacturing environments. These implementations demonstrate how the principles outlined in Annex 22 can be operationalized while delivering tangible benefits to product quality, operational efficiency, and regulatory compliance.
Quality Control and Inspection: Augmenting Human Visual Capabilities
One of the most established applications of AI in pharmaceutical manufacturing involves augmenting human visual inspection capabilities. Traditional visual inspection of tablets, capsules, and packaging materials relies heavily on human operators who must identify defects, contamination, or other quality issues. While humans excel at recognizing unusual patterns and exercising judgment about borderline cases, they can be limited by fatigue, inconsistency, and the volume of materials that must be inspected.
AI-powered vision systems can process images at speeds far exceeding human capabilities while maintaining consistent performance standards. These systems can identify defects that might be missed by human inspectors and flag potential issues for further review. However, the most effective implementations maintain human oversight over critical decisions, with AI serving to augment rather than replace human judgment.
Predictive Maintenance: Preventing Quality Issues Through Proactive Intervention
Predictive maintenance represents another area where AI applications align with the missing middle philosophy by augmenting human decision-making rather than replacing it. Traditional maintenance approaches in pharmaceutical manufacturing have relied on either scheduled maintenance intervals or reactive responses to equipment failures. Both approaches can result in unnecessary costs or quality risks.
AI-powered predictive maintenance systems analyze sensor data, equipment performance histories, and maintenance records to predict when equipment failures are likely to occur. This information enables maintenance teams to schedule interventions before failures impact production or product quality. However, the final decisions about maintenance timing and scope remain with qualified personnel who can consider factors such as production schedules, regulatory requirements, and risk assessments that AI systems cannot fully evaluate.
Real-Time Process Monitoring: Enhancing Human Situational Awareness
Real-time process monitoring applications leverage AI’s ability to continuously analyze large volumes of data to enhance human situational awareness and decision-making capabilities. Traditional process monitoring in pharmaceutical manufacturing relies on control systems that alert operators when parameters exceed predetermined limits. While effective, this approach can result in delayed responses to developing issues and may miss subtle patterns that indicate emerging problems.
AI-enhanced monitoring systems can analyze multiple data streams simultaneously to identify patterns that might indicate developing quality issues or process deviations. These systems can provide early warnings that enable operators to take corrective action before problems become critical. The most effective implementations provide operators with explanations of why alerts were generated, enabling them to make informed decisions about appropriate responses.
The integration of AI into Manufacturing Execution Systems (MES) exemplifies this approach. AI algorithms can monitor real-time production data to detect deviations in drug formulation, dissolution rates, and environmental conditions. When potential issues are identified, the system alerts qualified operators who can evaluate the situation and determine appropriate corrective actions. This approach maintains human accountability for critical decisions while leveraging AI’s analytical capabilities to enhance situational awareness.
Deviation Management: Accelerating Root Cause Analysis
Deviation management represents a critical area where AI applications can significantly enhance human capabilities while maintaining the rigorous documentation and accountability requirements that GMP mandates. Traditional deviation investigations can be time-consuming processes that require extensive data review, analysis, and documentation.
AI systems can rapidly analyze historical data to identify patterns, potential root causes, and relevant precedents for similar deviations. This capability can significantly reduce the time required for initial investigation phases while providing investigators with comprehensive background information. However, the final determinations about root causes, risk assessments, and corrective actions remain with qualified human personnel who can exercise professional judgment and ensure regulatory compliance.
The application of AI to root cause analysis demonstrates the value of the missing middle approach in highly regulated environments. AI can process vast amounts of data to identify potential contributing factors and suggest hypotheses for investigation, but human expertise remains essential for evaluating these hypotheses in the context of specific circumstances, regulatory requirements, and risk considerations.
Regulatory Landscape: Beyond Annex 22
While Annex 22 represents the most comprehensive regulatory guidance for AI in pharmaceutical manufacturing, it is part of a broader regulatory landscape that is evolving to address the challenges and opportunities presented by AI technologies. Understanding this broader context is essential for pharmaceutical organizations seeking to implement AI applications that align with both current requirements and emerging regulatory expectations.
FDA Perspectives: Encouraging Innovation with Appropriate Safeguards
The U.S. Food and Drug Administration (FDA) has taken a generally supportive stance toward AI applications in pharmaceutical manufacturing, recognizing their potential to enhance product quality and manufacturing efficiency. The agency’s approach emphasizes the importance of maintaining human oversight and accountability while encouraging innovation that can benefit public health.
The FDA’s guidance on Process Analytical Technology (PAT) provides a framework for implementing advanced analytical and control technologies, including AI applications, in pharmaceutical manufacturing. The PAT framework emphasizes real-time monitoring and control capabilities that align well with AI applications, while maintaining requirements for validation, risk assessment, and human oversight that are consistent with the missing middle philosophy.
The agency has also indicated interest in AI applications that can enhance regulatory processes themselves, including automated analysis of manufacturing data for inspection purposes and AI-assisted review of regulatory submissions. These applications could potentially streamline regulatory interactions while maintaining appropriate oversight and accountability mechanisms.
International Harmonization: Toward Global Standards
The development of AI governance frameworks in pharmaceutical manufacturing is increasingly taking place within international forums that seek to harmonize approaches across different regulatory jurisdictions. The International Council for Harmonisation (ICH) has begun considering how existing guidelines might need to be modified to address AI applications, particularly in areas such as quality risk management and pharmaceutical quality systems.
The European Medicines Agency (EMA) has published reflection papers on AI use throughout the medicinal product lifecycle, providing broader context for how AI applications might be governed beyond manufacturing applications. These documents emphasize the importance of human-centric approaches that maintain patient safety and product quality while enabling innovation.
The Pharmaceutical Inspection Convention and Pharmaceutical Inspection Co-operation Scheme (PIC/S) has also begun developing guidance on AI applications, recognizing the need for international coordination in this rapidly evolving area. The alignment between Annex 22 and PIC/S approaches suggests movement toward harmonized international standards that could facilitate global implementation of AI applications.
Industry Standards: Complementing Regulatory Requirements
Professional organizations and industry associations are developing standards and best practices that complement regulatory requirements while providing more detailed guidance for implementation. The International Society for Pharmaceutical Engineering (ISPE) has published guidance on AI governance frameworks that emphasize risk-based approaches and lifecycle management principles.
Emerging Considerations: Preparing for Future Developments
The regulatory landscape for AI in pharmaceutical manufacturing continues to evolve as regulators gain experience with specific applications and technologies advance. Several emerging considerations are likely to influence future regulatory developments and should be considered by organizations planning AI implementations.
The potential for AI applications to generate novel insights that challenge established practices raises questions about how regulatory frameworks should address innovation that falls outside existing precedents. The missing middle philosophy provides a framework for managing these situations by maintaining human accountability while enabling AI-driven insights to inform decision-making.
The increasing sophistication of AI technologies, including advances in explainable AI and federated learning approaches, may enable applications that are currently excluded from critical GMP processes. Regulatory frameworks will need to evolve to address these capabilities while maintaining appropriate safeguards for patient safety and product quality.
Challenges and Limitations: Navigating the Complexities of AI Implementation
Despite the promise of AI applications in pharmaceutical manufacturing, significant challenges and limitations must be addressed to realize the full potential of human-machine collaboration in GMP environments. These challenges span technical, organizational, and regulatory dimensions and require careful consideration in the design and implementation of AI systems.
Technical Challenges: Ensuring Reliability and Performance
The implementation of AI in GMP environments faces significant technical challenges related to data quality, system validation, and performance consistency. Pharmaceutical manufacturing generates vast amounts of data from multiple sources, including process sensors, laboratory instruments, and quality control systems. Ensuring that this data is of sufficient quality to train and operate AI systems requires robust data governance frameworks and quality assurance processes.
Data integrity requirements in GMP environments are particularly stringent, demanding that all data be attributable, legible, contemporaneous, original, and accurate (the ALCOA principles). AI systems must be designed to maintain these data integrity principles throughout their operation, including during data preprocessing, model training, and prediction generation phases. This requirement can complicate AI implementations and requires careful attention to system design and validation approaches.
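One simple way to carry ALCOA-supporting metadata through an AI workflow is to wrap every generated result in a structured record; the field names below are assumptions for illustration:

```python
from datetime import datetime, timezone

def alcoa_record(result, model_version: str, operator_id: str, source_file: str) -> dict:
    """Attach metadata supporting ALCOA expectations to an AI-generated result:
    attributable (operator, model version), contemporaneous (timestamp),
    original (pointer to source data), legible (structured fields)."""
    return {
        "result": result,
        "model_version": model_version,  # attributable to a specific model
        "operator_id": operator_id,      # attributable to a person
        "recorded_at": datetime.now(timezone.utc).isoformat(),  # contemporaneous
        "source_data": source_file,      # link back to the original record
    }

print(alcoa_record("no_defect", "vision-model-2.3.1", "jdoe", "lot_0042/img_0815.png"))
```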
System validation presents another significant technical challenge. Traditional validation approaches for computerized systems rely on deterministic testing methodologies that may not be fully applicable to AI systems, particularly those that employ machine learning algorithms. Annex 22 addresses some of these challenges by focusing on static, deterministic AI models, but even these systems require validation approaches that can demonstrate consistent performance across expected operating conditions.
The black box nature of some AI algorithms creates challenges for meeting explainability requirements. While Annex 22 mandates that AI systems provide transparent justifications for their decisions, achieving this transparency can be technically challenging for complex machine learning models. Organizations must balance the analytical capabilities of sophisticated AI algorithms with the transparency requirements of GMP environments.
Organizational Challenges: Building Capabilities and Managing Change
The successful implementation of AI in pharmaceutical manufacturing requires significant organizational capabilities that many companies are still developing. The missing middle approach demands that organizations build fusion skills across their workforce while maintaining existing competencies in traditional pharmaceutical quality practices.
Skills development represents a particular challenge, as it requires investment in both technical training for AI systems and conceptual training for understanding how to collaborate effectively with AI. Quality professionals must develop capabilities in data analysis, statistical interpretation, and AI system interaction while maintaining their expertise in pharmaceutical science, regulatory requirements, and quality assurance principles.
Change management becomes critical when implementing AI systems that alter established workflows and decision-making processes. Traditional pharmaceutical organizations often have deeply embedded cultures that emphasize risk aversion and adherence to established procedures. Introducing AI systems that recommend changes to established practices or challenge conventional thinking requires careful change management to ensure adoption while maintaining appropriate risk controls.
The integration of AI systems with existing pharmaceutical quality systems presents additional organizational challenges. Many pharmaceutical companies operate with legacy systems that were not designed to interface with AI applications. Integrating AI capabilities while maintaining system reliability and regulatory compliance can require significant investments in system upgrades and integration capabilities.
The evolving nature of regulatory requirements for AI applications creates uncertainty for pharmaceutical organizations planning implementations. While Annex 22 provides important guidance, it is still in draft form and subject to change based on consultation feedback. Organizations must balance the desire to implement AI capabilities with the need to ensure compliance with final regulatory requirements.
The international nature of pharmaceutical manufacturing creates additional regulatory challenges, as organizations must navigate different AI governance frameworks across multiple jurisdictions. While there is movement toward harmonization, differences in regulatory approaches could complicate global implementations.
Inspection readiness represents a particular challenge for AI implementations in GMP environments. Traditional pharmaceutical inspections focus on evaluating documented procedures, training records, and system validations. AI systems introduce new elements that inspectors may be less familiar with, requiring organizations to develop new approaches to demonstrate compliance and explain AI-driven decisions to regulatory authorities.
The dynamic nature of AI systems, even static models as defined by Annex 22, creates challenges for maintaining validation status over time. Unlike traditional computerized systems that remain stable once validated, AI systems may require revalidation as they are updated or as their operating environments change. Organizations must develop lifecycle management approaches that maintain validation status while enabling continuous improvement.
Future Implications: The Evolution of Pharmaceutical Quality Assurance
The integration of AI into pharmaceutical manufacturing represents more than a technological upgrade; it signals a fundamental transformation in how quality assurance is conceptualized and practiced. As AI capabilities continue to advance and regulatory frameworks mature, the implications for pharmaceutical quality assurance extend far beyond current applications to encompass new paradigms for ensuring product safety and efficacy.
The Transformation of Quality Professional Roles
The missing middle philosophy suggests that AI integration will transform rather than eliminate quality professional roles in pharmaceutical manufacturing. Future quality professionals will likely serve as AI collaborators who combine domain expertise with AI literacy to make more informed decisions than either humans or machines could make independently.
These evolved roles will require professionals who can bridge the gap between pharmaceutical science and data science, understanding both the regulatory requirements that govern pharmaceutical manufacturing and the capabilities and limitations of AI systems. Quality professionals will need to develop skills in AI system management, including understanding how to train, validate, and monitor AI applications while maintaining appropriate skepticism about their outputs.
The emergence of new role categories seems likely, including AI trainers who specialize in developing and maintaining AI models for pharmaceutical applications, AI explainers who help interpret AI outputs for regulatory and business purposes, and AI sustainers who ensure that AI systems continue to operate effectively over time. These roles reflect the missing middle philosophy by combining human expertise with AI capabilities to create new forms of value.
| Fusion Skill | Category | Definition | Pharmaceutical Quality Application | Current Skill Level (Typical) | Target Skill Level (AI Era) |
|---|---|---|---|---|---|
| Intelligent Interrogation | Machines Augment Humans | Knowing how to ask the right questions of AI systems across levels of abstraction to get meaningful insights | Querying AI systems for deviation analysis, asking specific questions about historical patterns and root causes | Low – Basic | High – Advanced |
| Judgment Integration | Machines Augment Humans | The ability to combine AI-generated insights with human expertise and judgment to make informed decisions | Combining AI recommendations with regulatory knowledge and professional judgment in quality decisions | Medium – Developing | High – Advanced |
| Reciprocal Apprenticing | Humans + Machines (Both) | Mutual learning where humans train AI while AI teaches humans, creating bidirectional skill development | Training AI on quality patterns while learning from AI insights about process optimization | Low – Basic | High – Advanced |
| Bot-based Empowerment | Machines Augment Humans | Working effectively with AI agents to extend human capabilities and create enhanced performance | Using AI-powered inspection systems while maintaining human oversight and decision authority | Low – Basic | High – Advanced |
| Holistic Melding | Machines Augment Humans | Developing robust mental models of AI capabilities to improve collaborative outcomes | Understanding AI capabilities in predictive maintenance to optimize intervention timing | Low – Basic | Medium – Proficient |
| Re-humanizing Time | Humans Manage Machines | Using AI to free up human capacity for distinctly human activities like creativity and relationship building | Automating routine data analysis to focus on strategic quality improvements and regulatory planning | Medium – Developing | High – Advanced |
| Responsible Normalizing | Humans Manage Machines | Responsibly shaping the purpose and perception of human-machine interaction for individuals and society | Ensuring AI implementations align with GMP principles and patient safety requirements | Medium – Developing | High – Advanced |
| Relentless Reimagining | Humans + Machines (Both) | The discipline of creating entirely new processes and business models rather than just automating existing ones | Redesigning quality processes from scratch to leverage AI capabilities while maintaining compliance | Low – Basic | Medium – Proficient |
Advanced AI Applications: Beyond Current Regulatory Boundaries
While current regulatory frameworks focus on static, deterministic AI models, the future likely holds opportunities for more sophisticated AI applications that could further transform pharmaceutical quality assurance. Dynamic learning systems, currently excluded from critical GMP applications by Annex 22, may eventually be deemed acceptable as our understanding of their risks and benefits improves.
Generative AI applications, while currently limited to non-critical applications, could potentially revolutionize areas such as deviation investigation, regulatory documentation, and training material development. As these technologies mature and appropriate governance frameworks develop, they may enable new forms of human-AI collaboration that further expand the missing middle in pharmaceutical manufacturing.
The integration of AI with other emerging technologies, such as digital twins and advanced sensor networks, could create comprehensive pharmaceutical manufacturing ecosystems that continuously optimize quality while maintaining human oversight. These integrated systems could enable unprecedented levels of process understanding and control while preserving the human accountability that regulations require.
Personalized Medicine and Quality Assurance Implications
The trend toward personalized medicine presents unique challenges and opportunities for AI applications in pharmaceutical quality assurance. Traditional GMP frameworks are designed around standardized products manufactured at scale, but personalized therapies may require individualized quality approaches that adapt to specific patient or product characteristics.
AI systems could enable quality assurance approaches that adjust to the unique requirements of personalized therapies while maintaining appropriate safety and efficacy standards. This might involve AI-driven risk assessments that consider patient-specific factors or quality control approaches that adapt to the characteristics of individual therapeutic products.
The regulatory frameworks for these applications will likely need to evolve beyond current approaches, potentially incorporating more flexible risk-based approaches that can accommodate the variability inherent in personalized medicine while maintaining patient safety. The missing middle philosophy provides a framework for managing this complexity by ensuring that human judgment remains central to quality decisions while leveraging AI capabilities to manage the increased complexity of personalized manufacturing.
Global Harmonization and Regulatory Evolution
The future of AI in pharmaceutical manufacturing will likely be shaped by efforts to harmonize regulatory approaches across different jurisdictions. The current patchwork of national and regional guidelines creates complexity for global pharmaceutical companies, but movement toward harmonized international standards could facilitate broader AI adoption.
The development of risk-based regulatory frameworks that focus on outcomes rather than specific technologies could enable more flexible approaches to AI implementation while maintaining appropriate safeguards. These frameworks would need to balance the desire for innovation with the fundamental regulatory imperative to protect patient safety and ensure product quality.
The evolution of regulatory science itself may be influenced by AI applications, with regulatory agencies potentially using AI tools to enhance their own capabilities in areas such as data analysis, risk assessment, and inspection planning. This could create new opportunities for collaboration between industry and regulators while maintaining appropriate independence and oversight.
Recommendations for Industry Implementation
Based on the analysis of current regulatory frameworks, technological capabilities, and industry best practices, several key recommendations emerge for pharmaceutical organizations seeking to implement AI applications that align with the missing middle philosophy and regulatory expectations.
Developing AI Governance Frameworks
Organizations should establish comprehensive AI governance frameworks that address the full lifecycle of AI applications from development through retirement. These frameworks should align with existing pharmaceutical quality systems while addressing the unique characteristics of AI technologies. The governance framework should define roles and responsibilities for AI oversight, establish approval processes for AI implementations, and create mechanisms for ongoing monitoring and risk management.
The governance framework should explicitly address the human oversight requirements outlined in Annex 22, ensuring that qualified personnel remain accountable for all decisions that could impact patient safety, product quality, or data integrity. This includes defining the knowledge and training requirements for personnel who will work with AI systems and establishing procedures for ensuring that human operators understand AI capabilities and limitations.
Risk assessment processes should be integrated throughout the AI lifecycle, beginning with initial feasibility assessments and continuing through ongoing monitoring of system performance. These risk assessments should consider not only technical risks but also regulatory, business, and ethical considerations that could impact AI implementations.
| AI Family | Description | Key Characteristics | Annex 22 Classification | GMP Applications | Validation Requirements | Risk Level |
|---|---|---|---|---|---|---|
| Rule-Based Systems | If-then logic systems with predetermined decision trees and fixed algorithms | — | — | — | Not applicable – prohibited for critical GMP applications | High |
| Federated Learning | Distributed learning across multiple sites while keeping data local | Privacy-preserving distributed training, model aggregation | Prohibited for Critical GMP | Multi-site model training while preserving data privacy | Not applicable – prohibited for critical GMP applications | Medium |

Detailed classification table of AI families and their regulatory status under the draft EU Annex 22.
Building Organizational Capabilities
Successful AI implementation requires significant investment in organizational capabilities that enable effective human-machine collaboration. This includes technical capabilities for developing, validating, and maintaining AI systems, as well as human capabilities for collaborating effectively with AI.
Technical capability development should focus on areas such as data science, machine learning, and AI system validation. Organizations may need to hire new personnel with these capabilities or invest in training existing staff. The technical capabilities should be integrated with existing pharmaceutical science and quality assurance expertise to ensure that AI applications align with industry requirements.
Human capability development should focus on fusion skills that enable effective collaboration with AI systems. This includes intelligent interrogation skills for querying AI systems effectively, judgment integration skills for combining AI insights with human expertise, and reciprocal apprenticing skills for mutual learning between humans and AI. Training programs should help personnel understand both the capabilities and limitations of AI systems while maintaining their core competencies in pharmaceutical quality assurance.
Implementing Pilot Programs
Organizations should consider implementing pilot programs that demonstrate AI capabilities in controlled environments before pursuing broader implementations. These pilots should focus on applications that align with current regulatory frameworks while providing opportunities to develop organizational capabilities and understanding.
Pilot programs should be designed to generate evidence of AI effectiveness while maintaining rigorous controls that ensure patient safety and regulatory compliance. This includes comprehensive validation approaches, robust change control processes, and thorough documentation of AI system performance.
The pilot programs should also serve as learning opportunities for developing organizational capabilities and refining AI governance approaches. Lessons learned from pilot implementations should be captured and used to inform broader AI strategies and implementation approaches.
Engaging with Regulatory Authorities
Organizations should actively engage with regulatory authorities to understand expectations and contribute to the development of regulatory frameworks for AI applications. This engagement can help ensure that AI implementations align with regulatory expectations while providing input that shapes future guidance.
Regulatory engagement should begin early in the AI development process, potentially including pre-submission meetings or other formal interaction mechanisms. Organizations should be prepared to explain their AI approaches, demonstrate compliance with existing requirements, and address any novel aspects of their implementations.
Industry associations and professional organizations provide valuable forums for collective engagement with regulatory authorities on AI-related issues. Organizations should participate in these forums to contribute to industry understanding and influence regulatory development.
Conclusion: Embracing the Collaborative Future of Pharmaceutical Quality
The convergence of the missing middle concept with the regulatory reality of Annex 22 represents a defining moment for pharmaceutical quality assurance. Rather than viewing AI as either a replacement for human expertise or a mere automation tool, the industry has the opportunity to embrace a collaborative paradigm that enhances human capabilities while maintaining the rigorous oversight that patient safety demands.
The journey toward effective human-AI collaboration in GMP environments will not be without challenges. Technical hurdles around data quality, system validation, and explainability must be overcome. Organizational capabilities in both AI technology and fusion skills must be developed. Regulatory frameworks will continue to evolve as experience accumulates and understanding deepens. However, the potential benefits—enhanced product quality, improved operational efficiency, and more effective regulatory compliance—justify the investment required to address these challenges.
The missing middle philosophy provides a roadmap for navigating this transformation. By focusing on collaboration rather than replacement, by maintaining human accountability while leveraging AI capabilities, and by developing the fusion skills necessary for effective human-machine partnerships, pharmaceutical organizations can position themselves to thrive in an AI-augmented future while upholding the industry’s fundamental commitment to patient safety and product quality.
Annex 22 represents just the beginning of this transformation. As AI technologies continue to advance and regulatory frameworks mature, new opportunities will emerge for expanding the scope and sophistication of human-AI collaboration in pharmaceutical manufacturing. Organizations that invest now in building the capabilities, governance frameworks, and organizational cultures necessary for effective AI collaboration will be best positioned to benefit from these future developments.
The future of pharmaceutical quality assurance lies not in choosing between human expertise and artificial intelligence, but in combining them in ways that create value neither could achieve alone. The missing middle is not empty space to be filled, but fertile ground for innovation that maintains the human judgment and accountability that regulations require while leveraging the analytical capabilities that AI provides. As we move forward into this new era, the most successful organizations will be those that master the art of human-machine collaboration, creating a future where technology serves to amplify rather than replace the human expertise that has always been at the heart of pharmaceutical quality assurance.
The integration of AI into pharmaceutical manufacturing represents more than a technological evolution; it embodies a fundamental reimagining of how quality is assured, how decisions are made, and how human expertise can be augmented rather than replaced. The missing middle concept, operationalized through frameworks like Annex 22, provides a path forward that honors both the innovative potential of AI and the irreplaceable value of human judgment in ensuring that the medicines we manufacture continue to meet the highest standards of safety, efficacy, and quality that patients deserve.
The draft revision of EU GMP Chapter 4 on Documentation represents more than just an update—it signals a paradigm shift toward digitalization, enhanced data integrity, and risk-based quality management in pharmaceutical manufacturing.
The Digital Transformation Imperative
The draft Chapter 4 emerges from a recognition that pharmaceutical manufacturing has fundamentally changed since 2011. The rise of Industry 4.0, artificial intelligence in manufacturing decisions, and the critical importance of data integrity following numerous regulatory actions have necessitated a complete reconceptualization of documentation requirements.
The new framework introduces comprehensive data governance systems, risk-based approaches throughout the documentation lifecycle, and explicit requirements for hybrid systems that combine paper and electronic elements. These changes reflect lessons learned from data integrity violations that have cost the industry billions in remediation and lost revenue.
Detailed Document Type Analysis
Master Documents: Foundation of Quality Systems
| Document Type | Current Chapter 4 (2011) Requirements | Draft Chapter 4 (2025) Requirements | FDA 21 CFR 211 | ICH Q7 | WHO GMP | ISO 13485 |
|---|---|---|---|---|---|---|
| Site Master File | A document describing the GMP-related activities of the manufacturer | Refer to EU GMP Guidelines, Volume 4, 'Explanatory Notes on the preparation of a Site Master File' | No specific equivalent, but facility information requirements under §211.176 | Section 2.5 – Documentation system should include site master file equivalent information | Section 4.1 – Site master file requirements similar to EU GMP | Quality manual requirements under Section 4.2.2 |
| Validation Master Plan | Not specified | A document describing the key elements of the site qualification and validation program | Process validation requirements under §211.100 and §211.110 | Section 12 – Validation requirements for critical operations | Section 4.2 – Validation and qualification programs | Validation planning under Section 7.5.6 and design validation |
The introduction of the Validation Master Plan as a mandatory master document represents the most significant addition to this category. This change acknowledges the critical role of systematic validation in modern pharmaceutical manufacturing and aligns EU GMP with global best practices seen in FDA and ICH frameworks.
The Site Master File requirement, while maintained, now references more detailed guidance, suggesting increased regulatory scrutiny of facility information and manufacturing capabilities.
Instructions: The Operational Backbone
| Document Type | Current Chapter 4 (2011) Requirements | Draft Chapter 4 (2025) Requirements | FDA 21 CFR 211 | ICH Q7 | WHO GMP | ISO 13485 |
|---|---|---|---|---|---|---|
| Specifications | Describe in detail the requirements with which the products or materials used or obtained during manufacture have to conform; they serve as a basis for quality evaluation | Refer to glossary for definition | Component specifications §211.84, drug product specifications §211.160 | Section 7.3 – Specifications for starting materials, intermediates, and APIs | Section 4.12 – Specifications for starting materials and finished products | Requirements specifications under Section 7.2.1 |
| Manufacturing Formulae, Processing, Packaging and Testing Instructions | Provide details of all the starting materials, equipment and computerised systems (if any) to be used and specify all processing, packaging, sampling and testing instructions | Provide complete detail on all the starting materials, equipment, and computerised systems (if any) to be used and specify all processing, packaging, sampling, and testing instructions to ensure batch to batch consistency | Master production and control records §211.186, production record requirements §211.188 | Section 6.4 – Master production instructions and batch production records | Section 4.13 – Manufacturing formulae and processing instructions | Production and service provision instructions, Section 7.5.1 |
| Procedures (SOPs) | Give directions for performing certain operations | Otherwise known as Standard Operating Procedures: a documented set of instructions for performing and recording operations | Written procedures required throughout Part 211 for various operations | Section 6.1 – Written procedures for all critical operations | Section 4.14 – Standard operating procedures for all operations | Documented procedures throughout the standard, Section 4.2.1 |
| Technical/Quality Agreements | Agreed between contract givers and acceptors for outsourced activities | Written proof of agreement between contract givers and acceptors for outsourced activities | — | Section 16 – Contract manufacturers' agreements and responsibilities | Section 7 – Contract manufacture and analysis agreements | Outsourcing agreements under Section 7.4 – Purchasing |
The enhancement of Manufacturing Instructions to explicitly require “batch to batch consistency” represents a crucial evolution. This change reflects increased regulatory focus on manufacturing reproducibility and aligns with FDA’s process validation lifecycle approach and ICH Q7’s emphasis on consistent API production.
Procedures (SOPs) now explicitly encompass both "performing and recording operations," emphasizing the dual nature of documentation as both instruction and evidence creation. This mirrors FDA 21 CFR 211's comprehensive procedural requirements and ISO 13485's systematic approach to documented procedures.
The transformation of Technical Agreements into Technical/Quality Agreements with emphasis on “written proof” reflects lessons learned from outsourcing challenges and regulatory enforcement actions. This change aligns with ICH Q7’s detailed contract manufacturer requirements and strengthens oversight of critical outsourced activities.
Records and Reports: Evidence of Compliance
| Document Type | Current Chapter 4 (2011) Requirements | Draft Chapter 4 (2025) Requirements | FDA 21 CFR 211 | ICH Q7 | WHO GMP | ISO 13485 |
|---|---|---|---|---|---|---|
| Records | Provide evidence of various actions taken to demonstrate compliance with instructions, e.g. activities, events, investigations, and in the case of manufactured batches a history of each batch of product | Provide evidence of various actions taken to demonstrate compliance with instructions, e.g. activities, events, investigations, and in the case of manufactured batches a history of each batch of product, including its distribution. Records include the raw data which is used to generate other records | Comprehensive record requirements throughout Part 211, §211.180 general requirements | Section 6.5 – Batch production records and Section 6.6 – Laboratory control records | Section 4.16 – Records requirements for all GMP activities | Quality records requirements under Section 4.2.4 |
| Certificate of Analysis | Provide a summary of testing results on samples of products or materials together with the evaluation for compliance to a stated specification | Provide a summary of testing results on samples of products or materials together with the evaluation for compliance to a stated specification | Laboratory records and test results §211.194, certificate requirements | Section 11.15 – Certificate of analysis for APIs | Section 6.8 – Certificates of analysis requirements | Test records and certificates under Section 7.5.3 |
| Reports | Document the conduct of particular exercises, projects or investigations, together with results, conclusions and recommendations | Document the conduct of exercises, studies, assessments, projects or investigations, together with results, conclusions and recommendations | — | — | — | — |
The expansion of Records to explicitly include "raw data" and "distribution information" represents perhaps the most impactful change for day-to-day operations. This enhancement directly addresses data integrity concerns highlighted by regulatory inspections and enforcement actions globally. The definition now states that "Records include the raw data which is used to generate other records," establishing clear expectations for data traceability that align with FDA's data integrity guidance and ICH Q7's comprehensive record requirements.
Reports now encompass “exercises, studies, assessments, projects or investigations,” broadening the scope beyond the current “particular exercises, projects or investigations”. This expansion aligns with modern pharmaceutical operations that increasingly rely on various analytical studies and assessments for decision-making, matching ISO 13485’s comprehensive reporting requirements.
Revolutionary Framework Elements
Data Governance Revolution
The draft introduces an entirely new paradigm through its Data Governance Systems (Sections 4.10-4.18). This framework establishes:
Complete lifecycle management from data creation through retirement
Risk-based approaches considering data criticality and data risk
Service provider oversight with periodic review requirements
Ownership accountability throughout the data lifecycle
This comprehensive approach exceeds traditional GMP requirements and positions EU regulations at the forefront of data integrity management, surpassing even FDA’s current frameworks in systematic approach.
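As an illustration of what lifecycle management from creation through retirement can mean in system terms, the following Python sketch models record states as an explicit state machine; the state names and transitions are assumptions for demonstration, not terminology from the draft Chapter 4.

```python
# Hypothetical data lifecycle states and permitted transitions, mirroring
# the creation-to-retirement governance described above.
LIFECYCLE_TRANSITIONS = {
    "created":  {"reviewed"},
    "reviewed": {"approved", "rejected"},
    "approved": {"archived"},
    "rejected": {"created"},          # rework restarts the record
    "archived": {"retired"},
    "retired":  set(),                # terminal: retention period elapsed
}

def transition(current: str, target: str) -> str:
    """Move a record between lifecycle states, refusing undefined jumps."""
    if target not in LIFECYCLE_TRANSITIONS[current]:
        raise ValueError(f"Illegal lifecycle transition: {current} -> {target}")
    return target
```

Encoding the lifecycle explicitly makes ownership and risk decisions reviewable: every state a record can occupy, and every way it can move, is documented and enforced rather than implied.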
ALCOA++ Formalization
The draft formalizes ALCOA++ principles (Attributable, Legible, Contemporaneous, Original, Accurate, Complete, Consistent, Enduring, Available, Traceable) with detailed definitions for each attribute, providing unprecedented clarity for industry implementation.
ALCOA++ Principles: Comprehensive Data Integrity Framework
The Draft EU GMP Chapter 4 (2025) formalizes the ALCOA++ principles as the foundation for data integrity in pharmaceutical manufacturing. This represents the first comprehensive regulatory codification of these expanded data integrity principles, building upon the traditional ALCOA framework with five additional critical elements.
Complete ALCOA++ Requirements Table
| Principle | Core Requirement | Paper Implementation | Electronic Implementation |
|---|---|---|---|
| A – Attributable | Identify who performed the task and when | Signatures, dates, initials | User authentication, e-signatures |
| L – Legible | Information must be readable and unambiguous | Clear writing, permanent ink | Proper formats, search functionality |
| C – Contemporaneous | Record actions as they happen in real time | Immediate recording | System timestamps, workflow controls |
| O – Original | Preserve first capture of information | Original documents retained | Database integrity, backups |
| A – Accurate | Ensure truthful representation of facts | Training, calibrated equipment | System validation, automated checks |
| + Complete | Include all critical information and metadata | Complete data, no missing pages | Metadata capture, completeness checks |
| + Consistent | Standardize data creation and processing | Standard formats, consistent units | Data standards, validation rules |
| + Enduring | Maintain records throughout retention period | Archival materials, proper storage | Database integrity, migration plans |
| + Available | Ensure accessibility for authorized personnel | Organized filing, access controls | Role-based access, query capabilities |
| + Traceable | Enable tracing of data history and changes | Sequential numbering, change logs | Audit trails, version control |
Hybrid Systems Management
Recognizing the reality of modern pharmaceutical operations, the draft dedicates sections 4.82-4.85 to hybrid systems that combine paper and electronic elements. This practical approach acknowledges that many manufacturers operate in mixed environments and provides specific requirements for managing these complex systems.
A New Era of Pharmaceutical Documentation
The draft EU GMP Chapter 4 represents the most significant evolution in pharmaceutical documentation requirements in over a decade. By introducing comprehensive data governance frameworks, formalizing data integrity principles, and acknowledging the reality of digital transformation, these changes position European regulations as global leaders in modern pharmaceutical quality management.
For industry professionals, these changes offer both challenges and opportunities. Organizations that proactively embrace these new paradigms will not only achieve regulatory compliance but will also realize operational benefits through improved data quality, enhanced decision-making capabilities, and reduced compliance costs.
The evolution from simple documentation requirements to comprehensive data governance systems reflects the maturation of the pharmaceutical industry and its embrace of digital technologies. As we move toward implementation, the industry’s response to these changes will shape the future of pharmaceutical manufacturing for decades to come.
The message is clear: the future of pharmaceutical documentation is digital, risk-based, and comprehensive. Organizations that recognize this shift and act accordingly will thrive in the new regulatory environment, while those that cling to outdated approaches risk being left behind in an increasingly sophisticated and demanding regulatory landscape.
Pharmaceutical compliance is experiencing a tectonic shift, and nowhere is that more clear than in the looming overhaul of EU GMP Annex 11. Most quality leaders have been laser-focused on the revised demands for electronic signatures, access management, and supplier oversight, as I've detailed in my previous deep analyses, but few realize that Section 10: Handling of Data is the sleeping volcano in the draft. It is here that the revised Annex 11 transforms data handling controls from "do your best and patch with SOPs" into an auditable, digital, risk-based discipline shaped by technological change.
This isn’t about stocking up your data archive or flipping the “audit trail” switch. This is about putting every point of data entry, transfer, migration, and security under the microscope—and making their control, verification, and risk mitigation the default, not the exception. If, until now, your team has managed GMP data with a cocktail of trust, periodic spot checks, and a healthy dose of hope, you are about to discover just how high the bar has been raised.
The Heart of Section 10: Every Data Touchpoint Is Critical
Section 10, as rewritten in the draft Annex 11, isn’t long, but it is dense. Its brevity belies the workload it creates: a mandate for systematizing, validating, and documenting every critical movement or entry of GMP-relevant data. The section is split into four thematic requirements, each of which deserves careful analysis:
Input verification—requiring plausibility checks for all manual entry of critical data,
Data transfer—enforcing validated electronic interfaces and exceptional controls for any manual transcription,
Data migration—demanding that every one-off or routine migration goes through a controlled, validated process,
Encryption—making secure storage and movement of critical data a risk-based expectation, not an afterthought.
Understanding these not as checkboxes but as an interconnected risk-control philosophy is the only way to achieve robust compliance—and to survive inspection without scrambling for a “procedural explanation” for each data error found.
Input Verification: Automating the Frontline Defense
The End of “Operator Skill” as a Compliance Pillar
Human error, for as long as there have been batch records and lab notebooks, has been a known compliance risk. Before electronic records, the answer was redundancy: a second set of eyes, a periodic QC review, or—let’s be realistic—a quick initial on a form the day before an audit. But in the age of digital systems, Section 10.1 recognizes the simple truth: where technology can prevent senseless or dangerous entries, it must.
Manual entry of critical data—think product counts, analytical results, process parameters—is now subject to real-time, system-enforced plausibility checks. Gone are the days when outlandish numbers in a yield calculation raise no flag, or when an analyst logs a temperature outside any physically possible range and receives little more than a raised eyebrow. Section 10 demands that every critical data field is bounded by logic—ranges, patterns, value consistency checks—and that nonsensical entries are not just flagged but, ideally, rejected automatically.
Any field that is critical to product quality or patient safety must be controlled at the entry point by automated means. If such logic is technically feasible but not deployed, expect intensive regulatory scrutiny—and be prepared to defend, in writing, why it isn’t in place.
Designing Plausibility Controls: Making Them Work
What does this mean on a practical level? It means scoping your process maps and digitized workflows to inventory every manual input touching GMP outcomes. For each, you need to:
Establish plausible ranges and patterns based on historical data, scientific rationale, and risk analysis.
Program system logic to enforce these boundaries, including mandatory explanatory overrides for any values outside “normal.”
Ensure every override is logged, investigated, and trended—because “frequent overrides” typically signal either badly set limits or a process slipping out of control.
But it’s not just numeric entries. Selectable options, free-text assessments, and uploads of evidence (e.g., images or files) must also be checked for logic and completeness, and mechanisms must exist to prevent accidental omissions or nonsensical entries (like uploading the wrong batch report for a product lot).
These expectations put pressure on system design teams and user interface developers, but they also fundamentally change the culture: from one where error detection is post hoc and personal, to one where error prevention is systemic and algorithmic.
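A minimal Python sketch of such a plausibility check follows; the function name, limits, and override mechanics are illustrative assumptions, but the pattern (reject out-of-range critical entries unless a logged, explanatory override is supplied) is exactly what Section 10.1 points toward.

```python
from typing import Optional

def check_plausibility(field_name: str, value: float, low: float, high: float,
                       override_reason: Optional[str] = None) -> dict:
    """Enforce a plausible range on a critical manual entry.

    Out-of-range values are rejected at the point of capture unless an
    explanatory override is supplied; the returned record is meant to be
    logged and trended, since frequent overrides signal badly set limits
    or a process slipping out of control.
    """
    in_range = low <= value <= high
    if not in_range and override_reason is None:
        raise ValueError(
            f"{field_name}={value} is outside the plausible range "
            f"[{low}, {high}]; entry rejected without an explanatory override"
        )
    return {"field": field_name, "value": value,
            "in_range": in_range, "override_reason": override_reason}

# check_plausibility("rejected_vials_per_shift", 800, 0, 200) -> raises ValueError
# check_plausibility("rejected_vials_per_shift", 80, 0, 200)  -> accepted
```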
Data Transfer: Validated Interfaces as the Foundation
Automated Data Flows, Not “Swivel Chair Integration”
The next Section 10 pillar wipes out the old “good enough” culture of manually keying critical data between systems—a common practice all the way up to the present day, despite decades of technical options to network devices, integrate systems, and use direct data feeds.
In this new paradigm, critical data must be transferred between systems electronically whenever possible. That means, for example, that:
Laboratory instruments should push their results to the LIMS automatically, not rely on an analyst to retype them.
The MES should transmit batch data to ERP systems for release decisions without recourse to copy-pasting or printout scanning.
Environmental monitoring systems should use validated data feeds into digital reports, not rely on handwritten transcriptions or spreadsheet imports.
Where technology blocks this approach—due to legacy equipment, bespoke protocols, or prohibitive costs—manual transfer is only justifiable as an explicitly assessed and mitigated risk. In those rare cases, organizations must implement secondary controls: independent verification by a second person, pre- and post-transfer checks, and logging of every step and confirmation.
What does a validated interface mean in this context? Not just that two systems can “talk,” but that the transfer is:
Complete (no dropped or duplicated records)
Accurate (no transformation errors or field misalignments)
Secure (with no risk of tampering or interception)
Every one of these must be tested at system qualification (OQ/PQ) and periodically revalidated if either end of the interface changes. Error conditions (such as data out of expected range, failed transfers, or discrepancies) must be logged and flagged to the user and, where possible, must halt the associated GMP process until resolved.
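As a sketch of what "complete and accurate" verification could look like in practice, the following Python function reconciles a transfer by record count and per-record checksum; the record structure and the record_id key are assumptions for illustration, not a prescribed format.

```python
import hashlib

def reconcile_transfer(source_records: list, received_records: list,
                       key: str = "record_id") -> list:
    """Post-transfer check for completeness (no dropped or duplicated records)
    and accuracy (per-record checksums match). Returns findings to log."""
    def digest(rec: dict) -> str:
        return hashlib.sha256(repr(sorted(rec.items())).encode()).hexdigest()

    findings = []
    src = {r[key]: digest(r) for r in source_records}
    dst = {r[key]: digest(r) for r in received_records}
    if len(received_records) != len(dst):
        findings.append("duplicate record IDs on the receiving side")
    for rid in src.keys() - dst.keys():
        findings.append(f"record {rid} dropped in transfer")
    for rid in dst.keys() - src.keys():
        findings.append(f"record {rid} appeared without a source")
    for rid in src.keys() & dst.keys():
        if src[rid] != dst[rid]:
            findings.append(f"record {rid} altered in transit")
    return findings   # any finding should halt the dependent GMP process
```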
Practical Hurdles—and Why They’re No Excuse
Organizations will protest: not every workflow can be harmonized, and some labyrinthine legacy systems lack the APIs or connectivity for automation. The response is clear: you can do manual transfer only when you’ve mapped, justified, and mitigated the added risk. This risk assessment and control strategy will be expected, and if auditors spot critical data being handed off by paper (including the batch record) or spreadsheet without robust double verification, you’ll have a finding that’s impossible to “train away.”
Remember, Annex 11’s philosophy flows from data integrity risk, not comfort or habit. In the new digital reality, technically possible is the compliance baseline.
Data Migration: Control, Validation, and Traceability
Migration Upgrades Are Compliance Projects, Not IT Favors
Section 10.3 brings overdue clarity to a part of compliance historically left to “IT shops” rather than Quality or data governance leads: migrations. In recent years, as cloud moves and system upgrades have exploded, so have the risks. Data gaps, incomplete mapping, field mismatches, and “it worked in test but not in prod” errors lurk in every migration, and their impact is enormous—lost batch records, orphaned critical information, and products released with documentation that simply vanished after a system reboot.
Annex 11 lays down a clear gauntlet: all data migrations must be planned, risk-assessed, and validated. Both the sending and receiving platforms must be evaluated for data constraints, and the migration process itself is subject to the same quality rigor as any new computerized system implementation.
This requires a full lifecycle approach:
Pre-migration planning to document field mapping, data types, format and allowable value reconciliations, and expected record counts.
Controlled execution with logs of each action, anomalies, and troubleshooting steps.
Post-migration verification—not just a "looks ok" sample, but a full reconciliation of batch counts, a search for missing or duplicated records, and (where practical) data integrity spot checks (a minimal reconciliation sketch follows this list).
Formal sign-off, with electronic evidence and supporting risk assessment, that the migration did not introduce errors, losses, or uncontrolled transformations.
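Here is the reconciliation sketch referenced above: a minimal Python illustration of post-migration verification, assuming records keyed by a hypothetical batch_id field and an explicit source-to-target field mapping. Everything about the data shape is an assumption; the lifecycle discipline is the point.

```python
def verify_migration(source_rows: list, target_rows: list,
                     field_map: dict) -> dict:
    """Full post-migration reconciliation: record counts, mapped-field
    equality, and orphan detection on both sides (keyed by batch_id)."""
    report = {"source_count": len(source_rows),
              "target_count": len(target_rows),
              "mismatches": [], "orphans": []}
    tgt_by_id = {row["batch_id"]: row for row in target_rows}
    for src in source_rows:
        tgt = tgt_by_id.pop(src["batch_id"], None)
        if tgt is None:
            report["orphans"].append(("missing_in_target", src["batch_id"]))
            continue
        for src_field, tgt_field in field_map.items():
            if src.get(src_field) != tgt.get(tgt_field):
                report["mismatches"].append((src["batch_id"], src_field))
    # anything left over exists in the target with no source record
    report["orphans"] += [("missing_in_source", bid) for bid in tgt_by_id]
    return report   # archive alongside the migration protocol and sign-off
```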
Validating the Entire Chain, Not Just the Output
Annex 11’s approach is process-oriented. You can’t simply “prove a few outputs match”; you must show that the process as executed controlled, logged, and safeguarded every record. If source data was garbage, destination data will be worse—so validation includes both the “what” and the “how.” Don’t forget to document how you’ll highlight or remediate mismatched or orphaned records for future investigation or reprocessing; missing this step is a quality and regulatory land mine.
It’s no longer acceptable to treat migration as a purely technical exercise. Every migration is a compliance event. If you can’t show the system’s record—start-to-finish—of how, by whom, when, and under what procedural/corrective control migrations have been performed, you are vulnerable on every product released or batch referencing that data.
Encryption: Securing Data as a Business and Regulatory Mandate
Beyond “Defense in Depth” to a Compliance Expectation
Historically, data security and encryption were IT problems, and the GMP justification for employing them was often little stronger than “everyone else is doing it.” The revised Section 10 throws that era in the trash bin. Encryption is now a risk-based compliance requirement for storage and transfer of critical GMP data. If you don’t use strong encryption “where applicable,” you’d better have a risk assessment ready that shows why the threat is minimal or the technical/operational risk of encryption is greater than the gain.
This requirement is equally relevant whether you’re holding batch record files, digital signatures, process parameter archives, raw QC data, or product release records. Security compromises aren’t just a hacking story; they’re a data integrity, fraud prevention, and business continuity story. In the new regulatory mindset, unencrypted critical data is always suspicious. This is doubly so when the data moves through cloud services, outsourced IT, or is ever accessible outside the organization’s perimeter.
Implementing and Maintaining Encryption: Avoiding Hollow Controls
To comply, you need to specify and control:
Encryption standards (e.g., minimum AES-256 for rest and transit)
Documentation for every location and method where data is or isn’t encrypted, with reference to risk assessments
Procedures for regularly verifying encryption status and responding to incidents or suspected compromises
Regulators will likely want to see not only system specifications but also periodic tests, audit trails of encryption/decryption, and readouts from recent patch cycles or vulnerability scans proving encryption hasn’t been silently “turned off” or configured improperly.
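For illustration, a minimal AES-256-GCM sketch using the open-source Python cryptography package is shown below. It demonstrates the mechanics only and deliberately leaves key management (HSM/KMS storage, rotation, access control) out of scope, since that is where real-world encryption controls usually succeed or fail.

```python
# pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_record(plaintext: bytes, key: bytes) -> bytes:
    """Encrypt a critical record with AES-256-GCM; the nonce is prepended."""
    nonce = os.urandom(12)                 # unique nonce per encryption
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt_record(blob: bytes, key: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

key = AESGCM.generate_key(bit_length=256)  # store in an HSM/KMS, never in code
sealed = encrypt_record(b"batch 4711: released", key)
assert decrypt_record(sealed, key) == b"batch 4711: released"
```

Note that GCM mode also authenticates the ciphertext, so tampering is detected at decryption, which supports the data integrity argument as well as confidentiality.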
Section 10 Is the Hub of the Data Integrity Wheel
Section 10 cannot be treated in isolation. It underpins and is fed by virtually every other control in the GMP computerized system ecosystem.
Input controls support audit trails: If data can be entered erroneously or fraudulently, the best audit trail is just a record of error.
Validated transfers prevent downstream chaos: If system A and system B don’t transfer reliably, everything “downstream” is compromised.
Migrations touch batch continuity and product release: If you lose or misplace records, your recall and investigation responses are instantly impaired.
Encryption protects change control and deviation closure: If sensitive data is exposed, audit trails and signature controls can’t protect you from the consequences.
Risk-Based Implementation: From Doctrine to Daily Practice
The draft’s biggest strength is its honest embrace of risk-based thinking. Every expectation in Section 10 is to be scaled by impact to product quality and patient safety. You can—and must—document decisions for why a given control is (or is not) necessary for every data touchpoint in your process universe.
That means your risk assessment does more than check a box. For every GMP data field, every transfer, every planned or surprise migration, and every storage endpoint, you need to do three things (a scoring sketch follows the list):
Identify every way the data could be made inaccurate, incomplete, unavailable, or stolen.
Define controls appropriate both to the criticality of the data and the likelihood and detectability of error or compromise.
Test and document both normal and failure scenarios—because what matters in a recall, investigation, or regulatory challenge is what happens when things go wrong, not just when they go right.
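The sketch below shows one hypothetical way to turn those three steps into a repeatable scoring rule, borrowing the familiar FMEA risk-priority-number pattern; the thresholds and control tiers are illustrative assumptions, not values from the draft.

```python
def control_level(criticality: int, likelihood: int, detectability: int) -> str:
    """FMEA-style scoring (1-5 each, 5 = worst) to scale Section 10 controls.

    The thresholds here are illustrative; what matters is that the same
    documented rule is applied to every data touchpoint.
    """
    rpn = criticality * likelihood * detectability   # risk priority number
    if rpn >= 60 or criticality == 5:
        return "automated prevention + encryption + audit-trail review"
    if rpn >= 20:
        return "automated detection + periodic trending"
    return "procedural control with periodic verification"

# control_level(5, 2, 3) -> "automated prevention + encryption + audit-trail review"
```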
ALCOA+ is codified by these risk processes: accuracy via plausibility checks, completeness via transfer validation, longevity via robust migration and storage, and contemporaneity and endurance via encryption and audit-trail linkage.
Handling of Data vs. Previous Guidance and Global Norms
While much of this seems “good practice,” make no mistake: the regulatory expectations have fundamentally changed. In 2011, Annex 11 was silent on specifics, and 21 CFR Part 11 relied on broad “input checks” and an expectation that organizations would design controls relative to what was reasonable at the time.
Now:
Electronic input plausibility is not just a “should” but a “must”—if your system can automate it, you must.
Manual transfer is justified, not assumed; all manual steps must have procedural/methodological reinforcement and evidence logs.
Migration is a qualification event. The entire lifecycle, not just the output, must be documented, trended, and reviewed.
Encryption is an expectation, not a best effort. The risk burden now falls on you to prove why it isn’t needed, not why it is.
Responsibility is on the MAH/manufacturer, not the vendor, IT, or “business owner.” You outsource activity, not liability.
This matches, in spirit, recent FDA guidance on Computer Software Assurance (CSA), GAMP 5's digital risk lifecycle, and every modern data privacy regulation. The difference is that, starting with the new Annex 11, these approaches are not merely suggested: they are codified.
Real-Life Scenarios: Application of Section 10
Imagine a high-speed packaging line. The operator enters the number of rejected vials per shift. In the old regime, the operator could mistype "80" as "800" or enter a negative number during a hasty correction. With Section 10 in force, the system simply will not permit it: the implausible entry is rejected at the point of capture, before it can mar the official record.
Now think about laboratory results—analysts transferring HPLC data into the LIMS manually. Every entry runs a risk of decimal misplacement or sample ID mismatch. Annex 11 now demands full instrument-to-LIMS interfacing (where feasible), and when not, a double verification protocol meticulously executed, logged, and reviewed.
On the migration front, consider upgrading your document management system. The stakes: decades of batch release records. In 2019, you might have planned a database export, a few spot checks, and post-migration validation of “high value” documents. Under the new Annex 11, you require a documented mapping of every critical field, technical and process reconciliation, error reporting, and lasting evidence for defensibility two or ten years from now.
Encryption is now expected as a default. Cloud-hosted data with no encryption? Prepare to be asked why, and to defend your choice with up-to-date, context-specific risk assessments—not hand-waving.
Bringing Section 10 to Life: Steps for Implementation
A successful strategy for aligning to Annex 11 Section 10 begins with an exhaustive mapping of all critical data touchpoints and their methods of entry, transfer, and storage. This is a multidisciplinary process, requiring cooperation among quality, IT, operations, and compliance teams.
For each critical data field or process, define:
The party responsible for its entry and management
The system's capability for plausibility checking, range enforcement, and error prevention
Mechanisms to block or correct entry outside expected norms
Methods of data handoff and transfer between systems, with documentation of integration or a procedural justification for unavoidable manual steps
Protocols and evidence logs for validation of both routine transfers and one-off (migration) events
For all manual data handling that remains, create detailed, risk-based procedures for independent verification and trending review. For data migration, walk through an end-to-end lifecycle—pre-migration risk mapping, execution protocols, post-migration review, discrepancy handling, and archiving of all planning/validation evidence.
For storage and transfer, produce a risk matrix for where and how critical data is held, updated, and moved, and deploy encryption accordingly. Document both technical standards and the process for periodic review and incident response.
Quality management is not the sole owner; business leads, system admins, and IT architects must be brought to the table. For every major change, tie change control procedures to a Section 10 review—any new process, upgrade, or integration comes back to data handling risk, with a closing check for automation and procedural compliance.
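A touchpoint register does not need exotic tooling to start. Even a structured inventory like the hypothetical Python sketch below, recording entry method, transfer path, controls, and risk per critical data element, gives auditors and change control a single point of reference; all field names and values are illustrative assumptions.

```python
# Illustrative register of critical data touchpoints; field names and
# values are assumptions, not Annex 11 vocabulary.
DATA_TOUCHPOINTS = [
    {
        "data": "HPLC assay result",
        "owner": "QC analyst",
        "entry": "instrument interface",          # validated, no manual keying
        "plausibility_check": "0.0-110.0 % of label claim",
        "transfer": "CDS -> LIMS (validated interface)",
        "manual_steps": None,
        "encryption": "AES-256 at rest and in transit",
        "risk": "high",
    },
    {
        "data": "reject count per shift",
        "owner": "packaging operator",
        "entry": "manual (MES terminal)",
        "plausibility_check": "integer, 0-500, override logged",
        "transfer": "MES -> ERP (validated interface)",
        "manual_steps": "none after entry",
        "encryption": "database-level",
        "risk": "medium",
    },
]
```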
Regulatory Impact and Inspection Strategy
Regulatory expectations around data integrity are not only becoming more stringent—they are also far more precise and actionable than in the past. Inspectors now arrive prepared and trained to probe deeply into what’s called “data provenance”: that is, the complete, traceable life story of every critical data point. It’s no longer sufficient to show where a value appears in a final batch record or report; regulators want to see how that data originated, through which systems and interfaces it was transferred, how each entry or modification was verified, and exactly what controls were in place (or not in place) at each step.
Gone are the days when, if questioned about persistent risks like error-prone manual transcription, a company could deflect with, “that’s how we’ve always done it.” Now, inspectors expect detailed explanations and justifications for every manual, non-automated, or non-encrypted data entry or transfer. They will require you to produce not just policies but actual logs, complete audit trails, electronic signature evidence where required, and documented decision-making within your risk assessments for every process step that isn’t fully controlled by technology.
In practical terms, this means you must be able to reconstruct and defend the exact conditions and controls present at every point data is created, handled, moved, or modified. If a process relies on a workaround, a manual step, or an unvalidated migration, you will need transparent evidence that risks were understood, assessed, and mitigated—not simply asserted away.
The implications are profound: mastering Section 10 isn’t just about satisfying the regulator. Robust, risk-based data handling is fundamental to your operation’s resilience—improving traceability, minimizing costly errors or data loss, ensuring you can withstand disruption, and enabling true digital transformation across your business. Leaders who excel here will find that their compliance posture translates into real business value, competitive differentiation, and lasting operational stability.
The Bigger Picture: Section 10 as Industry Roadmap
What’s clear is this: Section 10 eliminates the excuses that have long made “data handling risk” a tolerated, if regrettable, feature of pharmaceutical compliance. It replaces them with a pathway for digital, risk-based, and auditable control culture. This is not just for global pharma behemoths—cloud-native startups, generics manufacturers, and even virtual companies reliant on CDMOs must take note. The same expectations now apply to every regulated data touchpoint, wherever in the supply chain or manufacturing lifecycle it lies.
Bringing your controls into compliance with Section 10 is a strategic imperative in 2025 and beyond. Those who move fastest will spend less time and money on post-inspection remediation, operate more efficiently, and have a defensible record for every outcome.
| Requirement Area | Annex 11 (2011) | Draft Annex 11 Section 10 (2025) | 21 CFR Part 11 | GAMP 5 / Best Practice |
|---|---|---|---|---|
| Input verification | General expectation, not defined | Mandatory for critical manual entry; system logic and boundaries | "Input checks" required, methods not specified | Risk-based, ideally automated |
| Data transfer | Manual allowed, interface preferred | Validated interfaces wherever possible; strict controls for manual | Implicit through system interface requirements | Automated transfer is the baseline, double-checked for manual |
| Manual transcription | Allowed, requires review | Only justified exceptions; robust secondary verification and documentation | Not directly mentioned | Two-person verification, periodic audit and trending |
| Data migration | Mentioned, not detailed | Must be planned, risk-assessed, validated, and fully auditable | Implied via system lifecycle controls | Full protocol: mapping, logs, verification, and discrepancy handling |
| Encryption | Not referenced | Mandated for critical data; exceptions need a documented, defensible risk assessment | Recommended, not strictly required | Default for sensitive data; standard in cloud, backup, and distributed setups |
| Audit trail for handling | Implied via system change auditing | All data moves and handling steps linked/logged in audit trail | Required for record creation, modification, and deletion | Integrated with system actions, trended for error and compliance |
| Manual exceptions | Not formally addressed | Must be justified and mitigated; always subject to periodic review | Not directly stated | Exception management, always with trending, review, and CAPA |
Handling of Data as Quality Culture, Not Just IT Control
Section 10 in the draft Annex 11 is nothing less than the codification of real data integrity for the digitalized era. It lays out a field guide for what true GMP data governance looks like—not in the clouds of intention, but in the minutiae of everyday operation. Whether you’re designing a new MES integration, cleaning up the residual technical debt of manual record transfer, or planning the next system migration, take heed: how you handle data when no one’s watching is the new standard of excellence in pharmaceutical quality.
As always, the organizations that embrace these requirements as opportunities—not just regulatory burdens—will build a culture, a system, and a supply chain that are robust, efficient, and genuinely defensible.