The relationship between sponsors and contract organizations has evolved far beyond simple transactional exchanges. Digital infrastructure has become the cornerstone of trust, transparency, and operational excellence.
The trust equation is fundamentally changing as our supply chains come under increasing strain. Traditional quality agreements often functioned as static documents—comprehensive but disconnected from day-to-day operations. Today’s most successful partnerships are built on dynamic, digitally enabled frameworks that provide real-time visibility into performance, compliance, and risk management.
Regulatory agencies are increasingly scrutinizing the effectiveness of sponsor oversight programs. The FDA’s emphasis on data integrity, combined with EMA’s evolving computerized systems requirements, means that sponsors can no longer rely on periodic audits and static documentation to demonstrate control over their outsourced activities.
Quality Agreements as Digital Trust Frameworks
The modern quality agreement must evolve from a compliance document to a digital trust framework. This transformation requires reimagining three fundamental components:
Dynamic Risk Assessment Integration
Traditional quality agreements categorize suppliers into static risk tiers (for example Category 1, 2, 2.5, or 3 based on material/service risk). Digital frameworks enable continuous risk profiling that adapts based on real-time performance data.
Integrate supplier performance metrics directly into your quality management system. When a Category 2 supplier’s on-time delivery drops below threshold or quality metrics deteriorate, the system should automatically trigger enhanced monitoring protocols without waiting for the next periodic review.
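To make this concrete, here is a minimal Python sketch of such a trigger. The metric names, thresholds, and the trigger_enhanced_monitoring hook are all hypothetical stand-ins for whatever your quality management system actually exposes:

```python
from dataclasses import dataclass

@dataclass
class SupplierMetrics:
    on_time_delivery_rate: float    # rolling 90-day rate, e.g. 0.92
    lot_acceptance_rate: float      # accepted lots / received lots
    open_quality_notifications: int

def monitoring_level(category: int, m: SupplierMetrics) -> str:
    """Map the static risk category plus live metrics to a monitoring level.
    Thresholds are illustrative and would come from the quality agreement."""
    degraded = (
        m.on_time_delivery_rate < 0.95
        or m.lot_acceptance_rate < 0.98
        or m.open_quality_notifications > 3
    )
    if category == 1:                       # highest material/service risk
        return "enhanced" if degraded else "standard-plus"
    return "enhanced" if degraded else "standard"

def trigger_enhanced_monitoring(supplier_id: str, reason: dict) -> None:
    # Stand-in for opening an enhanced-monitoring record in the QMS.
    print(f"Enhanced monitoring opened for {supplier_id}: {reason}")

def on_new_metrics(supplier_id: str, category: int, m: SupplierMetrics) -> None:
    if monitoring_level(category, m) == "enhanced":
        trigger_enhanced_monitoring(supplier_id, reason=vars(m))

on_new_metrics("SUP-0042", category=2,
               m=SupplierMetrics(on_time_delivery_rate=0.91,
                                 lot_acceptance_rate=0.99,
                                 open_quality_notifications=1))
```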
Automated Change Control Workflows
One of the most contentious areas in sponsor-CxO relationships involves change notifications and approvals. Digital infrastructure can transform this friction point into a competitive advantage.
The SMART approach to change control:
Standardized digital templates for change notifications
Machine-readable impact assessments
Automated routing based on change significance
Real-time status tracking for all stakeholders
Traceable decision logs with electronic signatures
Quality agreement language to include: “All change notifications shall be submitted through the designated digital platform within [X] business days of identification, with automated acknowledgment and preliminary impact assessment provided within [Y] hours.”
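As an illustration of what “machine-readable” notifications and “automated routing” could look like in practice, the sketch below uses an invented notification structure and routing rules; none of the field names, roles, or thresholds are prescribed by any regulation or platform:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ChangeNotification:
    change_id: str
    description: str
    gmp_impact: str                 # "none", "minor", or "major" (assessed by the CxO)
    affects_registered_details: bool
    submitted_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def route(change: ChangeNotification) -> list[str]:
    """Route a notification to approvers based on its significance.
    Roles and rules are illustrative, not a prescribed workflow."""
    approvers = ["cxo_quality"]
    if change.gmp_impact != "none":
        approvers.append("sponsor_quality")
    if change.gmp_impact == "major" or change.affects_registered_details:
        approvers += ["sponsor_regulatory", "sponsor_supply_chain"]
    return approvers

notice = ChangeNotification("CN-2025-014",
                            "Alternate supplier for primary packaging film",
                            gmp_impact="major",
                            affects_registered_details=True)
print(route(notice))  # ['cxo_quality', 'sponsor_quality', 'sponsor_regulatory', 'sponsor_supply_chain']
```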
Transparent Performance Dashboards
The most innovative CxOs are moving beyond quarterly business reviews to continuous performance visibility. Quality agreements should provide for real-time access to the key performance indicators (KPIs) that matter most to patient safety and product quality.
Examples of essential KPIs for digital dashboards (a computation sketch follows the list):
Batch disposition times and approval rates
Deviation investigation cycle times
CAPA effectiveness metrics
Environmental monitoring excursions and response times
Supplier change notification compliance rates
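A simple sketch of how a few of the KPIs above might be computed from quality records. The record structures and figures are invented for illustration; a real dashboard would pull them from the eQMS through an integration layer:

```python
from datetime import date
from statistics import mean

# Illustrative records; in practice these come from the eQMS via API.
deviations = [
    {"opened": date(2025, 3, 1), "closed": date(2025, 3, 18)},
    {"opened": date(2025, 3, 5), "closed": date(2025, 4, 2)},
]
batches = [
    {"dispositioned": True, "days_to_disposition": 6},
    {"dispositioned": True, "days_to_disposition": 11},
    {"dispositioned": False, "days_to_disposition": None},
]

def deviation_cycle_time_days() -> float:
    return mean((d["closed"] - d["opened"]).days for d in deviations if d["closed"])

def batch_disposition_rate() -> float:
    return sum(b["dispositioned"] for b in batches) / len(batches)

kpis = {
    "deviation_cycle_time_days": round(deviation_cycle_time_days(), 1),
    "batch_disposition_rate": round(batch_disposition_rate(), 2),
    "avg_days_to_disposition": round(
        mean(b["days_to_disposition"] for b in batches
             if b["days_to_disposition"] is not None), 1),
}
print(kpis)
```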
Communication Architecture for Transparency
Effective communication in pharmaceutical partnerships requires architectural thinking, not just protocol definition. The most successful CxO-sponsor relationships are built on what I call the “Three-Layer Communication Stack,” which establishes a rhythm of communication:
Layer 1: Operational Communication (Real-Time)
Purpose: Day-to-day coordination and issue resolution
Tools: Integrated messaging within quality management systems, automated alerts, mobile notifications
Quality agreement requirement: “Operational communications shall be conducted through validated, audit-trailed platforms with 24/7 availability and guaranteed delivery confirmation.”
Every quality agreement should include a subsidiary Communication Plan that addresses:
Stakeholder Matrix: Who needs what information, when, and in what format
Escalation Protocols: Clear triggers for moving issues up the communication stack
Performance Metrics: How communication effectiveness will be measured and improved
Technology Requirements: Specified platforms, security requirements, and access controls
Contingency Procedures: Alternative communication methods for system failures or emergencies
Include communication effectiveness as a measurable element in your supplier scorecards. Track metrics like response time to quality notifications, accuracy of status reporting, and proactive problem identification.
Data Governance as a Competitive Differentiator
Data integrity is more than just ensuring ALCOA+—it’s about creating a competitive moat through superior data governance. The organizations that master data sharing, analysis, and decision-making will dominate the next decade of pharmaceutical manufacturing and development.
The Modern Data Governance Framework
Data Architecture Definition
Your quality agreement must specify not just what data will be shared, but how it will be structured, validated, and integrated (a brief validation sketch follows this list):
Master data management: Consistent product codes, batch numbering, and material identifiers across all systems
Data quality standards: Validation rules, completeness requirements, and accuracy thresholds
Integration protocols: APIs, data formats, and synchronization frequencies
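As a sketch of what the data quality standards above might look like in code, the fragment below applies invented format rules to an inbound master data record; the identifiers, patterns, and field names are examples only:

```python
import re

# Hypothetical validation rules for shared master data.
RULES = {
    "product_code": re.compile(r"^PRD-\d{5}$"),
    "batch_number": re.compile(r"^[A-Z]{2}\d{6}$"),
    "material_id":  re.compile(r"^MAT-\d{4}-[A-Z]$"),
}
REQUIRED = set(RULES)

def validate_record(record: dict) -> list[str]:
    """Return a list of data-quality findings for one inbound record."""
    findings = [f"missing field: {f}" for f in REQUIRED - record.keys()]
    for field_name, pattern in RULES.items():
        value = record.get(field_name)
        if value is not None and not pattern.match(str(value)):
            findings.append(f"{field_name} fails format check: {value!r}")
    return findings

print(validate_record({"product_code": "PRD-00417",
                       "batch_number": "ab123",          # fails the format rule
                       "material_id": "MAT-0042-B"}))
```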
With increasing regulatory focus on cybersecurity, your data governance plan must address the following (a sketch follows the list):
Role-based access controls: Granular permissions based on job function and business need
Data classification: Confidentiality levels and handling requirements
Audit logging: Comprehensive tracking of data access, modification, and sharing
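A minimal sketch of role-based access control paired with audit logging, assuming an invented role-to-permission map. In practice these controls live in the identity platform and eQMS rather than application code:

```python
import json
from datetime import datetime, timezone

# Illustrative role-to-permission map; roles and resources are hypothetical.
PERMISSIONS = {
    "cxo_analyst":     {"read:batch_record"},
    "sponsor_quality": {"read:batch_record", "read:deviation", "approve:disposition"},
}

def audit_log(**event) -> None:
    # Stand-in for an append-only, tamper-evident audit store.
    event["timestamp"] = datetime.now(timezone.utc).isoformat()
    print(json.dumps(event))

def authorize(user: str, role: str, action: str, resource: str) -> bool:
    """Check a permission and record the attempt, allowed or not."""
    allowed = f"{action}:{resource}" in PERMISSIONS.get(role, set())
    audit_log(user=user, role=role, action=action, resource=resource, allowed=allowed)
    return allowed

authorize("jdoe", "cxo_analyst", "approve", "disposition")  # denied and logged
```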
Analytics and Intelligence
The real competitive advantage comes from turning shared data into actionable insights (a simple trending sketch follows the list):
Predictive analytics: Early warning systems for quality trends and supply chain disruptions
Benchmark reporting: Anonymous industry comparisons to identify improvement opportunities
Root cause analysis: Automated correlation of events across multiple systems and suppliers
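Early-warning analytics need not be elaborate to add value. The sketch below flags a result that drifts beyond a simple statistical limit, as a deliberately modest stand-in for the predictive trending described above; the threshold, data, and limits are illustrative:

```python
from statistics import mean, stdev

def early_warning(history: list[float], latest: float, k: float = 2.0) -> bool:
    """Flag a result more than k standard deviations from the historical mean."""
    if len(history) < 10:
        return False  # not enough data to trend reliably
    mu, sigma = mean(history), stdev(history)
    return abs(latest - mu) > k * sigma

assay_history = [99.1, 99.4, 98.9, 99.2, 99.0, 99.3, 99.1, 98.8, 99.2, 99.0]
print(early_warning(assay_history, 97.6))  # True -> open a quality signal for review
```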
The Data Governance Subsidiary Agreement
Consider creating a separate Data Governance Agreement that complements your quality agreement with specific sections covering data sharing objectives, technical architecture, governance oversight, and compliance requirements.
Veeva Summit
Next week I’ll be discussing this topic at the Veeva Summit, where I will share organizational learnings on how embracing digital infrastructure as a trust-building mechanism forges stronger partnerships, achieves superior quality outcomes, and ultimately delivers better patient experiences.
Traditional document management approaches, rooted in paper-based paradigms, create artificial boundaries between engineering activities and quality oversight. These silos become particularly problematic when implementing Quality Risk Management-based integrated Commissioning and Qualification strategies. The solution lies not in better document control procedures, but in embracing data-centric architectures that treat documents as dynamic views of underlying quality data rather than static containers of information.
The Engineering Quality Process: Beyond Document Control
The Engineering Quality Process (EQP) represents an evolution beyond traditional document management, establishing the critical interface between Good Engineering Practice and the Pharmaceutical Quality System. This integration becomes particularly crucial when we consider that engineering documents are not merely administrative artifacts—they are the embodiment of technical knowledge that directly impacts product quality and patient safety.
EQP implementation requires understanding that documents exist within complex data ecosystems where engineering specifications, risk assessments, change records, and validation protocols are interconnected through multiple quality processes. The challenge lies in creating systems that maintain this connectivity while ensuring ALCOA+ principles are embedded throughout the document lifecycle.
Building Systematic Document Governance
The foundation of effective GEP document management begins with recognizing that documents serve multiple masters—engineering teams need technical accuracy and accessibility, quality assurance requires compliance and traceability, and operations demands practical usability. This multiplicity of requirements necessitates what I call “multi-dimensional document governance”—systems that can simultaneously satisfy engineering, quality, and operational needs without creating redundant or conflicting documentation streams.
Effective governance structures must establish clear boundaries between engineering autonomy and quality oversight while ensuring seamless information flow across these interfaces. This requires moving beyond simple approval workflows toward sophisticated quality risk management integration where document criticality drives the level of oversight and control applied.
Electronic Quality Management System Integration: The Technical Architecture
The integration of eQMS platforms with engineering documentation can be surprisingly complex. The fundamental issue is that most eQMS solutions were designed around quality department workflows, while engineering documents flow through fundamentally different processes that emphasize technical iteration, collaborative development, and evolutionary refinement.
Core Integration Principles
Unified Data Models: Rather than treating engineering documents as separate entities, leading implementations create unified data models where engineering specifications, quality requirements, and validation protocols share common data structures. This approach eliminates the traditional handoffs between systems and creates seamless information flow from initial design through validation and into operational maintenance.
Risk-Driven Document Classification: We need to move beyond user-driven classification and implement risk classification algorithms that automatically determine the level of quality oversight required based on document content, intended use, and potential impact on product quality. This automated classification reduces administrative burden while ensuring critical documents receive appropriate attention.
Contextual Access Controls: Advanced eQMS platforms provide dynamic permission systems that adjust access rights based on document lifecycle stage, user role, and current quality status. During active engineering development, technical teams have broader access rights, but as documents approach finalization and quality approval, access becomes more controlled and audited.
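As an illustration of the risk-driven classification principle above, here is a rule-based sketch; the document attributes, keywords, and oversight levels are invented rather than taken from any guidance:

```python
# Rule-based sketch of automatic document classification.
def classify(doc: dict) -> str:
    """Return the level of quality oversight a document should receive."""
    direct_impact = doc.get("system_impact") == "direct"            # from the impact assessment
    cqa_related = any(k in doc.get("keywords", []) for k in ("CQA", "CPP", "sterility"))
    if direct_impact and cqa_related:
        return "quality-approved"   # QA review and approval required
    if direct_impact or cqa_related:
        return "quality-reviewed"   # QA review, engineering approval
    return "gep-controlled"         # managed under Good Engineering Practice

spec = {"doc_type": "functional_specification",
        "system_impact": "direct",
        "keywords": ["CPP", "temperature mapping"]}
print(classify(spec))  # quality-approved
```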
Validation Management System Integration
The integration of electronic Validation Management Systems (eVMS) represents a particularly sophisticated challenge because validation activities span the boundary between engineering development and quality assurance. Modern implementations create bidirectional data flows where engineering documents automatically populate validation protocols, while validation results feed back into engineering documentation and quality risk assessments.
Protocol Generation: Advanced systems can automatically generate validation protocols from engineering specifications, user requirements, and risk assessments. This automation ensures consistency between design intent and validation activities while reducing the manual effort typically required for protocol development.
Evidence Linking: Sophisticated eVMS platforms create automated linkages between engineering documents, validation protocols, execution records, and final reports. These linkages ensure complete traceability from initial requirements through final qualification while maintaining the data integrity principles essential for regulatory compliance.
Continuous Verification: Modern systems support continuous verification approaches aligned with ASTM E2500 principles, where validation becomes an ongoing process integrated with change management rather than discrete qualification events.
Data Integrity Foundations: ALCOA+ in Engineering Documentation
The application of ALCOA+ principles to engineering documentation can create challenges because engineering processes involve significant collaboration, iteration, and refinement—activities that can conflict with traditional interpretations of data integrity requirements. The solution lies in understanding that ALCOA+ principles must be applied contextually, with different requirements during active development versus finalized documentation.
Attributability in Collaborative Engineering
Engineering documents often represent collective intelligence rather than individual contributions. Address this challenge through granular attribution mechanisms that can track individual contributions to collaborative documents while maintaining overall document integrity. This includes sophisticated version control systems that maintain complete histories of who contributed what content, when changes were made, and why modifications were implemented.
Contemporaneous Recording in Design Evolution
Traditional interpretations of contemporaneous recording can conflict with engineering design processes that involve iterative refinement and retrospective analysis. Implement design evolution tracking that captures the timing and reasoning behind design decisions while allowing for the natural iteration cycles inherent in engineering development.
Managing Original Records in Digital Environments
The concept of “original” records becomes complex in engineering environments where documents evolve through multiple versions and iterations. Establish authoritative record concepts where the system maintains clear designation of authoritative versions while preserving complete historical records of all iterations and the reasoning behind changes.
Best Practices for eQMS Integration
Systematic Architecture Design
Effective eQMS integration begins with architectural thinking rather than tool selection. Organizations must first establish clear data models that define how engineering information flows through their quality ecosystem. This includes mapping the relationships between user requirements, functional specifications, design documents, risk assessments, validation protocols, and operational procedures.
Cross-Functional Integration Teams: Successful implementations establish integrated teams that include engineering, quality, IT, and operations representatives from project inception. These teams ensure that system design serves all stakeholders’ needs rather than optimizing for a single department’s workflows.
Phased Implementation Strategies: Rather than attempting wholesale system replacement, leading organizations implement phased approaches that gradually integrate engineering documentation with quality systems. This allows for learning and refinement while maintaining operational continuity.
Change Management Integration
The integration of change management across engineering and quality systems represents a critical success factor. Create unified change control processes where engineering changes automatically trigger appropriate quality assessments, risk evaluations, and validation impact analyses.
Automated Impact Assessment: Ensure your system can automatically assess the impact of engineering changes on existing validation status, quality risk profiles, and operational procedures. This automation ensures that changes are comprehensively evaluated while reducing the administrative burden on technical teams.
Stakeholder Notification Systems: Provide contextual notifications to relevant stakeholders based on change impact analysis. This ensures that quality, operations, and regulatory affairs teams are informed of changes that could affect their areas of responsibility.
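One way to picture how automated impact assessment and stakeholder notification hang together is a walk over traceability links between engineering and quality objects. The sketch below uses an invented link map; the resulting list of impacted items is what would drive the contextual notifications described above:

```python
# Hypothetical traceability links held in the unified data model.
LINKS = {
    "SPEC-101": ["VAL-OQ-07", "SOP-220"],
    "SPEC-102": ["VAL-PQ-03"],
    "VAL-OQ-07": ["RISK-ASSESS-12"],
}

def impacted_items(changed_id: str) -> set[str]:
    """Walk the traceability graph to list everything a change could touch."""
    impacted, queue = set(), [changed_id]
    while queue:
        current = queue.pop()
        for child in LINKS.get(current, []):
            if child not in impacted:
                impacted.add(child)
                queue.append(child)
    return impacted

print(impacted_items("SPEC-101"))  # e.g. VAL-OQ-07, SOP-220, RISK-ASSESS-12
```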
Knowledge Management Integration
Capturing Engineering Intelligence
One of the most significant opportunities in modern GEP document management lies in systematically capturing engineering intelligence that traditionally exists only in informal networks and individual expertise. Implement knowledge harvesting mechanisms that can extract insights from engineering documents, design decisions, and problem-solving approaches.
Design Decision Rationale: Require and capture the reasoning behind engineering decisions, not just the decisions themselves. This creates valuable organizational knowledge that can inform future projects while providing the transparency required for quality oversight.
Lessons Learned Integration: Rather than maintaining separate lessons learned databases, integrate insights directly into engineering templates and standard documents. This ensures that organizational knowledge is immediately available to teams working on similar challenges.
Expert Knowledge Networks
Create dynamic expert networks where subject matter experts are automatically identified and connected based on document contributions, problem-solving history, and technical expertise areas. These networks facilitate knowledge transfer while ensuring that critical engineering knowledge doesn’t remain locked in individual experts’ experience.
Technology Platform Considerations
System Architecture Requirements
Effective GEP document management requires platform architectures that can support complex data relationships, sophisticated workflow management, and seamless integration with external engineering tools. This includes the ability to integrate with Computer-Aided Design systems, engineering calculation tools, and specialized pharmaceutical engineering software.
API Integration Capabilities: Modern implementations require robust API frameworks that enable integration with the diverse tool ecosystem typically used in pharmaceutical engineering. This includes everything from CAD systems to process simulation software to specialized validation tools.
Scalability Considerations: Pharmaceutical engineering projects can generate massive amounts of documentation, particularly during complex facility builds or major system implementations. Platforms must be designed to handle this scale while maintaining performance and usability.
Validation and Compliance Framework
The platforms supporting GEP document management must themselves be validated according to pharmaceutical industry standards. This creates unique challenges because engineering systems often require more flexibility than traditional quality management applications.
GAMP 5 Compliance: Follow GAMP 5 principles for computerized system validation while maintaining the flexibility required for engineering applications. This includes risk-based validation approaches that focus validation efforts on critical system functions.
Continuous Compliance: Modern systems support continuous compliance monitoring rather than point-in-time validation. This is particularly important for engineering systems that may receive frequent updates to support evolving project needs.
Building Organizational Maturity
Cultural Transformation Requirements
The successful implementation of integrated GEP document management requires cultural transformation that goes beyond technology deployment. Engineering organizations must embrace quality oversight as value-adding rather than bureaucratic, while quality organizations must understand and support the iterative nature of engineering development.
Cross-Functional Competency Development: Success requires developing transdisciplinary competence where engineering professionals understand quality requirements and quality professionals understand engineering processes. This shared understanding is essential for creating systems that serve both communities effectively.
Evidence-Based Decision Making: Organizations must cultivate cultures that value systematic evidence gathering and rigorous analysis across both technical and quality domains. This includes establishing standards for what constitutes adequate evidence for engineering decisions and quality assessments.
Maturity Model Implementation
Organizations can assess and develop their GEP document management capabilities using maturity model frameworks that provide clear progression paths from reactive document control to sophisticated knowledge-enabled quality systems.
Level 1 – Reactive: Basic document control with manual processes and limited integration between engineering and quality systems.
Level 2 – Developing: Electronic systems with basic workflow automation and beginning integration between engineering and quality processes.
Level 3 – Systematic: Comprehensive eQMS integration with risk-based document management and sophisticated workflow automation.
Level 4 – Integrated: Unified data architectures with seamless information flow between engineering, quality, and operational systems.
Level 5 – Optimizing: Knowledge-enabled systems with predictive analytics, automated intelligence extraction, and continuous improvement capabilities.
Future Directions and Emerging Technologies
Artificial Intelligence Integration
The convergence of AI technologies with GEP document management creates unprecedented opportunities for intelligent document analysis, automated compliance checking, and predictive quality insights. The promise is systems that can analyze engineering documents to identify potential quality risks, suggest appropriate validation strategies, and automatically generate compliance reports.
Natural Language Processing: AI-powered systems can analyze technical documents to extract key information, identify inconsistencies, and suggest improvements based on organizational knowledge and industry best practices.
Predictive Analytics: Advanced analytics can identify patterns in engineering decisions and their outcomes, providing insights that improve future project planning and risk management.
Building Excellence Through Integration
The transformation of GEP document management from compliance-driven bureaucracy to value-creating knowledge systems represents one of the most significant opportunities available to pharmaceutical organizations. Success requires moving beyond traditional document control paradigms toward data-centric architectures that treat documents as dynamic views of underlying quality data.
The integration of eQMS platforms with engineering workflows, when properly implemented, creates seamless quality ecosystems where engineering intelligence flows naturally through validation processes and into operational excellence. This integration eliminates the traditional handoffs and translation losses that have historically plagued pharmaceutical quality systems while maintaining the oversight and control required for regulatory compliance.
Organizations that embrace these integrated approaches will find themselves better positioned to implement Quality by Design principles, respond effectively to regulatory expectations for science-based quality systems, and build the organizational knowledge capabilities required for sustained competitive advantage in an increasingly complex regulatory environment.
The future belongs to organizations that can seamlessly blend engineering excellence with quality rigor through sophisticated information architectures that serve both engineering creativity and quality assurance requirements. The technology exists; the regulatory framework supports it; the question remaining is organizational commitment to the cultural and architectural transformations required for success.
As we continue evolving toward more evidence-based quality practice, the organizations that invest in building coherent, integrated document management systems will find themselves uniquely positioned to navigate the increasing complexity of pharmaceutical quality requirements while maintaining the engineering innovation essential for bringing life-saving products to market efficiently and safely.
The draft Annex 11’s Section 15 Security represents nothing less than the regulatory codification of modern cybersecurity principles into pharmaceutical GMP. Where the 2011 version offered three brief security provisions totaling fewer than 100 words, the 2025 draft delivers 20 comprehensive subsections that read like a cybersecurity playbook designed by paranoid auditors who’ve spent too much time investigating ransomware attacks on manufacturing facilities. As someone with a bit of experience in that, I find the draft fascinating.
Section 15 transforms cybersecurity from a peripheral IT concern into a mandatory foundation of pharmaceutical operations, requiring organizations to implement enterprise-grade security controls. The European regulators have essentially declared that pharmaceutical cybersecurity can no longer be treated as someone else’s problem. Nor can it be treated as something outside of the GMPs.
The Philosophical Transformation: From Trust-Based to Threat-Driven Security
The current Annex 11’s security provisions reflect a fundamentally different threat landscape, with an approach centered on access restriction and basic audit logging that assumes physical controls and password authentication provide adequate protection. The language suggests that security controls should be “suitable” and scale with system “criticality,” offering organizations considerable discretion in determining what constitutes appropriate protection.
Section 15 obliterates this discretionary approach by mandating specific, measurable security controls that assume persistent, sophisticated threats as the baseline condition. Rather than suggesting organizations “should” implement firewalls and access controls, the draft requires organizations to deploy network segmentation, disaster recovery capabilities, penetration testing programs, and continuous security improvement processes.
The shift from “suitable methods of preventing unauthorised entry” to requiring “effective information security management systems” represents a fundamental change in regulatory philosophy. The 2011 version treats security breaches as unfortunate accidents to be prevented through reasonable precautions. The 2025 draft treats security breaches as inevitable events requiring comprehensive preparation, detection, response, and recovery capabilities.
Section 15.1 establishes this new paradigm by requiring regulated users to “ensure an effective information security management system is implemented and maintained, which safeguards authorised access to, and detects and prevents unauthorised access to GMP systems and data”. This language transforms cybersecurity from an operational consideration into a regulatory mandate with explicit requirements for ongoing management and continuous improvement.
Quite frankly, I worry that many Quality Units may not be ready for this new level of oversight.
Comparing Section 15 Against ISO 27001: Pharmaceutical-Specific Cybersecurity
The draft Section 15 creates striking alignments with ISO 27001’s Information Security Management System requirements while adding pharmaceutical-specific controls that reflect the unique risks of GMP environments. ISO 27001’s emphasis on risk-based security management, continuous improvement, and comprehensive control frameworks becomes regulatory mandate rather than voluntary best practice.
Physical Security Requirements in Section 15.4 exceed typical ISO 27001 implementations by mandating multi-factor authentication for physical access to server rooms and data centers. Where ISO 27001 Control A.11.1.1 requires “physical security perimeters” and “appropriate entry controls,” Section 15.4 specifically mandates protection against unauthorized access, damage, and loss while requiring secure locking mechanisms for data centers.
The pharmaceutical-specific risk profile drives requirements that extend beyond ISO 27001’s framework. Section 15.5’s disaster recovery provisions require data centers to be “constructed to minimise the risk and impact of natural and manmade disasters” including storms, flooding, earthquakes, fires, power outages, and network failures. This level of infrastructure resilience reflects the critical nature of pharmaceutical manufacturing where system failures can impact patient safety and drug supply chains.
Continuous Security Improvement mandated by Section 15.2 aligns closely with ISO 27001’s Plan-Do-Check-Act cycle while adding pharmaceutical-specific language about staying “updated about new security threats” and implementing measures to “counter this development”. The regulatory requirement transforms ISO 27001’s voluntary continuous improvement into a compliance obligation with potential inspection implications.
The Security Training and Testing requirements in Section 15.3 exceed typical ISO 27001 implementations by mandating “recurrent security awareness training” with effectiveness evaluation through “simulated tests”. This requirement acknowledges that pharmaceutical environments face sophisticated social engineering attacks targeting personnel with access to valuable research data and manufacturing systems.
NIST Cybersecurity Framework Convergence: Functions Become Requirements
Section 15’s structure and requirements create remarkable alignment with NIST Cybersecurity Framework 2.0’s core functions while transforming voluntary guidelines into mandatory pharmaceutical compliance requirements. The NIST CSF’s Identify, Protect, Detect, Respond, and Recover functions become implicit organizing principles for Section 15’s comprehensive security controls.
Asset Management and Risk Assessment requirements embedded throughout Section 15 align with NIST CSF’s Identify function. Section 15.8’s network segmentation requirements necessitate comprehensive asset inventories and network topology documentation, while Section 15.10’s platform management requirements demand systematic tracking of operating systems, applications, and support lifecycles.
The Protect function manifests through Section 15’s comprehensive defensive requirements including network segmentation, firewall management, access controls, and encryption. Section 15.8 mandates that “networks should be segmented, and effective firewalls implemented to provide barriers between networks, and control incoming and outgoing network traffic”. This requirement transforms NIST CSF’s voluntary protective measures into regulatory obligations with specific technical implementations.
Detection capabilities appear in Section 15.19’s penetration testing requirements, which mandate “regular intervals” of ethical hacking assessments for “critical systems facing the internet”. Section 15.18’s anti-virus requirements extend detection capabilities to endpoint protection with requirements for “continuously updated” virus definitions and “effectiveness monitoring”.
The Respond function emerges through Section 15.7’s disaster recovery planning requirements, which mandate tested disaster recovery plans ensuring “continuity of operation within a defined Recovery Time Objective (RTO)”. Section 15.13’s timely patching requirements create response obligations for “critical vulnerabilities”, which may require patches to be applied immediately.
Recovery capabilities center on Section 15.6’s data replication requirements, which mandate automatic replication of “critical data” from primary to secondary data centers with “delay which is short enough to minimise the risk of loss of data”. The requirement for secondary data centers to be located at “safe distance from the primary site” ensures geographic separation supporting business continuity objectives.
Summary Across Key Guidance Documents
| Security Requirement Area | Draft Annex 11 Section 15 (2025) | Current Annex 11 (2011) | ISO 27001:2022 | NIST CSF 2.0 (2024) | Implementation Complexity |
| --- | --- | --- | --- | --- | --- |
| Information Security Management System | Mandatory – Effective ISMS implementation and maintenance required (15.1) | Basic – General security measures, no ISMS requirement | Core – ISMS is fundamental framework requirement (Clause 4-10) | Framework – Governance as foundational function across all activities | High – Requires comprehensive ISMS deployment |
| Continuous Security Improvement | Required – Continuous updates on threats and countermeasures (15.2) | Not specified – No continuous improvement mandate | Mandatory – Continual improvement through PDCA cycle (Clause 10.2) | Built-in – Continuous improvement through framework implementation | Medium – Ongoing process establishment needed |
| Security Training & Testing | Mandatory – Recurrent training with simulated testing effectiveness evaluation (15.3) | Not mentioned – No training or testing requirements | Required – Information security awareness and training (A.6.3) | Emphasized – Cybersecurity workforce development and training (GV.WF) | Medium – Training programs and testing infrastructure |
| Physical Security Controls | Explicit – Multi-factor authentication for server rooms, secure data centers (15.4) | Limited – “Suitable methods” for preventing unauthorized entry | Detailed – Physical and environmental security controls (A.11.1-11.2) | Addressed – Physical access controls within Protect function (PR.AC-2) | Medium – Physical infrastructure and access systems |
| Supplier Management | – | – | – | – | Medium – Supplier assessment and management processes |
| Encryption & Data Protection | Limited – Not explicitly detailed beyond data replication requirements | Not specified – No encryption requirements | Comprehensive – Cryptography and data protection controls (A.10) | Included – Data security and privacy protection (PR.DS) | Medium – Encryption deployment and key management |
| Change Management Integration | Integrated – Security updates must align with GMP validation processes | Basic – Change control mentioned generally | Integrated – Change management throughout ISMS (A.14.2.2) | Embedded – Change management within improvement processes | High – Integration with existing GMP change control |
| Compliance Monitoring | Built-in – Regular reviews, testing, and continuous improvement mandated | Limited – Periodic review mentioned without specifics | Required – Monitoring, measurement, and internal audits (Clause 9) | Systematic – Continuous monitoring and measurement (DE, GV functions) | Medium – Monitoring and measurement systems |
| Executive Oversight & Governance | Implied – Through ISMS requirements and continuous improvement mandates | Not specified – No governance requirements | Mandatory – Leadership commitment and management responsibility (Clause 5) | Essential – Governance and leadership accountability (GV function) | Medium – Governance structure and accountability |
The alignment with ISO 27001 and NIST CSF demonstrates that pharmaceutical organizations can no longer treat cybersecurity as a separate concern from GMP compliance—they become integrated regulatory requirements demanding enterprise-grade security capabilities that most pharmaceutical companies have historically considered optional.
Technical Requirements That Challenge Traditional Pharmaceutical IT Architecture
Section 15’s technical requirements will force fundamental changes in how pharmaceutical organizations architect, deploy, and manage their IT infrastructure. The regulatory prescriptions extend far beyond current industry practices and demand enterprise-grade security capabilities that many pharmaceutical companies currently lack.
Network Architecture Revolution begins with Section 15.8’s segmentation requirements, which mandate that “networks should be segmented, and effective firewalls implemented to provide barriers between networks”. This requirement eliminates the flat network architectures common in pharmaceutical manufacturing environments where laboratory instruments, manufacturing equipment, and enterprise systems often share network segments for operational convenience.
The firewall rule requirements demand “IP addresses, destinations, protocols, applications, or ports” to be “defined as strict as practically feasible, only allowing necessary and permissible traffic”. For pharmaceutical organizations accustomed to permissive network policies that allow broad connectivity for troubleshooting and maintenance, this represents a fundamental shift toward zero-trust architecture principles.
Section 15.9’s firewall review requirements acknowledge that “firewall rules tend to be changed or become insufficient over time” and mandate periodic reviews to ensure firewalls “continue to be set as tight as possible”. This requirement transforms firewall management from a deployment activity into an ongoing operational discipline requiring dedicated resources and systematic review processes.
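A rough sketch of what such a periodic rule review might check, using an invented rule representation; a real review would run against the firewall vendor’s configuration export and the documented business justification for each rule:

```python
# Illustrative firewall rules; a review flags anything broader than necessary.
RULES = [
    {"id": 1, "src": "10.20.1.0/24", "dst": "10.30.5.10", "port": 443,   "proto": "tcp"},
    {"id": 2, "src": "any",          "dst": "any",        "port": "any", "proto": "any"},
]

def review_findings(rules: list[dict]) -> list[str]:
    """List rules that use 'any' for source, destination, port, or protocol."""
    findings = []
    for r in rules:
        broad = [f for f in ("src", "dst", "port", "proto") if r[f] == "any"]
        if broad:
            findings.append(f"rule {r['id']} uses 'any' for {', '.join(broad)} - justify or tighten")
    return findings

for finding in review_findings(RULES):
    print(finding)
```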
Platform and Patch Management requirements in Sections 15.10 through 15.14 create comprehensive lifecycle management obligations that most pharmaceutical organizations currently handle inconsistently. Section 15.10 requires operating systems and platforms to be “updated in a timely manner according to vendor recommendations, to prevent their use in an unsupported state”.
The validation and migration requirements in Section 15.11 create tension between security imperatives and GMP validation requirements. Organizations must “plan and complete” validation of applications on updated platforms “in due time prior to the expiry of the vendor’s support”. This requirement demands coordination between IT security, quality assurance, and validation teams to ensure system updates don’t compromise GMP compliance.
Section 15.12’s isolation requirements for unsupported platforms acknowledge the reality that pharmaceutical organizations often operate legacy systems that cannot be easily updated. The requirement that such systems “should be isolated from computer networks and the internet” creates network architecture challenges where isolated systems must still support critical manufacturing processes.
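A sketch of how an organization might track platform support lifecycles against these expectations; the system names, support dates, and 180-day validation lead time are illustrative assumptions, not figures from the draft:

```python
from datetime import date, timedelta

# Hypothetical platform inventory.
PLATFORMS = [
    {"system": "LIMS server", "os": "Windows Server 2016", "support_ends": date(2027, 1, 12)},
    {"system": "HPLC PC",     "os": "Windows 7",           "support_ends": date(2020, 1, 14)},
]
REVALIDATION_LEAD_TIME = timedelta(days=180)  # time needed to validate on a new platform

def plan(today: date) -> None:
    for p in PLATFORMS:
        if p["support_ends"] < today:
            print(f"{p['system']}: unsupported - isolate from networks (15.12) or retire")
        elif p["support_ends"] - today < REVALIDATION_LEAD_TIME:
            print(f"{p['system']}: start migration and validation now (15.11)")
        else:
            print(f"{p['system']}: supported - monitor vendor roadmap (15.10)")

plan(date.today())
```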
Endpoint Security and Device Management requirements in Sections 15.15 through 15.18 address the proliferation of connected devices in pharmaceutical environments. Section 15.15’s “strict control” of bidirectional devices like USB drives acknowledges that pharmaceutical manufacturing environments often require portable storage for equipment maintenance and data collection.
The effective scanning requirements in Section 15.16 for devices that “may have been used outside the organisation” create operational challenges for service technicians and contractors who need to connect external devices to pharmaceutical systems. Organizations must implement scanning capabilities that can “effectively” detect malware without disrupting operational workflows.
Section 15.17’s requirements to deactivate USB ports “by default” unless needed for essential devices like keyboards and mice will require systematic review of all computer systems in pharmaceutical facilities. Manufacturing computers, laboratory instruments, and quality control systems that currently rely on USB connectivity for routine operations may require architectural changes or enhanced security controls.
Operational Impact: How Section 15 Changes Day-to-Day Operations
The implementation of Section 15’s security requirements will fundamentally change how pharmaceutical organizations conduct routine operations, from equipment maintenance to data management to personnel access. These changes extend far beyond IT departments to impact every function that interacts with computerized systems.
Manufacturing and Laboratory Operations will experience significant changes through network segmentation and access control requirements. Section 15.8’s segmentation requirements may isolate manufacturing systems from corporate networks, requiring new procedures for accessing data, transferring files, and conducting remote troubleshooting. Equipment vendors who previously connected remotely to manufacturing systems for maintenance may need to adapt to more restrictive access controls and monitored connections.
The USB control requirements in Sections 15.15-15.17 will particularly impact operations where portable storage devices are routinely used for data collection, equipment calibration, and maintenance activities. Laboratory personnel accustomed to using USB drives for transferring analytical data may need to adopt network-based file transfer systems or enhanced scanning procedures.
Information Technology Operations must expand significantly to support Section 15’s comprehensive requirements. The continuous security improvement mandate in Section 15.2 requires dedicated resources for threat intelligence monitoring, security tool evaluation, and control implementation. Organizations that currently treat cybersecurity as a periodic concern will need to establish ongoing security operations capabilities.
Section 15.19’s penetration testing requirements for “critical systems facing the internet” will require organizations to either develop internal ethical hacking capabilities or establish relationships with external security testing providers. The requirement for “regular intervals” suggests ongoing testing programs rather than one-time assessments.
The firewall review requirements in Section 15.9 necessitate systematic processes for evaluating and updating network security rules. Organizations must establish procedures for documenting firewall changes, reviewing rule effectiveness, and ensuring rules remain “as tight as possible” while supporting legitimate business functions.
Quality Unit functions must expand to encompass cybersecurity validation and documentation requirements. Section 15.11’s requirements to validate applications on updated platforms before vendor support expires will require QA involvement in IT infrastructure changes. Quality systems must incorporate procedures for evaluating the GMP impact of security patches, platform updates, and network changes.
The business continuity requirements in Section 15.7 necessitate testing of disaster recovery plans and validation that systems can meet “defined Recovery Time Objectives”. Quality assurance must develop capabilities for validating disaster recovery processes and documenting that backup systems can support GMP operations during extended outages.
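A sketch of how documented recovery test records might be checked against defined RTOs; the systems, RTO values, and timings below are invented for illustration:

```python
from datetime import datetime

RTO_HOURS = {"MES": 4, "eQMS": 24}  # illustrative Recovery Time Objectives

# Records from a documented disaster recovery exercise (hypothetical values).
dr_test = [
    {"system": "MES",  "outage_start": datetime(2025, 5, 3, 8, 0), "restored": datetime(2025, 5, 3, 11, 20)},
    {"system": "eQMS", "outage_start": datetime(2025, 5, 3, 8, 0), "restored": datetime(2025, 5, 4, 14, 0)},
]

for rec in dr_test:
    hours = (rec["restored"] - rec["outage_start"]).total_seconds() / 3600
    status = "PASS" if hours <= RTO_HOURS[rec["system"]] else "FAIL"
    print(f"{rec['system']}: recovered in {hours:.1f} h against an RTO of "
          f"{RTO_HOURS[rec['system']]} h -> {status}")
```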
Strategic Implications: Organizational Structure and Budget Priorities
Section 15’s comprehensive security requirements will force pharmaceutical organizations to reconsider their IT governance structures, budget allocations, and strategic priorities. The regulatory mandate for enterprise-grade cybersecurity capabilities creates organizational challenges that extend beyond technical implementation.
IT-OT Convergence Acceleration becomes inevitable as Section 15’s requirements apply equally to traditional IT systems and operational technology supporting manufacturing processes. Organizations must develop unified security approaches spanning enterprise networks, manufacturing systems, and laboratory instruments. The traditional separation between corporate IT and manufacturing systems operations becomes unsustainable when both domains require coordinated security management.
The network segmentation requirements in Section 15.8 demand comprehensive understanding of all connected systems and their communication requirements. Organizations must develop capabilities for mapping and securing complex environments where ERP systems, manufacturing execution systems, laboratory instruments, and quality management applications share network infrastructure.
Cybersecurity Organizational Evolution will likely drive consolidation of security responsibilities under dedicated chief information security officer roles with expanded authority over both IT and operational technology domains. The continuous improvement mandates and comprehensive technical requirements demand specialized cybersecurity expertise that extends beyond traditional IT administration.
Section 15.3’s training and testing requirements necessitate systematic cybersecurity awareness programs with “effectiveness evaluation” through simulated attacks. Organizations must develop internal capabilities for conducting phishing simulations and security training programs and for measuring personnel security behaviors.
Budget and Resource Reallocation becomes necessary to support Section 15’s comprehensive requirements. The penetration testing, platform management, network segmentation, and disaster recovery requirements represent significant ongoing operational expenses that many pharmaceutical organizations have not historically prioritized.
The validation requirements for security updates in Section 15.11 create ongoing costs for qualifying platform changes and validating application compatibility. Organizations must budget for accelerated validation cycles to ensure security updates don’t result in unsupported systems.
Inspection and Enforcement: The New Reality
Section 15’s detailed technical requirements create specific inspection targets that regulatory authorities can evaluate objectively during facility inspections. Unlike the current Annex 11’s general security provisions, Section 15’s prescriptive requirements enable inspectors to assess compliance through concrete evidence and documentation.
Technical Evidence Requirements emerge from Section 15’s specific mandates for firewalls, network segmentation, patch management, and penetration testing. Inspectors can evaluate firewall configurations, review network topology documentation, assess patch deployment records, and verify penetration testing reports. Organizations must maintain detailed documentation demonstrating compliance with each technical requirement.
The continuous improvement mandate in Section 15.2 creates expectations for ongoing security enhancement activities with documented evidence of threat monitoring and control implementation. Inspectors will expect to see systematic processes for identifying emerging threats and implementing appropriate countermeasures.
Operational Process Validation requirements extend to security operations including incident response, access control management, and backup testing. Section 15.7’s disaster recovery testing requirements create inspection opportunities for validating recovery procedures and verifying RTO achievement. Organizations must demonstrate that their business continuity plans work effectively through documented testing activities.
The training and testing requirements in Section 15.3 create audit trails for security awareness programs and simulated attack exercises. Inspectors can evaluate training effectiveness through documentation of phishing simulation results, security incident responses, and personnel security behaviors.
Industry Transformation: From Compliance to Competitive Advantage
Organizations that excel at implementing Section 15’s requirements will gain significant competitive advantages through superior operational resilience, reduced cyber risk exposure, and enhanced regulatory relationships. The comprehensive security requirements create opportunities for differentiation through demonstrated cybersecurity maturity.
Supply Chain Security Leadership emerges as pharmaceutical companies with robust cybersecurity capabilities become preferred partners for collaborations, clinical trials, and manufacturing agreements. Section 15’s requirements create third-party evaluation criteria that customers and partners can use to assess supplier cybersecurity capabilities.
The disaster recovery and business continuity requirements in Sections 15.6 and 15.7 create operational resilience that supports supply chain reliability. Organizations that can demonstrate rapid recovery from cyber incidents maintain competitive advantages in markets where supply chain disruptions have significant patient impact.
Regulatory Efficiency Benefits accrue to organizations that proactively implement Section 15’s requirements before they become mandatory. Early implementation demonstrates regulatory leadership and may result in more efficient inspection processes and enhanced regulatory relationships.
The systematic approach to cybersecurity documentation and process validation creates operational efficiencies that extend beyond compliance. Organizations that implement comprehensive cybersecurity management systems often discover improvements in change control, incident response, and operational monitoring capabilities.
Section 15 Security ultimately represents the transformation of pharmaceutical cybersecurity from optional IT initiative to mandatory operational capability that is part of the pharmaceutical quality system. The pharmaceutical industry’s digital future depends on treating cybersecurity as seriously as traditional quality assurance—and Section 15 makes that treatment legally mandatory.
The pharmaceutical industry stands at an inflection point where artificial intelligence meets regulatory compliance, creating new paradigms for quality decision-making that neither fully automate nor abandon human expertise. The concept of the “missing middle”, first articulated by Paul Daugherty and H. James Wilson in their seminal work Human + Machine: Reimagining Work in the Age of AI, has found profound resonance in the pharmaceutical sector, particularly as regulators grapple with how to govern AI applications in Good Manufacturing Practice (GMP) environments.
The recent publication of EU GMP Annex 22 on Artificial Intelligence marks a watershed moment in this evolution, establishing the first dedicated regulatory framework for AI use in pharmaceutical manufacturing while explicitly mandating human oversight in critical decision-making processes. This convergence of the missing middle concept with regulatory reality creates unprecedented opportunities and challenges for pharmaceutical quality professionals, fundamentally reshaping how we approach GMP decision-making in an AI-augmented world.
Understanding the Missing Middle: Beyond the Binary of Human Versus Machine
The missing middle represents a fundamental departure from the simplistic narrative of AI replacing human workers. Instead, it describes the collaborative space where human expertise and artificial intelligence capabilities combine to create outcomes superior to what either could achieve independently. In Daugherty and Wilson’s framework, this space is characterized by fluid, adaptive work processes that can be modified in real-time—a stark contrast to the rigid, sequential workflows that have dominated traditional business operations.
Within the pharmaceutical context, the missing middle takes on heightened significance due to the industry’s unique requirements for safety, efficacy, and regulatory compliance. Unlike other sectors where AI can operate with relative autonomy, pharmaceutical manufacturing demands a level of human oversight that ensures patient safety while leveraging AI’s analytical capabilities. This creates what we might call a “regulated missing middle”—a space where human-machine collaboration must satisfy not only business objectives but also stringent regulatory requirements.
Traditional pharmaceutical quality relies heavily on human decision-making supported by deterministic systems and established procedures. However, the complexity of modern pharmaceutical manufacturing, coupled with the vast amounts of data generated throughout the production process, creates opportunities for AI to augment human capabilities in ways that were previously unimaginable. The challenge lies in harnessing these capabilities while maintaining the control, traceability, and accountability that GMP requires.
Annex 22: Codifying Human Oversight in AI-Driven GMP Environments
The draft EU GMP Annex 22, published for consultation in July 2025, represents the first comprehensive regulatory framework specifically addressing AI use in pharmaceutical manufacturing. The annex establishes clear boundaries around acceptable AI applications while mandating human oversight mechanisms that reflect the missing middle philosophy in practice.
Scope and Limitations: Defining the Regulatory Boundaries
Annex 22 applies exclusively to static, deterministic AI models—those that produce consistent outputs when given identical inputs. This deliberate limitation reflects regulators’ current understanding of AI risk and their preference for predictable, controllable systems in GMP environments. The annex explicitly excludes dynamic models that continuously learn during operation, generative AI systems, and large language models (LLMs) from critical GMP applications, recognizing that these technologies present challenges in terms of explainability, reproducibility, and risk control that current regulatory frameworks cannot adequately address.
This regulatory positioning creates a clear delineation between AI applications that can operate within established GMP principles and those that require different governance approaches. The exclusion of dynamic learning systems from critical applications reflects a risk-averse stance that prioritizes patient safety and regulatory compliance over technological capability—a decision that has sparked debate within the industry about the pace of AI adoption in regulated environments.
Human-in-the-Loop Requirements: Operationalizing the Missing Middle
Perhaps the most significant aspect of Annex 22 is its explicit requirement for human oversight in AI-driven processes. The guidance mandates that qualified personnel must be responsible for ensuring AI outputs are suitable for their intended use, particularly in processes that could impact patient safety, product quality, or data integrity. This requirement operationalizes the missing middle concept by ensuring that human judgment remains central to critical decision-making processes, even as AI capabilities expand.
The human-in-the-loop (HITL) framework outlined in Annex 22 goes beyond simple approval mechanisms. It requires that human operators understand the AI system’s capabilities and limitations, can interpret its outputs meaningfully, and possess the expertise necessary to intervene when circumstances warrant. This creates new skill requirements for pharmaceutical quality professionals, who must develop what Daugherty and Wilson term “fusion skills”—capabilities that enable effective collaboration with AI systems.
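One way to picture such a human-in-the-loop gate is sketched below. The confidence threshold, classifications, and routing policy are invented for illustration and would themselves have to be justified during validation; the point is simply that anything the model cannot decide with validated confidence is escalated to a qualified person, who owns the final decision:

```python
from dataclasses import dataclass

@dataclass
class AiResult:
    item_id: str
    classification: str   # e.g. "pass" / "reject" from a static, deterministic model
    confidence: float     # model confidence score, 0-1
    rationale: str        # explanation surfaced to the reviewer

CONFIDENCE_THRESHOLD = 0.97  # illustrative; justified during validation, not a regulatory figure

def human_review(result: AiResult) -> str:
    # Stand-in for routing to a qualified person, who makes the final decision.
    print(f"Review {result.item_id}: model proposes '{result.classification}' "
          f"({result.confidence:.2f}) because: {result.rationale}")
    return "pending-human-decision"

def disposition(result: AiResult) -> str:
    """Only high-confidence 'pass' results proceed without individual review;
    every reject or low-confidence result is escalated to a human reviewer."""
    if result.classification == "pass" and result.confidence >= CONFIDENCE_THRESHOLD:
        return "pass"
    return human_review(result)

print(disposition(AiResult("VIAL-0042", "reject", 0.88, "particle-like feature near the crimp")))
```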
Validation and Performance Requirements: Ensuring Reliability in the Missing Middle
Annex 22 establishes rigorous validation requirements for AI systems used in GMP contexts, mandating that models undergo testing against predefined acceptance criteria that are at least as stringent as the processes they replace. This requirement ensures that AI augmentation does not compromise existing quality standards while providing a framework for demonstrating the value of human-machine collaboration.
The validation framework emphasizes explainability and confidence scoring, requiring AI systems to provide transparent justifications for their decisions. This transparency requirement enables human operators to understand AI recommendations and exercise appropriate judgment in their implementation—a key principle of effective missing middle operations. The focus on explainability also facilitates regulatory inspections and audits, ensuring that AI-driven decisions can be scrutinized and validated by external parties.
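A sketch of checking model performance on an independent test set against predefined acceptance criteria; the metrics, criteria, and counts below are invented, and in practice the criteria would be derived from the demonstrated performance of the process the model replaces:

```python
# Acceptance criteria set at least as strict as the manual process (illustrative values).
ACCEPTANCE = {"sensitivity": 0.995, "specificity": 0.98}

# Confusion-matrix counts from a hypothetical validation test set.
test_results = {"true_pos": 498, "false_neg": 2, "true_neg": 1960, "false_pos": 40}

sensitivity = test_results["true_pos"] / (test_results["true_pos"] + test_results["false_neg"])
specificity = test_results["true_neg"] / (test_results["true_neg"] + test_results["false_pos"])

for metric, value in (("sensitivity", sensitivity), ("specificity", specificity)):
    verdict = "meets" if value >= ACCEPTANCE[metric] else "fails"
    print(f"{metric}: {value:.3f} ({verdict} acceptance criterion {ACCEPTANCE[metric]})")
```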
The Evolution of GMP Decision Making: From Human-Centric to Human-AI Collaborative
Traditional GMP decision-making has been characterized by hierarchical approval processes, extensive documentation requirements, and risk-averse approaches that prioritize compliance over innovation. While these characteristics have served the industry well in ensuring product safety and regulatory compliance, they have also created inefficiencies and limited opportunities for continuous improvement.
Traditional GMP Decision Paradigms
Conventional pharmaceutical quality assurance relies on trained personnel making decisions based on established procedures, historical data, and their professional judgment. Quality control laboratories generate data through standardized testing protocols, which trained analysts interpret according to predetermined specifications. Deviation investigations follow structured methodologies that emphasize root cause analysis and corrective action implementation. Manufacturing decisions are made through change control processes that require multiple levels of review and approval.
This approach has proven effective in maintaining product quality and regulatory compliance, but it also has significant limitations. Human decision-makers can be overwhelmed by the volume and complexity of data generated in modern pharmaceutical manufacturing. Cognitive biases can influence judgment, and the sequential nature of traditional decision-making processes can delay responses to emerging issues. Additionally, the reliance on historical precedent can inhibit innovation and limit opportunities for process optimization.
AI-Augmented Decision Making: Expanding Human Capabilities
The integration of AI into GMP decision-making processes offers opportunities to address many limitations of traditional approaches while maintaining the human oversight that regulations require. AI systems can process vast amounts of data rapidly, identify patterns that might escape human observation, and provide data-driven recommendations that complement human judgment.
In quality control laboratories, AI-powered image recognition systems can analyze visual inspections with greater speed and consistency than human inspectors, while still requiring human validation of critical decisions. Predictive analytics can identify potential quality issues before they manifest, enabling proactive interventions that prevent problems rather than merely responding to them. Real-time monitoring systems can continuously assess process parameters and alert human operators to deviations that require attention.
The transformation of deviation management exemplifies the potential of AI-augmented decision-making. Traditional deviation investigations can be time-consuming and resource-intensive, often requiring weeks or months to complete. AI systems can rapidly analyze historical data to identify potential root causes, suggest relevant corrective actions based on similar past events, and even predict the likelihood of recurrence. However, the final decisions about root cause determination and corrective action implementation remain with qualified human personnel, ensuring that professional judgment and regulatory accountability are preserved.
Maintaining Human Accountability in AI-Augmented Processes
The integration of AI into GMP decision-making raises important questions about accountability and responsibility. Annex 22 addresses these concerns by maintaining clear lines of human accountability while enabling AI augmentation. The guidance requires that qualified personnel remain responsible for all decisions that could impact patient safety, product quality, or data integrity, regardless of the level of AI involvement in the decision-making process.
This approach reflects the missing middle philosophy by recognizing that AI augmentation should enhance rather than replace human judgment. Human operators must understand the AI system’s recommendations, evaluate them in the context of their broader knowledge and experience, and take responsibility for the final decisions. This creates a collaborative dynamic where AI provides analytical capabilities that exceed human limitations while humans provide contextual understanding, ethical judgment, and regulatory accountability that AI systems cannot replicate.
Fusion Skills for Pharmaceutical Quality Professionals: Navigating the AI-Augmented Landscape
The successful implementation of AI in GMP environments requires pharmaceutical quality professionals to develop new capabilities that enable effective collaboration with AI systems. Daugherty and Wilson identify eight “fusion skills” that are essential for thriving in the missing middle. These skills take on particular significance in the highly regulated pharmaceutical environment, where the consequences of poor decision-making can directly impact patient safety.
Intelligent Interrogation: Asking the Right Questions of AI Systems
Intelligent interrogation involves knowing how to effectively query AI systems to obtain meaningful insights. In pharmaceutical quality contexts, this skill enables professionals to leverage AI analytical capabilities while maintaining critical thinking about the results. For example, when investigating a deviation, a quality professional might use AI to analyze historical data for similar events, but must know how to frame queries that yield relevant and actionable insights.

The development of intelligent interrogation skills requires understanding both the capabilities and limitations of specific AI systems. Quality professionals must learn to ask questions that align with the AI system’s training and design while recognizing when human judgment is necessary to interpret or validate the results. This skill is particularly important in GMP environments, where the accuracy and completeness of information can have significant regulatory and safety implications.
Judgment Integration: Combining AI Insights with Human Wisdom
Judgment integration involves combining AI-generated insights with human expertise to make informed decisions. This skill is critical in pharmaceutical quality, where decisions often require consideration of factors that may not be captured in historical data or AI training sets. For instance, an AI system might recommend a particular corrective action based on statistical analysis, but a human professional might recognize unique circumstances that warrant a different approach.
Effective judgment integration requires professionals to maintain a critical perspective on AI recommendations while remaining open to insights that challenge conventional thinking. In GMP contexts, this balance is particularly important because regulatory compliance demands both adherence to established procedures and responsiveness to unique circumstances. Quality professionals must develop the ability to synthesize AI insights with their understanding of regulatory requirements, product characteristics, and manufacturing constraints.
Reciprocal Apprenticing: Mutual Learning Between Humans and AI
Reciprocal apprenticing describes the process by which humans and AI systems learn from each other to improve performance over time. In pharmaceutical quality applications, this might involve humans providing feedback on AI recommendations that helps the system improve its future performance, while simultaneously learning from AI insights to enhance their own decision-making capabilities.
This bidirectional learning process is particularly valuable in GMP environments, where continuous improvement is both a regulatory expectation and a business imperative. Quality professionals can help AI systems become more effective by providing context about why certain recommendations were or were not appropriate in specific situations. Simultaneously, they can learn from AI analysis to identify patterns or relationships that might inform future decision-making.
Additional Fusion Skills: Building Comprehensive AI Collaboration Capabilities
Beyond the three core skills highlighted by Daugherty and Wilson for generative AI applications, their broader framework includes additional capabilities that are relevant to pharmaceutical quality professionals. Responsible normalizing involves shaping the perception and purpose of human-machine interaction in ways that align with organizational values and regulatory requirements. In pharmaceutical contexts, this skill helps ensure that AI implementation supports rather than undermines the industry’s commitment to patient safety and product quality.
Re-humanizing time involves using AI to free up human capacity for distinctly human activities such as creative problem-solving, relationship building, and ethical decision-making. For pharmaceutical quality professionals, this might mean using AI to automate routine data analysis tasks, creating more time for strategic thinking about quality improvements and regulatory strategy.
Bot-based empowerment and holistic melding involve developing mental models of AI capabilities that enable more effective collaboration. These skills help quality professionals understand how to leverage AI systems most effectively while maintaining appropriate skepticism about their limitations.
Real-World Applications: The Missing Middle in Pharmaceutical Manufacturing
The theoretical concepts of the missing middle and human-AI collaboration are increasingly being translated into practical applications within pharmaceutical manufacturing environments. These implementations demonstrate how the principles outlined in Annex 22 can be operationalized while delivering tangible benefits to product quality, operational efficiency, and regulatory compliance.
Quality Control and Inspection: Augmenting Human Visual Capabilities
One of the most established applications of AI in pharmaceutical manufacturing involves augmenting human visual inspection capabilities. Traditional visual inspection of tablets, capsules, and packaging materials relies heavily on human operators who must identify defects, contamination, or other quality issues. While humans excel at recognizing unusual patterns and exercising judgment about borderline cases, they can be limited by fatigue, inconsistency, and the volume of materials that must be inspected.
AI-powered vision systems can process images at speeds far exceeding human capabilities while maintaining consistent performance standards. These systems can identify defects that might be missed by human inspectors and flag potential issues for further review. However, the most effective implementations maintain human oversight over critical decisions, with AI serving to augment rather than replace human judgment.
Predictive Maintenance: Preventing Quality Issues Through Proactive Intervention
Predictive maintenance represents another area where AI applications align with the missing middle philosophy by augmenting human decision-making rather than replacing it. Traditional maintenance approaches in pharmaceutical manufacturing have relied on either scheduled maintenance intervals or reactive responses to equipment failures. Both approaches can result in unnecessary costs or quality risks.
AI-powered predictive maintenance systems analyze sensor data, equipment performance histories, and maintenance records to predict when equipment failures are likely to occur. This information enables maintenance teams to schedule interventions before failures impact production or product quality. However, the final decisions about maintenance timing and scope remain with qualified personnel who can consider factors such as production schedules, regulatory requirements, and risk assessments that AI systems cannot fully evaluate.
Real-Time Process Monitoring: Enhancing Human Situational Awareness
Real-time process monitoring applications leverage AI’s ability to continuously analyze large volumes of data to enhance human situational awareness and decision-making capabilities. Traditional process monitoring in pharmaceutical manufacturing relies on control systems that alert operators when parameters exceed predetermined limits. While effective, this approach can result in delayed responses to developing issues and may miss subtle patterns that indicate emerging problems.
AI-enhanced monitoring systems can analyze multiple data streams simultaneously to identify patterns that might indicate developing quality issues or process deviations. These systems can provide early warnings that enable operators to take corrective action before problems become critical. The most effective implementations provide operators with explanations of why alerts were generated, enabling them to make informed decisions about appropriate responses.
The integration of AI into Manufacturing Execution Systems (MES) exemplifies this approach. AI algorithms can monitor real-time production data to detect deviations in drug formulation, dissolution rates, and environmental conditions. When potential issues are identified, the system alerts qualified operators who can evaluate the situation and determine appropriate corrective actions. This approach maintains human accountability for critical decisions while leveraging AI’s analytical capabilities to enhance situational awareness.
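As an illustration of this kind of explainable alerting, the following Python sketch applies simple control-limit and trend checks to a parameter stream and returns human-readable reasons for each alert. The function name, the 3-sigma rule, and the seven-point trend rule are assumptions chosen for the example, not prescriptions from any guidance.

```python
from statistics import mean, stdev

# Minimal sketch of an explainable real-time monitoring check, assuming a
# process-parameter stream and site-defined control limits. The thresholds
# and trend rules below are illustrative assumptions only.

def check_parameter(history: list[float], new_value: float,
                    lower_limit: float, upper_limit: float) -> list[str]:
    """Return human-readable alert reasons so the operator can judge the response."""
    alerts = []
    if not (lower_limit <= new_value <= upper_limit):
        alerts.append(f"value {new_value} outside control limits [{lower_limit}, {upper_limit}]")
    if len(history) >= 20:
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(new_value - mu) > 3 * sigma:
            alerts.append(f"value deviates more than 3 sigma from recent mean {mu:.2f}")
    recent = history[-6:] + [new_value]
    if len(recent) == 7 and all(b > a for a, b in zip(recent, recent[1:])):
        alerts.append("seven consecutive increasing points: possible drift")
    return alerts  # an empty list means no alert; the operator decides on any action
```

Each returned string is an explanation the operator can act on, preserving the human decision while the system does the continuous watching.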
Deviation Management: Accelerating Root Cause Analysis
Deviation management represents a critical area where AI applications can significantly enhance human capabilities while maintaining the rigorous documentation and accountability requirements that GMP mandates. Traditional deviation investigations can be time-consuming processes that require extensive data review, analysis, and documentation.
AI systems can rapidly analyze historical data to identify patterns, potential root causes, and relevant precedents for similar deviations. This capability can significantly reduce the time required for initial investigation phases while providing investigators with comprehensive background information. However, the final determinations about root causes, risk assessments, and corrective actions remain with qualified human personnel who can exercise professional judgment and ensure regulatory compliance.
The application of AI to root cause analysis demonstrates the value of the missing middle approach in highly regulated environments. AI can process vast amounts of data to identify potential contributing factors and suggest hypotheses for investigation, but human expertise remains essential for evaluating these hypotheses in the context of specific circumstances, regulatory requirements, and risk considerations.
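A minimal sketch of what such precedent retrieval could look like, assuming scikit-learn is available and the deviation log can be exported as text: the example below ranks invented historical deviation descriptions by textual similarity to a new event. It is illustrative only; a validated system would draw on the controlled deviation record, and the investigator would still own the root-cause decision.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Illustrative sketch: surface historical deviations that resemble a new one
# so an investigator has precedents to evaluate. The records below are
# invented examples, not real deviation data.

historical_deviations = [
    "Out-of-specification assay result traced to incorrect mobile phase preparation",
    "Temperature excursion in cold storage during overnight power interruption",
    "Tablet weight variation linked to worn punch on compression station 4",
]

new_deviation = "Assay OOS result, suspected error in mobile phase buffer concentration"

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(historical_deviations + [new_deviation])
scores = cosine_similarity(matrix[-1], matrix[:-1]).flatten()

# Present ranked precedents; root-cause determination remains a human decision.
for score, text in sorted(zip(scores, historical_deviations), reverse=True):
    print(f"{score:.2f}  {text}")
```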
Regulatory Landscape: Beyond Annex 22
While Annex 22 represents the most comprehensive regulatory guidance for AI in pharmaceutical manufacturing, it is part of a broader regulatory landscape that is evolving to address the challenges and opportunities presented by AI technologies. Understanding this broader context is essential for pharmaceutical organizations seeking to implement AI applications that align with both current requirements and emerging regulatory expectations.
FDA Perspectives: Encouraging Innovation with Appropriate Safeguards
The U.S. Food and Drug Administration (FDA) has taken a generally supportive stance toward AI applications in pharmaceutical manufacturing, recognizing their potential to enhance product quality and manufacturing efficiency. The agency’s approach emphasizes the importance of maintaining human oversight and accountability while encouraging innovation that can benefit public health.
The FDA’s guidance on Process Analytical Technology (PAT) provides a framework for implementing advanced analytical and control technologies, including AI applications, in pharmaceutical manufacturing. The PAT framework emphasizes real-time monitoring and control capabilities that align well with AI applications, while maintaining requirements for validation, risk assessment, and human oversight that are consistent with the missing middle philosophy.
The agency has also indicated interest in AI applications that can enhance regulatory processes themselves, including automated analysis of manufacturing data for inspection purposes and AI-assisted review of regulatory submissions. These applications could potentially streamline regulatory interactions while maintaining appropriate oversight and accountability mechanisms.
International Harmonization: Toward Global Standards
The development of AI governance frameworks in pharmaceutical manufacturing is increasingly taking place within international forums that seek to harmonize approaches across different regulatory jurisdictions. The International Council for Harmonisation (ICH) has begun considering how existing guidelines might need to be modified to address AI applications, particularly in areas such as quality risk management and pharmaceutical quality systems.
The European Medicines Agency (EMA) has published reflection papers on AI use throughout the medicinal product lifecycle, providing broader context for how AI applications might be governed beyond manufacturing applications. These documents emphasize the importance of human-centric approaches that maintain patient safety and product quality while enabling innovation.
The Pharmaceutical Inspection Convention and Pharmaceutical Inspection Co-operation Scheme (PIC/S) has also begun developing guidance on AI applications, recognizing the need for international coordination in this rapidly evolving area. The alignment between Annex 22 and PIC/S approaches suggests movement toward harmonized international standards that could facilitate global implementation of AI applications.
Industry Standards: Complementing Regulatory Requirements
Professional organizations and industry associations are developing standards and best practices that complement regulatory requirements while providing more detailed guidance for implementation. The International Society for Pharmaceutical Engineering (ISPE) has published guidance on AI governance frameworks that emphasize risk-based approaches and lifecycle management principles.
Emerging Considerations: Preparing for Future Developments
The regulatory landscape for AI in pharmaceutical manufacturing continues to evolve as regulators gain experience with specific applications and technologies advance. Several emerging considerations are likely to influence future regulatory developments and should be considered by organizations planning AI implementations.
The potential for AI applications to generate novel insights that challenge established practices raises questions about how regulatory frameworks should address innovation that falls outside existing precedents. The missing middle philosophy provides a framework for managing these situations by maintaining human accountability while enabling AI-driven insights to inform decision-making.
The increasing sophistication of AI technologies, including advances in explainable AI and federated learning approaches, may enable applications that are currently excluded from critical GMP processes. Regulatory frameworks will need to evolve to address these capabilities while maintaining appropriate safeguards for patient safety and product quality.
Challenges and Limitations: Navigating the Complexities of AI Implementation
Despite the promise of AI applications in pharmaceutical manufacturing, significant challenges and limitations must be addressed to realize the full potential of human-machine collaboration in GMP environments. These challenges span technical, organizational, and regulatory dimensions and require careful consideration in the design and implementation of AI systems.
Technical Challenges: Ensuring Reliability and Performance
The implementation of AI in GMP environments faces significant technical challenges related to data quality, system validation, and performance consistency. Pharmaceutical manufacturing generates vast amounts of data from multiple sources, including process sensors, laboratory instruments, and quality control systems. Ensuring that this data is of sufficient quality to train and operate AI systems requires robust data governance frameworks and quality assurance processes.
Data integrity requirements in GMP environments are particularly stringent, demanding that all data be attributable, legible, contemporaneous, original, and accurate (ALCOA principles). AI systems must be designed to maintain these data integrity principles throughout their operation, including during data preprocessing, model training, and prediction generation phases. This requirement can complicate AI implementations and requires careful attention to system design and validation approaches.
System validation presents another significant technical challenge. Traditional validation approaches for computerized systems rely on deterministic testing methodologies that may not be fully applicable to AI systems, particularly those that employ machine learning algorithms. Annex 22 addresses some of these challenges by focusing on static, deterministic AI models, but even these systems require validation approaches that can demonstrate consistent performance across expected operating conditions.
The black box nature of some AI algorithms creates challenges for meeting explainability requirements. While Annex 22 mandates that AI systems provide transparent justifications for their decisions, achieving this transparency can be technically challenging for complex machine learning models. Organizations must balance the analytical capabilities of sophisticated AI algorithms with the transparency requirements of GMP environments.
Organizational Challenges: Building Capabilities and Managing Change
The successful implementation of AI in pharmaceutical manufacturing requires significant organizational capabilities that many companies are still developing. The missing middle approach demands that organizations build fusion skills across their workforce while maintaining existing competencies in traditional pharmaceutical quality practices.
Skills development represents a particular challenge, as it requires investment in both technical training for AI systems and conceptual training for understanding how to collaborate effectively with AI. Quality professionals must develop capabilities in data analysis, statistical interpretation, and AI system interaction while maintaining their expertise in pharmaceutical science, regulatory requirements, and quality assurance principles.
Change management becomes critical when implementing AI systems that alter established workflows and decision-making processes. Traditional pharmaceutical organizations often have deeply embedded cultures that emphasize risk aversion and adherence to established procedures. Introducing AI systems that recommend changes to established practices or challenge conventional thinking requires careful change management to ensure adoption while maintaining appropriate risk controls.
The integration of AI systems with existing pharmaceutical quality systems presents additional organizational challenges. Many pharmaceutical companies operate with legacy systems that were not designed to interface with AI applications. Integrating AI capabilities while maintaining system reliability and regulatory compliance can require significant investments in system upgrades and integration capabilities.
Regulatory Challenges: Evolving Requirements and Global Complexity
The evolving nature of regulatory requirements for AI applications creates uncertainty for pharmaceutical organizations planning implementations. While Annex 22 provides important guidance, it is still in draft form and subject to change based on consultation feedback. Organizations must balance the desire to implement AI capabilities with the need to ensure compliance with final regulatory requirements.
The international nature of pharmaceutical manufacturing creates additional regulatory challenges, as organizations must navigate different AI governance frameworks across multiple jurisdictions. While there is movement toward harmonization, differences in regulatory approaches could complicate global implementations.
Inspection readiness represents a particular challenge for AI implementations in GMP environments. Traditional pharmaceutical inspections focus on evaluating documented procedures, training records, and system validations. AI systems introduce new elements that inspectors may be less familiar with, requiring organizations to develop new approaches to demonstrate compliance and explain AI-driven decisions to regulatory authorities.
The dynamic nature of AI systems, even static models as defined by Annex 22, creates challenges for maintaining validation status over time. Unlike traditional computerized systems that remain stable once validated, AI systems may require revalidation as they are updated or as their operating environments change. Organizations must develop lifecycle management approaches that maintain validation status while enabling continuous improvement.
Future Implications: The Evolution of Pharmaceutical Quality Assurance
The integration of AI into pharmaceutical manufacturing represents more than a technological upgrade; it signals a fundamental transformation in how quality assurance is conceptualized and practiced. As AI capabilities continue to advance and regulatory frameworks mature, the implications for pharmaceutical quality assurance extend far beyond current applications to encompass new paradigms for ensuring product safety and efficacy.
The Transformation of Quality Professional Roles
The missing middle philosophy suggests that AI integration will transform rather than eliminate quality professional roles in pharmaceutical manufacturing. Future quality professionals will likely serve as AI collaborators who combine domain expertise with AI literacy to make more informed decisions than either humans or machines could make independently.
These evolved roles will require professionals who can bridge the gap between pharmaceutical science and data science, understanding both the regulatory requirements that govern pharmaceutical manufacturing and the capabilities and limitations of AI systems. Quality professionals will need to develop skills in AI system management, including understanding how to train, validate, and monitor AI applications while maintaining appropriate skepticism about their outputs.
The emergence of new role categories seems likely, including AI trainers who specialize in developing and maintaining AI models for pharmaceutical applications, AI explainers who help interpret AI outputs for regulatory and business purposes, and AI sustainers who ensure that AI systems continue to operate effectively over time. These roles reflect the missing middle philosophy by combining human expertise with AI capabilities to create new forms of value.
| Fusion Skill | Category | Definition | Pharmaceutical Quality Application | Current Skill Level (Typical) | Target Skill Level (AI Era) |
| --- | --- | --- | --- | --- | --- |
| Intelligent Interrogation | Machines Augment Humans | Knowing how to ask the right questions of AI systems across levels of abstraction to get meaningful insights | Querying AI systems for deviation analysis, asking specific questions about historical patterns and root causes | Low – Basic | High – Advanced |
| Judgment Integration | Machines Augment Humans | The ability to combine AI-generated insights with human expertise and judgment to make informed decisions | Combining AI recommendations with regulatory knowledge and professional judgment in quality decisions | Medium – Developing | High – Advanced |
| Reciprocal Apprenticing | Humans + Machines (Both) | Mutual learning where humans train AI while AI teaches humans, creating bidirectional skill development | Training AI on quality patterns while learning from AI insights about process optimization | Low – Basic | High – Advanced |
| Bot-based Empowerment | Machines Augment Humans | Working effectively with AI agents to extend human capabilities and create enhanced performance | Using AI-powered inspection systems while maintaining human oversight and decision authority | Low – Basic | High – Advanced |
| Holistic Melding | Machines Augment Humans | Developing robust mental models of AI capabilities to improve collaborative outcomes | Understanding AI capabilities in predictive maintenance to optimize intervention timing | Low – Basic | Medium – Proficient |
| Re-humanizing Time | Humans Manage Machines | Using AI to free up human capacity for distinctly human activities like creativity and relationship building | Automating routine data analysis to focus on strategic quality improvements and regulatory planning | Medium – Developing | High – Advanced |
| Responsible Normalizing | Humans Manage Machines | Responsibly shaping the purpose and perception of human-machine interaction for individuals and society | Ensuring AI implementations align with GMP principles and patient safety requirements | Medium – Developing | High – Advanced |
| Relentless Reimagining | Humans + Machines (Both) | The discipline of creating entirely new processes and business models rather than just automating existing ones | Redesigning quality processes from scratch to leverage AI capabilities while maintaining compliance | Low – Basic | Medium – Proficient |
Advanced AI Applications: Beyond Current Regulatory Boundaries
While current regulatory frameworks focus on static, deterministic AI models, the future likely holds opportunities for more sophisticated AI applications that could further transform pharmaceutical quality assurance. Dynamic learning systems, currently excluded from critical GMP applications by Annex 22, may eventually be deemed acceptable as our understanding of their risks and benefits improves.
Generative AI applications, while currently limited to non-critical applications, could potentially revolutionize areas such as deviation investigation, regulatory documentation, and training material development. As these technologies mature and appropriate governance frameworks develop, they may enable new forms of human-AI collaboration that further expand the missing middle in pharmaceutical manufacturing.
The integration of AI with other emerging technologies, such as digital twins and advanced sensor networks, could create comprehensive pharmaceutical manufacturing ecosystems that continuously optimize quality while maintaining human oversight. These integrated systems could enable unprecedented levels of process understanding and control while preserving the human accountability that regulations require.
Personalized Medicine and Quality Assurance Implications
The trend toward personalized medicine presents unique challenges and opportunities for AI applications in pharmaceutical quality assurance. Traditional GMP frameworks are designed around standardized products manufactured at scale, but personalized therapies may require individualized quality approaches that adapt to specific patient or product characteristics.
AI systems could enable quality assurance approaches that adjust to the unique requirements of personalized therapies while maintaining appropriate safety and efficacy standards. This might involve AI-driven risk assessments that consider patient-specific factors or quality control approaches that adapt to the characteristics of individual therapeutic products.
The regulatory frameworks for these applications will likely need to evolve beyond current approaches, potentially incorporating more flexible risk-based approaches that can accommodate the variability inherent in personalized medicine while maintaining patient safety. The missing middle philosophy provides a framework for managing this complexity by ensuring that human judgment remains central to quality decisions while leveraging AI capabilities to manage the increased complexity of personalized manufacturing.
Global Harmonization and Regulatory Evolution
The future of AI in pharmaceutical manufacturing will likely be shaped by efforts to harmonize regulatory approaches across different jurisdictions. The current patchwork of national and regional guidelines creates complexity for global pharmaceutical companies, but movement toward harmonized international standards could facilitate broader AI adoption.
The development of risk-based regulatory frameworks that focus on outcomes rather than specific technologies could enable more flexible approaches to AI implementation while maintaining appropriate safeguards. These frameworks would need to balance the desire for innovation with the fundamental regulatory imperative to protect patient safety and ensure product quality.
The evolution of regulatory science itself may be influenced by AI applications, with regulatory agencies potentially using AI tools to enhance their own capabilities in areas such as data analysis, risk assessment, and inspection planning. This could create new opportunities for collaboration between industry and regulators while maintaining appropriate independence and oversight.
Recommendations for Industry Implementation
Based on the analysis of current regulatory frameworks, technological capabilities, and industry best practices, several key recommendations emerge for pharmaceutical organizations seeking to implement AI applications that align with the missing middle philosophy and regulatory expectations.
Developing AI Governance Frameworks
Organizations should establish comprehensive AI governance frameworks that address the full lifecycle of AI applications from development through retirement. These frameworks should align with existing pharmaceutical quality systems while addressing the unique characteristics of AI technologies. The governance framework should define roles and responsibilities for AI oversight, establish approval processes for AI implementations, and create mechanisms for ongoing monitoring and risk management.
The governance framework should explicitly address the human oversight requirements outlined in Annex 22, ensuring that qualified personnel remain accountable for all decisions that could impact patient safety, product quality, or data integrity. This includes defining the knowledge and training requirements for personnel who will work with AI systems and establishing procedures for ensuring that human operators understand AI capabilities and limitations.
Risk assessment processes should be integrated throughout the AI lifecycle, beginning with initial feasibility assessments and continuing through ongoing monitoring of system performance. These risk assessments should consider not only technical risks but also regulatory, business, and ethical considerations that could impact AI implementations.
| AI Family | Description | Key Characteristics | Annex 22 Classification | GMP Applications | Validation Requirements | Risk Level |
| --- | --- | --- | --- | --- | --- | --- |
| Rule-Based Systems | If-then logic systems with predetermined decision trees and fixed algorithms | | | | Not applicable – prohibited for critical GMP applications | High |
| Federated Learning | Distributed learning across multiple sites while keeping data local | Privacy-preserving distributed training, model aggregation | Prohibited for Critical GMP | Multi-site model training while preserving data privacy | Not applicable – prohibited for critical GMP applications | Medium |

Detailed classification table of AI families and their regulatory status under the draft EU Annex 22.
Building Organizational Capabilities
Successful AI implementation requires significant investment in organizational capabilities that enable effective human-machine collaboration. This includes technical capabilities for developing, validating, and maintaining AI systems, as well as human capabilities for collaborating effectively with AI.
Technical capability development should focus on areas such as data science, machine learning, and AI system validation. Organizations may need to hire new personnel with these capabilities or invest in training existing staff. The technical capabilities should be integrated with existing pharmaceutical science and quality assurance expertise to ensure that AI applications align with industry requirements.
Human capability development should focus on fusion skills that enable effective collaboration with AI systems. This includes intelligent interrogation skills for querying AI systems effectively, judgment integration skills for combining AI insights with human expertise, and reciprocal apprenticing skills for mutual learning between humans and AI. Training programs should help personnel understand both the capabilities and limitations of AI systems while maintaining their core competencies in pharmaceutical quality assurance.
Implementing Pilot Programs
Organizations should consider implementing pilot programs that demonstrate AI capabilities in controlled environments before pursuing broader implementations. These pilots should focus on applications that align with current regulatory frameworks while providing opportunities to develop organizational capabilities and understanding.
Pilot programs should be designed to generate evidence of AI effectiveness while maintaining rigorous controls that ensure patient safety and regulatory compliance. This includes comprehensive validation approaches, robust change control processes, and thorough documentation of AI system performance.
The pilot programs should also serve as learning opportunities for developing organizational capabilities and refining AI governance approaches. Lessons learned from pilot implementations should be captured and used to inform broader AI strategies and implementation approaches.
Engaging with Regulatory Authorities
Organizations should actively engage with regulatory authorities to understand expectations and contribute to the development of regulatory frameworks for AI applications. This engagement can help ensure that AI implementations align with regulatory expectations while providing input that shapes future guidance.
Regulatory engagement should begin early in the AI development process, potentially including pre-submission meetings or other formal interaction mechanisms. Organizations should be prepared to explain their AI approaches, demonstrate compliance with existing requirements, and address any novel aspects of their implementations.
Industry associations and professional organizations provide valuable forums for collective engagement with regulatory authorities on AI-related issues. Organizations should participate in these forums to contribute to industry understanding and influence regulatory development.
Conclusion: Embracing the Collaborative Future of Pharmaceutical Quality
The convergence of the missing middle concept with the regulatory reality of Annex 22 represents a defining moment for pharmaceutical quality assurance. Rather than viewing AI as either a replacement for human expertise or a mere automation tool, the industry has the opportunity to embrace a collaborative paradigm that enhances human capabilities while maintaining the rigorous oversight that patient safety demands.
The journey toward effective human-AI collaboration in GMP environments will not be without challenges. Technical hurdles around data quality, system validation, and explainability must be overcome. Organizational capabilities in both AI technology and fusion skills must be developed. Regulatory frameworks will continue to evolve as experience accumulates and understanding deepens. However, the potential benefits—enhanced product quality, improved operational efficiency, and more effective regulatory compliance—justify the investment required to address these challenges.
The missing middle philosophy provides a roadmap for navigating this transformation. By focusing on collaboration rather than replacement, by maintaining human accountability while leveraging AI capabilities, and by developing the fusion skills necessary for effective human-machine partnerships, pharmaceutical organizations can position themselves to thrive in an AI-augmented future while upholding the industry’s fundamental commitment to patient safety and product quality.
Annex 22 represents just the beginning of this transformation. As AI technologies continue to advance and regulatory frameworks mature, new opportunities will emerge for expanding the scope and sophistication of human-AI collaboration in pharmaceutical manufacturing. Organizations that invest now in building the capabilities, governance frameworks, and organizational cultures necessary for effective AI collaboration will be best positioned to benefit from these future developments.
The future of pharmaceutical quality assurance lies not in choosing between human expertise and artificial intelligence, but in combining them in ways that create value neither could achieve alone. The missing middle is not empty space to be filled, but fertile ground for innovation that maintains the human judgment and accountability that regulations require while leveraging the analytical capabilities that AI provides. As we move forward into this new era, the most successful organizations will be those that master the art of human-machine collaboration, creating a future where technology serves to amplify rather than replace the human expertise that has always been at the heart of pharmaceutical quality assurance.
The integration of AI into pharmaceutical manufacturing represents more than a technological evolution; it embodies a fundamental reimagining of how quality is assured, how decisions are made, and how human expertise can be augmented rather than replaced. The missing middle concept, operationalized through frameworks like Annex 22, provides a path forward that honors both the innovative potential of AI and the irreplaceable value of human judgment in ensuring that the medicines we manufacture continue to meet the highest standards of safety, efficacy, and quality that patients deserve.
Pharmaceutical compliance is experiencing a tectonic shift, and nowhere is that more clear than in the looming overhaul of EU GMP Annex 11. Most quality leaders have been laser-focused on the revised demands for electronic signatures, access management, and supplier oversight (as I’ve detailed in my previous deep analyses), but few realize that Section 10: Handling of Data is the sleeping volcano in the draft. It is here that the revised Annex 11 transforms data handling controls from “do your best and patch with SOPs” into an auditable, digital, risk-based discipline shaped by technological change.
This isn’t about stocking up your data archive or flipping the “audit trail” switch. This is about putting every point of data entry, transfer, migration, and security under the microscope—and making their control, verification, and risk mitigation the default, not the exception. If, until now, your team has managed GMP data with a cocktail of trust, periodic spot checks, and a healthy dose of hope, you are about to discover just how high the bar has been raised.
The Heart of Section 10: Every Data Touchpoint Is Critical
Section 10, as rewritten in the draft Annex 11, isn’t long, but it is dense. Its brevity belies the workload it creates: a mandate for systematizing, validating, and documenting every critical movement or entry of GMP-relevant data. The section is split into four thematic requirements, each of which deserves careful analysis:
Input verification—requiring plausibility checks for all manual entry of critical data,
Data transfer—enforcing validated electronic interfaces and exceptional controls for any manual transcription,
Data migration—demanding that every one-off or routine migration goes through a controlled, validated process,
Encryption—making secure storage and movement of critical data a risk-based expectation, not an afterthought.
Understanding these not as checkboxes but as an interconnected risk-control philosophy is the only way to achieve robust compliance—and to survive inspection without scrambling for a “procedural explanation” for each data error found.
Input Verification: Automating the Frontline Defense
The End of “Operator Skill” as a Compliance Pillar
Human error, for as long as there have been batch records and lab notebooks, has been a known compliance risk. Before electronic records, the answer was redundancy: a second set of eyes, a periodic QC review, or—let’s be realistic—a quick initial on a form the day before an audit. But in the age of digital systems, Section 10.1 recognizes the simple truth: where technology can prevent senseless or dangerous entries, it must.
Manual entry of critical data—think product counts, analytical results, process parameters—is now subject to real-time, system-enforced plausibility checks. Gone are the days when outlandish numbers in a yield calculation raise no flag, or when an analyst logs a temperature outside any physically possible range with little more than a raised eyebrow. Section 10 demands that every critical data field be bounded by logic—ranges, patterns, value consistency checks—and that nonsensical entries are not just flagged but, ideally, rejected automatically.
Any field that is critical to product quality or patient safety must be controlled at the entry point by automated means. If such logic is technically feasible but not deployed, expect intensive regulatory scrutiny—and be prepared to defend, in writing, why it isn’t in place.
Designing Plausibility Controls: Making Them Work
What does this mean on a practical level? It means scoping your process maps and digitized workflows to inventory every manual input touching GMP outcomes. For each, you need to:
Establish plausible ranges and patterns based on historical data, scientific rationale, and risk analysis.
Program system logic to enforce these boundaries, including mandatory explanatory overrides for any values outside “normal.”
Ensure every override is logged, investigated, and trended—because “frequent overrides” typically signal either badly set limits or a process slipping out of control.
But it’s not just numeric entries. Selectable options, free-text assessments, and uploads of evidence (e.g., images or files) must also be checked for logic and completeness, and mechanisms must exist to prevent accidental omissions or nonsensical entries (like uploading the wrong batch report for a product lot).
These expectations put pressure on system design teams and user interface developers, but they also fundamentally change the culture: from one where error detection is post hoc and personal, to one where error prevention is systemic and algorithmic.
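A minimal sketch of what an entry-point plausibility check with a logged, trendable override could look like in practice is shown below; the field names, ranges, and logging structure are invented for illustration and would need to be defined per process in a real system.

```python
import datetime

# Hypothetical plausibility check for manual entry of a critical numeric field.
# Field names and limits are invented; a real system derives them from
# historical data, scientific rationale, and risk analysis.

PLAUSIBLE_RANGES = {"fill_volume_ml": (9.5, 10.5), "rejected_vials": (0, 500)}
override_log = []  # every override must be logged, investigated, and trended

def validate_entry(field: str, value: float, operator_id: str,
                   override_reason: str | None = None) -> bool:
    low, high = PLAUSIBLE_RANGES[field]
    if low <= value <= high:
        return True                  # entry accepted: within plausible bounds
    if override_reason is None:
        return False                 # rejected: out of range and no justification given
    # An out-of-range value is accepted only with a justified, logged override.
    override_log.append({
        "field": field, "value": value, "operator": operator_id,
        "reason": override_reason,
        "timestamp": datetime.datetime.utcnow().isoformat(),
    })
    return True
```

Trending the override log is what tells you whether limits are badly set or the process itself is drifting.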
Data Transfer: Validated Interfaces as the Foundation
Automated Data Flows, Not “Swivel Chair Integration”
The next Section 10 pillar wipes out the old “good enough” culture of manually keying critical data between systems—a common practice all the way up to the present day, despite decades of technical options to network devices, integrate systems, and use direct data feeds.
In this new paradigm, critical data must be transferred between systems electronically whenever possible. That means, for example, that:
Laboratory instruments should push their results to the LIMS automatically, not rely on an analyst to retype them.
The MES should transmit batch data to ERP systems for release decisions without recourse to copy-pasting or printout scanning.
Environmental monitoring systems should use validated data feeds into digital reports, not rely on handwritten transcriptions or spreadsheet imports.
Where technology blocks this approach—due to legacy equipment, bespoke protocols, or prohibitive costs—manual transfer is only justifiable as an explicitly assessed and mitigated risk. In those rare cases, organizations must implement secondary controls: independent verification by a second person, pre- and post-transfer checks, and logging of every step and confirmation.
What does a validated interface mean in this context? Not just that two systems can “talk,” but that the transfer is:
Complete (no dropped or duplicated records)
Accurate (no transformation errors or field misalignments)
Secure (with no risk of tampering or interception)
Every one of these must be tested at system qualification (OQ/PQ) and periodically revalidated if either end of the interface changes. Error conditions (such as data out of expected range, failed transfers, or discrepancies) must be logged, flagged to the user, and, where technically possible, must halt the associated GMP process until they are resolved.
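The sketch below illustrates one way such a completeness and accuracy check could be expressed, assuming both systems can export the transferred records; the fingerprinting approach and field handling are assumptions for the example rather than a prescribed method.

```python
import hashlib

# Sketch of a post-transfer reconciliation for an electronic interface,
# assuming both systems can export the transferred records as dictionaries.
# Record structure and field names are assumptions for illustration.

def record_fingerprint(record: dict) -> str:
    """Order-independent hash of a record, so field misalignment or value changes are detectable."""
    canonical = "|".join(f"{k}={record[k]}" for k in sorted(record))
    return hashlib.sha256(canonical.encode()).hexdigest()

def reconcile(source: list[dict], destination: list[dict]) -> dict:
    src = {record_fingerprint(r) for r in source}
    dst = {record_fingerprint(r) for r in destination}
    return {
        "complete": len(source) == len(destination) and src == dst,
        "missing_in_destination": len(src - dst),     # dropped or altered records
        "unexpected_in_destination": len(dst - src),  # duplicated or corrupted records
    }
```

A check like this would run as part of qualification and, ideally, after every transfer, with any discrepancy logged and the downstream process held until it is resolved.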
Practical Hurdles—and Why They’re No Excuse
Organizations will protest: not every workflow can be harmonized, and some labyrinthine legacy systems lack the APIs or connectivity for automation. The response is clear: you can do manual transfer only when you’ve mapped, justified, and mitigated the added risk. This risk assessment and control strategy will be expected, and if auditors spot critical data being handed off by paper (including the batch record) or spreadsheet without robust double verification, you’ll have a finding that’s impossible to “train away.”
Remember, Annex 11’s philosophy flows from data integrity risk, not comfort or habit. In the new digital reality, technically possible is the compliance baseline.
Data Migration: Control, Validation, and Traceability
Migration Upgrades Are Compliance Projects, Not IT Favors
Section 10.3 brings overdue clarity to a part of compliance historically left to “IT shops” rather than Quality or data governance leads: migrations. In recent years, as cloud moves and system upgrades have exploded, so have the risks. Data gaps, incomplete mapping, field mismatches, and “it worked in test but not in prod” errors lurk in every migration, and their impact is enormous—lost batch records, orphaned critical information, and products released with documentation that simply vanished after a system reboot.
Annex 11 lays down a clear gauntlet: all data migrations must be planned, risk-assessed, and validated. Both the sending and receiving platforms must be evaluated for data constraints, and the migration process itself is subject to the same quality rigor as any new computerized system implementation.
This requires a full lifecycle approach:
Pre-migration planning to document field mapping, data types, format and allowable value reconciliations, and expected record counts.
Controlled execution with logs of each action, anomalies, and troubleshooting steps.
Post-migration verification—not just a “looks ok” sample, but a full reconciliation of batch counts, search for missing or duplicated records, and (where practical) data integrity spot checks.
Formal sign-off, with electronic evidence and supporting risk assessment, that the migration did not introduce errors, losses, or uncontrolled transformations.
Validating the Entire Chain, Not Just the Output
Annex 11’s approach is process-oriented. You can’t simply “prove a few outputs match”; you must show that the process, as executed, controlled, logged, and safeguarded every record. If the source data is garbage, the destination data will be worse—so validation covers both the “what” and the “how.” Don’t forget to document how you will highlight or remediate mismatched or orphaned records for future investigation or reprocessing; missing this step is a quality and regulatory land mine.
It’s no longer acceptable to treat migration as a purely technical exercise. Every migration is a compliance event. If you can’t show the system’s record—start-to-finish—of how, by whom, when, and under what procedural/corrective control migrations have been performed, you are vulnerable on every product released or batch referencing that data.
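As a simple illustration of the kind of full reconciliation described above, the sketch below compares source and target record keys to surface missing, orphaned, and duplicated records; the key names and pass criteria are assumptions that a real migration plan would define explicitly.

```python
# Post-migration verification sketch: reconciliation by primary key rather than
# a "looks ok" sample. The orphan and duplicate rules below are illustrative
# assumptions; a real protocol defines them in the migration plan.

def verify_migration(source_keys: list[str], target_keys: list[str]) -> dict:
    src, tgt = set(source_keys), set(target_keys)
    report = {
        "source_count": len(source_keys),
        "target_count": len(target_keys),
        "missing_records": sorted(src - tgt),      # present in source, lost in migration
        "orphaned_records": sorted(tgt - src),     # present in target with no source origin
        "duplicates_in_target": len(target_keys) - len(tgt),
    }
    report["pass"] = (not report["missing_records"]
                      and not report["orphaned_records"]
                      and report["duplicates_in_target"] == 0)
    return report

# Every anomaly in the report feeds the formal sign-off and any remediation log.
```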
Encryption: Securing Data as a Business and Regulatory Mandate
Beyond “Defense in Depth” to a Compliance Expectation
Historically, data security and encryption were IT problems, and the GMP justification for employing them was often little stronger than “everyone else is doing it.” The revised Section 10 throws that era in the trash bin. Encryption is now a risk-based compliance requirement for storage and transfer of critical GMP data. If you don’t use strong encryption “where applicable,” you’d better have a risk assessment ready that shows why the threat is minimal or the technical/operational risk of encryption is greater than the gain.
This requirement is equally relevant whether you’re holding batch record files, digital signatures, process parameter archives, raw QC data, or product release records. Security compromises aren’t just a hacking story; they’re a data integrity, fraud prevention, and business continuity story. In the new regulatory mindset, unencrypted critical data is always suspicious. This is doubly so when the data moves through cloud services, outsourced IT, or is ever accessible outside the organization’s perimeter.
Implementing and Maintaining Encryption: Avoiding Hollow Controls
To comply, you need to specify and control:
Encryption standards (e.g., AES-256 as a minimum for data at rest and in transit)
Documentation for every location and method where data is or isn’t encrypted, with reference to risk assessments
Procedures for regularly verifying encryption status and responding to incidents or suspected compromises
Regulators will likely want to see not only system specifications but also periodic tests, audit trails of encryption/decryption, and readouts from recent patch cycles or vulnerability scans proving encryption hasn’t been silently “turned off” or configured improperly.
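For illustration only, the sketch below shows AES-256-GCM encryption of a record using the widely used Python cryptography package; it deliberately leaves out key management, which in practice (key vaults, rotation, access control) is where most of the risk assessment effort belongs.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Sketch of AES-256-GCM encryption at rest. Key management is out of scope
# here and is exactly what the supporting risk assessment and SOPs must cover.

key = AESGCM.generate_key(bit_length=256)      # in practice, retrieved from a key vault
aesgcm = AESGCM(key)

def encrypt_record(plaintext: bytes, context: bytes) -> tuple[bytes, bytes]:
    nonce = os.urandom(12)                     # unique per encryption operation
    ciphertext = aesgcm.encrypt(nonce, plaintext, context)
    return nonce, ciphertext                   # store both; context binds the batch/record ID

def decrypt_record(nonce: bytes, ciphertext: bytes, context: bytes) -> bytes:
    return aesgcm.decrypt(nonce, ciphertext, context)  # raises if the data was tampered with

nonce, ct = encrypt_record(b"batch 12345 release record", b"batch-12345")
assert decrypt_record(nonce, ct, b"batch-12345") == b"batch 12345 release record"
```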
Section 10 Is the Hub of the Data Integrity Wheel
Section 10 cannot be treated in isolation. It underpins and is fed by virtually every other control in the GMP computerized system ecosystem.
Input controls support audit trails: If data can be entered erroneously or fraudulently, the best audit trail is just a record of error.
Validated transfers prevent downstream chaos: If system A and system B don’t transfer reliably, everything “downstream” is compromised.
Migrations touch batch continuity and product release: If you lose or misplace records, your recall and investigation responses are instantly impaired.
Encryption protects change control and deviation closure: If sensitive data is exposed, audit trails and signature controls can’t protect you from the consequences.
Risk-Based Implementation: From Doctrine to Daily Practice
The draft’s biggest strength is its honest embrace of risk-based thinking. Every expectation in Section 10 is to be scaled by impact to product quality and patient safety. You can—and must—document decisions for why a given control is (or is not) necessary for every data touchpoint in your process universe.
That means your risk assessment does more than check a box. For every GMP data field, every transfer, every planned or surprise migration, every storage endpoint, you need to:
Identify every way the data could be made inaccurate, incomplete, unavailable, or stolen.
Define controls appropriate both to the criticality of the data and the likelihood and detectability of error or compromise.
Test and document both normal and failure scenarios—because what matters in a recall, investigation, or regulatory challenge is what happens when things go wrong, not just when they go right.
ALCOA+ is made operational by these risk processes: accuracy via plausibility checks, completeness via transfer validation, enduring and available records via robust migration and storage, and contemporaneous, attributable records via encryption and audit trail linkage.
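One way to make those per-touchpoint decisions documented and queryable is a structured risk record like the hypothetical sketch below; the field names and scoring scales are assumptions, not Annex 11 terminology.

```python
from dataclasses import dataclass, field

# Illustrative structure for a per-touchpoint data handling risk record, so the
# rationale for each control (or its deliberate omission) is documented and
# queryable. Field names and scoring scales are assumptions.

@dataclass
class DataTouchpointRisk:
    data_item: str                      # e.g. "assay result", "reject count"
    touchpoint: str                     # "manual entry", "interface", "migration", "storage"
    failure_modes: list[str]            # inaccurate, incomplete, unavailable, stolen...
    criticality: int                    # 1 (low) to 5 (direct patient-safety impact)
    detectability: int                  # 1 (always detected) to 5 (rarely detected)
    controls: list[str] = field(default_factory=list)
    justification: str = ""             # rationale when a control is deliberately omitted

entry = DataTouchpointRisk(
    data_item="HPLC assay result",
    touchpoint="manual entry",
    failure_modes=["decimal misplacement", "sample ID mismatch"],
    criticality=5,
    detectability=3,
    controls=["instrument-to-LIMS interface", "second-person verification for fallback"],
)
```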
Handling of Data vs. Previous Guidance and Global Norms
While much of this seems “good practice,” make no mistake: the regulatory expectations have fundamentally changed. In 2011, Annex 11 was silent on specifics, and 21 CFR Part 11 relied on broad “input checks” and an expectation that organizations would design controls relative to what was reasonable at the time.
Now:
Electronic input plausibility is not just a “should” but a “must”—if your system can automate it, you must.
Manual transfer is justified, not assumed; all manual steps must have procedural/methodological reinforcement and evidence logs.
Migration is a qualification event. The entire lifecycle, not just the output, must be documented, trended, and reviewed.
Encryption is an expectation, not a best effort. The risk burden now falls on you to prove why it isn’t needed, not why it is.
Responsibility is on the MAH/manufacturer, not the vendor, IT, or “business owner.” You outsource activity, not liability.
This matches, in spirit, the FDA’s recent Computer Software Assurance (CSA) guidance, GAMP 5’s digital risk lifecycle, and every modern data privacy regulation. The difference is that, starting with the new Annex 11, these approaches are not “suggested”—they are codified.
Real-Life Scenarios: Application of Section 10
Imagine a high-speed packaging line. The operator enters the number of rejected vials per shift. In the old regime, the operator could mistype “80” as “800” or enter a negative number during a hasty correction. With Section 10 in force, the system simply will not permit it: the entry is rejected or forced through a justified, logged override before it can mar the official record.
Now think about laboratory results—analysts transferring HPLC data into the LIMS manually. Every entry runs a risk of decimal misplacement or sample ID mismatch. Annex 11 now demands full instrument-to-LIMS interfacing (where feasible), and when not, a double verification protocol meticulously executed, logged, and reviewed.
On the migration front, consider upgrading your document management system. The stakes: decades of batch release records. In 2019, you might have planned a database export, a few spot checks, and post-migration validation of “high value” documents. Under the new Annex 11, you require a documented mapping of every critical field, technical and process reconciliation, error reporting, and lasting evidence for defensibility two or ten years from now.
Encryption is now expected as a default. Cloud-hosted data with no encryption? Prepare to be asked why, and to defend your choice with up-to-date, context-specific risk assessments—not hand-waving.
Bringing Section 10 to Life: Steps for Implementation
A successful strategy for aligning to Annex 11 Section 10 begins with an exhaustive mapping of all critical data touchpoints and their methods of entry, transfer, and storage. This is a multidisciplinary process, requiring cooperation among quality, IT, operations, and compliance teams.
For each critical data field or process, define:
The party responsible for its entry and management
The system’s capability for plausibility checking, range enforcement, and error prevention
Mechanisms to block or correct entry outside expected norms
Methods of data handoff and transfer between systems, with documentation of integration or a procedural justification for unavoidable manual steps
Protocols and evidence logs for validation of both routine transfers and one-off (migration) events
For all manual data handling that remains, create detailed, risk-based procedures for independent verification and trending review. For data migration, walk through an end-to-end lifecycle—pre-migration risk mapping, execution protocols, post-migration review, discrepancy handling, and archiving of all planning/validation evidence.
For storage and transfer, produce a risk matrix for where and how critical data is held, updated, and moved, and deploy encryption accordingly. Document both technical standards and the process for periodic review and incident response.
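Such a matrix does not need to be elaborate to be defensible. A minimal, machine-readable sketch might look like the following; the data categories, locations, and decisions are hypothetical examples.

```python
# Illustrative storage/transfer risk matrix: each row records where critical data
# lives, how it moves, and the resulting encryption decision with its rationale.
# All entries are hypothetical examples.

risk_matrix = [
    {"data": "batch release records", "location": "cloud DMS", "transfer": "HTTPS API",
     "criticality": "high", "encryption": "required at rest and in transit",
     "rationale": "regulated records on multi-tenant hosting"},
    {"data": "environmental monitoring trends", "location": "on-premises historian",
     "transfer": "internal network", "criticality": "medium",
     "encryption": "in transit; at-rest exception documented in risk assessment",
     "rationale": "segregated network with physical access controls"},
]
```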
Quality management is not the sole owner; business leads, system admins, and IT architects must be brought to the table. For every major change, tie change control procedures to a Section 10 review—any new process, upgrade, or integration comes back to data handling risk, with a closing check for automation and procedural compliance.
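A lightweight way to enforce that closing check is to attach a Section 10 question set to every change record, so no change closes without the data-handling review. The wording below is an illustrative sketch, not official Annex 11 text.

```python
# Illustrative Section 10 closing-check questions attached to a change record.
# The wording is a sketch, not official Annex 11 text.

SECTION_10_CHANGE_CHECK = [
    "Does the change introduce or alter any manual data entry or transcription step?",
    "Are plausibility and range checks automated for every new or modified input?",
    "Are transfers between systems via validated interfaces, or is a manual step justified?",
    "If data is migrated, is there a mapping, reconciliation, and discrepancy-report plan?",
    "Is critical data encrypted at rest and in transit, or is the exception risk-assessed?",
]

def change_ready_for_closure(answers: dict[str, bool]) -> bool:
    """A change closes only when every Section 10 question has been addressed."""
    return all(answers.get(question, False) for question in SECTION_10_CHANGE_CHECK)
```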
Regulatory Impact and Inspection Strategy
Regulatory expectations around data integrity are not only becoming more stringent—they are also far more precise and actionable than in the past. Inspectors now arrive prepared and trained to probe deeply into what’s called “data provenance”: that is, the complete, traceable life story of every critical data point. It’s no longer sufficient to show where a value appears in a final batch record or report; regulators want to see how that data originated, through which systems and interfaces it was transferred, how each entry or modification was verified, and exactly what controls were in place (or not in place) at each step.
Gone are the days when, if questioned about persistent risks like error-prone manual transcription, a company could deflect with, “that’s how we’ve always done it.” Now, inspectors expect detailed explanations and justifications for every manual, non-automated, or non-encrypted data entry or transfer. They will require you to produce not just policies but actual logs, complete audit trails, electronic signature evidence where required, and documented decision-making within your risk assessments for every process step that isn’t fully controlled by technology.
In practical terms, this means you must be able to reconstruct and defend the exact conditions and controls present at every point data is created, handled, moved, or modified. If a process relies on a workaround, a manual step, or an unvalidated migration, you will need transparent evidence that risks were understood, assessed, and mitigated—not simply asserted away.
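In system terms, “reconstruct and defend” means every handling step leaves a structured, queryable trace. The sketch below shows one minimal shape such a provenance record could take; the fields are illustrative assumptions, not a mandated schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Minimal sketch of a provenance record for one data-handling step.
# Field names are illustrative; real systems carry richer, validated schemas.

@dataclass
class HandlingEvent:
    data_point: str       # e.g. "assay result, sample S-1041"
    action: str           # created / transferred / modified / verified
    system: str           # originating or receiving system
    performed_by: str     # user ID or interface name
    control_applied: str  # e.g. "range check", "second-person verification"
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

trail = [
    HandlingEvent("assay result, sample S-1041", "created", "HPLC-07",
                  "instrument interface", "validated instrument-to-LIMS transfer"),
    HandlingEvent("assay result, sample S-1041", "verified", "LIMS",
                  "analyst jdoe", "result review and electronic signature"),
]
```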
The implications are profound: mastering Section 10 isn’t just about satisfying the regulator. Robust, risk-based data handling is fundamental to your operation’s resilience—improving traceability, minimizing costly errors or data loss, ensuring you can withstand disruption, and enabling true digital transformation across your business. Leaders who excel here will find that their compliance posture translates into real business value, competitive differentiation, and lasting operational stability.
The Bigger Picture: Section 10 as Industry Roadmap
What’s clear is this: Section 10 eliminates the excuses that have long made “data handling risk” a tolerated, if regrettable, feature of pharmaceutical compliance. It replaces them with a pathway for digital, risk-based, and auditable control culture. This is not just for global pharma behemoths—cloud-native startups, generics manufacturers, and even virtual companies reliant on CDMOs must take note. The same expectations now apply to every regulated data touchpoint, wherever in the supply chain or manufacturing lifecycle it lies.
Bringing your controls into compliance with Section 10 is a strategic imperative in 2025 and beyond. Those who move fastest will spend less time and money on post-inspection remediation, operate more efficiently, and have a defensible record for every outcome.
The table below summarizes how the draft Section 10 compares with the frameworks that preceded it:

| Requirement Area | Annex 11 (2011) | Draft Annex 11 Section 10 (2025) | 21 CFR Part 11 | GAMP 5 / Best Practice |
| --- | --- | --- | --- | --- |
| Input verification | General expectation, not defined | Mandatory for critical manual entry; system logic and boundaries | “Input checks” required, methods not specified | Risk-based, ideally automated |
| Data transfer | Manual allowed, interface preferred | Validated interfaces wherever possible; strict controls for manual | Implicit through system interface requirements | Automated transfer is the baseline, double-checked for manual |
| Manual transcription | Allowed, requires review | Only justified exceptions; robust secondary verification and documentation | Not directly mentioned | Two-person verification, periodic audit and trending |
| Data migration | Mentioned, not detailed | Must be planned, risk-assessed, validated, and fully auditable | Implied via system lifecycle controls | Full protocol: mapping, logs, verification, and discrepancy handling |
| Encryption | Not referenced | Mandated for critical data; exceptions need a documented, defensible risk assessment | Recommended, not strictly required | Default for sensitive data; standard in cloud, backup, and distributed setups |
| Audit trail for handling | Implied via system change auditing | All data moves and handling steps linked/logged in audit trail | Required for modification/deletion/correction | Integrated with system actions, trended for error and compliance |
| Manual exceptions | Not formally addressed | Must be justified and mitigated; always subject to periodic review | Not directly stated | Exception management, always with trending, review, and CAPA |
Handling of Data as Quality Culture, Not Just IT Control
Section 10 in the draft Annex 11 is nothing less than the codification of real data integrity for the digitalized era. It lays out a field guide for what true GMP data governance looks like—not in the clouds of intention, but in the minutiae of everyday operation. Whether you’re designing a new MES integration, cleaning up the residual technical debt of manual record transfer, or planning the next system migration, take heed: how you handle data when no one’s watching is the new standard of excellence in pharmaceutical quality.
As always, the organizations that embrace these requirements as opportunities—not just regulatory burdens—will build a culture, a system, and a supply chain that are robust, efficient, and genuinely defensible.