
IT/OT Convergence in CDMOs: What QA Must Validate for AI

For CDMOs, IT/OT convergence is no longer a future-state architecture discussion. It is already reshaping how batch execution, environmental monitoring, MES, SCADA, historians, QMS platforms, and document repositories interact. Once AI is introduced into that landscape, QA inherits a new validation challenge: not just whether the AI provides useful answers, but whether its inputs, boundaries, traceability, and operational controls remain acceptable in a GxP environment.

In practice, many CDMOs are evaluating AI assistants to help teams navigate SOPs, validation packages, deviations, change records, equipment procedures, and regulatory requirements across both enterprise and shop-floor contexts. That creates immediate questions for QA and CSV:

  • What data from OT systems can the AI access?
  • Is the AI making decisions, or only supporting trained personnel?
  • How is source content controlled when it comes from multiple validated and non-validated repositories?
  • What must be tested under GAMP 5, EU Annex 11, and 21 CFR Part 11 expectations?
  • How do you preserve data integrity and segregation of duties when IT and OT data are combined?

For CDMOs operating across DACH and the broader EU, these are not theoretical concerns. Multi-client manufacturing, hybrid legacy estates, and frequent process transfers make IT/OT convergence more complex than in a single-product manufacturer. AI can help, but only if QA validates the right things.

Why IT/OT convergence changes the AI validation scope

In a traditional setup, documentation search, batch review, manufacturing execution, and equipment control were often assessed as separate domains. With convergence, the AI layer may retrieve from quality documents in SharePoint or a DMS, but also reference MES work instructions, SCADA alarm procedures, historian trends, calibration records, and training materials. Even if the AI does not write back into OT systems, it is now operating across a blended compliance boundary.

That matters because validation can no longer focus only on the model or chatbot interface. QA must assess the entire intended use:

  • Source system landscape: where content originates and whether those sources are approved for regulated use
  • Data flow and interfaces: how information moves from OT and IT repositories into the retrieval layer
  • User roles and access control: whether the assistant respects least privilege across manufacturing, engineering, QA, and validation roles
  • Use-case boundaries: whether the AI is limited to decision support rather than automated release, approval, or control actions
  • Auditability: whether every answer can be traced back to version-controlled source content

Under GAMP 5 Second Edition, this is entirely consistent with a risk-based, intended-use-driven approach. The regulated question is not “is AI present?” but “what GxP-relevant function is being supported, what risk does it introduce, and what controls demonstrate fitness for intended use?”

What QA must define before validation starts

Before executing test scripts, QA should require a clear system definition. In CDMOs, this is where projects often fail. The AI pilot starts as “document search,” but quickly expands into manufacturing support, deviation triage, batch record interpretation, and regulatory Q&A without a corresponding update to intended use.

At minimum, QA should require the following:

  • Intended use statement identifying whether the AI supports QA, production, validation, engineering, or cross-functional users
  • System boundary defining connected repositories, interfaces, and excluded systems
  • GxP impact assessment identifying whether outputs may influence product quality, patient safety, data integrity, or regulatory decisions
  • Role matrix specifying who may query what content and under which access restrictions
  • Escalation rules stating when users must consult source systems, SMEs, or approved procedures rather than relying on AI summaries

If the AI can answer questions about line clearance, alarm handling, environmental excursions, or batch execution steps, QA should assume the system has direct GxP relevance even if it does not control equipment.

Validation focus area 1: controlled source content

The first validation question is not model accuracy. It is source control. In converged IT/OT environments, AI may pull from approved SOPs, draft work instructions, engineering notes, vendor manuals, and exported SCADA documents unless strict ingestion rules exist.

QA should verify:

  • Only approved and current GxP documents are included for regulated use cases
  • Draft, obsolete, superseded, or training-only content is either excluded or clearly labelled
  • Source metadata includes document ID, version, effective date, owner, and approval state
  • OT-derived content imported from MES/SCADA/historians is subject to defined extraction and review controls
  • Content segregation exists between clients, plants, lines, and products where applicable in CDMO operations

For multi-tenant or multi-client CDMOs, this point is critical. An AI assistant must not expose one client’s manufacturing procedures, campaign assumptions, or validation evidence to another client team. That is both a confidentiality risk and a quality system failure.
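As a concrete illustration, the ingestion controls above can be sketched as a pre-index gate that only admits approved, effective content carrying full metadata. The field names, approval states, and client tags below are assumptions for illustration, not a reference to any particular DMS or indexing pipeline:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical source-document metadata; fields mirror the controls above
# (document ID, version, approval state, effective date, owner) plus a
# client tag for CDMO segregation.
@dataclass
class SourceDoc:
    doc_id: str
    version: str
    approval_state: str   # e.g. "approved", "draft", "superseded"
    effective_date: date
    owner: str
    client: str           # segregation tag in multi-client operations

def eligible_for_ingestion(doc: SourceDoc, today: date) -> bool:
    """Gate applied before a document enters the retrieval index."""
    if doc.approval_state != "approved":
        return False                   # exclude draft/obsolete/superseded
    if doc.effective_date > today:
        return False                   # approved but not yet effective
    return True

docs = [
    SourceDoc("SOP-001", "4.0", "approved", date(2024, 1, 1), "QA", "client-a"),
    SourceDoc("SOP-002", "2.1", "draft", date(2024, 6, 1), "ENG", "client-a"),
    SourceDoc("SOP-003", "1.0", "approved", date(2026, 1, 1), "QA", "client-b"),
]
index = [d.doc_id for d in docs if eligible_for_ingestion(d, date(2025, 1, 1))]
```

Carrying the client tag at index time, not just at query time, is what makes per-client segregation testable rather than assumed.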

Validation focus area 2: access control and Part 11 expectations

Where IT/OT convergence meets AI, access control becomes more difficult. A maintenance engineer may have legitimate access to SCADA procedures, while a QA reviewer may need deviation and CAPA documentation, and a production supervisor may need MES-related work instructions. The AI must enforce the same access principles as the underlying systems, or stricter ones.

Under 21 CFR Part 11 and EU Annex 11, QA should examine whether the AI environment appropriately supports:

  • Unique user identification and secure authentication
  • Role-based access and restriction of unauthorized content
  • Secure, computer-generated audit trails for user activity and administrative changes
  • Record retention and retrievability for GxP-relevant interactions where required by procedure
  • Operational checks to enforce permitted sequencing and intended use
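One way to picture the access-control expectation is a retrieval-time filter that applies a role matrix before any content reaches the model, mirroring (or tightening) the source systems' own permissions. The roles and content tags here are hypothetical:

```python
# Illustrative role matrix; role names and content types are assumptions,
# not a reference to any specific product or site configuration.
ROLE_CONTENT = {
    "qa_reviewer": {"deviation", "capa", "sop"},
    "maintenance_engineer": {"scada_procedure", "sop"},
    "production_supervisor": {"mes_work_instruction", "sop"},
}

def filter_by_role(chunks: list[dict], role: str) -> list[dict]:
    """Drop retrieved chunks the user's role may not see, before generation.
    An unknown role is treated as having no access (least privilege)."""
    allowed = ROLE_CONTENT.get(role, set())
    return [c for c in chunks if c["content_type"] in allowed]

retrieved = [
    {"doc_id": "DEV-2024-017", "content_type": "deviation"},
    {"doc_id": "SCADA-OP-03", "content_type": "scada_procedure"},
]
visible = [c["doc_id"] for c in filter_by_role(retrieved, "qa_reviewer")]
```

The design point is that filtering happens before the model sees the content; post-hoc redaction of a generated answer is much harder to validate.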

Not every AI query becomes a GxP record. But QA should define which interactions are evidence-bearing. For example, if the assistant is used during deviation investigation, batch review support, or validation protocol execution planning, procedural clarity is needed on what must be retained, reviewed, or referenced.

Validation focus area 3: answer traceability and human decision boundaries

In a converged environment, users may ask the AI questions that appear operationally actionable:

  • “What is the approved response to a SCADA temperature alarm on line 3?”
  • “Which SOP governs bioreactor cleaning verification after campaign changeover?”
  • “Has the environmental monitoring excursion workflow changed since the last revision?”

These are exactly the use cases where generic enterprise chatbots become risky. QA should validate that the AI does not provide unsupported synthesis without clear references. Every regulated answer should be grounded in approved source excerpts with enough context for the user to verify the basis of the response.

Expected controls include:

  • Citation of source documents at answer level
  • Excerpt visibility so users can inspect the exact supporting text
  • No fabricated references under negative and edge-case testing
  • Deferral behavior when evidence is missing, conflicting, or outside approved scope
  • Clear wording that the AI supports decisions but does not replace authorized procedural execution or QA approval
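In a retrieval-augmented setup, deferral and citation behavior can be enforced structurally rather than left to model behavior alone. A minimal sketch, with an assumed response structure:

```python
# Sketch of answer-level grounding enforcement; the response fields and
# deferral message are illustrative assumptions.
def compose_answer(draft: str, citations: list[dict]) -> dict:
    """Refuse to return synthesis that has no approved supporting excerpts."""
    if not citations:
        return {
            "answer": None,
            "status": "deferred",
            "message": "No approved source found; consult the source system or an SME.",
        }
    return {
        "answer": draft,
        "status": "grounded",
        "citations": [
            {"doc_id": c["doc_id"], "version": c["version"], "excerpt": c["excerpt"]}
            for c in citations
        ],
    }

grounded = compose_answer(
    "Follow the alarm response in SOP-ALM-012 section 4.",
    [{"doc_id": "SOP-ALM-012", "version": "3.0", "excerpt": "On high-temperature alarm..."}],
)
deferred = compose_answer("Probably restart the unit.", [])
```

A structural check like this also gives negative testing something concrete to verify: an answer with no citations should never reach the user as fact.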

This aligns with ICH Q10 principles on a robust pharmaceutical quality system: management of knowledge is valuable, but final quality decisions must remain under appropriate procedural control.

Validation focus area 4: interface risks between IT and OT

Even when the AI is “read only,” interfaces create risk. A retrieval layer may ingest exports from OT systems, consume APIs from MES, or index files generated by SCADA or historian platforms. QA should not treat these as neutral plumbing. They affect data completeness, timeliness, and contextual accuracy.

Practical test scenarios should include:

  • What happens when a source document is revised but the index has not yet refreshed?
  • Can the AI distinguish between plant-specific and global procedures?
  • Are alarm response instructions linked to the correct equipment version and site context?
  • Does the assistant surface conflicting procedures if multiple repositories contain similar content?
  • What happens when an OT data feed is unavailable or incomplete?
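Two of these scenarios, stale indexes and conflicting procedures across repositories, lend themselves to simple automated checks that can run alongside manual test scripts. A sketch with illustrative document IDs:

```python
from collections import defaultdict
from datetime import datetime

def find_conflicts(chunks: list[dict]) -> dict[str, set[str]]:
    """Group retrieved chunks by procedure ID; more than one version in play
    means the assistant should surface the conflict, not silently pick one."""
    versions: defaultdict[str, set[str]] = defaultdict(set)
    for c in chunks:
        versions[c["doc_id"]].add(c["version"])
    return {doc: v for doc, v in versions.items() if len(v) > 1}

def is_stale(indexed_at: datetime, source_revised_at: datetime) -> bool:
    """True when the source was revised after the indexed copy was taken."""
    return source_revised_at > indexed_at

retrieved = [
    {"doc_id": "SOP-CLEAN-07", "version": "3.0", "repo": "global DMS"},
    {"doc_id": "SOP-CLEAN-07", "version": "3.1", "repo": "site SharePoint"},
]
conflicts = find_conflicts(retrieved)
stale = is_stale(datetime(2025, 3, 1, 8, 0), datetime(2025, 3, 2, 14, 30))
```

Checks like these turn "the index might be out of date" from a theoretical risk into a periodic verification with objective evidence.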

For CDMOs, process transfer adds another complexity. During onboarding of a new client process, temporary documents, draft validation records, and transitional procedures often coexist. QA should explicitly test whether the AI excludes non-effective documents from operational answers.

Validation focus area 5: AI governance under the EU AI Act

Although many internal pharma AI assistants may not fall into the highest-risk categories, QA teams in the EU should still build governance now with the EU AI Act in mind. Even where a system is not classified as high-risk, organizations will still need documented oversight, transparency, supplier management, and acceptable-use controls.

For converged IT/OT AI deployments, useful governance questions are:

  • Who owns the AI system from a quality and technical perspective?
  • How are model or retrieval changes assessed through change control?
  • What training is required for users in regulated functions?
  • How are performance issues, hallucinations, or unsafe suggestions captured and investigated?
  • What vendor assurances exist regarding hosting, confidentiality, and model behavior?

In DACH inspections and client audits, mature governance is often as important as technical architecture. Teams should be able to explain not only how the AI works, but how its use is constrained within the pharmaceutical quality system.

A practical QA position for CDMOs

QA does not need to block IT/OT convergence or AI adoption. But it should insist on one principle: if AI bridges enterprise and operational knowledge in a GxP context, it must be validated as a controlled compliance support system, not deployed as a generic productivity tool.

That means validating approved content, access control, answer traceability, interface behavior, change management, and human oversight. For CDMOs, it also means preserving client segregation, site specificity, and procedural discipline across a more complex digital estate.

When those controls are in place, AI can genuinely reduce search time, improve consistency, and help QA, validation, and operations teams work faster without weakening compliance posture.

See how ComplianceRAG handles IT/OT convergence for pharma and CDMO teams: See it in action →
