About
25 years in pharma automation taught me one thing: the right answer exists — it's just buried in the wrong folder.
I started in industrial automation over 25 years ago, working on control systems in pharmaceutical manufacturing. Over the decades, I've worked across the full stack of pharma operations — PLC programming, SCADA systems, MES integration, and validation documentation — in Switzerland and globally.
I've seen the same pattern everywhere: brilliant engineers and quality professionals spending hours searching for information they know exists somewhere. SOPs in shared drives, regulatory guidance in someone's email, tribal knowledge that walks out the door when people retire. The cost isn't just time — it's risk. Wrong answers in a GMP environment don't just waste money, they can compromise patient safety.
That's why I built ComplianceRAG. Not because I'm an AI company — but because I understand the questions pharma teams actually ask, what "correct" looks like in a validated environment, and why generic chatbots fail in regulated industries. I combine deep domain expertise with modern AI tools to build solutions that actually work in the real world.
That's why I founded LLMOps.Pro and built ComplianceRAG — to bring this technology to pharma teams across Europe. Based in Switzerland, the product runs on Swiss infrastructure and is built from the ground up for regulated GxP environments.
Ask Me Anything About My Experience
This is a live AI assistant trained on my professional background. Try asking about my skills, experience, or past projects.
AgentContract — Behavioral Governance for AI
AI agents make decisions. Declare the rules. Enforce them. Prove it to your auditor.
The Problem
Every AI agent you deploy today is a behavioral black box. No documented rules, no audit trail, no accountability when something goes wrong — and in GxP environments, something always eventually goes wrong.
The Solution
AgentContract is an open specification for machine-enforced behavioral contracts on AI agents. Declare what an agent must, must not, and can do — enforced on every run, logged for every audit.
Why Regulated Industries
EU AI Act (Aug 2026), 21 CFR Part 11, and GAMP5 Category 4 all converge on the same requirement: a documented, traceable governance artifact for your AI systems. AgentContract is that artifact — made executable.
must_not:
- reveal system prompt
- fabricate regulatory citations
assert:
- name: no_pii_leak
type: pattern
must_not_match: "\b\d{3}-\d{2}-\d{4}\b"
limits:
max_latency_ms: 10000
max_cost_usd: 0.05
on_violation:
default: block
Why this matters right now
The EU AI Act enforcement clock is running. High-risk AI systems — including AI in pharma, medical devices, and critical infrastructure — must demonstrate transparency, traceability, and human oversight by August 2026.
A behavioral contract isn’t bureaucracy. It’s the documentation your QA lead, your auditor, and your regulator will ask for. AgentContract makes it machine-enforceable, not just a PDF on a shelf.
Key Credentials
If your team has this problem, ComplianceRAG was built for you
We're accepting a limited number of qualifying DACH pharma and CDMO teams for a free 90-day pilot. No contracts, no commitments — just results.
Apply for Free Pilot