EU AI Act deadline: 2 August 2026

Your lawyer writes the policy. We make it real.

The EU AI Act demands operational evidence — running risk management systems, human oversight workflows, automated logging, and months of audit-ready records. A legal opinion alone won't pass conformity assessment. We build the governance systems your AI product actually needs to sell into EU markets.

Compliance Countdown
111 days until EU AI Act high-risk obligations take effect. Fines up to €35M or 7% of global turnover.
89% not fully ready
54% minimal or no governance

Paper compliance won't protect you

Regulators have learned from GDPR. The EU AI Act is designed to catch companies that have the documents but haven't done the work. Here's what enforcement actually targets.

01 · Regulators check systems, not documents

Under GDPR, "inadequate technical and organisational measures" is the third most common fine category — 18.6% of all violations. Companies had policies. They just hadn't implemented them. The AI Act is structured the same way.

02 · Conformity assessment demands operational evidence

Article 9 requires a continuous, iterative risk management system. Article 12 requires automated logging. Article 14 requires designed-in human oversight. These aren't documents — they're running systems that produce months of auditable records.

03 · Your enterprise clients will audit you first

Before any regulator arrives, your enterprise customers will demand compliance evidence in their vendor assessments. If you can't demonstrate operational governance, you lose the contract — and that revenue is gone.

We build governance that produces evidence, not just documents

Your legal counsel interprets the regulation. We design and implement the operational systems — risk management processes, human oversight interfaces, logging architectures, and team workflows — that generate the continuous evidence conformity assessment actually requires. Built with 20 years of user-centred design methodology, so your teams adopt it instead of working around it.

Map your AI systems and exposure

We map every AI system you build or deploy, trace decision flows, and classify your risk under Annex III. We identify which systems need conformity assessment and what evidence gaps exist. Not a questionnaire — a working session with your product and engineering teams.

Design the operational systems

We co-design human oversight interfaces (Article 14), risk management workflows (Article 9), and logging architectures (Article 12) with your engineering and product teams — so governance fits your development cycle, not the other way around.

Build and generate evidence

We implement the systems and start generating the operational evidence — risk registers, oversight records, incident logs, training completion, decision audit trails — that you'll need months of before any conformity assessment.
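
For teams wondering what an audit-ready record looks like in practice, here is a minimal sketch of an append-only decision log in Python. The file name and every field name are illustrative assumptions on our part: Article 12 mandates automated logging, but it does not prescribe a schema.

```python
import hashlib
import json
from datetime import datetime, timezone

LOG_PATH = "decision_audit.jsonl"  # illustrative append-only log file


def log_decision(system_id, model_version, input_payload, output,
                 human_reviewer=None, overridden=False):
    """Append one decision record as a JSON line.

    All field names are illustrative; Article 12 requires automated
    logging but does not prescribe a schema.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "model_version": model_version,
        # Hash rather than store the raw input, limiting personal data in logs
        "input_sha256": hashlib.sha256(
            json.dumps(input_payload, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
        "human_reviewer": human_reviewer,  # who exercised oversight, if anyone
        "overridden": overridden,          # was the AI output overruled?
    }
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record


# Example: one logged credit-scoring decision (hypothetical values)
rec = log_decision(
    system_id="credit-scoring-v2",
    model_version="2.3.1",
    input_payload={"applicant_id": "A-1042", "income": 38000},
    output={"score": 612, "decision": "refer"},
    human_reviewer="analyst-07",
)
```

Months of records like this, timestamped and kept immutable, are the kind of operational evidence a conformity assessor can actually inspect.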

Hand over and stand behind it

We train your teams to run governance independently, hand over the full evidence package, and remain available as retained advisers. When an enterprise client or regulator asks for proof — you have it.

Legal advice alone

  • Interprets the regulation
  • Drafts policies and risk assessments
  • Produces a compliance opinion
  • Leaves you to build the systems

Legal + implementation

  • Builds the risk management system
  • Designs human oversight interfaces
  • Implements logging and monitoring
  • Generates months of audit-ready evidence

Three ways to begin

We work alongside your legal counsel — they handle the regulatory interpretation, we handle the operational build. Every engagement starts with your specific AI systems and EU market exposure.

Good first step

“We need to know our exposure”

Conformity Gap Analysis · 2–3 weeks

You build AI products that serve EU markets, but you're not sure what conformity assessment requires in practice. We map your systems against Annex III, identify which need assessment, and show you exactly what operational evidence you're missing.

  • Inventory every AI system and classify risk under Annex III
  • Separate legal gaps from operational gaps — so you know what your lawyer handles vs. what needs building
  • Benchmark against AESIA and CNIL published conformity guidance
  • Deliver a prioritised implementation roadmap with evidence timelines
1. Initial conversation — We learn about your organisation, your AI systems, and your EU market exposure. No forms. A proper discussion.
2. System mapping — We work with your teams to document every AI system, trace decision flows, and understand who's affected by each one.
3. Gap analysis — A written assessment comparing your current state against EU AI Act obligations, with clear red/amber/green (RAG) rated findings.
4. Walkthrough — We present findings face-to-face and agree the priority actions together. You leave with a plan, not a PDF to interpret alone.
Delivered remotely · Priced on scope, not day rates
Let's Talk

“We need to pass conformity assessment”

Implementation Build · 8–12 weeks

You know your AI is high-risk and you need the operational systems to prove it. We build the risk management system, design human oversight workflows, implement logging, and start generating the audit-ready evidence your conformity assessment requires.

  • Risk management system design and implementation (Article 9)
  • Human oversight interface design — built into your product UX (Article 14)
  • Logging architecture and monitoring setup (Article 12)
  • Evidence generation — so you have months of operational records before assessment
1. Deep assessment — Full governance assessment plus technical review of your AI architecture and data pipelines.
2. Impact assessment — Fundamental Rights Impact Assessment for each high-risk system, as required by Article 27.
3. Co-design workshops — We sit with your teams to design human oversight mechanisms that fit how they actually work. Not templates imposed from outside.
4. Build & document — Technical documentation, risk management records, and conformity evidence — tested against your real workflows before sign-off.
5. Handover & training — Governance playbook and hands-on team training. You should be able to run this without us.
Delivered remotely with on-site workshops · Priced on scope
Let's Talk

“We need someone in our corner”

Retained Advisory · Ongoing

AI regulation is moving fast. You've done the initial work, but you need a trusted adviser who understands your systems, tracks the regulatory landscape, and is there when your board or clients have questions.

  • Monthly governance review — we know your systems, so advice is specific
  • Regulatory monitoring — enforcement actions, guidance changes, national rules
  • Board-ready reporting so leadership has a clear AI governance narrative
  • Direct access when something urgent comes up — client queries, incidents, procurement
Monthly review — We review your governance posture, discuss new AI deployments, and flag emerging regulatory changes.
Regulatory radar — Proactive alerts on EU AI Act enforcement, guidance updates, and how national implementations differ.
Board reporting — Quarterly governance status reports written for board and investor audiences.
On-call support — Direct line for urgent questions. We already know your context, so we can respond fast.
Minimum 3-month commitment · 30 days' notice to cancel
Let's Talk

Find out where your AI system sits

The EU AI Act classifies AI systems into four risk tiers: prohibited, high-risk, limited-risk, and minimal-risk. Your obligations — and the deadline pressure — depend entirely on which tier applies to you. This 60-second assessment gives you an indicative classification based on five questions about what your system does and where it operates.

Is your AI system high-risk?

Answer these questions to get an indicative risk classification under the EU AI Act.

Question 1 of 5

Does your AI system make or influence decisions about individual people?

For example: hiring decisions, credit scoring, medical diagnosis, student assessment, insurance pricing, or benefit eligibility.
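
The screening logic behind an assessment like this can be sketched in a few lines. This is an illustrative simplification for orientation only: the use-case labels are our own shorthand, not the full Annex III wording, and a real classification also depends on the Article 6(3) exemptions.

```python
# Illustrative EU AI Act risk-tier screening -- NOT legal advice.
# Use-case labels are simplified shorthand for Annex III areas and
# Article 5 prohibited practices.
ANNEX_III_AREAS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}

PROHIBITED_PRACTICES = {"social_scoring", "untargeted_face_scraping"}


def indicative_tier(use_case: str, affects_individuals: bool,
                    serves_eu_market: bool) -> str:
    """Return an indicative risk tier for a described AI system."""
    if not serves_eu_market:
        return "out_of_scope"          # no EU nexus (simplified)
    if use_case in PROHIBITED_PRACTICES:
        return "prohibited"            # banned outright since February 2025
    if use_case in ANNEX_III_AREAS and affects_individuals:
        return "high_risk"             # conformity assessment required
    return "limited_or_minimal_risk"   # transparency duties at most


# Example: an AI hiring tool serving EU customers
tier = indicative_tier("employment", affects_individuals=True,
                       serves_eu_market=True)
```

The real assessment weighs more than three inputs, but the shape is the same: EU nexus first, then prohibited practices, then Annex III use cases.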

The implementation partner your lawyer needs

Law firms interpret the regulation. We build the operational systems that make it real. Our network brings together legal expertise, technical implementation, and 20 years of user-centred design — so governance works for the people using it, not just the auditors checking it.

  • Government-grade methodology

    Currently delivering on UK MOD AI programmes. We bring the same rigour to commercial governance — tested in environments where getting it wrong has real consequences.

  • Article 14 is a design problem

    Human oversight isn't a legal clause — it's a UX challenge. 20+ years designing human-in-the-loop systems for MOD, Royal Navy, GDS, and Google. We design oversight interfaces your teams actually use.

  • Legal + technical network

    We work alongside specialist AI regulation lawyers, data scientists, and information security professionals. You get one team, assembled for your specific engagement — without the overhead of a Big Four firm.

  • Built for AI companies, not enterprises

    We work at your pace, inside your development cycle. No 18-month transformation programmes. No 200-page strategy decks. Operational governance that ships alongside your product.

UK AI companies selling into EU regulated sectors

If your AI product makes decisions about people — in healthcare, finance, recruitment, or critical infrastructure — it's likely classified high-risk under Annex III. These are the sectors where conformity assessment is mandatory.

Healthcare AI

Clinical decision support, diagnostic AI, patient triage. CE marking under EU MDR plus AI Act Annex I dual compliance. We've worked in clinical environments where oversight design is life-critical.


Insurance & FinTech AI

Underwriting automation, credit scoring, fraud detection. Annex III Category 5 — access to essential services. Enterprise clients like Allianz and HDI already demand operational compliance evidence from vendors.

Recruitment & HR AI

AI-powered hiring, skills assessment, workforce analytics. Annex III Category 4 — employment decisions. One of the most scrutinised categories, with explicit human oversight requirements.

Defence & Critical Infrastructure

Threat detection, security AI, infrastructure monitoring. Annex III Category 2 — critical infrastructure. We bring direct MOD delivery experience to commercial defence AI governance.

Common questions about AI governance

Straight answers, no jargon.

Does the EU AI Act apply if we're not based in the EU?

Yes, if your AI system's output is used within the EU — by customers, employees, or users — you're in scope regardless of where your company is based. This is similar to how GDPR applies extraterritorially.

Which AI systems count as high-risk?

The Act's Annex III defines specific use cases: recruitment and HR decisions, credit scoring, education assessment, healthcare diagnosis, critical infrastructure management, and others. If your AI makes or influences significant decisions about people, it's likely high-risk.

When do the deadlines take effect?

High-risk AI system obligations take effect on 2 August 2026. Prohibited AI practices (social scoring, certain biometric uses) are already banned as of February 2025. General-purpose AI model rules apply from August 2025.

How is the AI Act different from GDPR?

GDPR governs personal data. The EU AI Act governs AI systems specifically — including their design, testing, documentation, and deployment. They overlap (AI often processes personal data) but the AI Act adds requirements around risk assessment, human oversight, and transparency that go beyond data protection.

Why isn't our lawyer enough?

Your lawyer handles the legal interpretation — risk classification, regulatory opinion, policy drafting. But the EU AI Act also requires operational systems: a running risk management process, human oversight built into your product UX, automated logging, and months of evidence that these systems work. That's implementation work, not legal work. We handle the part your lawyer can't.

What are the penalties for non-compliance?

Fines up to €35 million or 7% of global annual turnover, whichever is higher. Beyond fines, non-compliance can mean your AI system is banned from the EU market entirely — which means losing access to 450 million potential users.

Conformity evidence takes months to accumulate.
Your enterprise clients won't wait.

Whether your deadline is August 2026 or August 2027, the operational evidence your conformity assessment requires can't be generated overnight. Enterprise procurement teams are already asking for it. Book a free 30-minute scoping call to identify your evidence gaps.