Chapter 5 · The EU AI Act, for PKI Engineers
The thesis of this chapter, in one paragraph: a substantial portion of the EU AI Act’s high-risk obligations is already answered by the eIDAS / ETSI trust services regime. Treating the Act as a wholly new compliance taxonomy, divorced from existing trust-service infrastructure, is both more expensive and less defensible than treating it as a regulatory mapping exercise.
This chapter is the practitioner-facing companion to the
preprint
Cryptographic Provenance for AI Outputs: a PKI-Native Reading of EU AI Act Articles 10/12/13 through eIDAS and ETSI EN 319.
The preprint carries the formal apparatus, the threat model,
and the CDDL schema for .aep. This chapter carries the
operational reading: who builds what, when, and how.
5.1 What the AI Act actually asks for
The EU AI Act (Regulation 2024/1689) entered into force in 2024 and applies in stages between 2025 and 2027. Its high-risk regime — Chapter III, Section 2, Articles 9–15 — is the engineering surface most likely to land on a PKI team’s desk.
A system enters the high-risk regime through either of two pathways:
- Article 6(1) — the safety-component pathway. A system is a safety component of a product covered by EU sectoral legislation (machinery, medical devices, in-vitro diagnostics, toys, etc.) and the underlying product requires third-party conformity assessment.
- Article 6(2) + Annex III — the use-case pathway. A system is used in one of the eight Annex III categories: biometrics, critical infrastructure, education, employment, essential services and benefits, law enforcement, migration and border, justice and democratic processes.
If neither pathway applies, the system is not high-risk under the Act, regardless of how AI-shaped it looks. A copilot that suggests code is not high-risk; a copilot that approves a mortgage is.
The Act distinguishes two roles:
- Provider — the entity placing the AI system on the market. Article 9 (risk management), Article 11 (technical documentation), Article 12 (logging), Article 13 (transparency to deployers), Article 14 (human oversight), and Article 15 (accuracy, robustness, cybersecurity) all apply primarily to providers.
- Deployer — the entity using the AI system in operation. Articles 26 (deployer obligations) and 27 (Fundamental Rights Impact Assessment, FRIA, where required) apply to deployers.
A single organisation can be both — an in-house AI team “places its system on the market” to its own production deployment. Most EATF / Aletheia partner integrations are internal-deployment shapes of this kind.
5.2 The mapping, in a single table
Three articles carry the strongest PKI fit: 10, 12, 13. Article 14 is adjacent — necessary but not sufficient. Articles 11, 26, 27 fall out as corollaries. The full mapping:
| AI Act Article | PKI-native interpretation | eIDAS / ETSI lever | EATF realisation |
|---|---|---|---|
| 10 Data governance | Integrity & provenance of artefacts only — not a cryptographic claim about training-data quality | EN 319 102-1 signature creation; canonical payload formats | .aep Evidence Package: payload hash, signature, optional dataset hash |
| 11 Technical documentation | Exportable, verifiable system descriptions | EN 319 102-1; CAdES enveloped signatures | Signed system-card bundles, model + policy version metadata |
| 12 Record-keeping | Tamper-evident, time-anchored, append-only logs | EN 319 401 / 411-1 (TSP policy); RFC 3161 (timestamps) | Hash-chained audit ledger, signed events, per-tenant block-size |
| 13 Transparency to deployers | Verifiable representations of agent capability + limits | EN 319 102-1 validation; trusted-list consumption | Chain-of-trust validation + verification UI/API |
| 14 Human oversight (adjacent) | Cryptographic anchor for human approval events | EN 319 102-1 signed events; threshold delegation patterns | Approval-event signatures; PKI makes outcomes reviewable and attributable, not safe |
| 15 Accuracy, robustness, cybersecurity | Verification endpoints, anomaly hooks (governance boundary) | (adjacent — runtime concerns) | Out of scope (governance layer) |
| 26 Deployer obligations | Signed deployer attestations referencing system-card bundles | EN 319 401 (TSP policy); FRIA referencing | Deployer-signed deployment manifest |
| 27 FRIA | Signed FRIA artefacts referencing the deployer manifest | EN 319 102-1; CAdES | FRIA evidence package linked to deployer manifest |
The rest of this chapter walks the four load-bearing rows in plain English, with a sector vignette per row to make the reading concrete. Articles 11, 26, 27 are treated together under §5.7 as corollaries.
5.3 Article 10 — the narrow reading
Article 10 of the AI Act asks for “data and data governance” practices: training, validation, and testing datasets must be relevant, representative, free of errors, and complete. It asks for documented data-management practices, examined for biases, with appropriate measures applied where biases are detected.
PKI cannot make any of these claims. PKI does not know what “representative” means for your population. PKI does not detect bias. What PKI can do — and the only thing the atlas claims PKI does — is bind a deployer’s assertion about a dataset to a verifiable hash of that dataset, and bind the model card that references the dataset to the same hash.
This is the narrow reading we adopt: EATF can prove that the data hash claimed in a model card is the data hash of the artefact bound to that model card. EATF cannot prove that the underlying data is fit for purpose, representative, or unbiased. Conflating the two is the single most common source of misplaced trust in the 2025–2026 “AI compliance” space.
The narrow reading is conservative, which is the point. A regulator who reads Article 10 expansively is not entitled to assume PKI has done their job for them; they must still examine the data-governance content of the model card, just as a regulator under eIDAS still examines the policy pack of a QTSP even when its certificates are cryptographically valid.
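The binding the narrow reading relies on can be sketched in a few lines. This is a simplified illustration, with plain JSON standing in for the substrate's CBOR-canonical encoding and with illustrative field names; it is not the published .aep schema:

```python
import hashlib
import json

def sha256_hex(data: bytes) -> str:
    """Hex SHA-256 digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical training-data snapshot and model card.
dataset = b"training-data snapshot bytes"
model_card = {
    "model_id": "classifier-v3",            # illustrative identifier
    "policy_version": "2026-04",
    "dataset_sha256": sha256_hex(dataset),  # the binding: card -> exact dataset
    "data_governance": "prose claims, reviewed by the regulator, not by PKI",
}

# Deterministic serialisation (sorted keys, no whitespace) so any verifier
# reproduces the same card hash; the deployer's signature covers this digest.
card_bytes = json.dumps(model_card, sort_keys=True, separators=(",", ":")).encode()
card_hash = sha256_hex(card_bytes)

# A verifier re-hashes the dataset it was handed and compares:
assert sha256_hex(dataset) == model_card["dataset_sha256"]
```

Note what the assertion checks: that the dataset in hand is the dataset the card names. Nothing in the code evaluates whether that dataset is representative or unbiased, which is exactly the boundary of the narrow reading.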
Sector vignette — water-quality-ee. The water-quality-ee partner integration trains a binary classifier on Estonian Terviseamet open data. The model card binds a SHA-256 of the training-data snapshot, the model identifier, and the policy version into a CAdES-signed bundle. EATF signs the bundle. A regulator who fetches a daily P(violation) bulletin can chase the bulletin’s mch (model-card hash) field back to the model card, verify signature integrity, and read the data-governance prose claims directly. The cryptography proves the bulletin references the exact model card the deployer published; it does not prove the data-governance prose is correct. That is the regulator’s job, and the substrate makes that job tractable.
5.4 Article 12 — record-keeping as a hash-chained ledger
Article 12 requires automatic recording of events to allow traceability of the AI system’s operation. The PKI-native realisation is a hash-chained, append-only audit ledger of signed events, each carrying an RFC 3161 timestamp and a reference to the policy and model versions in effect.
Concretely, the EATF audit ledger has:
- Per-tenant block structure. Each block contains a tunable number of events plus a header (prev_hash, block_id, ts, tsa_token). Per-tenant block-size tuning, introduced in the late-April 2026 substrate update, lets deployers trade signing throughput for batch-signing efficiency without breaking the chain property.
- Signed events. Each event is itself a .aep Evidence Package — input hash, output hash, model identifier, policy version, deployer URI, agent URI, all CBOR-canonical and signature-bound.
- Block-level signatures. The deployer signs the block header, which includes the previous block’s hash; this is the chain. Tampering with any historical event invalidates every subsequent block’s prev_hash.
- Timestamps anchored by RFC 3161. Each block header carries a TSA token. We default to a non-qualified TSA; deployers under regulatory obligation (Annex III high-risk) upgrade to a qualified TSA per chapter 2.
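The chain property is compact enough to sketch directly. This is a minimal illustration using JSON in place of CBOR, omitting the per-event signatures and the RFC 3161 tsa_token; the header field names follow the list above:

```python
import hashlib
import json

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def seal_block(prev_hash: str, block_id: int, ts: str, events: list) -> dict:
    """Build a block header whose hash commits to the previous header
    and to every event in the block (via a list of event hashes)."""
    header = {
        "prev_hash": prev_hash,
        "block_id": block_id,
        "ts": ts,
        "event_hashes": [h(json.dumps(e, sort_keys=True).encode()) for e in events],
    }
    header["hash"] = h(json.dumps(header, sort_keys=True).encode())
    return header

def verify_chain(blocks: list) -> bool:
    """Recompute each header hash and check the prev_hash linkage."""
    prev = "0" * 64  # genesis anchor
    for b in blocks:
        body = {k: v for k, v in b.items() if k != "hash"}
        if b["prev_hash"] != prev or b["hash"] != h(json.dumps(body, sort_keys=True).encode()):
            return False
        prev = b["hash"]
    return True

chain, prev = [], "0" * 64
for i, batch in enumerate([[{"event": "infer", "n": 1}], [{"event": "infer", "n": 2}]]):
    block = seal_block(prev, i, f"2026-04-15T0{i}:00:00Z", batch)
    chain.append(block)
    prev = block["hash"]

assert verify_chain(chain)
chain[0]["event_hashes"][0] = "f" * 64   # tamper with history...
assert not verify_chain(chain)           # ...and the chain property catches it
```

The tamper check at the end is the operational content of the bullet list: rewriting any historical event changes the recomputed header hash, which breaks every subsequent prev_hash link.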
A regulator’s question — show me everything this AI system did on 2026-04-15 — is answered by a single ledger query, scoped to the deployer, returning a paginated list of signed events. Each event is itself verifiable offline. The ledger is queryable by deployer, by agent, by action type, or by time window — and every query result is itself signable.
Sector vignette — MATx (education). The MATx partner integration is a Bayesian Knowledge Tracing engine that generates per-student feedback on micro-skill mastery. Every piece of feedback emitted to a parent is a .aep Evidence Package: input (the student’s worksheet hash), output (the feedback text hash), model id (BKT engine version), policy version (the curriculum version in force), agent (the teacher’s deployer URI). When a parent or inspector wants to understand “what did the system tell my child this term?”, the answer is a ledger query scoped to that student’s identifier, returning a chronological list of signed feedback events. None of this requires a privileged backend view — the parent verifies offline against the deployer’s public key.
5.5 Article 13 — transparency through verification, not narrative
Article 13 obliges providers to give deployers enough information to interpret system output appropriately — capabilities, limitations, performance metrics, sources of input data, intended purpose, levels of accuracy.
Common practice in 2025 is to write a “system card” as prose: a PDF document describing the AI system. Prose is necessary but insufficient. A deployer also needs a machine-readable representation that they can verify against the actual deployed artefacts.
EATF binds the system card, the model identifier, and the
policy version into the Evidence Package metadata, and exposes
verification at h2oatlas.ee/verify/.... A deployer (or
downstream auditor) can therefore answer
“is the system I am running today the system the provider
documented?” without trusting either party. Chain-of-trust
validation under EN 319 102-1 provides the formal grounding.
The user experience for a verifying deployer is roughly:
- Receive an evidence package (downloaded, embedded in a document, or referenced by URL).
- Run
eatf verify <package.aep>(or paste the package into the web verifier ath2oatlas.ee/verify/). - Get back a structured response: signature valid? certificate path validates? OCSP fresh at sign time? TSA token validates? Model id matches the published system card hash?
- Decide whether to act on the AI output.
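The structured response in the third step might look like the following data type. The field names here are illustrative, not the published verifier schema:

```python
from dataclasses import dataclass, field

@dataclass
class VerificationReport:
    """Sketch of a structured verifier result for one .aep package.
    Field names are illustrative, not the published EATF schema."""
    signature_valid: bool
    cert_path_valid: bool          # EN 319 102-1 chain validation
    ocsp_fresh_at_signing: bool
    tsa_token_valid: bool          # RFC 3161
    model_id_matches_card: bool
    failures: list = field(default_factory=list)

    @property
    def ok(self) -> bool:
        # Every check must pass before the deployer acts on the AI output.
        return all([self.signature_valid, self.cert_path_valid,
                    self.ocsp_fresh_at_signing, self.tsa_token_valid,
                    self.model_id_matches_card])

report = VerificationReport(True, True, True, True, False,
                            failures=["model_id does not match system-card hash"])
assert not report.ok
```

The design point is that the verdict is a conjunction: a package with a valid signature but a stale model-id binding still fails, which is what makes the verifier, rather than the provider, the source of truth.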
The novel claim for Article 13 is that the verifier — not the provider — is the source of truth about whether the system matches its documentation. This shifts the trust assumption from “providers tell the truth” to “providers are caught when they don’t”, which is operationally sounder.
Sector vignette — TADF building audit. The TADF auditor tool produces signed DOCX evidence packages for Estonian building inspectors. An inspector’s report is a CAdES-signed DOCX with embedded photos, each photo carrying a signed caption from the AI photo-captioning step. The accompanying .aep package binds: report hash, model id (the captioning model version), policy version (the legal-references ranking version in force), deployer URI (TADF Ehitus OÜ), auditor URI (the human auditor). A regulator reviewing the report verifies offline; an aggrieved building owner has a public verifier URL to check; an appeal court can reproduce the verification on its own infrastructure. None of this requires TADF to operate court-facing infrastructure.
5.6 Article 14 — adjacent, not satisfied
Article 14 mandates effective human oversight: humans must be able to fully understand, monitor, intervene in, and override the AI system. We treat Article 14 as adjacent to the PKI mapping rather than satisfied by it.
A cryptographic anchor on approval events is necessary but not sufficient for oversight to be meaningful. EATF gives us a verifiable record that a human approved at time T under policy P; it does not make that approval informed, deliberate, or non-rubberstamped.
The atlas’s claim about Article 14 is therefore narrow: PKI makes outcomes reviewable and attributable. It does not make them overseen in the policy sense Article 14 contemplates. HCI work on the design of meaningful approval flows, and policy work on what constitutes “meaningful” oversight, sit one layer up. They are governance, not provenance.
This narrow framing matters because — in our reading of contemporary procurement — many deployers buy “AI Act compliance” tools that conflate signed approval events with satisfied Article 14 oversight. They are not the same. EATF supplies primitives that any genuinely Article-14-compliant flow can reuse, but supplying primitives is not the same as discharging the obligation.
Sector vignette — medical advisory (planned). The medical AI vertical, planned for late-2026 onboarding once a domain co-founder is in place, will be the most demanding Article 14 deployment. Every clinical advisory will require a signed practitioner approval before it is released to a patient. EATF signs the approval event — practitioner ID, claim text, claim hash, policy version, claimed-in time — but the quality of the approval is the practitioner’s obligation, governed by the deployer’s clinical policy. Article 14 reads through the substrate as “the substrate tells you who clicked the button at what time”; it does not tell you whether they meant it.
5.7 Articles 11, 26, 27 — corollaries
The remaining articles in the mapping table fall out of §5.3–§5.5 as corollaries. We treat them briefly.
Article 11 — Technical documentation. Article 11 + Annex
IV require the provider to maintain technical documentation
that allows authorities to assess conformity. The PKI-native
realisation is straightforward: the documentation is a
structured artefact (system card + model card + policy pack +
risk-management notes), it is canonicalised (CBOR or JCS),
it is signed under a CAdES profile, and the signed bundle’s
hash is referenced from every Evidence Package emitted by
that system. A regulator who fetches one Evidence Package can
chase its mch field to the exact technical documentation in
force at signing time.
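The chase itself is mechanically trivial, which is the point. A sketch, where the hash function and the mch field follow this chapter and everything else is illustrative:

```python
import hashlib

def chase_mch(evidence_mch: str, fetched_bundle: bytes) -> bool:
    """Regulator-side check: does the technical-documentation bundle we
    fetched hash to the mch recorded in the evidence package?"""
    return hashlib.sha256(fetched_bundle).hexdigest() == evidence_mch

# Hypothetical signed documentation bundle and one emitted evidence package.
bundle = b"signed system-card + model-card + policy-pack bytes"
evidence = {"mch": hashlib.sha256(bundle).hexdigest()}  # recorded at signing time

assert chase_mch(evidence["mch"], bundle)
assert not chase_mch(evidence["mch"], bundle + b"tampered")
```

One package is enough: any single Evidence Package pins the exact documentation in force when it was signed, so the regulator never has to trust a "current" documentation page.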
Article 26 — Deployer obligations. Article 26(2) (use as instructed by the provider) and 26(5) (logging) have direct PKI fits. The first maps onto a deployer-signed deployment manifest — a CAdES-signed CBOR object asserting which provider system, version, model, and policy the deployer has activated. The second is the same Article 12 ledger from §5.4, scoped to the deployer rather than the provider. The substrate-vs-vertical framing of chapter 7 is partly an Article 26 artefact: each deployer (“vertical”) signs their own manifest under their own credentials, riding on the provider’s substrate.
Article 27 — Fundamental Rights Impact Assessment. Article 27 requires deployers in specific Annex III contexts (public services, banking, insurance, employment) to conduct a FRIA before deployment. The FRIA is an artefact: a structured document asserting that the deployer has considered fundamental-rights impact under specific facts. The PKI realisation signs the FRIA under the deployer’s credentials, references the Article 26 deployment manifest by hash, and publishes the resulting bundle to a verifier. We do not claim that signing a FRIA makes the assessment correct; we claim that signing it makes the assessment audit-traceable in the operational sense Article 27 implies.
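The manifest-and-FRIA linkage can be made concrete with a runnable sketch. JSON stands in for canonical CBOR and an HMAC stands in for the deployer's CAdES signature; both substitutions, and all field names, are for illustration only:

```python
import hashlib
import hmac
import json

def canonical(obj) -> bytes:
    """Deterministic serialisation so both parties hash identical bytes.
    (Production uses canonical CBOR / JCS, not this simplification.)"""
    return json.dumps(obj, sort_keys=True, separators=(",", ":")).encode()

# Hypothetical Article 26 deployment manifest: which provider system,
# version, model, and policy this deployer has activated.
manifest = {
    "provider_system": "eatf-substrate",
    "system_version": "1.4.0",
    "model_id": "classifier-v3",
    "policy_version": "2026-04",
    "deployer_uri": "https://example-deployer.invalid",
}
manifest_hash = hashlib.sha256(canonical(manifest)).hexdigest()

# Stand-in for the deployer's CAdES signature (an HMAC, purely to keep the
# sketch runnable; real deployments sign under the deployer's certificate).
deployer_key = b"deployer-demo-key"
signature = hmac.new(deployer_key, canonical(manifest), hashlib.sha256).hexdigest()

# The Article 27 FRIA bundle references the manifest by hash, so the
# assessment is pinned to one specific activated configuration:
fria_bundle = {
    "fria_doc_sha256": hashlib.sha256(b"FRIA document bytes").hexdigest(),
    "deployment_manifest_sha256": manifest_hash,
}
assert fria_bundle["deployment_manifest_sha256"] == manifest_hash
```

The hash reference is the load-bearing move: change any manifest field (say, activate a new model version) and the FRIA bundle visibly points at a stale configuration, forcing a re-assessment decision.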
The pattern across the three corollary mappings is the same: each article asks for an artefact, and PKI provides a verifiable container for that artefact bound by hash to the runtime evidence that produced or relied on it.
5.8 What PKI does not give you
It is important to be specific about what the substrate does not discharge under the Act, because the temptation to overclaim is large.
- Article 9 (risk management). A process obligation. PKI does not run your risk-management system; it can sign the artefacts the system produces, but the system itself is out of scope.
- Article 10 (data quality). Already discussed in §5.3. PKI binds the claim about data; the claim’s content is the deployer’s problem.
- Article 14 (human oversight). Already discussed in §5.6. PKI anchors approval events; the policy that makes those approvals meaningful is governance.
- Article 15 (accuracy, robustness, cybersecurity). Largely runtime concerns. Anomaly detection, rate-limiting, intrusion response — these live in the governance layer one above provenance. PKI can sign incident reports and remediation events, but it does not perform the runtime work.
- Anything outside high-risk. Most of the Act — GPAI obligations under Articles 53–55, transparency obligations for non-high-risk under Article 50 — has its own engineering surface. Some bits map onto PKI (signed disclosures, signed watermarks); some bits don’t. The atlas concentrates on the high-risk regime because that is where the PKI fit is strongest and the practitioner audience is largest.
5.9 What this means in practice
For a PKI engineer arriving at an AI Act conformance project, the operational reading of this chapter is brisk:
- Deploy a .aep-emitting signing pipeline with hybrid PQC (chapter 1, chapter 3).
- Operate it as a trust service under EN 319 401 / 411-1 discipline (chapter 2 + chapter 6).
- Bind your system card and policy pack into Evidence Packages (Articles 11, 13).
- Stand up a hash-chained audit ledger (Article 12).
- Stand up a deployer manifest pipeline (Article 26).
- Where Annex III applies, integrate FRIA artefacts into the manifest bundle (Article 27).
- Stop short of claiming you’ve discharged Articles 9, 10’s data-quality dimension, 14, or 15. Those are governance, data-management, HCI, and runtime concerns — PKI’s neighbours, not PKI itself.
Most deployments fail not because they cannot do step 1 — that is the easy part — but because they collapse step 7 into “because we sign things, we comply”. They do not. The substrate is necessary; it is not sufficient. Honest substrate engineering ends with the boundary, not with the claim.
5.10 Diagrams
5.10.1 Articles 9–15 quick map
flowchart LR
classDef incl fill:#dcfce7,stroke:#15803d,color:#14532d
classDef adj fill:#fef3c7,stroke:#92400e,color:#78350f
classDef out fill:#e5e7eb,stroke:#6b7280,color:#374151
A9[Art. 9 <br> Risk management]:::out
A10[Art. 10 <br> Data & governance <br> narrow reading]:::incl
A11[Art. 11 <br> Technical documentation]:::incl
A12[Art. 12 <br> Record-keeping]:::incl
A13[Art. 13 <br> Transparency to deployer]:::incl
A14[Art. 14 <br> Human oversight <br> adjacent]:::adj
A15[Art. 15 <br> Accuracy / robustness / cyber]:::out
A10 -->|integrity & provenance| EATF[EATF <br> .aep Evidence Package]
A11 -->|signed system-card bundle| EATF
A12 -->|hash-chained audit ledger| EATF
A13 -->|verification UI / API| EATF
A14 -.signed approval events.-> EATF
A26[Art. 26 <br> Deployer obligations]:::incl
A27[Art. 27 <br> FRIA]:::incl
A26 -->|deployer-signed manifest| EATF
A27 -->|signed FRIA bundle| EATF
Solid green = direct PKI fit; amber dashed = adjacent (necessary but not sufficient); grey = governance / process / runtime layer out of scope for the substrate.
5.10.2 Article 10 — narrow reading boundary
flowchart LR
subgraph PKI[PKI binds the claim]
H1[Dataset SHA-256] --> M1[Model card]
M1 --> S1[Signed bundle]
S1 --> E1[.aep Evidence Package]
end
subgraph DG[Data governance — out of scope]
Q1[Is the data representative?]
Q2[Is the data leakage-free?]
Q3[Does it cover the population?]
Q4[Does it have bias?]
end
PKI -.does NOT decide.-> DG
This is the load-bearing visual statement of §5.3: PKI binds the claim about the dataset; PKI does not evaluate the content of the claim.
5.10.3 Verification UI flow (Article 13)
sequenceDiagram
autonumber
actor V as Deployer / Auditor
participant W as Verifier UI / API
participant L as Aletheia .aep package
participant TSA as RFC 3161 TSA
participant CA as CA / OCSP responder
V->>W: Submit `.aep` (URL or upload)
W->>L: Parse CBOR envelope
W->>L: Verify signature(s) under cert
W->>CA: Verify cert path under EN 319 102-1
W->>CA: Validate captured OCSP state
W->>TSA: Validate RFC 3161 token
W->>L: Check model_id ↔ system-card hash binding
W-->>V: Pass / Fail with structured reason
A deployer’s question — “is the system I am running today the system the provider documented?” — is answered offline against captured state, not by trusting the provider’s word.
5.11 Outgoing links
- → Preprint · Formal version with threat model and CDDL schema.
- → Chapter 6 · Operational playbooks for the signing pipeline, audit ledger, and TSA selection introduced here.
- → Chapter 7 · Sector verticals show the mapping in production across five domains.
- → Chapter 8 · Open problems include Article 10’s data-governance dimension and Article 14’s oversight binding, both of which sit outside the PKI lens.