

Aug 28, 2025 · 7 minute read

Part IV – Safety Without Mass Scanning: A Verified-ID Path for Denmark and the EU


TL;DR: Skip “chat control” (mass client-side scanning). Protect minors by gating the surfaces where risk concentrates with verified ID: non-interactive ZK proofs verified locally, issuer-silent, unlinkable, end-to-end encryption intact – with court-ordered, event-specific unmasking when real harm is alleged.

Europe’s urgency to protect minors online is real – and correct. Denmark’s EU Council presidency has put youth safety at the top of the agenda: stronger age verification on platforms, tougher action against CSAM, even exploring a social-media ban for under-15s. The instinct is understandable. But one proposal keeps reappearing under different names – scanning everyone’s private messages on every device, all the time. That approach is often framed as “chat control.” It promises protection; in practice it builds a surveillance substrate we will never fully dismantle.

We need something better – something that makes abuse riskier and rarer without normalizing suspicion of every citizen. We already have it: verified ID with due-process accountability, delivered with non-interactive zero-knowledge proofs and context-scoped pseudonyms (CuK + Shield of Privacy). It is safety without mass scanning.

Note: CuK is currently under development in partnership between AesirX and Concordium.

What we’re being asked to trade and why we shouldn’t

Scanning all chats on all devices sounds simple. It isn’t. To work at all, client-side scanning has to pierce end-to-end encryption or pre-screen messages before they’re encrypted. It accumulates false positives that drown investigators and devastate families. It invites function creep: today CSAM, tomorrow “extremism,” the next day anything a government finds uncomfortable. It is the classic overbroad measure: collect everything now, promise restraint later.

And crucially, it doesn’t address the most common entry points of harm: the places where grooming begins, where minors are reachable at scale, where pseudonymous abusers can operate with no consequences. That is where a verified-ID model changes the maths. 

We keep end-to-end encryption intact. We don’t scan content. We gate discovery, first contact, amplification, and payouts with private, issuer-silent attribute proofs. Private messages stay private; accountability triggers only when a court targets a specific event.

The better foundation: proofs, not profiles – accountable by due process

From Part I, we shifted the question from “Who are you?” to “Is this specific condition true?” – 13+, in-country, one human (not a bot). From Part II, we made that work at web scale with non-interactive ZK: verification happens locally, issuers are not called back, and proofs are unlinkable across sites. From Part III, we turned privacy into operations: CuK gives the site a context-scoped pseudonym that’s useful only in that setting, while Shield of Privacy keeps UX fast and issuer-silent. When serious harm occurs, identity can be revealed only under a lawful, court-supervised process – no backdoors.

That architecture delivers what broad scanning never can: privacy by default, accountability when needed, and a credible deterrent for would-be abusers.
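
To make the shift from “Who are you?” to “Is this specific condition true?” concrete, here is a minimal sketch in TypeScript of the narrow question a site might pose and the only answer it gets back. The type and field names are illustrative assumptions, not the actual CuK or Shield of Privacy interfaces (which are still in development).

```typescript
// Hypothetical shape of a narrow attribute request: the site asks a
// YES/NO question, never "who are you?".
interface AttributeRequest {
  context: string;          // e.g. "forum.example/join" – scopes the proof to one surface
  predicates: {
    ageOver?: number;       // e.g. 13 or 18
    country?: string;       // e.g. "DK" – in-country check
    uniqueHuman?: boolean;  // one human per pseudonym, no bots
  };
  validUntil: string;       // ISO timestamp; proofs are short-lived
}

// What the site gets back: a verdict and nothing else.
interface AttributeVerdict {
  satisfied: boolean;       // YES/NO – the only fact the site learns
  verifiedAt: string;       // timestamp for the verdict log
}

// Example: a minors-only forum asks for "13+, in Denmark, one human".
const joinYouthForum: AttributeRequest = {
  context: "forum.example/join",
  predicates: { ageOver: 13, country: "DK", uniqueHuman: true },
  validUntil: new Date(Date.now() + 5 * 60_000).toISOString(),
};
```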

Policy baseline

  1. Local, issuer-silent verification with zero-knowledge proofs
  2. Pairwise, context-scoped CuK pseudonyms per site, action, time window
  3. Scheduled revocation via anonymous lists or accumulators, no per-user callbacks
  4. Optional hash-only, event-scoped anchors for due-process accountability
  5. We do not scan devices or weaken end-to-end encryption
  6. We do not store profiles, cross-site identifiers, or raw proofs
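
As an illustration of item 2 above, a pairwise, context-scoped pseudonym can be sketched as a one-way derivation from a wallet-held secret plus the context (site, action, time window). The function below is a hypothetical stand-in, not the actual CuK construction; it only illustrates the unlinkability property.

```typescript
import { createHmac } from "node:crypto";

// Hypothetical sketch: derive a pairwise, context-scoped pseudonym.
// The real CuK derivation (AesirX + Concordium) may differ; this shows
// only the property that the same wallet secret yields unlinkable
// identifiers across sites, actions, and time windows.
function deriveContextPseudonym(
  walletSecret: Buffer,   // held only in the user's wallet
  site: string,           // relying party, e.g. "forum.example"
  action: string,         // e.g. "first-contact", "create-listing"
  timeWindow: string      // e.g. "2025-W35" – rotates on schedule
): string {
  return createHmac("sha256", walletSecret)
    .update(`${site}|${action}|${timeWindow}`)
    .digest("hex");
}

// The same user looks different to different sites and in different weeks,
// so identifiers cannot be joined across contexts or over time.
const secret = Buffer.from("wallet-held secret, never leaves the device");
const idOnForum = deriveContextPseudonym(secret, "forum.example", "first-contact", "2025-W35");
const idOnShop  = deriveContextPseudonym(secret, "shop.example", "create-listing", "2025-W35");
console.log(idOnForum !== idOnShop); // true – pairwise and unlinkable
```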

How verified ID reduces abuse in the places that matter

Abuse usually doesn’t spring from a single private message; it starts where bad actors find and select minors. The verified-ID model hardens those surfaces:

  • Discovery surfaces become age-bounded. Youth spaces are truly for youth; adult-only areas are sealed. An adult cannot enter a minors-only forum under a fresh throwaway account; the gate demands an attribute the abuser cannot privately satisfy.
  • Contact surfaces gain a human-only switch. Mass sockpuppetry and automated outreach fall away when each pseudonym must map to one human.
  • Risky actions require verified status. New listings, mass DMs, and payout triggers are gated by proofs. It’s not just “prove 18+”; it’s proving eligibility before you can reach many minors or move money.
  • Accountability raises costs. Under due process, a specific event can be unmasked. Bad actors know their actions are private until they harm – and then traceable. That alone removes the lowest-effort predators and repeat offenders.

This is targeted friction: it protects minors where the harm originates while leaving ordinary, lawful private chats unscanned and encrypted.
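
One way to picture that targeted friction is as a gating table the platform maintains: each high-risk surface maps to the attribute proofs it demands before the action proceeds, and private chat content never appears in it. The surfaces and gate names below are illustrative, not a prescribed schema.

```typescript
// Illustrative gating policy: which proof each risk surface demands.
// Note what is absent – message content is never listed or scanned.
type Gate =
  | { kind: "ageUnder"; years: number }   // youth-only spaces
  | { kind: "ageOver"; years: number }    // adult-only spaces, payouts
  | { kind: "uniqueHuman" };              // one human per pseudonym

const gatesBySurface: Record<string, Gate[]> = {
  "youth-forum:join": [{ kind: "ageUnder", years: 18 }, { kind: "uniqueHuman" }],
  "dm:first-contact": [{ kind: "uniqueHuman" }],
  "listing:create":   [{ kind: "ageOver", years: 18 }, { kind: "uniqueHuman" }],
  "payout:initiate":  [{ kind: "ageOver", years: 18 }],
};
```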

What the lawful path actually looks like

The fear is that “accountability” becomes a euphemism for backdoors. In our model, it doesn’t. Day-to-day, people are pseudonymous and private. A site verifies a YES/NO answer and logs only a verdict + timestamp.

If there is credible evidence of abuse, authorities can seek a court order tied to a specific event, for example:

  • “The 13+ proof used to initiate contact at time T on domain D.” Using the hash-only fingerprint anchored on Concordium, the verifier supplies its presentation record for that event. 
  • The identity issuer confirms that a valid credential exists; designated privacy guardians complete an N-of-M disclosure under court supervision. 

There is no issuer usage log to query, no generalized switch to flip – only a narrow, auditable unmasking for that one incident.
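
Here is a sketch of how such a hash-only, event-scoped anchor could look (hypothetical field names; the actual presentation-record format and Concordium anchoring scheme are not specified here). The verifier keeps its own record of the event, and only a fingerprint of that record is anchored, so the chain carries no personal data and unmasking still requires the court-supervised N-of-M step.

```typescript
import { createHash } from "node:crypto";

// The verifier's own record of one proof presentation. It stays with the
// verifier and is never published; field names are illustrative.
interface PresentationRecord {
  contextPseudonym: string;  // the CuK used for this event
  predicate: string;         // e.g. "ageOver:13"
  domain: string;            // e.g. domain D where contact was initiated
  occurredAt: string;        // time T of the event
}

// Only this fingerprint is anchored (e.g. on Concordium): a hash plus a
// timestamp. Under a court order it ties the verifier's record to this one
// specific event – it reveals nothing on its own.
function makeEventAnchor(record: PresentationRecord): { digest: string; anchoredAt: string } {
  const digest = createHash("sha256")
    .update(JSON.stringify(record))
    .digest("hex");
  return { digest, anchoredAt: new Date().toISOString() };
}
```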

Compare that with chat control: everyone pre-screened, all the time, with inevitable drift in scope. One is constitutional guardrails; the other is perpetual pre-investigation.

Why this aligns with Denmark’s goals

Denmark’s presidency priorities – age verification across platforms, real action on CSAM, and curbing youth harm – are exactly what a verified-ID model advances:

  • Age verification, done right. Proofs of 13+ or 18+ without identity exposure or issuer telemetry.
  • CSAM prevention at the front door. Reduce the chance that adults reach minors in the first place; make high-reach actions provably human and age-appropriate.
  • Lawful enforcement, not indiscriminate scanning. When a police report exists, courts can unmask a specific event. That’s accountability with guardrails.
  • No blanket ban required. A verified-ID web can enforce minor-only and adult-only spaces, and can region-limit content – without hauling everyone’s private messages into government-mandated scanners.

This is a safety model that scales and respects rights.

What platforms actually implement (and what they stop collecting)

A platform operator turns this on at the surfaces with the highest harm-to-reach ratio: sign-ups, first DM, listing creation, payment initiation. The code asks a narrow question via AesirX CMP; the wallet returns a non-interactive proof; the site verifies locally and receives a CuK that is meaningful only here and now. No identity is stored. No issuer is called. If operators need an audit breadcrumb for due process, they anchor a hash-only fingerprint – never a dossier.
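
A minimal sketch of that flow follows, with hypothetical helper signatures standing in for the wallet prompt, local proof verification, and CuK derivation – they are assumptions, not the real AesirX CMP API. The shape of the flow is the point: ask a narrow question, verify locally, receive a context-scoped CuK, and log only a verdict and timestamp.

```typescript
// Hypothetical glue code for gating one high-risk action. The dependency
// interface below stands in for the wallet/CMP/verifier integration.
interface Proof { payload: string }
interface Verdict { satisfied: boolean }

interface VerifierDeps {
  requestProofFromWallet(question: {
    context: string;
    predicates: { ageOver?: number; uniqueHuman?: boolean };
  }): Promise<Proof>;
  verifyProofLocally(proof: Proof): Verdict;   // no callback to the issuer
  deriveCuK(proof: Proof, scope: { site: string; action: string }): string;
  appendVerdictLog(entry: { cuk: string; satisfied: boolean; at: string }): void;
}

// Gate a high-reach action (mass DM) behind "18+ and one human".
async function gateMassDM(deps: VerifierDeps): Promise<boolean> {
  // 1. Ask a narrow question; the wallet answers with a non-interactive proof.
  const proof = await deps.requestProofFromWallet({
    context: "dm:mass-send",
    predicates: { ageOver: 18, uniqueHuman: true },
  });

  // 2. Verify locally – the identity issuer is never contacted at proof time.
  const verdict = deps.verifyProofLocally(proof);
  if (!verdict.satisfied) return false;

  // 3. Receive a CuK meaningful only for this site, action, and time window.
  const cuk = deps.deriveCuK(proof, { site: "platform.example", action: "dm:mass-send" });

  // 4. Log a verdict + timestamp – no identity, no raw proof, no profile.
  deps.appendVerdictLog({ cuk, satisfied: true, at: new Date().toISOString() });
  return true;
}
```

The important property is what never appears in this code path: no name, no document upload, no issuer call, and no cross-site identifier.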

What disappears from the data exhaust: bulk identity uploads, third-party beacons in consent/verify, cross-site identifiers, and the temptation to build shadow profiles “just in case.” What appears in the abuse pipeline: fewer opportunities to reach minors at scale, and higher-quality leads tied to specific events when things go wrong.

A policy blueprint Denmark and the EU can adopt

A workable regulation can be written in plain language:

  • Prohibit generalized scanning of private communications.
  • Require attribute-based verification (age, eligibility, human-only) for high-risk online actions affecting minors.
  • Mandate issuer-silent verification (no callbacks at proof time) and unlinkability across relying parties.
  • Allow due-process unmasking of specific events with court orders and independent oversight (transparency reports, N-of-M governance).
  • Demand first-party execution of consent and verification – no third-party trackers in the flow.
  • Encourage privacy-preserving status checks (anonymous lists/accumulators) and hash-only audit anchors.

This gives law enforcement precision and platforms clarity, while citizens keep the privacy the EU Charter promises. 

Measure it: track drops in minor-targeted first-contact attempts, spam, chargebacks, and moderator hours; track improvements in law-enforcement lead quality tied to event-specific anchors.

Necessity and proportionality

This model asks only what a specific action requires, proves it locally, and exposes no identity by default. Processing is minimal: a verdict and timestamp, scheduled status checks, and an optional hash-only anchor for that event. End-to-end encryption remains intact. When serious harm is alleged, identity disclosure is limited to the named event under a court order, with published court-signer lists and N-of-M guardians. There is no bulk telemetry and no reuse across services. Effectiveness is measured by drops in first-contact attempts and abuse, with transparent reporting and routine key rotation.
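
To put that data-minimization point in concrete terms, the entire footprint a relying party retains per verification event could be as small as the record sketched below (illustrative field names only).

```typescript
// Everything the relying party keeps for one verification event – nothing else.
interface RetainedRecord {
  verdict: boolean;          // YES/NO answer to the narrow question
  verifiedAt: string;        // timestamp of the verdict
  statusCheckDue: string;    // next scheduled revocation-list / accumulator check
  eventAnchorHash?: string;  // optional hash-only anchor for due process
}

// Deliberately absent: name, date of birth, national ID, raw proof,
// cross-site identifier, device fingerprint, or any message content.
```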

The choice in front of us

We can build a Europe where minors are measurably safer online and private speech remains private. Or we can normalize a scanner on every device and hope it’s never misused. Verified ID with due-process accountability – proofs, not profiles; private by default, unmaskable by court order – meets Denmark’s goals without crossing the line.

If you run a platform: start where harm hits hardest – first contact and first amplification. Gate those actions with attribute proofs. Log verdicts, not identities. If you write policy: require issuer-silent verification and narrow unmasking – and make generalized scanning out of bounds. If you’re a parent: ask platforms one question – Do you verify attributes privately, or do you scan my child’s messages?

The way forward is clear: safety without mass surveillance. That’s how we protect children and keep the web human, even as automated traffic rises.

Ronni K. Gothard Christiansen
Technical Privacy Engineer & CEO, AesirX.io
