TL;DR
Age verification is becoming mandatory across the internet - but if we implement it wrong, we’ll turn “safety” into a new surveillance layer. “Attribute-only” isn’t enough: the real risks are linkability and phone-home backchannels (network metadata and backend requests that can disclose where and when you verified) that let BigTech or BigGov track you over time. The standard we must demand is arms-length verification: the issuer must not learn where you verify, and the relying party must not learn who you are - only eligibility - with privacy by default and accountability only under due process.
Age verification is rapidly becoming the new “front door” to the internet. Alcohol delivery. Adult content. Gaming. Social media. Dating. Communities. Even ordinary apps that suddenly get reclassified as “high risk”.
And yes - we have to protect children. We have to reduce grooming, fraud, harassment, scams, and coercion.
But we also have to name the danger plainly:
We are on the verge of solving a legitimate safety problem with a surveillance architecture.
That would be a civilizational mistake. Because once age verification becomes routine, it becomes infrastructure. And once it’s infrastructure, it becomes a tempting lever: not just to verify eligibility, but to monitor behavior, map relationships, and normalize control.
The uncomfortable truth: “attribute-only” is not the same as privacy
A lot of people are comforted by a simple promise:
“Don’t worry - we only reveal the attribute. Just ‘18+’. Just a yes/no.”
That promise is about proof privacy - what the verifier learns from the cryptographic proof content.
Proof privacy matters. But it is not the whole story.
Because the real privacy failure of modern systems rarely happens in the payload. It happens in the rails around it - the correlation handles, the backchannels, the quiet telemetry, the logs that outlive the user’s life choices.
This is why I keep repeating a line that makes some people uncomfortable:

If a wallet calls home. If an issuer performs status checks. If an SDK reports metadata. If a stable identifier is reused. If a verifier keeps central logs. If OS services become the correlation layer - then “attribute-only” becomes a half-truth.
Not malicious. Just incomplete. And in privacy engineering, incomplete is how surveillance ships.
Linkability + phone-home: the two silent killers
To understand whether an age verification system is truly privacy-preserving, you need to look beyond “what data is disclosed” and ask two questions:
Linkability: can the same person be correlated across contexts and sites over time?
Phone-home: can a third party learn where and when verification happens via backchannels, wallet services, issuer calls, telemetry, or network metadata?
These are not theoretical edge cases. They are the default failure modes of the modern web.
Even when the proof only says “18+”, the system can still betray you. Not by exposing your credential, but by exposing your context: where you verified, when you verified, how often you verify, and whether you’re the same person across different services. That leakage can happen through issuer status checks and revocation calls, wallet backend dependencies, OS-level attestation rails, quiet SDK telemetry, stable identifiers that were “never meant for tracking”, and verifier logs that outlive the moment. And once those patterns exist, you don’t need the credential contents anymore. You can profile someone from metadata alone.
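To make the metadata risk concrete, here is a minimal sketch of how profiling from metadata alone works. Everything in it is hypothetical (the log format, the credential IDs, the domains are invented for illustration): it assumes an issuer that receives an online status check each time a credential is used, and shows that the issuer can reconstruct a browsing profile without ever seeing the proof contents.

```python
# Hypothetical sketch: even if the proof payload is only "18+",
# an issuer that receives online status checks sees metadata per check.
# All names and log entries below are illustrative, not from a real system.
from collections import defaultdict

# Each status check leaks: which credential, which verifier, and when.
status_check_log = [
    {"credential_id": "cred-42", "verifier": "dating.example",   "ts": "2025-03-01T21:04"},
    {"credential_id": "cred-42", "verifier": "casino.example",   "ts": "2025-03-02T23:17"},
    {"credential_id": "cred-42", "verifier": "pharmacy.example", "ts": "2025-03-05T09:30"},
    {"credential_id": "cred-77", "verifier": "wine.example",     "ts": "2025-03-03T18:00"},
]

def profile_by_credential(log):
    """Group checks by the stable credential identifier: a behavioral
    profile built purely from metadata, no payload needed."""
    profiles = defaultdict(list)
    for entry in log:
        profiles[entry["credential_id"]].append((entry["ts"], entry["verifier"]))
    return dict(profiles)

profiles = profile_by_credential(status_check_log)
# cred-42's holder is now linkable across dating, gambling, and pharmacy sites,
# even though no check ever revealed a name or a birthdate.
print(profiles["cred-42"])
```

The point of the sketch: the correlation handle (here, a stable `credential_id`) plus a backchannel (the status check) is all it takes. Remove either one and this profile cannot be built.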

The arms-length standard: how verification should work in a free society
So what should “ID verification done right” actually mean?
It must mean arms-length verification - not as a buzzword, but as a structural principle:
The issuer must not learn which service you’re using.
The relying party must not learn who you are.
And no intermediary should be able to phone home.
That is the line between safety and surveillance.
It’s also the difference between “we used ZK” and “we built a system that cannot be turned into monitoring”.
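The three rules above can be sketched as a data flow. This is a simplified, assumed model, not a real protocol: an HMAC stands in for the issuer's digital signature and the zero-knowledge machinery, purely to keep the example dependency-free. What the sketch does show faithfully is the communication pattern: issuance happens once, verification is a local computation, and the issuer is never contacted at verification time.

```python
# Sketch of the arms-length data flow (simplified; HMAC is a stand-in
# for the issuer's signature plus zero-knowledge selective disclosure).
import hashlib
import hmac
import json
import os

ISSUER_KEY = os.urandom(32)  # in a real system: the issuer's signing key

def issue_predicate_credential(over_18: bool) -> dict:
    """Issuer step, done once. Note what is absent from the payload:
    no name, no birthdate, and no verifier domain - the issuer cannot
    learn where the credential will later be used."""
    payload = json.dumps({"predicate": "over_18", "value": over_18}).encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify_locally(credential: dict, issuer_verify_key: bytes) -> bool:
    """Relying-party step. Verification is a purely local computation:
    no call back to the issuer, so the issuer never learns where or
    when the credential was used. The verifier learns only the predicate."""
    expected = hmac.new(issuer_verify_key, credential["payload"],
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, credential["tag"]):
        return False
    return json.loads(credential["payload"])["value"] is True

cred = issue_predicate_credential(over_18=True)
print(verify_locally(cred, ISSUER_KEY))  # prints True; the issuer saw nothing
```

One honest caveat the hedge requires: in this toy version the credential itself is a stable blob, so replaying it across sites would be linkable. Real arms-length systems fix that with zero-knowledge proofs that are freshly randomized per presentation, which is exactly why "attribute-only" and "unlinkable" are separate properties.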
This is the vantage point behind what we already implement with AesirX CMP + Concordium ID, and it’s the lens I use to evaluate any age verification proposal - from “ID upload” to “OAuth login” to national wallets to BigTech wallet proofs.
Why the common approaches fail people
These aren’t edge cases; they’re the default incentives.
ID upload fails because it turns a private action into a permanent record: full identity data, stored server-side, logged, leaked, reused. You can’t “unshare” a passport scan once it’s out.
OAuth login (Google/Facebook) fails because it’s linkability-by-design. A central identity provider learns every relying party. A stable account identifier becomes a universal correlation handle. Convenience becomes surveillance.
BigTech wallet ZK proofs can reduce what the relying party sees - but can still leave phone-home and correlation risks through wallet backends, OS services, telemetry, and surrounding rails. Cryptography can be clean while the system still leaks context.
State / national wallet systems can be designed toward privacy - but in reality, implementations vary, and many are still struggling with linkability and phone-home patterns. “No tracking” is often a goal, not a guarantee.
This is why it’s dangerous when anyone publishes simplified grids that imply:
“Attribute-only means privacy solved.”
It’s not solved until linkability and phone-home are engineered out.
What we’re actually building: privacy for the many, accountability for the few
The future we need is not “anonymous chaos” and it’s not “monitored obedience”.
It’s something more mature: privacy by default, and accountability by due process.
In practice, this is what “arms-length” looks like:
The AesirX CMP sits at the interaction point as the website’s first-party gatekeeper, triggering verification only when needed and only for a predicate (“over 18”, “eligible”, “one human per service”).
Concordium ID delivers the predicate proof, so the relying party receives eligibility - not identity.
The flow is designed so the issuer still doesn’t learn where you verified, because the relying party never becomes a reporting endpoint to an issuer, wallet vendor, or state.
And the result can be bound to a domain-scoped pseudonymous session (contextual uniqueness), so you can enforce policy without creating a global identifier that follows people across the internet.
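A domain-scoped pseudonym can be sketched in a few lines. This is an assumed construction (a keyed hash of the domain under a per-user secret); production systems may instead use a VRF or a ZK-friendly PRF, but the property is the same: stable within one domain, unlinkable across domains.

```python
# Sketch: domain-scoped pseudonyms. Assumed construction - HMAC of the
# domain name under a per-user secret that never leaves the wallet.
import hashlib
import hmac
import os

user_secret = os.urandom(32)  # held client-side, never revealed to any party

def domain_pseudonym(secret: bytes, domain: str) -> str:
    """Deterministic per (user, domain): the same site can recognize a
    returning session and enforce policy ("one human per service"),
    but two sites cannot join their pseudonyms into a global profile."""
    return hmac.new(secret, domain.encode(), hashlib.sha256).hexdigest()[:16]

a1 = domain_pseudonym(user_secret, "shop.example")
a2 = domain_pseudonym(user_secret, "shop.example")
b = domain_pseudonym(user_secret, "forum.example")

assert a1 == a2  # stable within one domain: policy is enforceable
assert a1 != b   # different per domain: no cross-site correlation handle
```

The design choice this illustrates: uniqueness is scoped to the context that needs it, so "one account per person" never requires "one identifier across the internet".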
And then comes the part most people get wrong:
We do need accountability - because scammers, groomers, stalkers, and fraud networks exploit low-friction systems.
But accountability must not be built as mass surveillance.
So we anchor accountability in due process: when there is lawful cause and a formal court process, investigators can trace bad actors. That doesn’t mean building permanent behavioral dossiers on everyone else. It means building a system where bad actors know they can be held responsible - without turning ordinary life into a monitored transaction.

This is how you get both: fewer bad actors, and more dignity for everyone else.
Why this is urgent now
Age verification mandates are expanding - and once verification becomes routine, it becomes infrastructure. Infrastructure attracts gravity: more integration, more logging, more “just one more” safety feature. In that climate, proposals for VPN restrictions, automated scanning of chat messages, and systematic surveillance stop sounding exceptional and start being framed as “practical”.
This isn’t paranoia. It’s a pattern. If we don’t set the standard now, the standard will be set for us - by the same incentives that normalized surveillance capitalism and mass data brokerage.
And once this infrastructure exists, it won’t stay limited to “adult content” or “alcohol”. Mission creep isn’t a conspiracy; it’s what happens when capability meets power.
So this is my line for 2025:
We can protect minors. We can reduce abuse. We can stop fraud.
But we must not build a world where “prove you’re allowed” quietly becomes “prove who you are” - everywhere.

Reclaim your privacy
The cause is bigger than a product. It’s bigger than a wallet, a framework, or a regulation.
It’s a decision about what kind of internet we want to live in.
An internet where you can participate as a human being - not as a continuously monitored subject.
An internet where safety does not require surrender.
An internet where privacy-preserving technology is not an optional “nice-to-have”, but a foundational requirement.
Reclaim your privacy - and demand age verification done right: predicate proofs, arms-length separation, domain-scoped pseudonyms, and accountability only under rule of law.
What’s your red line: linkability, phone-home, or central logs?
Ronni K. Gothard Christiansen
Technical Privacy Engineer & CEO @ AesirX.io
