TL;DR: Verify attributes with ZK proofs (age/country/human) for a private yes/no – issuer-silent, unlinkable, and accountable by court order when real harm occurs.
There’s a quiet but decisive shift online: more traffic now comes from software than from people – and it’s getting harder to tell the difference.
AI agents don’t sleep. They post, buy, resell, and impersonate at machine speed. If we do nothing, the default “user” won’t be a person – it’ll be software.
That’s the moment we’re in, and it demands a better foundation for trust.
This isn’t a call for more surveillance. It’s a call for better proof. Communities, markets, and families don’t need your identity dossier to stay safe; they need a narrow proof that answers a specific question – and nothing more.
From suspicion to proof: the trust recession we can fix
As bots multiply, everyday participation gets taxed by suspicion. Moderators burn out. Parents hesitate to let teens join communities. Marketplaces drown in fraud and chargebacks. We add roadblocks – CAPTCHAs, scans, biometric checks – and still watch determined actors slip through. We pay with our time and, worse, with our privacy.
The fix is not to expose more of ourselves. The fix is to verify only what a context requires: age, country, or “one human, not a bot.” That’s it.
Prove the attribute, not the person
Consider what spaces actually need:
- A teen forum needs to know a member is 13+, not their legal name.
- A local group needs to know a participant is in Denmark, not their street address.
- A marketplace needs to know a seller is a real human and eligible, not their passport number.
This is attribute verification – precise, minimal, and respectful. With zero-knowledge proofs (ZKPs), a user’s wallet proves a property (18+, in DK, human) without revealing the credential itself, without “phone-home” pings to issuers, and without creating a trail that can be linked across sites.
When you change the question from “Who are you?” to “Is this specific condition true?”, the entire risk surface changes – for the better.
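To make the idea concrete, here is a toy sketch of the underlying principle: a Schnorr-style proof of knowledge made non-interactive with the Fiat-Shamir heuristic. It is not an attribute proof and not AesirX’s implementation – the group parameters and function names are illustrative assumptions, and real deployments use standardized elliptic-curve groups and richer predicate circuits – but it shows the core ZK move: the verifier accepts without ever seeing the secret.

```python
import hashlib
import secrets

# Toy parameters for illustration only; real systems use standardized
# elliptic-curve groups, not a bare Mersenne prime with generator 3.
P = 2**127 - 1   # group modulus
G = 3            # generator (illustrative choice)
Q = P - 1        # exponent modulus for this toy group

def _challenge(t: int, y: int) -> int:
    # Fiat-Shamir: hash the commitment and public key to derive the
    # challenge, making the proof non-interactive.
    digest = hashlib.sha256(f"{t}|{y}".encode()).hexdigest()
    return int(digest, 16) % Q

def prove(x: int) -> tuple[int, int, int]:
    """Wallet side: prove knowledge of secret x without revealing it."""
    y = pow(G, x, P)             # public key y = G^x mod P
    r = secrets.randbelow(Q)     # fresh nonce per proof
    t = pow(G, r, P)             # commitment
    c = _challenge(t, y)
    s = (r + c * x) % Q          # response: reveals nothing about x alone
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    """Site side: check G^s == t * y^c (mod P) without ever seeing x."""
    c = _challenge(t, y)
    return pow(G, s, P) == (t * pow(y, c, P)) % P

secret = secrets.randbelow(Q)    # credential secret: never leaves the wallet
assert verify(*prove(secret))    # verifier accepts without learning `secret`
```

Because each proof uses a fresh random nonce, two presentations of the same credential look unrelated – the seed of the unlinkability property discussed below.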
Privacy by default, accountability by due process
Privacy cannot be a shield for abuse, and accountability cannot be a pretext for tracking everyone. The right architecture balances both:
- Private by default. Users act under stable pseudonyms and disclose only the attribute a context requires. Sites store verdicts (true/false + timestamp), not identity.
- Accountable by due process. If serious fraud or harm occurs, there’s a lawful path to unmask a bad actor – by court order, not by moderator whim or hidden backdoors.
That’s how you protect people and communities without normalizing surveillance.
What this looks like for real people – starting with youth
Denmark points to a practical path: teens (13+) can hold a national eID. Connected to a privacy-first wallet, this makes age-appropriate youth communities feasible, keeping adults out without exposing a teen’s name, school, or browsing history to the site operator. Families get confidence; moderators get leverage; kids get spaces designed for them. A 14-year-old can join a study forum that’s genuinely for teens – no names exposed, no dossiers created.
Then extend it – markets, messaging, and the “human-only” switch
- Marketplaces: Gate listings behind a one-time proof that sellers are human and eligible. Fraud drops without anyone uploading passports to unknown servers.
- Communities: Offer “human-only” rooms for sensitive topics. Sockpuppetry and dogpiling collapse when each human can hold only one pseudonym per room.
- Region-specific access: Release content to the right jurisdictions without harvesting addresses or building an audience-tracking map.
- Moderation: Gate the highest-risk actions (bulk messaging, payments, new listings) with proofs, while allowing ordinary participation to remain effortless.
The outcomes are tangible: less spam, fewer scams, faster moderation, and no new data troves to breach or broker.
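The moderation pattern above can be sketched as a simple policy check. This is a hypothetical sketch, not a real API – the action names and the `gate` function are assumptions for illustration:

```python
# Hypothetical policy: only the highest-risk actions require a proof;
# ordinary participation stays frictionless. Action names are illustrative.
HIGH_RISK_ACTIONS = {"bulk_message", "send_payment", "create_listing"}

def gate(action: str, has_valid_proof: bool) -> bool:
    """Return True if the action may proceed under this policy."""
    if action in HIGH_RISK_ACTIONS:
        return has_valid_proof      # proof required for risky actions
    return True                     # everything else passes through

assert gate("post_comment", has_valid_proof=False)       # effortless
assert not gate("bulk_message", has_valid_proof=False)   # blocked
assert gate("bulk_message", has_valid_proof=True)        # unlocked by proof
```

The design choice is the point: friction is concentrated exactly where abuse is costly, instead of taxing every interaction.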
How it works
- Wallet, not upload. Credentials live in the user’s wallet. Sites see only a derived proof.
- Question, not dossier. Sites ask a narrow question (“18+?”, “in DK?”). The answer is a cryptographically verifiable yes/no.
- No phone-home. Issuers aren’t silently notified each time a proof is presented.
- Unlinkable proofs, auditable actions. A proof shown to Site A can’t be correlated with Site B, and sites log only that a valid proof gated a specific action – useful for audits, useless for profiling.
That’s the difference between “compliance theater” and real privacy engineering.
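What “verdicts, not identity” might look like as a stored record – a minimal sketch under stated assumptions (the `Verdict` class and its fields are hypothetical, not AesirX’s actual schema):

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Illustrative record of what a site stores after checking a proof:
# the outcome of a narrow question, the gated action, and a timestamp.
# No name, credential, or issuer identifier appears anywhere.
@dataclass(frozen=True)
class Verdict:
    predicate: str    # the narrow question asked, e.g. "age_over_18"
    result: bool      # cryptographically verified yes/no
    action: str       # which action the proof gated (useful for audits)
    checked_at: str   # ISO-8601 timestamp

record = Verdict(
    predicate="age_over_18",
    result=True,
    action="create_listing",
    checked_at=datetime.now(timezone.utc).isoformat(),
)
fields = asdict(record)
assert "name" not in fields and "passport" not in fields
```

A log built from records like this supports audits (“was this action gated by a valid proof?”) while being useless for profiling.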
Why now, before AI agents set the norm
The next wave of bots will be more coordinated, conversational, and convincing. If we wait, we’ll normalize heavy-handed measures that harm civil liberties and still fail against sophisticated automation. The humane alternative exists today: human-centric verification that scales – which is why we’ve moved from principle to practice.
Shipping reality: WordPress and beyond
Last week, we released AesirX CMP v2.0.0 for WordPress, bringing ZKP-based Age and Country Verification to site owners with a few clicks. It runs first-party (no third-party beacons during consent or verification) and answers only the question a context requires. Because WordPress powers a massive share of the web, placing these tools in everyday builders’ hands accelerates a safer, more private default.
If you run a community, marketplace, publication, or membership product, you can pilot attribute proofs exactly where abuse hurts you most – sign-ups, messaging, listings, and payments – without turning users into data points.
Principles that should be non-negotiable
- Minimum necessary: verify only what’s required, nothing else.
- No phone-home: issuers don’t get a ping for each proof.
- Unlinkable by default: prevent cross-site correlation.
- Due-process accountability: unmasking only under a lawful order.
- First-party by design: consent and verification must not leak to third parties.
- Document the path: publish a narrow, auditable unmasking policy.
Write these into your product requirements. Hold vendors, and yourselves, to them.
The web we should choose
We can build a web where:
- Children join communities that are truly for them.
- Buyers and sellers transact with confidence.
- Moderators have teeth without building panopticons.
- Creators reach the right regions without harvesting personal data.
- AI agents serve humans, not impersonate them.
The way forward is simple to state and essential to preserve: prove what’s relevant, protect what’s private, and keep a lawful path for accountability. Do that, and we keep the internet human – without turning it into a surveillance system.
If you lead a product or community: start with one surface where abuse is costly. Gate that action with an attribute proof. Log verdicts, not identities. Measure the drop in spam, fraud, and moderation load. Then expand.
If you’re a policymaker: mandate safety and privacy-preserving verification, not blanket identity exposure.
If you’re a parent: ask platforms one question: Do you verify attributes privately, or do you track my child to keep them “safe”?
We don’t have to accept a bot-dominated, surveillance-heavy internet as the price of participation. We can choose better – and keep the web human, even as the bots rise.
Ronni K. Gothard Christiansen
Technical Privacy Engineer & CEO, AesirX.io
Ready for Part II?
Read: Part II - ZK Proofs for a Human-Only, Privacy-First Web