TL;DR: A German court just put a unit price on unlawful device access: €100 per cookie. That turns “compliance” into unit economics and makes vendor liability scalable. If your SaaS model depends on pre-consent telemetry, billing pings, or “cookie-less” measurement that still runs client-side, you’re not selling compliance - you’re selling liability.
A German court just did something that will quietly reshape the web-facing technology market.
It didn’t “warn publishers to fix their banners.”
It didn’t “remind websites to update their policies.”
It didn’t reward the industry’s favorite scapegoat: the messy implementation by someone else.
It put the spotlight where it belongs: on the vendor whose code touched the user’s device without proven consent.
One unlawfully set cookie triggered €100 in non-material damages (Higher Regional Court of Frankfurt's December 11 ruling, court ref. 6 U 81/23). The number is small enough to look harmless - until you understand what it does to incentives once it becomes repeatable.
A systemic defect is never “one cookie.”
A systemic defect is a pattern.
And patterns scale.
So the question I want every agency, MarTech lead, DPO, and SaaS vendor to sit with is not “is €100 a lot?”
It’s this:
If a defect is provable on a site with a million visitors, what does the liability ceiling look like the moment claims become industrialized?
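Here’s a back-of-envelope sketch of that ceiling - the traffic figure and claim rate below are assumptions for illustration, not data:

```ts
// Back-of-envelope exposure model. All inputs are illustrative assumptions.
const damagesPerEvent = 100;        // € per unlawful device access (per Frankfurt)
const affectedVisitors = 1_000_000; // hypothetical site traffic
const claimRate = 0.01;             // assume only 1% of affected users ever claim

const exposure = affectedVisitors * claimRate * damagesPerEvent;
console.log(`Exposure at a 1% claim rate: €${exposure.toLocaleString("en")}`);
// -> €1,000,000 - and the theoretical ceiling, with every visitor claiming,
//    is €100,000,000. That is what "industrialized claims" means.
```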
“€100 isn’t the story. The story is that vendor liability is now economically legible.”
Who this is for: anyone distributing client-side code into other people’s websites - CMP vendors, analytics vendors (including “cookie-less”), attribution and personalization tools - and the web agencies/implementers and DPOs who inherit the risk when that code fires pre-consent.
Germany didn’t produce one signal. It produced a cluster.
If this were only one Frankfurt ruling, the industry would do what it always does: call it an outlier, wait for appeal, and keep shipping.
But in late 2025, Germany didn’t produce a single datapoint. It produced a pattern of court artifacts that all point in the same direction: courts are increasingly willing to price “loss of control” and the “feeling of surveillance” as compensable harm - and increasingly willing to treat web-facing vendors as responsible actors when their technology crosses the device boundary (any execution, caching, or network instruction on the user’s terminal equipment) without proven consent.
Frankfurt is the cleanest hook because it’s so legible: €100 for one unlawfully set cookie. Not a regulator fine. A civil damages award that makes unit-economics risk obvious. When a court is comfortable attaching money to a single terminal-equipment event, the entire game changes - because now systematic defects can be turned into systematic claims.
Then you have Munich. Two decisions in December 2025, both awarding non-material damages in a major “tools” context - one at €750 (OLG München, judgment of 18.12.2025, 14 U 2300/25 e), one at €500 (OLG München, judgment of 18.12.2025, 14 U 1314/25 e) - reinforce that courts are not treating this as harmless technical trivia. They’re treating it as a fundamental rights issue with real compensable impact, and they’re comfortable doing it at amounts that immediately change risk calculations for vendors and platforms.
Read that again: within weeks, multiple higher regional courts are handing out damages for covert or uncontrolled tracking-style mechanisms. Different fact patterns, same message.
So if you are a CMP vendor, an analytics vendor, or any “X-as-a-Service” vendor shipping client-side logic into other people’s pages: stop waiting for the perfect legal safe harbor. The courts are already telling you what they will reward and what they will punish.
They will reward systems that can prove a valid boundary existed before the device was touched.
They will punish systems that rely on “we told customers to do the right thing” after the boundary was crossed.
“Germany didn’t just warn the market. It started pricing the intrusion - and the bill lands on vendors.”

The industry’s old shield is failing: “the customer is responsible”
For years, vendors protected themselves with a sentence that sounds reasonable until you look at the runtime:
“Publishers are responsible for obtaining consent.”
But a browser doesn’t read your contract.
Your script doesn’t pause because a PDF says “customer must comply.”
A tag fires when it can. An SDK sends events when it’s instructed to. A pixel calls home when it’s loaded. And if the design allows that to happen before a valid user choice exists, the legal risk is not “downstream.” It’s upstream - baked into the product.
This is the key shift: courts are beginning to treat consent not as UI, but as a technical boundary. If the boundary is crossed, “we told customers to do the right thing” stops being persuasive.
Because what the user experiences is not your contractual allocation of responsibility.
What the user experiences is your code executing on their device.
“Your JavaScript doesn’t read your contract. Courts are starting to notice.”
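To see why, look at what a typical tag actually does the moment it executes. A minimal sketch - the endpoint and parameter here are hypothetical:

```ts
// A classic tracking pixel, reduced to its essence (hypothetical endpoint):
const beacon = new Image();
beacon.src =
  "https://vendor.example/pixel?u=" + encodeURIComponent(location.href);
// The request leaves the device the instant this line runs. Nothing in a
// DPA or a "customer must obtain consent" clause sits between the
// assignment and the network call.
```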
Should every web-facing vendor be nervous?
It’s tempting to file this under “AdTech problems.”
That’s a mistake.
The same liability logic is relevant anywhere a vendor supplies client-side logic that interacts with a device, measures behavior, or transmits data off-site.
Consent vendors. Analytics vendors. Heatmaps. A/B testing. Personalization. Fraud tooling. “Performance monitoring.” Even “privacy-friendly” measurement stacks.
Because the risk is not “whether you call yourself privacy-first.”
The risk is whether your system can be made to cross the device boundary without valid consent and without provable accountability.
And that brings me to the most persistent myth in web privacy.
The most dangerous sentence in analytics marketing: “we don’t use cookies, so consent isn’t required”
This week, Plausible Analytics chimed in on a thread on X with what is probably the most common positioning in the “privacy-friendly analytics” category:
- We don’t store or access information on the user’s device
- We don’t use cookies or persistent identifiers
- We only produce aggregated statistics after the request reaches our servers
- Therefore, Article 5(3) of the ePrivacy Directive isn’t triggered and explicit consent isn’t required
This is exactly the type of claim that sounds clean, simple, and “engineer-friendly.”
It’s also exactly the type of claim that breaks when you apply modern technical reality and current regulatory thinking.
Because privacy compliance is not a vibe. It’s not a blog post conclusion. It’s not a legal memo that relies on a simplified technical premise.
Privacy compliance is a property of what actually happens in the browser - including execution, caching, network calls, and the fact that modern tracking is not limited to “cookies.”
“Cookie-less is not consent-less - and it’s definitely not liability-less.”
The technical reality many analytics platforms try to skip
When you ship a JavaScript analytics tag, you are distributing executable logic to the user’s device. That logic is cached, executed, and used to instruct the device to make measurement calls. Even if you avoid “persistent identifiers,” you still have a system that:
- runs on the terminal equipment, and
- instructs the terminal equipment to transmit tracking-relevant information back to a server.
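Concretely, a minimal “cookie-less” tag looks something like this - the endpoint and field names are illustrative, not any specific vendor’s API:

```ts
// No cookie is set, no persistent ID is stored - and yet:
const payload = JSON.stringify({
  url: location.href,          // which page the user is on
  referrer: document.referrer, // where they came from
  viewport: `${innerWidth}x${innerHeight}`,
});
// The terminal equipment is executing vendor logic and being instructed to
// transmit tracking-relevant information off-site:
navigator.sendBeacon("https://stats.example/event", payload);
```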
And this is exactly where a lot of “cookie-less” narratives become fragile.
Because the core legal question is not: “Did you set a cookie?”
The core legal question is: “Did your mechanism involve storage/access on the device or instruct the device to send back identifiers or tracking information - even temporarily - as part of measurement?”
That’s why “we only aggregate after the request reaches our servers” is not the escape hatch it sounds like.
Aggregation is what you do after the fact.
The trigger is what happens at the device boundary.
And if you want a practical demonstration that this is not just theoretical, look at Plausible’s own documentation push: “Run Plausible as a first-party connection … to bypass adblockers.”
Read that again slowly.
They’re explicitly advising customers to proxy the analytics endpoint through the customer’s domain name to evade user controls - not to change the underlying technical action, but to make it harder for users to block.
Now, I’m not making a moral argument here. I’m making a liability argument:
If your compliance narrative relies on “we don’t do device operations that trigger the rule,” while your own docs promote techniques to bypass blockers that target exactly those device-level operations, you are walking customers into a risk posture while advertising it as safety.
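For context, here is roughly what “first-party connection” advice amounts to in practice - sketched as a Next.js-style rewrite, with hypothetical hosts and paths:

```ts
// The browser now talks to the customer's own domain, so domain-based
// blocklists no longer match. The device-level action is identical;
// only its visibility to the user changed.
module.exports = {
  async rewrites() {
    return [
      { source: "/js/script.js", destination: "https://stats.example/js/script.js" },
      { source: "/api/event", destination: "https://stats.example/api/event" },
    ];
  },
};
```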

Even if you debate 5(3), GDPR still applies when the script loads
Let’s say someone wants to argue endlessly about whether a specific implementation triggers the terminal equipment rule. Fine. You can debate that.
But the moment a browser loads a SaaS analytics script and sends requests, the vendor receives what it receives:
IP address. User agent. Request metadata. Timestamp. URL paths. Referrers. Network identifiers. And in many real deployments, even more.
That is processing. That requires a lawful basis. That requires transparency that matches reality. And it requires you to stop hiding behind language like “anonymous” when what you really mean is “we don’t call it an identifier.”
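If that sounds abstract, sketch the receiving end. An Express-style handler - the route and logging below are illustrative - sees all of this on the very first request, before any “aggregation” happens:

```ts
import express from "express";

const app = express();
app.post("/event", (req, res) => {
  // Everything below arrives with the first request - no cookie needed:
  const received = {
    ip: req.ip,                           // network identifier
    userAgent: req.get("user-agent"),     // device/browser detail
    referer: req.get("referer"),          // where the user came from
    receivedAt: new Date().toISOString(), // vendor-side timestamp
  };
  console.log(received); // personal data is being processed here, full stop
  res.sendStatus(202);
});
app.listen(3000);
```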
This matters because vendor liability is not going to be limited to “cookie placement.” It is going to expand through the broader accountability doctrine that courts are increasingly comfortable applying: the party with the clearest visibility and control cannot hide behind the party with the least.
Which brings us to the second German signal that should terrify SaaS vendors.
The burden-of-proof trap: vendors built systems only they can explain
The web stack has a structural asymmetry: users cannot see what vendors do behind the curtain.
They don’t know where events go.
They don’t know who receives them.
They don’t know how they’re joined, retained, enriched, or shared.
They don’t know whether “aggregated” means “immediately aggregated” or “event-level stored and aggregated later.”
But vendors know.
And courts are increasingly unwilling to accept “blanket denials” when the defendant is the only party who can provide the specifics.
This is where centralized SaaS becomes a litigation liability: your strongest product advantage - your control and visibility - becomes the reason you can’t shrug and say “not our problem.”
“Your data platform is a product advantage - until it becomes a litigation disadvantage.”
When the data model is the business model
Now let’s talk about the thing most people avoid because it cuts too close to how the market makes money.
Consent-as-a-Service.
Analytics-as-a-Service.
Compliance tooling-as-a-Service.
Attribution-as-a-Service.
Personalization-as-a-Service.
These models share a structural temptation: to be commercially viable, they want data early, and they want it reliably, even when the user says “no.”
And when vendors feel the law tightening, they reach for the same escape hatch: necessity.
Here’s the problem: under both the GDPR and the terminal-equipment rule in the ePrivacy Directive (Article 5(3)), necessity is assessed from the user’s (data subject’s) perspective - not from the vendor’s business model. “Necessary” does not mean “necessary for product metrics, billing, diagnostics, attribution, or growth.” It means necessary to deliver the service the user actually requested - and only what is strictly required to do that.
So the category invents excuses:
- “Telemetry for service quality”
- “Pings for diagnostics”
- “We need it for billing”
- “We must measure consent to enforce consent”
- “We don’t store identifiers; we only measure”
…and what they are really saying is: our business model needs pre-consent device operations.
And that is not a legal basis. That’s a commercial preference.
This is the core contradiction that vendor liability will expose: you can’t justify pre-consent device operations as “technically necessary” when the only thing that makes them “necessary” is the vendor’s revenue model. If the user said “no,” and your stack still needs to run to make your SaaS work, your product isn’t compliant-by-design - it’s consent-hostile-by-design.
If your revenue depends on data you can’t lawfully guarantee, then compliance cannot be a “feature.” It becomes a direct constraint on the business model. And many vendors respond by quietly making compliance optional in practice while advertising it as default.
That is not a “banner problem.”
That is a product-market-fit problem under law.

It’s not only the EU. The UK, Norway, and Vietnam are moving the same way
Germany is simply where the court artifacts are currently the loudest.
But the logic is increasingly consistent across jurisdictions:
- In the EU, the line between “cookie compliance” and “tracking compliance” keeps collapsing into the broader question: what do you do at the device boundary, and can you prove your basis?
- In the UK, the same consent-before-tracking dynamics keep reappearing under PECR and UK GDPR accountability.
- Norway’s direction under the Ekom Act reforms moves along the same axis: device-level tracking is a legal boundary, not a marketing choice.
- Vietnam is now tightening accountability under the PDPL and its implementing rules, and any vendor shipping web-facing tracking or measurement into Vietnamese-facing stacks should assume scrutiny will follow the same path: prove what you do, why you do it, and why it’s lawful.
Different statutes. Same future: vendors can’t outsource accountability forever.
The practical conclusion: treat consent as a runtime boundary, or accept that you’re selling liability
The industry spent a decade turning consent into a UI problem because UI problems can be “fixed” without changing business models.
But courts are increasingly treating consent as what it always was: a boundary around device access, behavioral observation, and invisible power.
So here is the standard that matters now - the only one that will survive hostile examination:
Can you truthfully say your product is designed so non-compliance is hard, not merely discouraged?
Not in your marketing.
Not in your contracts.
In the runtime.
Because in the vendor-liability era, the only durable compliance posture is the one you can prove when someone is actively trying to break it.
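What “hard, not merely discouraged” can look like, as a minimal sketch - the types, proof mechanism, and endpoint are illustrative assumptions, not a specific product:

```ts
// Consent as a runtime boundary: the vendor tag cannot execute before a
// provable consent record exists. The default path is "do nothing."
type ConsentState = {
  analytics: boolean; // the user's actual choice
  timestamp: string;  // when it was made
  proofId: string;    // reference to an auditable consent record
};

function loadAnalytics(consent: ConsentState | null): void {
  if (!consent?.analytics) return; // no record, no script, no request
  const s = document.createElement("script");
  s.src = "https://vendor.example/tag.js"; // hypothetical vendor tag
  s.async = true;
  document.head.appendChild(s); // device boundary crossed only after "yes"
}
```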
“Consent is not UI. Consent is runtime. If you can’t prove the boundary, you don’t have compliance - you have hope.”
Agencies: either you standardize this across every client deployment, or you’ll keep being the last person who “installed liability” and then got paid to clean it up later.
If you want to check whether your (or your clients’) website is collecting data prior to consent, use our free privacy scanner to test it.
Ronni K. Gothard Christiansen
Technical Privacy Engineer & CEO @ AesirX.io




