TL;DR
The Belgian DPA’s Inspection Service has dismissed a complaint against a “privacy-friendly analytics” provider, reasoning that JavaScript running in RAM is not “storage” under Article 10/2 (the Belgian cookie rule implementing ePrivacy Directive 5(3)) and that the IP addresses involved are not personal data. That view ignores how modern browsers and tracking actually work.
The case in one paragraph - and who the Inspection Service is
The decision at the centre of this discussion comes not from the Belgian DPA’s Litigation Chamber (which issues binding decisions and fines), but from its Inspection Service - the DPA’s investigative arm. The Inspection Service examines complaints, runs investigations, and then decides whether to pursue a case further or close it. In this matter, it concluded that there was insufficient evidence of a GDPR infringement and that the cookie rules (Article 10/2 of the Belgian Data Protection Act, implementing Article 5(3) ePrivacy Directive) did not apply in the way the complainant alleged.
In public commentary, the case has been framed as a success story for “privacy-friendly analytics”. My view is almost the opposite: it’s a textbook example of what happens when an authority reasons about digital technology from the dictionary up instead of starting with how browsers and tracking actually function today.
For DPOs, privacy lawyers and privacy engineers, it’s worth unpacking why.

What the Inspection Service actually decided
From the (heavily redacted) translated ruling, the reasoning can be summarised in three steps:
1. On ePrivacy Directive / cookie rules (Article 10/2 of the 2018 Act)
- The complaint argued that JavaScript tracking scripts were placed on the user’s terminal equipment without consent, contrary to Article 5(3) ePrivacy Directive (as implemented by Article 10/2).
- The Inspection Service responded that JavaScript “runs in volatile memory” and that such memory is not “storage” within the meaning of the cookie rule. Dictionaries were cited to define “to store” as “to keep for later use”, with an emphasis on a future-use time component and on non-volatile storage.
- On that basis, executing JavaScript in RAM did not qualify as “storing information” on the terminal equipment.
2. On GDPR / personal data
- The provider acknowledged processing IP addresses but argued that it could not realistically identify individuals based on IP alone, especially as a US-based company dealing with Belgian ISP data.
- The Inspection Service, relying on Breyer and the more recent SRB judgment, accepted that in this context IP addresses were not personal data for this controller, because identification would require disproportionate effort.
3. On procedure and interest
- The complaint was also criticised for lack of concrete evidence, questionable territorial scope and for essentially pursuing an “actio popularis” (public interest complaint) rather than a personal grievance.
- Using its discretion, the Inspection Service decided not to investigate further, while leaving open the possibility of revisiting the matter if better-documented complaints are filed in future.
On paper, this might look neat and pragmatic. In practice, it rests on a version of the web that no longer exists.
ePrivacy Directive 5(3): why “storage vs memory” is the wrong battle
Article 5(3) of the ePrivacy Directive - now embedded in Article 10/2 of the Belgian Data Protection Act - was drafted to be technology-neutral. It doesn’t talk about cookies, RAM, or hard drives. It talks about:
“storing information, or gaining access to information already stored, in the terminal equipment of a subscriber or user”.
The Inspection Service tries to carve out a safe harbour for JavaScript by insisting that:
- JavaScript is “just code” executed in the browser’s runtime;
- it resides only in volatile memory;
- therefore, it doesn’t meet the “stored for later use” criterion, and the cookie rule doesn’t apply.
There are two problems with this.
The law already moved beyond cookies
For more than a decade, European regulators have treated Article 5(3) as covering any technique that writes data to, or reads data from, a user’s device - including device fingerprinting based on browser and DOM attributes, cached resources, and other “cookie-less” identifiers.
The European Data Protection Board’s latest guidance on the technical scope of Article 5(3) makes several key points:
- “Storage” means placing information on any electronic storage medium that forms part of the user’s device. It does not exclude RAM or CPU cache.
- There is no minimum duration for storage. Even very short-lived writes can fall under Article 5(3) if they are used to track or identify users.
- JavaScript that instructs the browser to send identifiers or behavioral data back to a remote server is one of the canonical examples of operations that do fall within the scope.
In other words, the Inspection Service is trying to reopen a door that the EDPB has spent years closing.
Browsers in 2025: you can’t separate execution from storage
Technically, the decision’s distinction between “memory” and “storage” just doesn’t align with how browsers work today:
- When your browser fetches https://analytics.example/script.js, that resource is stored - in the HTTP cache, in RAM, sometimes on disk - so it can be executed and re-used.
- The JavaScript engine compiles the script and holds it in memory as bytecode or machine code; the browser then creates and updates data structures in the DOM and JS heap to represent everything from the page layout to user events, performance metrics and device characteristics.
- Modern tracking is heavily based on reading and writing those in-memory structures: screen size, font lists, canvas rendering output, audio fingerprinting, pointer movements, scroll depth, etc.
From the user’s perspective, there is no meaningful difference between:
- a third-party script that drops an HTTP cookie and reads it later; and
- a third-party script that reads a constellation of DOM and API values and uses them to single out the user’s device across visits.
Both are invisible. Both are initiated remotely. Both involve accessing information on the device that the user did not explicitly request to be used for tracking.
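To make the second pattern concrete, here is a minimal fingerprinting sketch in TypeScript. It reads only in-memory browser state - no cookie, no localStorage write - yet produces a stable identifier. The attribute set is illustrative, not any particular vendor’s code; the collector URL reuses the example domain from above.

```ts
// Minimal fingerprinting sketch: reads in-memory browser state only.
// No cookie or localStorage write is needed to single out a device.
async function deviceFingerprint(): Promise<string> {
  const signals = [
    navigator.userAgent,
    navigator.language,
    String(screen.width),
    String(screen.height),
    String(screen.colorDepth),
    Intl.DateTimeFormat().resolvedOptions().timeZone,
    String(navigator.hardwareConcurrency),
  ].join("|");

  // Hash the signal constellation into a compact, stable identifier.
  const bytes = new TextEncoder().encode(signals);
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

// The identifier is then typically sent to a remote collector:
deviceFingerprint().then((id) =>
  navigator.sendBeacon("https://analytics.example/collect", id)
);
```

Every value read here lives in volatile memory, yet the output recognises the same device across visits - exactly the behaviour Article 5(3) is aimed at.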
The device boundary is what matters
Once you take the device boundary seriously, the “JavaScript in RAM is not storage” line becomes a semantic distraction. The real question under Article 10/2 is:
- Does this operation place information on, or access information from, the user’s terminal equipment?
- Is it strictly necessary for the service the user requested?
- If not, has the user given valid consent?
JavaScript-based analytics and optimization code clearly answers “yes”, “no”, and “usually no” to those three questions. The fact that some of the moving parts live in RAM is irrelevant to the protection the law is trying to provide.
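That three-question test can be written down almost mechanically. The sketch below is my own illustrative encoding of it; the type and the labels are mine, not from the Act or the decision:

```ts
// Illustrative encoding of the Article 5(3) / Article 10/2 analysis.
// The interface and labels are my own; the legal test is the three questions.
interface DeviceOperation {
  description: string;
  touchesTerminalEquipment: boolean; // stores on, or reads from, the device?
  strictlyNecessary: boolean;        // needed for the service the user requested?
  hasValidConsent: boolean;          // prior, informed, freely given?
}

function article53Assessment(op: DeviceOperation): string {
  if (!op.touchesTerminalEquipment) return "outside Article 5(3)";
  if (op.strictlyNecessary) return "exempt: strictly necessary";
  return op.hasValidConsent ? "lawful: consent obtained" : "unlawful: consent required";
}

// JavaScript analytics under this test:
console.log(
  article53Assessment({
    description: "analytics script reading DOM/API values and reporting home",
    touchesTerminalEquipment: true, // yes: reads and writes on the device
    strictlyNecessary: false,       // no: not needed to deliver the page
    hasValidConsent: false,         // usually no
  })
); // -> "unlawful: consent required"
```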
The same pattern repeats on the GDPR side.
GDPR and IP addresses: identifiability in a data-broker world
On the GDPR side, the Inspection Service leans on Breyer and SRB to conclude that the IP addresses processed by the analytics provider are not personal data in this context, because:
- the provider is based in the US;
- it would have to approach a Belgian ISP to obtain subscriber details;
- given time, cost and effort, that is not “reasonably likely” to happen.
This again feels tidy on paper, but it ignores both the case law and the realities of today’s data ecosystem.
Breyer never required the controller to stand alone
In Breyer, the Court of Justice held that dynamic IP addresses can be personal data for a website operator even when only the ISP has the extra information needed to name the user. What matters is whether there are legal and practical means reasonably likely to be used to identify the person - including via third parties.
The test is explicitly contextual:
- you look at the means available to the controller (and others),
- not just at what the controller currently chooses to do.
Reading Breyer as “if it would be annoying to ask an ISP, the IP isn’t personal data” is a misinterpretation.
SRB is about real pseudonymisation, not “we promise not to look”
SRB adds nuance by clarifying that data can stop being personal for a specific recipient if it is properly pseudonymised and that recipient has no realistic way to reverse it or to link it back to an individual.
In the analytics scenario, however, the provider is receiving raw IP addresses as part of HTTP traffic and using them as part of attention and performance statistics. There is no separate key, no independent holder of the mapping, no cryptographic barrier.
Equating that situation to the pseudonymisation in SRB conflates two very different kinds of processing.
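The difference is easy to make concrete. In an SRB-style setup the recipient only ever sees a keyed pseudonym and has no access to the key; an analytics endpoint receiving raw HTTP traffic sees the IP itself. A minimal sketch (Node.js crypto; the key handling is illustrative):

```ts
import { createHmac } from "node:crypto";

// SRB-style pseudonymisation: the sender keeps the key, the recipient
// only ever receives the pseudonym and cannot reverse or re-link it.
function pseudonymise(ip: string, secretKey: string): string {
  return createHmac("sha256", secretKey).update(ip).digest("hex");
}

const pseudonym = pseudonymise("203.0.113.42", "key-held-by-the-exporter-only");
// The recipient sees e.g. "9f3c…", never 203.0.113.42.

// The analytics scenario in the decision: no key, no separate key holder.
// The provider's server simply reads the source IP off the connection:
//   const ip = request.socket.remoteAddress;  // raw, re-usable, joinable
```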
The missing piece: IP intelligence and data brokers
More importantly, the decision treats Belgian ISPs as the only route from IP to identity. That might have been plausible in 2005. It is not in 2025.
Today:
- IP-intelligence providers offer rich enrichment APIs: location, connection type, organization, sometimes even household-level and business-level attributes.
- Data brokers and DMPs combine IP-level signals with cookies, device IDs, login data and purchase histories to build audience segments and identity graphs.
- Many AdTech and analytics tools integrate directly into this ecosystem.
A serious “reasonable means” analysis has to ask:
- Could the analytics provider, in practice, enrich IP addresses via these services?
- Could they join IP logs with other identifiers they hold (user accounts, email hashes, device IDs) to establish a profile over time?
- Are those steps materially more burdensome than the other processing they already undertake?
If the answer is “yes, and this is standard industry practice”, then the claim that IP addresses are “non-personal” because an ISP isn’t involved starts to look like wishful thinking.
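To see why, consider how little effort enrichment actually takes. The endpoint and response fields below are hypothetical stand-ins for what commercial IP-intelligence APIs offer; no specific vendor is implied:

```ts
// Hypothetical IP-intelligence lookup - the endpoint and fields are
// invented for illustration, but mirror what commercial APIs return.
interface IpProfile {
  ip: string;
  city: string;
  organisation: string;
  connectionType: string; // e.g. "residential", "corporate", "mobile"
}

async function enrichIp(ip: string): Promise<IpProfile> {
  const res = await fetch(`https://ip-intel.example/v1/lookup?ip=${ip}`);
  return (await res.json()) as IpProfile;
}

// Joining enriched IPs with identifiers the provider already holds
// (session IDs, hashed emails) is an ordinary analytics query away:
// SELECT visitor_id, ip, city, organisation FROM hits JOIN profiles USING (ip);
```

If a one-call lookup and a routine join count as “disproportionate effort”, so does most of the analytics industry’s daily work.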

Singling out is already impact
Even if you set aside the subscriber identity question, there’s a more fundamental point: if the provider can single out a visitor and take decisions about them - adjust bids, tailor content, infer interests - then, in every meaningful sense, they are dealing with that person as a data subject.
Recent case law and guidance keep coming back to this: data that does not obviously name someone can still be personal if it allows consistent singling out in a realistic ecosystem context. That is exactly what IP-plus-behavioral analytics does.
What DPOs and privacy professionals should take away
It would be easy for controllers and vendors to treat this Inspection Service decision as a convenient shield:
- “Our scripts run in memory; that’s not storage under Article 10/2.”
- “We only see IP addresses; those aren’t personal data for us.”
If you are responsible for compliance, that path is dangerous.
Here are the practical lessons I’d suggest:
- Anchor your device-side analysis in Article 5(3)’s purpose, not in storage semantics. Start with the terminal equipment boundary: what information do your scripts read and write on the user’s device? Is it strictly necessary for what the user asked you to do? If not, treat it as subject to consent, regardless of whether the bits live in RAM or on disk.
- Treat IP+behavioral data as personal by default in analytics. Given the current state of IP-intelligence and data-broker markets, assuming “non-personal” status for IP-based analytics is a weak compliance position. If a vendor wants to claim their IP-level data is effectively anonymised, they should have a rigorous, documented identifiability assessment that goes far beyond “the ISP has the subscriber file”.
- Don’t outsource your technical analysis to case snippets. Authorities can be wrong on the tech - especially when decisions are made early in the process by investigative bodies like the Inspection Service rather than by a fully reasoned Litigation Chamber judgment. Use such cases as prompts to sharpen your own analysis, not as permission to stop thinking.
- Document your own “reasonable means” assessment. For key data types (IP addresses, device IDs, pseudonymous IDs, hashed emails, consent strings), write down how they could realistically be linked to an individual in your ecosystem, what controls you have in place, and why you consider residual risk acceptable or not.
If your current analytics setup can’t meet these conditions, treat this as a signal that you need to redesign, not as a sign that you’ve found a clever legal carve-out.
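Such an assessment does not need to be elaborate - it needs to exist and to be honest. A minimal sketch of what one entry could look like; the field names and example values are my own suggestion, not a regulatory template:

```ts
// Suggested shape for a documented identifiability assessment entry.
// Field names and sample values are illustrative only.
interface IdentifiabilityAssessment {
  dataType: string;            // e.g. "IP address"
  linkagePaths: string[];      // realistic routes from data to individual
  controls: string[];          // measures that narrow those routes
  residualRisk: "low" | "medium" | "high";
  conclusion: string;          // personal data? treated as such?
  reviewedAt: string;          // ISO date of last review
}

const ipAssessment: IdentifiabilityAssessment = {
  dataType: "IP address (visitor logs)",
  linkagePaths: [
    "commercial IP-intelligence enrichment",
    "join with session IDs and hashed emails we already hold",
  ],
  controls: ["truncate to /24 after 24h", "no joins with marketing profiles"],
  residualRisk: "medium",
  conclusion: "treated as personal data; consent-based analytics only",
  reviewedAt: "2025-01-15",
};
```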

The safer pattern: first-party, block-before-consent
Ultimately, the best way to avoid fragile arguments about “is this really personal data?” or “does this really count as storage?” is to change the architecture, not the story.
For web analytics, that means:
- First-party by design: run consent management and analytics on your own domain, with your own first-party server or platform. Avoid unnecessary third-party calls and piggy-backing by AdTech scripts.
- Block-before-consent: default to blocking all non-essential tracking technologies - not just cookies, but pixels, JavaScript tags, fingerprinting scripts and beacons - until you have explicit, informed consent.
- Minimize and segregate identifiers: shorten IP retention windows, strip or coarse-grain location, avoid joining analytics data with marketing profiles unless you have a very strong legal basis and clear user expectation.
- Separate purposes: logically and technically separate analytics used for your own site’s statistics from any cross-site advertising or profiling purposes. Treat the latter as high-risk and consent-driven.
This pattern does not magically make every legal question disappear. But it dramatically reduces the reliance on edge-case interpretations like “RAM is not storage” or “IPs are not personal data for us”, and it aligns much better with the trajectory of EU regulation.
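To make the blocking behaviour concrete: under block-before-consent, the tracking script never reaches the browser until consent exists. A minimal sketch of the gating logic; the consent event name, storage scheme and script URL are placeholders for your own stack:

```ts
// Block-before-consent: the analytics script tag does not exist in the
// page at all until the user has opted in. Names below are placeholders.
type ConsentCategory = "essential" | "analytics" | "marketing";

function hasConsent(category: ConsentCategory): boolean {
  // Read from your own first-party consent store; the default is "no".
  return localStorage.getItem(`consent:${category}`) === "granted";
}

function loadAnalytics(): void {
  if (!hasConsent("analytics")) return; // nothing loads, nothing runs

  const script = document.createElement("script");
  script.src = "/analytics/collector.js"; // first-party, same domain
  script.async = true;
  document.head.appendChild(script);
}

// Run after the consent banner resolves, never before:
document.addEventListener("consent-updated", loadAnalytics);
```

Note the default: no consent record means the script is never injected, so there is nothing in RAM or on disk to argue about.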
Why this case is a warning, not a template
The Belgian Inspection Service decision is not the end of the story on privacy-friendly analytics. But it is a useful cautionary tale.
When DPAs - or parts of them - start from semantic comfort rather than from how the web and the data-broker ecosystem actually function in 2025, they risk creating decisions that are easy to quote and dangerous to rely on.
As DPOs and privacy professionals, our job is to bridge that gap: to bring the technical reality into the legal analysis, and to design systems where compliance doesn’t depend on hoping that JavaScript in RAM somehow doesn’t count.
Because on a real user’s laptop or phone, it absolutely does.
Ronni K. Gothard Christiansen
Technical Privacy Engineer & CEO @ AesirX.io