The European Parliament voted on 26 March 2026, 331 to 228, to reject an extension of the temporary legislation known as chat control, which gave service providers the right to scan users’ messages for child sexual abuse material (CSAM). In Finland, the National Bureau of Investigation expressed concern: according to the NBI, service providers filed nearly 14,500 reports of suspected material last year, accounting for roughly one fifth of all new criminal investigations opened. The numbers are real and the problem is serious. That is precisely why it deserves a better solution than untargeted mass surveillance — and precisely why parliament’s decision was right.
What did parliament ban — and what did it not ban?
Parliament did not ban the identification of child sexual abuse material. It refused to extend a temporary exemption that applied only to unencrypted platforms. The expiring law did not cover encrypted communications at all: WhatsApp messages were not within its scope, unlike, for example, Instagram’s unencrypted direct messages. This is no legal technicality. Messages are either end-to-end encrypted or they are not — there is no middle ground. Opening an encrypted channel to surveillance means breaking the encryption for all its users, not just those suspected of a crime.
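The all-or-nothing property described above can be illustrated with a toy sketch. This is a deliberately simplified stand-in (a one-time-pad XOR, not any real messaging protocol): the point is only that a relaying server sees ciphertext, so scanning message content would require handing the server the key — and a server that holds the key for scanning holds it for every message on that channel.

```python
import secrets

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # Toy one-time-pad: XOR each byte with the key. Illustration only;
    # real end-to-end encryption uses authenticated public-key protocols.
    return bytes(d ^ k for d, k in zip(data, key))

# The two endpoints share a key; the relaying server does not.
key = secrets.token_bytes(32)
plaintext = b"hello"
ciphertext = xor_cipher(key, plaintext)

# The server relays only ciphertext and cannot inspect the content.
assert ciphertext != plaintext

# Only a party holding the key can recover the message (XOR inverts itself).
assert xor_cipher(key, ciphertext) == plaintext
```

There is no intermediate state in which the server can scan content but not read it: either it has the key material (and the channel is no longer end-to-end encrypted for anyone) or it does not.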
The broader, ongoing chat control movement, however, aims to break the encryption of encrypted communications as well. Parliament voted down the path that would have opened the door in that direction. That is the real significance of the vote.
The rule-of-law minimum is that a concrete suspicion precedes scanning — not the other way around.
Can companies still identify child sexual abuse material?
Yes, they can. When the law expires, companies lose the legal protection for making reports — not the ability to identify material. The technical tools and expertise remain unchanged. Large platforms have both commercial and ethical incentives to continue voluntarily: the reputational damage from spreading CSAM is enormous, and on many platforms voluntary monitoring had been in place for years before the legislation.
That does not mean the current situation is sufficient. Voluntary action without a legal framework is uneven and vulnerable to change. Smaller platforms may not invest in detection tools without a legal obligation, and large players can change their practices based on business decisions. What is needed is a new and properly scoped legal basis. It must give companies clear protection for making reports — without at the same time opening the way to large-scale mass scanning. Parliament’s vote did not close that path; it simply said that the old, poorly scoped temporary solution does not get an extension.
Why does untargeted mass scanning not work?
A concrete suspicion before scanning is not special treatment for privacy — it is the rule-of-law minimum. This principle is already established elsewhere: police cannot obtain a search warrant without reasonable grounds, and wiretapping requires a court order. The same principles belong in the digital environment. I have written about the same logic in connection with intelligence legislation and biometric identifiers: extending surveillance without individual suspicion is not a question of efficiency, but of principle.
Mass scanning is also ineffective in practice. Organised crime has already moved to channels that open scanning cannot reach — the dark web and encrypted applications, which the temporary legislation did not cover and which a broader chat control regulation might not cover either. Ordinary users, not organised networks, would be subject to expanded surveillance. An ineffective tool does not become good because the goal is right.
What can Finland do next?
Finland has an opportunity to help shape a solution that actually works. The model is clear: targeted surveillance in which a concrete suspicion and a court order precede scanning. The same standard as for wiretapping, no more and no less. In practice, this means that in a new law the right to scan would be tied to an individual suspicion and a court order obtained in advance — not to automatic scanning of everyone’s messages without cause. Such a system protects children more effectively, because it is directed where the suspicions are, while upholding the rule of law — and without laying the groundwork for broader mass surveillance.
Trust in European institutions is built on the same logic: it does not arise from untargeted surveillance, but from protecting citizens’ rights as EU members even when it is difficult.
Europe’s competitive advantage does not come from cheap services, but from trust — from the fact that here, privacy is a genuine fundamental right. That trust cannot be bought.
Parliament was right. The next round is already coming — and Finland must be ready to shape it into something better. The alternative is to leave the field to others, in which case the next regulation will again be a poor compromise at the expense of the rule of law.