The EU is apparently on the verge of obliging companies to scan end-user devices for material suggesting child sexual abuse – and to report findings automatically to the authorities. A corresponding regulation could come into force as early as the beginning of 2022. Civil rights activists are alarmed because the scanning would be indiscriminate and carried out without any specific cause.
The MEP Patrick Breyer, for example, who calls such a system "Chat Control 2.0", sees the constitutionally protected secrecy of post and telecommunications at risk, as well as the European Charter of Fundamental Rights. He therefore calls for resistance to the planned regulation.
After studying physics, Wolfgang Stieler switched to journalism in 1998. He worked at c't until 2005, when he moved to Technology Review. There he covers a wide range of topics, from artificial intelligence and robotics to network policy and questions of future energy supply.
However, the regulation is not quite as surprising as it seems at first glance. Digital civil rights activists and IT experts who are now sounding the alarm must instead accept the reproach that they did not see the signs of the times, or did not take them seriously enough.
As early as November 2020, a position paper by the EU Council of Ministers became known, according to which security authorities within the EU should be given a form of "exceptional access to encrypted data". With the paper, the EU interior ministers were targeting messenger services such as WhatsApp in particular: since such services are increasingly used by criminals, the argument goes, investigative authorities must be able to decrypt this data when in doubt in order to examine it.
State back doors not only encourage abuse of power
In theory, one way to do this would be to equip the encryption software with a kind of state master key. But IT security experts have been arguing against this and similar ideas for around 30 years – and rightly so. State back doors not only encourage abuse of power, they would also undermine users' confidence in the digital infrastructure. Above all, they would be prime targets for cyber criminals. In short: anyone who introduces state crypto back doors weakens the security of the entire IT infrastructure and thus indirectly damages the economy.
That is a strong argument, and at this point the discussion has regularly ended – until a year ago. For as early as 2020 there were signs that the EU was also considering other strategies, no less unpleasant ones. From the Commission's point of view, this would be something of a jack-of-all-trades solution: Europe-wide monitoring of encrypted communication that, at the same time, damages European data protection – a competitive advantage in the global market – as little as possible.
A discussion paper leaked at the time shows how this could work: communication service providers would be obliged to create a kind of digital fingerprint of content – images, for example – and then compare it with a database storing the fingerprints of criminal content. Of course, this would have to happen before the corresponding message is encrypted. The message would only be delivered if the query returns no match.
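The flow described in the paper – fingerprint, database lookup, then encrypt and deliver only on a miss – can be sketched roughly as follows. All names here are hypothetical, and a cryptographic hash stands in for the perceptual hash a real deployment would use:

```python
import hashlib

# Hypothetical database of fingerprints of known criminal material,
# distributed to the client or queried remotely.
BLOCKED_FINGERPRINTS: set[str] = set()

def fingerprint(data: bytes) -> str:
    # Stand-in: a cryptographic hash. A real system would use a
    # *perceptual* hash so that re-encoded or slightly edited images
    # still match.
    return hashlib.sha256(data).hexdigest()

def send_message(attachment: bytes, encrypt, deliver, report) -> bool:
    # The check must run BEFORE encryption, on the user's own device.
    if fingerprint(attachment) in BLOCKED_FINGERPRINTS:
        report(attachment)           # automatic report to the authorities
        return False                 # the message is not delivered
    deliver(encrypt(attachment))
    return True
```

The sketch makes the structural point visible: end-to-end encryption itself stays untouched, because the inspection happens one step earlier, on plaintext.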
Even harmless image files can trigger an alarm
Of course, this process also has disadvantages for users' privacy and security: the digital fingerprint on which it is based is not mathematically unique. This means that even harmless images can trigger an alarm – and it also means that malicious third parties can plant seemingly harmless files on their victims' devices, files in which the human eye detects nothing objectionable but which nevertheless trigger the scanner.
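A toy example illustrates why such false positives are built into the approach. Perceptual hashes are compared by similarity (e.g. Hamming distance), not exact equality, so two different images whose hashes are merely close still "match". The 8-bit hashes and the threshold below are invented for illustration:

```python
def hamming(a: int, b: int) -> int:
    # Number of bit positions in which the two hashes differ.
    return bin(a ^ b).count("1")

def matches(h1: int, h2: int, threshold: int = 2) -> bool:
    # Similarity match: "close enough" counts as a hit.
    return hamming(h1, h2) <= threshold

known_bad_hash = 0b10110100  # fingerprint of flagged material
harmless_hash  = 0b10110110  # a completely different image, 1 bit apart

print(matches(known_bad_hash, harmless_hash))  # True: a false positive
```

The same fuzziness cuts the other way: an attacker who can craft a file whose hash lands within the threshold of a flagged entry can make an innocuous-looking image trigger an automatic report.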
But the procedure undermines the IT experts' central argument that state security and confidential communication cannot be reconciled – at least formally. Digital civil rights activists and IT companies no longer automatically pull together at this point. That these diverging interests are by no means purely theoretical was already demonstrated by Apple: in the summer, the company announced that it wanted to carry out just such a photo analysis directly on the smartphone, and it only backed off after massive protests, because the damage to its own image became too great. Such an effect is hardly to be expected in the case of an EU regulation.
At this point, digital civil rights activists can neither argue in purely technical terms nor rely on an automatic alignment of interests with the companies concerned. Anyone opposed to the introduction of such a surveillance infrastructure will have to argue politically in the future. The automated image scan is an attempt by the investigative authorities to shed light on encrypted communication without completely overturning the entire infrastructure for confidential communication on the network. At the same time, it creates an infrastructure that can easily be abused by authoritarian states or overreaching security agencies. Do we really want that?