EU eyes Big Tech to police child sex abuse online
With a steep rise in child sexual abuse online, the EU is calling on Big Tech to report and remove illegal content, but the detection methods could lead to mass surveillance, privacy activists say
BRUSSELS, May 13 (Thomson Reuters Foundation) – Just one detail in a photo uploaded online can help Yves Goethals and his team of digital detectives track down a victim of child sexual abuse.
It can be a barcode on a dustbin or a logo on a shopping bag that pins down a location and helps identify a child.
“We know from experience in Belgium, 90-95% of cases, when we identify the victim, the offender is not far away,” Goethals, head of the child abuse unit at Belgian police, told the Thomson Reuters Foundation.
But getting to that point, said Goethals, takes weeks, if not months, of painstaking work trawling through images of child sexual abuse material (CSAM) for clues.
Online child sex abuse has risen sharply globally during coronavirus lockdowns, prompting calls for better regulations and reporting tools to protect potential victims.
In response, the European Commission announced a new law on Wednesday to ensure tech companies do more to detect and remove child sexual abuse images online and prevent grooming.
Under the law, it will be mandatory for companies such as Meta (Facebook), Google and Apple to detect, report and remove child sexual abuse content found on their services.
Companies that fail to comply with the rules face fines of up to 6% of their annual income or global turnover, with the exact penalties to be set by European Union (EU) countries.
The EU executive said its proposal, which follows similar attempts in Australia to regulate big tech over child protection, aimed to replace the current system of voluntary detection and reporting which it said had fallen short.
The measure needs the approval of both the European Parliament and EU leaders, a process that can take two years.
Big Tech admits more must be done but says it also wants to shelter the law-abiding people who use its tools and platforms.
“A fine balance between safety online and privacy will need to be found,” said Siada El Ramly, director general of DOT Europe, a lobby group for tech giants from Apple to Google.
TECH: PROBLEM AND SOLUTION
Battle lines are already firmly drawn.
Privacy activists fear detection technologies could open the door to mass surveillance.
Law enforcement says some loss of privacy is a price worth paying to protect children from digital predators.
Striking the right balance is critical to the law’s success – and right now, Goethals says, the balance is tipped the wrong way.
“We are facing a bizarre situation…(in) the distinction between your privacy as a normal citizen and our investigation into a criminal, the balance is in favour of the criminal.”
As the number of social media platforms has grown over the past two decades, so has the volume of child abuse material shared and detected online.
Between 2010 and 2020, there was a 9,000% increase in abuse images online, said the U.S. National Center for Missing and Exploited Children (NCMEC), a non-profit organisation.
The EU is at the epicentre, with servers located in the bloc hosting 62% of child sexual abuse content in 2021.
Social media platforms such as Facebook and Instagram logged the largest number of reports of indecent images, said NCMEC.
The big rise could reflect better tracking of abuse, in part due to artificial intelligence (AI), it added.
AI-powered monitoring filters hundreds of thousands of images to root out abusive content.
It relies on tools such as Microsoft’s PhotoDNA and the Internet Watch Foundation’s (IWF) ‘digital fingerprinting’ technology, whereby human analysts assess images and assign a unique signature, or ‘hash’, to any that contain child abuse.
When AI finds an image whose hash matches one on the list, it can uncover a whole cache of previously hidden material, said Hany Farid, co-developer of PhotoDNA and professor of computer science at the University of California at Berkeley.
“What you have to understand about perpetrators, they don’t traffic in one or two images. They traffic in hundreds, thousands, tens of thousands of images. And when I find one image…I get a warrant and I can search all your images,” Farid said in a video call.
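To make the mechanism concrete, here is a minimal sketch of hash-list matching, the idea behind such tools. It is illustrative only: PhotoDNA itself is proprietary and uses a perceptual hash that survives resizing and re-encoding, whereas the standard-library SHA-256 below is a simplified stand-in, and the hash list shown is hypothetical.

```python
# Minimal sketch of hash-list matching, the idea behind tools like
# PhotoDNA. Real systems use a *perceptual* hash that tolerates
# resizing and re-encoding; SHA-256 is a simplified stand-in here.
import hashlib

# Signatures assigned by human analysts to known abuse images.
# Placeholder entries only; real lists hold millions of hashes.
KNOWN_HASHES: set[str] = {
    "9f2b...",  # hypothetical example entry
}

def fingerprint(image_bytes: bytes) -> str:
    """Compute a signature for an uploaded image."""
    return hashlib.sha256(image_bytes).hexdigest()

def is_known_abuse_image(image_bytes: bytes) -> bool:
    """Flag an upload whose signature matches the known-image list."""
    return fingerprint(image_bytes) in KNOWN_HASHES
```

One match, as Farid notes, can then justify a warrant to search an account’s remaining images for previously unhashed material.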
However, when AI monitoring meets end-to-end encryption on messaging services such as WhatsApp, the tools become powerless.
Encryption’s very purpose as a privacy feature is to make it impossible for anyone but the sender and recipient to see the content.
WALKING THE PRIVACY TIGHTROPE
Lobbying groups suggest a range of competing solutions to the EU proposal.
One option is so-called client-side scanning – installing monitoring software on all personal devices to check messages or images for child abuse material before they are sent.
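Sketched below is the ordering that defines client-side scanning: the check runs on the device, on the plaintext, before end-to-end encryption is applied, so the service never has to see message content. Every name in the sketch is hypothetical and the “encryption” is a placeholder; this illustrates the concept, not any vendor’s implementation.

```python
# Hypothetical sketch of the client-side scanning pipeline:
# scan locally first, encrypt afterwards.
import hashlib

# Hash list of known abuse images, shipped to the device (hypothetical).
KNOWN_HASHES: set[str] = set()

def encrypt_for_recipient(data: bytes) -> bytes:
    """Stand-in for real end-to-end encryption (e.g. the Signal protocol)."""
    return data[::-1]  # placeholder transform, NOT real encryption

def send_image(image_bytes: bytes) -> bytes | None:
    # 1. Scan the plaintext on the device, before anything leaves it.
    if hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES:
        return None  # block the upload and/or file a report
    # 2. Only then encrypt and hand the ciphertext to the messaging service.
    return encrypt_for_recipient(image_bytes)
```

It is precisely this on-device checkpoint, sitting before encryption, that critics say could be repurposed to scan for anything at all.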
Privacy campaigners call it the advent of mass spyware and say rogue regimes or criminals could abuse its powers.
“Once you’ve put these back doors in that break the encryption, it’s very easy for any malicious actor to exploit or government to mandate Facebook or WhatsApp to look for keywords relating to dissent, protest or being LGBT,” said Ella Jakubowska, policy officer at European Digital Rights (EDRi), a Brussels-based lobby group.
When Apple tried to roll out similar technology last year, it met with a major backlash from staff, who feared repressive regimes could use it to impose censorship or make arrests.
Child rights defenders say that using hashing technology, or only scouring for known images, is one of the best ways to protect privacy while also protecting potential victims.
“It’s tried and tested…successfully deployed by Microsoft, Google and Facebook and others for over a decade. And a lot of the concerns we’ve heard from privacy activists haven’t happened,” said Dan Sexton, chief technical officer at the children’s charity Internet Watch Foundation.
By contrast, mass trawling of texts for any sign of grooming could fail, given that the EU’s top court has previously outlawed such general monitoring.
Farid is wary of corporate advocates for privacy, saying big players in the tech world have jumped aboard the privacy bandwagon simply to protect their business model.
“All of these companies that talk about privacy track every little thing you do so that they can monetise your behaviour. This is not a privacy issue.”
Policeman Goethals concurs, saying he just wants to “make life as difficult as possible for the criminal.
“I don’t want to get rid of your privacy, I just want to be able to identify offenders and victims by using technology.”
(Reporting by Joanna Gill. Editing by Lyndsay Griffiths. Please credit the Thomson Reuters Foundation, the charitable arm of Thomson Reuters, that covers the lives of people around the world who struggle to live freely or fairly. Visit http://news.trust.org)