EU May Ask Tech Companies to Scan for Sexual Abuse Material

(Bloomberg) — The European Commission presented its plan to fight online child sexual abuse material, which includes requiring the world’s biggest technology companies to scan for, detect and report it to law enforcement bodies.

The measures announced in Brussels on Wednesday allow courts in the EU to require social networks to track and report efforts by their users to groom children via messaging tools.

Other proposed methods to achieve these requirements include:

  • Using age verification to identify minors joining a platform
  • Checking user-generated content for signs of abuse imagery
  • Deploying artificial intelligence to detect language patterns associated with grooming
  • Reporting discovered offending material to law enforcement agencies for investigation

A new European agency, similar to the U.S. National Center for Missing and Exploited Children and working alongside Europol, will be created for companies to report their findings to. Courts in the EU will be able to require companies to take down flagged material or block access to web addresses. Companies will be allowed to appeal such orders.

Ylva Johansson, the commissioner for home affairs in charge of the proposal, said Wednesday that she expects criticism from companies because “protecting children is maybe not profitable.” Privacy activists and lawmakers alike have already voiced concern, likening the plans to surveillance tactics.

Johansson said there are many “rumors” about the commission’s plans, but that the proposal is not about encryption or reading people’s communications. She added that companies will have to use the least intrusive methods to detect the material online.

A spokesperson for Meta Platforms Inc., Facebook’s parent company, said “it’s important that any measures adopted do not undermine end-to-end encryption, which protects the safety and privacy of billions of people, including children.”

Scanning for child sexual abuse material has long been controversial due to privacy concerns, but it’s something all major tech firms and social networks do. This includes sharing digital fingerprints of known illegal material so it can be automatically detected, removed and reported. Facebook, Microsoft and many others have done this for years.
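The fingerprint-sharing approach described above can be sketched as follows. This is a minimal illustration, not any company's actual system: production tools such as Microsoft's PhotoDNA use perceptual hashes that still match resized or re-encoded copies, whereas the plain SHA-256 digest here matches only byte-identical files. The sample data and function names are hypothetical.

```python
import hashlib

# Hypothetical database of fingerprints of known illegal material, as
# shared between companies and clearinghouses. Real systems store
# perceptual hashes (e.g. PhotoDNA); a cryptographic SHA-256 digest
# stands in here purely for illustration.
KNOWN_HASHES = {
    hashlib.sha256(b"example-known-image-bytes").hexdigest(),
}

def fingerprint(data: bytes) -> str:
    """Compute the fingerprint of an uploaded file's raw bytes."""
    return hashlib.sha256(data).hexdigest()

def check_upload(data: bytes) -> bool:
    """Return True if the upload matches a known fingerprint, meaning
    it would be automatically removed and reported."""
    return fingerprint(data) in KNOWN_HASHES
```

The key design point is that only fingerprints are exchanged, so platforms can detect known material without sharing the underlying images themselves.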

Numerous companies attempt to go further, with mixed results. Last year, Apple Inc. announced, then halted, plans to detect child abuse imagery in its users’ photo libraries after widespread concern voiced by privacy advocates. The commission’s own plans have also been delayed for years for related reasons.

But the amount of child pornography online has been increasing, especially during the Covid-19 pandemic, Johansson said. Reports cited by the commission show that 85 million photos and videos containing child sexual abuse were reported online in 2021. Advocacy group Thorn said in March this was a 38% increase from 2020.

It’s part of the reason the EU’s proposal goes beyond conventional scanning for known offending material, and will allow courts to require companies to proactively scan for new abuse, be it pictures, videos, or grooming of children via chat. AI could be used, for example to speed up detection of concerning language patterns, the commission said, but would need human oversight.

The EU’s plan will need the sign-off from member countries and the European Parliament, a process that can take years, especially with such a controversial proposal. 



©2022 Bloomberg L.P.
