STRASBOURG — European lawmakers Tuesday voted to ban online ads that target children, ramping up a crackdown on Big Tech platforms and their content moderation policies.
The proposed ban was included in a draft of the EU’s content moderation rulebook, the Digital Services Act, which the European Parliament’s internal market committee approved with a large majority of votes.
Parliament’s text, which is still up for negotiation in coming months, includes a line seeking to stop tech companies like Facebook, Google, TikTok and others from allowing businesses to target minors through their platforms.
The move is part of a broader push across Europe to impose age verification systems online so that children can’t access some platforms with content deemed harmful for them, including pornography.
“We want to protect minors using platforms,” Christel Schaldemose, the Danish Social Democrat lawmaker in charge of the file, told POLITICO.
She said the law still needed refining to make sure that identifying minors won’t compromise privacy rights, but added that “the platforms today already have actual knowledge about who minors are.”
Proposed by the European Commission a year ago, the Digital Services Act (DSA) aims to create Europe-wide rules for online platforms. The bill seeks to crack down on illegal content, regulate online advertising and impose transparency measures on platforms’ algorithms.
Banning targeted ads for children could become a thorny issue in future negotiations to finalize the law.
The committee’s text will be voted on at Parliament’s plenary session, likely in January. The bill is then up for three-way negotiations between Parliament, the Commission and the EU Council, which represents EU governments. The final rules could come into force as soon as 2023.
EU countries did not restrict targeted advertising in their position agreed in November, but Schaldemose said she was confident capitals could be convinced to add it to the final text.
MEPs also voted to ban manipulative designs that nudge users into consenting to online tracking, known as “dark patterns,” and they granted users the right to choose whether they want to be served targeted advertising. Large platforms with over 45 million European users, such as Facebook and Google, will also have to give users an option to opt out of tracking and still see personalized content on their feeds and platforms.
Lawmakers also added obligations for online marketplaces like Amazon to conduct random checks on products sold by third-party sellers through their online shops and to inform users if they have bought illicit goods, in addition to removing those goods.
The draft rules also said tech companies and regulators should be able to challenge orders from European public authorities to take down specific pieces of illegal content or request information about users. Meanwhile, users would be empowered to seek redress and compensation from tech companies if they face serious harm on their services.
Lawmakers also want tech companies like Facebook and Google’s YouTube to open up about the ways they moderate content — a longstanding demand from policymakers and NGOs that would now be enshrined in the law.
Firms would have to inform users when they change rules and user agreements and say how many staff work to moderate content for each European language. Large platforms would also have to evaluate the risks they pose to society and stem the spread of disinformation on their pages.