Codes for the coders: EU integrates new DSA Codes of Conduct

In January 2025, Meta updated[1] its policies on monitoring speech on its digital platforms and announced that it would end its fact-checking program, starting in the US but potentially extending worldwide. If these policy changes are implemented in the EU, Meta would need to ensure compliance with the Code of Conduct on Countering Illegal Hate Speech Online +[2] and the Code of Practice on Disinformation[3] (the “Codes”), which the European Commission (the “Commission”) has integrated into the Digital Services Act (“DSA”). These are the first such codes integrated into the DSA, and their integration comes at a point when compliance with online safety regulation is a focus for both the public and regulators.

Code of Conduct on Countering Illegal Hate Speech Online + (the “Hate Speech Code”)

The Hate Speech Code was integrated into the DSA on 20 January 2025, and is designed to address ‘illegal hate speech’.[4] The signatories to the Hate Speech Code include the major social media platforms, including Meta.

The signatories have agreed to the following commitments:

  1. To have in place terms and conditions informing users that they prohibit hate speech on their services;
  2. To have in place notice and action mechanisms allowing EU users to notify them of illegal hate speech on their services. The signatories commit to reviewing at least 50% of the notices received within 24 hours, and to use best efforts to exceed this target and reach 67%;
  3. To have in place a process to assess compliance with the notification system. This takes place over a period of up to 30 working days each year, in which designated NGOs or public entities (“Monitoring Reporters”) will report illegal hate speech on the relevant platforms. The Commission produces public reports on the percentage of notices reviewed within the required timeframes (as per 2 above), and the percentage of reported content which is removed;
  4. To cooperate with Monitoring Reporters to address and prevent the spread of illegal hate speech, including through annual convenings, exchanging best practices, and engaging in structured dialogue; and
  5. To continue supporting tools and initiatives which encourage civility online and critical thinking.

The Hate Speech Code is based upon a previous iteration adopted in 2016. Commission reports indicate that, while adherence to the Hate Speech Code remains high, signatories have become slower at assessing reported hate speech, and removal rates for reported content appear to be declining: from 72% in 2019 to 63% in 2022.

Code of Practice on Disinformation (the “Disinformation Code”)

On 13 February 2025, the Disinformation Code was integrated into the DSA. As with the Hate Speech Code, the Disinformation Code is based on a previous iteration, to which signatories have adhered since 2022.

The Disinformation Code has over 40 signatories, including Google, Meta, Microsoft, TikTok and Twitch, who have agreed to a range of commitments including aiming to:

  • Remove financial incentives for purveyors of disinformation, such as by taking measures to avoid placing advertising next to disinformation;
  • Increase transparency on political advertising, so that users are able to identify political advertising;
  • Implement and promote safeguards against misinformation and disinformation, including manipulative behaviours and practices;
  • Improve the ability of users to access authoritative sources and empower users to identify and report disinformation;
  • Increase the ability of third-party researchers and fact-checkers to access platform data; and
  • Establish a monitoring framework to assess the implementation of the Disinformation Code.

Additionally, the signatories have committed to cooperating and coordinating their work during elections, when the threat of disinformation is particularly high. See our previous Perspectives blog on steps taken by the Commission under the DSA to mitigate systemic online risks that may impact the integrity of elections.

Comment

The DSA requires large tech companies to mitigate systemic risks on their online platforms, such as the spread of illegal content.[5] These systemic risks have been a focus for the Commission: it has brought formal proceedings against X (formerly Twitter) regarding the alleged dissemination of illegal content, including hate speech, and formal proceedings against TikTok regarding a suspected breach of its obligation to properly assess and mitigate misinformation linked to the recent Romanian presidential elections.

The integration of the Codes into the DSA means that adherence to their requirements, while voluntary, may be considered an appropriate risk mitigation measure for signatories bound by the commitments set out in the Codes, and may also help non-signatories demonstrate compliance with the DSA.

The increasing importance of the Codes as a yardstick for compliance with the DSA comes at a crucial moment, as tech companies such as Meta consider changes in direction in their online content and moderation policies. Signatories will accordingly need to pay close attention to the Codes when introducing any such changes in the EU, lest they invite scrutiny under the DSA.

Footnotes

[1] More Speech and Fewer Mistakes | Meta
[2] The Code of conduct on countering illegal hate speech online + | Shaping Europe’s digital future
[3] The 2022 Code of Practice on Disinformation | Shaping Europe’s digital future
[4] As defined in the relevant national laws (Commission Opinion of 20.1.2025 on the assessment of the Code of conduct on countering illegal hate speech online + within the meaning of Article 45 of Regulation 2022/2065).
[5] DSA Articles 34 and 35.