Belvue Museum, Place des Palais 7

Organised by the Center for Democracy & Technology (CDT)


  • Emma Llansó, Director, Free Expression Project, Center for Democracy & Technology (CDT)
  • Mark McCarthy, Senior VP, Public Policy, Software & Information Industry Association (SIIA)
  • Prabhat Agarwal, Deputy Head of Unit F.2 (E-Commerce and Platforms), DG CONNECT, European Commission
  • Armineh Nourbakhsh, Associate Director, Data Science, S&P Global

Governments and companies are turning to automated tools to make sense of what people post on social media. Policymakers routinely call for social media companies to identify and take down hate speech, terrorist propaganda, harassment, “fake news” or disinformation, and other forms of problematic speech, and companies are developing and rolling out new AI applications to keep up with the pressure to moderate content. But these tools have limitations that can affect their accuracy, fairness, and effectiveness. For example, natural language processing (NLP) algorithms can amplify social bias reflected in language and disproportionately censor or misinterpret the speech of certain groups. Bias in content moderation algorithms raises ethical and human rights concerns, including the risk of discriminatory enforcement of laws or terms of service. Often, those whose content is removed are left without transparency into the review and takedown process or any recourse to challenge the decision.

This panel will discuss the capabilities and limitations of content moderation tools, focusing on the key issues that policymakers must understand before intervening in this area. It will also discuss what companies can do to make their content moderation practices more fair, transparent, and accountable, highlighting some practices that companies are currently engaging in.

Please register by completing the following form:

Alternatively, you can contact Laura Blanco on