Solidarity with OTHERS is proud to share the outcomes of our panel discussion, “The Effects of Hate Speech on Democracy, Fundamental Rights & Inclusion”, held on May 22, 2025, at the European Parliament. The event was graciously hosted by MEP Estelle Ceulemans (S&D) and brought together a distinguished panel of experts—including legal scholars, policy advisers, and civil society leaders—to explore the dangerous normalization of hate speech in both public discourse and political institutions. The discussion highlighted the growing challenges posed by online platforms, algorithm-driven amplification, and institutional inaction, while also identifying clear and actionable strategies to counter hate speech through legal, technological, and civic mechanisms.
The panel discussion attracted a diverse and engaged audience, with nearly 200 viewers tuning in live via YouTube and over 80 in-person attendees (a recording of the entire event remains publicly accessible on YouTube).
Participants represented a broad spectrum of expertise and roles, including professors, legal advisors, policy experts, journalists, activists, students, and organizational leaders. This multidisciplinary mix underscored the complexity of addressing hate speech, combining academic research, legal practice, policy making, and grassroots activism to enrich the conversation and drive comprehensive solutions.
The panel featured a distinguished lineup of speakers, each bringing unique expertise to the discussion: Prof. Johan Vande Lanotte, Yasmina El Kaddouri, Adinde Schoorl, and Coskun Yorulmaz. Their insights laid a solid foundation for examining hate speech’s multifaceted impact on democracy and social inclusion.
Opening Remarks by MEP Estelle Ceulemans: How Words Become Persecution
In her compelling opening address, MEP Estelle Ceulemans underscored the urgent relevance of the topic, reminding the audience that hate speech is not a relic of the past but a present and dangerous force—even within the very institutions meant to protect democracy. She warned that “words become ideas, ideas become policies, and those policies can lead to persecution,” emphasizing that genocide and systemic exclusion often begin with language that dehumanizes. Citing a recent incident during a European Parliament plenary session—where far-right MEPs used the floor to incite hate against Roma communities—Ceulemans highlighted the dangers of legitimizing such rhetoric under the guise of political debate. She reaffirmed her commitment, alongside the S&D Group and the Anti-Racism and Diversity Intergroup, to standing against all forms of discrimination. Ceulemans also pointed to existing EU frameworks, such as the Charter of Fundamental Rights and the Digital Services Act, as vital tools in the fight against hate speech. However, she stressed that these legal mechanisms are only as effective as the political will behind them, warning that the growing influence of far-right parties threatens the integrity of European values and inclusive democracy.
Following the opening address, the floor was given to Professor Johan Vande Lanotte, a distinguished expert in constitutional law and human rights, whose remarks set a sobering tone for the broader conversation on hate speech and democratic integrity in the digital era.
Remarks by Prof. Johan Vande Lanotte: Hate Speech Spreads Beyond Borders Online
Prof. Johan Vande Lanotte began by situating hate speech within a broad historical context, pointing out that it is by no means a modern phenomenon. From the manipulative rhetoric of Alcibiades in ancient Greece to the propaganda of the Nazi regime in the 1930s, hate speech has long been used as a tool for inciting division and justifying violence. What has changed today, according to him, is not the nature of hate speech, but its scale and speed.
The internet, he explained, has fundamentally altered how hate speech spreads. In the past, disseminating hate required a complex propaganda machine. Today, it takes only a few clicks. This ease of access and amplification, especially via social media, has made contemporary hate speech more dangerous than ever. Its viral potential, paired with algorithm-driven content delivery, means users are often exposed to extreme views without consciously seeking them out.
At the heart of the issue lies a legal and ethical dilemma. While the European Convention on Human Rights protects freedom of expression—including ideas that “offend, shock, or disturb”—this right is not unlimited. When expression incites hatred or violence, or when it aims solely to insult or reinforce dangerous prejudices, it loses that protection. Vande Lanotte reminded the audience that Article 10 of the Convention uniquely references the duties and responsibilities that come with free speech. In his view, this principle highlights a crucial tension: how to safeguard democratic discourse while preventing its erosion through dehumanizing language.
Two recent developments, he argued, complicate this balance. First, the rise of digital platforms has created a borderless environment for hate to spread. Second, a handful of tech giants—motivated by profit rather than public interest—now control the public conversation. Unlike traditional media, these private platforms are shaped by opaque algorithms rather than editorial standards. This shift in information power, he warned, must prompt a reevaluation of how we protect freedom of expression in the digital era.
While limitations on speech should always be approached with caution, especially given how social media can empower resistance movements under authoritarian regimes, Vande Lanotte emphasized that some regulation is necessary. The key, he insisted, is to avoid placing this responsibility solely in the hands of governments or private companies. What is needed are independent institutions—free from political or corporate influence—that can respond rapidly and proportionately, because in the digital age harm happens within hours, not months.
In response to a question from the moderator about Elon Musk and his growing influence over public discourse through the platform X (formerly Twitter), Vande Lanotte drew a historical parallel. Just as the U.S. government once dismantled Bell’s telecommunications monopoly to preserve public access, today’s information monopolies demand similar scrutiny. Yet, no meaningful regulation has occurred. That failure, he observed, reflects a decline in the democratic resilience of the United States.
Musk’s open support for far-right movements in Germany was described as deeply troubling, but not surprising. His position is symptomatic of broader democratic deficiencies. According to Vande Lanotte, the U.S. remains a democracy only if one accepts its structural inequalities, such as unequal access to voting and a president’s ability to override judicial authority. In his view, when the rule of law becomes optional, democracy itself stands on shaky ground.
Fighting hate speech, he concluded, is not just a legal duty—it is a democratic imperative. But this fight must be led by fast, independent, and principled mechanisms. Only then can we prevent the irreversible damage that unchecked speech can cause in an age where words can travel the world in seconds.
Building on Professor Vande Lanotte’s legal and historical framing of digital hate speech, the discussion then turned to the specific legal standards and case law that shape how Europe defines, limits, and adjudicates hate speech today. Human Rights Lawyer Coskun Yorulmaz offered a detailed examination of the European Court of Human Rights’ evolving jurisprudence and reminded the audience of a critical distinction: freedom of expression is a protected right, but not an absolute one.
Remarks by Lawyer Coskun Yorulmaz: Speech Must Be Free but Not Weaponized
Picking up on the earlier discussion of how hate speech is normalized and amplified, lawyer Coskun Yorulmaz brought a crucial legal perspective to the panel—one that underscored how hate speech is not merely a social or political issue, but a matter of enforceable rights and responsibilities under European law.
Yorulmaz emphasized that while the European Convention on Human Rights (ECHR) does not define hate speech explicitly, institutions like the Council of Europe provide a working framework: any expression that incites, promotes, or justifies hatred, violence, or discrimination—based on characteristics such as ethnicity, religion, language, gender identity, or sexual orientation—falls under this category. Importantly, hate speech is not confined to verbal or written statements; it can also be conveyed through symbols, images, or even artistic forms.
A recurring theme in his analysis was the tension between freedom of expression and its limits. The ECHR treats free speech as a cornerstone of democratic society, but it is not a blanket protection. Article 10 allows for restrictions where necessary—such as to protect public safety or the rights of others—and these restrictions must be proportionate and legitimate. Article 17 goes a step further by stating that rights enshrined in the Convention cannot be used to undermine other rights. In effect, this provision excludes hate speech from protection when it is wielded to incite hostility or erode human dignity.
Rather than approaching this as an abstract principle, Yorulmaz grounded his analysis in case law from the European Court of Human Rights. He referred to landmark decisions where the Court drew clear lines between protected and unprotected speech. In Garaudy v. France, for instance, Holocaust denial was deemed not a contribution to historical debate but a deliberate act of hatred, rightly criminalized. In Seurot v. France, a teacher’s use of racist rhetoric in a school newsletter was judged incompatible with the values expected from public educators. And in Delfi AS v. Estonia, the Court held an online news outlet accountable for failing to remove violent and hateful user comments—marking a critical precedent on platform responsibility.
These rulings collectively reflect a nuanced legal approach that doesn’t treat all speech equally. Context, intent, and potential harm are decisive factors. A key insight from Yorulmaz’s contribution was that hate speech law isn’t about policing opinions—it’s about preventing language from becoming a weapon used to marginalize, threaten, or incite violence against vulnerable groups.
The legal lens he offered served as a reminder: combating hate speech is not solely the role of politicians, activists, or tech companies. Courts and legal institutions have both the authority and obligation to intervene—especially when speech crosses the line from dissent to dehumanization.
Following the previous interventions, the floor was given to Yasmina El Kaddouri, Equal Opportunities and Wellbeing Advisor to the Flemish Minister of Welfare. Drawing on both her policy and legal background, El Kaddouri delivered a compelling reflection on how hate speech serves not only as a communicative act of hostility but as a structural barrier to equality and inclusion—values that lie at the core of the European project but are far from fully realized.
Remarks by Yasmina El Kaddouri: Hate Speech as a Barrier to Inclusion and Democratic Participation
She began by emphasizing that hate speech must be understood in its broader social context—not just as isolated slurs or threats, but as a mechanism that reinforces societal hierarchies and exclusionary norms. Citing data from the EU Fundamental Rights Agency, she noted that around 40% of ethnic minorities across the continent have experienced harassment or derogatory remarks, with even higher rates among Muslim communities. Yet the majority of these incidents go unreported, a silence that speaks volumes about the lack of trust in redress systems and the normalization of exclusion.
El Kaddouri drew on her experience as a former human rights lawyer to highlight the behavioral consequences of hate speech. She observed that it shapes how people engage with society—forcing young people to self-censor, pushing women out of public discourse, and making racialized and queer individuals avoid certain spaces or professions. In her current role advising the Flemish government, she now sees the same dynamics embedded more deeply into structural domains like housing, education, and employment.
While the legal frameworks are in place—Article 10 of the European Convention on Human Rights, for example—there is a growing gap between legal principles and actual protection. Hate speech that incites discrimination, hostility, or violence is not protected under freedom of expression, but enforcement is faltering. In Belgium, over 2,000 discrimination complaints were filed in 2022, over one-third involving racist hate speech—much of it online. But each statistic represents a person being told they do not belong.
She further pointed to the failure of tech platforms in moderating harmful content. A 2023 report by the European Commission revealed that the rate of content removal for hate speech had declined, and reaction times were slowing. Meanwhile, algorithms that reward outrage continue to amplify dehumanizing narratives.
Importantly, El Kaddouri underscored how this digital erosion is not gender neutral. In Flanders, Muslim women—particularly those who wear the hijab—are disproportionately harassed online. This form of gendered Islamophobia operates as a silencing force, preventing full democratic participation and reinforcing exclusion.
She also emphasized the failure of legal systems to address intersectional discrimination effectively. Treating identities like race, gender, religion, and disability in isolation leads to policy blind spots, whereas real-world experiences often involve overlapping forms of discrimination.
Turning to the international context, she referenced her role as registrar for the Turkey Tribunal, where hate speech had been instrumentalized by political elites to dehumanize dissenters and minorities, paving the way for systemic repression. The Tribunal concluded that hate speech had been deliberately used as a tool of governance—a chilling lesson, El Kaddouri warned, that Europe would do well to heed before more severe consequences unfold.
She linked this to the normalization of far-right rhetoric in Europe—rhetoric that undermines women’s rights, vilifies migrants, and targets LGBTI persons, all while hiding behind claims of free expression. When such speech is left unchecked, the boundaries of social acceptability shift, weakening the foundation of democratic societies.
In closing, El Kaddouri stressed the need for actionable steps: collecting disaggregated equality data, reinforcing equality bodies and civil society support structures, and, most crucially, including those affected by hate speech in policy development. These individuals are not just victims but key stakeholders in building inclusive democracies.
Ultimately, her message was clear: hate speech is not simply a matter of offensive words—it is a mechanism of exclusion that chips away at democratic life. Confronting it demands more than legal tools; it requires political will, institutional commitment, and a collective dedication to human dignity.
Building on earlier reflections about the online dimension of hate speech and the responsibility of platforms, the next speaker, Adinde Schoorl from INACH—the International Network Against Cyber Hate—offered insights into the critical role civil society organizations play in monitoring platform compliance and holding tech companies accountable under the Digital Services Act.
Remarks by Adinde Schoorl: Monitoring Platform Compliance and the Role of NGOs under the DSA
Ms. Schoorl opened her remarks by introducing INACH, a coalition of 31 organizations from 24 countries, primarily within the EU. Since its founding in 2002, INACH has worked to combat hate speech in all its forms, through monitoring, reporting, education, and research.
She first explained INACH’s work prior to the Digital Services Act (DSA), notably its involvement in monitoring the EU Code of Conduct on Countering Illegal Hate Speech Online. NGOs, including INACH members, conducted annual monitoring cycles and shadow monitoring periods to assess social media platforms’ responsiveness to hate speech complaints. To address consistency outside these cycles, INACH initiated the SAFET Project (2022–2024), which enabled daily monitoring to detect discrepancies in platform behavior.
Turning to the DSA, Schoorl described it as a much-needed step forward in addressing online hate, especially given current threats to truth and democracy. However, she raised critical concerns about how its implementation has unfolded:
- Slow rollout: Many countries have only recently appointed Digital Services Coordinators (DSCs), leaving NGOs with little information on how to engage.
- Politicization: The selection of trusted flaggers—NGOs whose reports platforms must process with priority—is vulnerable to political agendas. In some cases, NGOs working on LGBT+ rights have been excluded.
- Lack of expertise: Many DSCs lack background in hate speech, creating a knowledge gap that NGOs must bridge.
- No compensation: Trusted flagger duties are unpaid, placing unsustainable demands on NGOs.
- Uneven application: Variability in DSA interpretation across countries creates unequal working conditions for NGOs.
Schoorl emphasized that platform performance has regressed since early improvements, particularly after the pandemic. Monitoring data from 2024 shows wide disparities:
- YouTube removed only 15% of content reported by trusted flaggers, and just 10% of content reported by regular users.
- TikTok’s removal rates were nearly identical for both groups—49% and 48%.
- Response times varied widely: TikTok responded to 73% of reports within 24 hours, compared to Instagram’s 50%, and X’s 54%. YouTube failed to respond to 93% of reports.
She concluded by affirming the promise of the DSA, while warning that without financial support, transparent selection procedures, and consistent application, its goals risk falling short.
Her final message was clear: NGOs are essential to DSA monitoring, but cannot carry this burden alone.
Following the speakers’ contributions, the floor was opened for questions, eliciting detailed responses that addressed the core challenges of regulating online hate speech under evolving legislative frameworks.
Q&A Session
The first question was directed to Ms. Schoorl and addressed the role of social media algorithms in amplifying hateful content. The question referred to the EU’s Digital Services Act (DSA) and asked whether current regulatory measures adequately tackle the algorithmic drivers of hate speech, or whether deeper changes—such as redesigning algorithms or rethinking platform business models—are needed.
Ms. Schoorl responded by emphasizing that while transparency has improved with mandatory quarterly reports and the establishment of the European Centre for Algorithmic Transparency, current measures remain insufficient. She underlined that the core issue lies in platforms’ profit-driven models, which are fundamentally structured to reward engagement—including through divisive or hateful content. In her words: “They are making money from hatred, and that’s a structural problem.” She advocated for rethinking not only algorithmic design but also the business models that incentivize harmful engagement. According to her, without binding regulatory pressure, platforms are unlikely to open up their systems to genuine scrutiny.
Ms. El Kaddouri also addressed the first question briefly, offering a legal perspective rooted in her background as a human rights lawyer. She noted that the lack of harmonized criminal law across EU member states poses a major challenge in enforcing hate speech provisions uniformly: acts that are criminalized in Belgium may not be treated similarly in countries such as Hungary or Poland. She suggested that Article 10 of the European Convention on Human Rights, which permits restrictions on expression on grounds such as public order, could potentially offer a legal pathway—but ultimately, such arguments would rest on judicial interpretation.
The second question came from a representative of Solidarity with OTHERS and was directed to all panelists. It highlighted the inconsistent implementation of the DSA across EU countries and raised concerns about politicized decisions in the selection of trusted flaggers, the complicity of some state-controlled media in promoting hate narratives, and the possibility of launching infringement procedures against states that fail to uphold EU principles on hate speech and discrimination.
Mr. Yorulmaz acknowledged the disparities in DSA application and expressed concern about the exclusion of NGOs representing vulnerable communities, such as LGBTQI+ groups. He emphasized the necessity for stronger EU-level enforcement and did not rule out infringement procedures against non-compliant states. Drawing on the jurisprudence of the European Court of Human Rights, he pointed out that while freedom of expression is protected under Article 10 of the European Convention, this right does not extend to hate speech that incites violence or discrimination. He warned, however, against overly rigid definitions of hate speech that could be weaponized by authoritarian regimes, reiterating the need for balanced legal interpretation. He concluded by stressing the crucial role of civil society and the importance of ensuring their continued protection and support.
Ms. El Kaddouri, in her extended response, offered further critical reflections on the shrinking space for equality bodies in Belgium. She expressed deep concern about recent decisions to cut funding for UNIA, Belgium’s independent public equality body tasked with combating discrimination and supporting victims, which she described as symptomatic of a broader trend across Europe. She reiterated the importance of data collection, research, and advocacy in resisting this regression, and emphasized the power of language in shaping political narratives.
Ms. Schoorl elaborated further on both questions. Regarding AI moderation of hate speech, she acknowledged its potential but highlighted its current limitations, particularly in terms of bias, language nuance, and cultural context. She described the existing AI moderation tools as inconsistent and unreliable, noting that most effective monitoring still relies on manual methods. She advocated for a hybrid model combining AI with human oversight, while also pointing to the lack of affordable, ethical AI solutions for NGOs. On enforcement, she warned of mounting political and commercial pressures that threaten to erode the progress achieved through the DSA. She cited concrete examples of trusted flaggers facing violent backlash, particularly in Germany, and expressed alarm over the absence of protective mechanisms for NGOs. Without such safeguards, she warned, the DSA could falter under external pressure, silencing those who challenge online hate.
The moderator, Estelle Ceulemans, concluded the session by drawing attention to recent developments within the European Parliament itself, where hate speech and discriminatory rhetoric are increasingly tolerated—even normalized—despite formal safeguards. She cited an example from a recent plenary session on the Universal Declaration on the Rights of the Child, in which far-right MEPs propagated harmful narratives about unaccompanied minors. She also noted a disturbing trend of mainstream political groups aligning with far-right votes on issues related to LGBTQ+ and women’s rights. This, she emphasized, underscores the urgency of sustained and courageous efforts to confront hate speech, within and beyond institutional spaces.
The event closed with a round of applause and a moment of hospitality, leaving participants with both a sense of the progress made and the many challenges ahead.