17 November 2025

Regulating online terrorist content – Balancing public safety and fundamental rights

Online terrorist content is a threat to fundamental rights, rule of law and democracy. EU measures to tackle such content aim to prevent terrorism while upholding these values. FRA’s report looks at how online terrorist content is detected and removed under EU legislation. It highlights challenges in interpreting rules, risks of over-removal and potential impacts on freedom of expression. It finds that moderation practices by authorities and platforms can disproportionately affect certain groups, such as Muslims and Arabic speakers, while far-right content often receives less scrutiny. The findings, based on research and expert interviews with those addressing online terrorist content, offer ways to improve transparency in content moderation and to better balance public security and fundamental rights, contributing to wider debates on regulating online content responsibly.

The regulation complements efforts of HSPs to address the proliferation of terrorist content based on their own terms and conditions, supported by EU initiatives such as the EU Internet Forum and the Code of Conduct on Countering Illegal Hate Speech Online. In fact, the vast majority of terrorist content, especially on large social media platforms, is detected by HSPs’ own detection tools and subsequently removed on the basis of such terms and conditions (community guidelines, terms of service, etc.) rather than being triggered by law enforcement or other government requests [70].
See, for example, Macdonald, S. and Staniforth, A., ‘Tackling terrorist content online – Propaganda and content moderation’, Tech against Terrorism Europe whitepaper, 2023, pp. 14–15.

In general, the regulation does not regulate the content of these HSP moderation policies. However, it interacts with them at several levels. Recital 5 underlines that HSPs have ‘particular societal responsibilities to protect their services from misuse by terrorists and to help address terrorist content disseminated through their services online, while taking into account the fundamental importance of the freedom of expression.’ This complements the obligations of HSPs to act with due regard to the fundamental rights of the recipients of their services under Article 14 DSA and, for very large online platforms and very large online search engines, to assess the risks for fundamental rights stemming from the design, functioning and use of their services, including when using algorithmic systems, under Article 34 DSA. In setting out HSPs’ transparency obligations, Article 7 of the regulation also requires them to report on their own content moderation measures.

Furthermore, in accordance with Article 5, when a competent authority decides that an HSP is exposed to terrorist content, for example due to having received two or more removal orders in 12 months, such an HSP must enhance its content moderation and implement specific measures to prevent the dissemination of terrorist content. While the HSP retains discretion in choosing the measures, they may involve technical means to identify and expeditiously remove content, including automated tools that must be subject to human oversight. The specific measures must meet the requirements set out in Article 5(3). Besides effectiveness, this includes a targeted and proportionate nature; taking full account of the rights and legitimate interests of the users, in particular users’ fundamental rights concerning freedom of expression and information, respect for private life and protection of personal data; and diligent and non-discriminatory application. Article 5(5) in conjunction with Recital 24 requires HSPs to report their measures to competent authorities for assessment, which should also cover the fundamental rights impact. This mechanism embodies the positive obligation of the Member States to secure the effective exercise of fundamental rights and prevent fundamental rights violations, including by providing oversight of the application of the specific measures by HSPs under the regulation [71].
See, for example, Council of Europe, Appendix to Recommendation CM/Rec(2018)2 of the Committee of Ministers to Member States on the roles and responsibilities of internet intermediaries, 7 March 2018, Recital 6. For relevant jurisprudence, see judgment of the Second Section of the ECtHR of 14 December 2010, Dink v Turkey, Nos 2668/07, 6102/08, 30079/08, 7072/09 and 7124/09, paragraph 137; judgment of the Fifth Section of the ECtHR of 10 April 2019, Khadija Ismayilova v Azerbaijan, Nos 65286/13 and 57270/14, paragraph 158.
HSPs also have the possibility to request a review of the decisions related to being exposed to terrorist content and specific measures (Article 5(7)) and to challenge them in court (Article 9).

Article 10 of the regulation also requires HSPs to set up effective and accessible complaint mechanisms for content providers whose content has been removed due to specific measures and ensure that such complaints are dealt with expeditiously.

This chapter does not analyse in detail companies’ content moderation policies. Instead, it focuses on some of the key fundamental rights issues arising in that context, including the shift towards automation and the limits of human moderation, and the resulting risk of a disproportionate impact on specific groups and content. Against this backdrop, it looks at how the regulation interplays with HSP content moderation and, in particular, how specific measures might further aggravate these challenges. When describing the impact of HSP policies on these issues, the chapter relies strongly on examples related to Meta, as the transparent operation of its Oversight Board offers comparatively more insight into its operations than is available for other HSPs [72].
For more details, see the Oversight Board website.

Findings from the research indicate that the regulation affects how HSPs approach content moderation, going beyond the direct impact of individual removal orders.

Interviewees across professional groups acknowledge that addressing terrorist content has been high on the list of priorities for many large HSPs for some time, as evidenced by voluntary initiatives such as the GIFCT and responses to events such as the Christchurch terror attack, and that most companies are willing to counter terrorist use of their platforms. At the same time, a number of experts from civil society / academia, along with competent authorities, consider that the regulation and the discussions surrounding its adoption have been an important factor in promoting HSPs’ enhanced focus on terrorist content in recent years. While this can be considered a positive development, it also entails an increased use of automated tools to detect terrorist content and a general tendency to prioritise compliance with regulatory frameworks and government requests and to err on the side of over-blocking content, in order to avoid the risk of penalties and, ideally, exposure to removal orders, which carry the risk of having to implement specific measures (see Section 3.2). Experts from civil society / academia, including those with content moderation experience, recall that major HSPs stepped up their moderation efforts at the time when the regulation was proposed, in an apparent hope to pre-empt its adoption.

I believe that the platforms were thinking: ‘If we do this voluntarily, we can avoid the regulation.’

Civil-society/academia expert

Some competent authority experts emphasise that, in their view, shaping how HSPs approach moderation of terrorist content is one of the underlying aims of the legislation.

The regulation has helped in improving the moderation by service providers, […] so that providers learn and know we are putting some pressure on them, that they start protecting themselves.

Competent authority expert

Content moderation by HSPs necessarily differs from the detection of terrorist content conducted by competent authorities. HSP terms and conditions are considerably broader in their scope than the regulation, prohibiting a range of illegal and ‘borderline’ content that may, depending on the HSP, cover hate speech, violent and graphic content, harassment and even some types of disinformation [73].
See Centre on Regulation in Europe and Broughton Micova, S., Systemic Risk in Digital Services: Benchmarks for evaluating management of risk of terrorist content dissemination, 2024, p. 11; Saltman, E. and Hunt, M., ‘Borderline Content: Understanding the gray zone’, GIFCT, 2023.
At least in the case of major HSPs, content moderation by companies operates on a considerably larger scale in terms of the volume of content processed and geographical scope. At the same time, it is based on enforcing terms and conditions that are not subject to the same standards and degree of scrutiny as national or international law. From this perspective, effective content moderation by HSPs is a prerequisite for successfully addressing the proliferation of terrorist content online. At the same time, incentivising HSPs towards stricter content moderation can have a significant impact on fundamental rights, without safeguards equivalent to those which apply to competent authorities in the context of the regulation.

This impact includes in particular freedom of expression and information (Article 11 of the Charter), along with respect for private and family life (Article 7 of the Charter), freedom of thought, conscience and religion (Article 10 of the Charter), freedom of assembly and of association (Article 12 of the Charter), freedom to conduct a business (Article 16 of the Charter), non-discrimination (Article 21 of the Charter) and the right to an effective remedy and a fair trial (Article 47 of the Charter).

Automated tools such as machine learning models and hash matching (see textbox ‘Automated tools for the identification of terrorist content’) allow HSPs, in particular large social media platforms, to detect and remove large amounts of illegal content [74].
On the use of automation by HSPs, see, for example, Organisation for Economic Cooperation and Development (OECD), ‘Transparency reporting on terrorist and violent extremist content online’, OECD Digital Economy Papers, No 367, June 2024, pp. 29–31.
Although HSP experts highlight that potential terrorist content detected through such means is generally not taken down without being first assessed by a human moderator (see Section 3.1.2), the findings confirm a widespread trend towards entrusting more content moderation to automation and reducing the role of human review or oversight. Some interviewees note that the regulation, due to its focus on quick removal of content and the reputational risks associated with being labelled as hosting terrorist content, further incentivises the use of such automation.

Findings indicate serious concerns over the reliability of automated tools, something that FRA has flagged in the context of its work on algorithmic bias [75].
See, in particular, FRA, Bias in Algorithms – Artificial intelligence and discrimination, Publications Office of the European Union, Luxembourg, 2022.
Interviewees underline that while machine learning detection and classification models have improved considerably over recent years, their quality tends to be overestimated. Due to the scale of operation of major HSPs, even a small error rate can have major consequences and affect the freedom of expression and information and freedom of thought, conscience and religion of a large number of people globally.

In my experience, there is a lot more faith in automation than is warranted. [Automated content moderation] is really still a developing field, and yet we already have a regulation that encourages that.

Civil-society/academia expert
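To give a sense of what a ‘small error rate’ means at platform scale, the following sketch uses purely hypothetical figures (daily post volume, share of posts screened, false-positive rate) that are not drawn from this report or from any HSP; it only illustrates how a low relative error still translates into a large absolute number of legitimate posts wrongly flagged.

```python
# Illustrative arithmetic only: all figures are hypothetical assumptions.
daily_posts = 500_000_000        # assumed number of posts uploaded per day
share_screened = 1.0             # assume every post passes through automated screening
false_positive_rate = 0.001      # assume 0.1 % of legitimate posts are wrongly flagged

wrongly_flagged_per_day = daily_posts * share_screened * false_positive_rate
print(f"Legitimate posts wrongly flagged per day: {wrongly_flagged_per_day:,.0f}")
# -> 500,000 posts per day that depend on human review to be corrected
```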

The nature of the tools and the quality of the data used to train them both play an important role. Compiling comprehensive datasets of sufficient quality for training machine learning tools remains a challenge, especially outside major languages. Many interviewees, including those with experience working on content moderation, refer to difficulties with African or Asian languages, for which less training data is available, with languages that have multiple dialects, and with languages written in non-Latin script, all of which the models struggle to handle. In this context, civil-society/academia experts call for more transparency about the use of automated tools, their accuracy and the nature of the datasets.

One problem that we have is that we don’t know the type of content [HSPs] block, we cannot know how much legitimate content is taken down based on their own assessment criteria.

Competent authority expert

Automated tools are less accurate when working with some types of content. Videos, in particular livestreamed ones, have a higher inaccuracy rate than images. Text and speech carry the highest risk of false positives (content wrongly flagged as terrorist content) as machine-learning classification does not take context into account, which makes recognising non-violent terrorist content, such as propaganda, or distinguishing an actual call to violence from sarcasm or historical references, difficult for automated tools. According to experts with content moderation experience, this sometimes results in the removal of news reports or the suspension of social media accounts for discussing current political topics, limiting people’s ability to freely discuss current events and share non-harmful content.

If a detection model is based on the name of a terrorist organisation, for example, any content that references that organisation (e.g. reporting or expressing views on the Taliban takeover in Afghanistan or the activities of Hezbollah in Lebanon) may be flagged by the algorithm to a human moderator as glorifying terrorism. Unless the human moderator recognises this as a false positive case (see Section 3.1.2), the content will be taken down.
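A minimal sketch of such context-blind matching follows; the watchlist, the matching rule and the example posts are hypothetical and are not taken from any HSP’s detection model. It only illustrates why name-based flagging treats reporting and commentary the same way as glorification.

```python
# Hypothetical, context-blind keyword matcher; not an actual HSP detection model.
WATCHLIST = {"taliban", "hezbollah"}   # hypothetical designated-organisation terms

def flag_for_review(post: str) -> bool:
    """Flag any post that mentions a listed name, with no notion of context."""
    text = post.lower()
    return any(name in text for name in WATCHLIST)

# The first two posts are legitimate reporting or commentary, yet both are
# flagged, because the matcher cannot tell discussion apart from glorification.
print(flag_for_review("Explainer: the Taliban takeover of Afghanistan"))    # True
print(flag_for_review("Op-ed on Hezbollah's role in Lebanese politics"))    # True
print(flag_for_review("Holiday photos from the old town"))                  # False
```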

Content identified based on hash matching as known terrorist content or an altered version of it is taken down automatically. Some interviewees express concerns that hash detection cannot distinguish when content is used for legitimate purposes (e.g. academic research or journalism) and that sharing hashes among HSPs (e.g. in the context of the GIFCT Hash Sharing Database) raises issues of transparency and accountability for possible mistakes.

Typically, content goes via humans, but some goes automatically, via tools. If someone is trying to reupload the same content, or very similar content after they altered [the original], it is prevented from being uploaded.

HSP expert
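The sketch below illustrates the general idea behind hash matching described above: an exact (cryptographic) hash catches identical re-uploads, while a perceptual hash compared against a distance threshold also catches slightly altered copies. The hash values and the threshold are hypothetical, and real systems such as the GIFCT Hash Sharing Database are considerably more sophisticated.

```python
# Toy hash-matching sketch; hash values and threshold are hypothetical.
import hashlib

KNOWN_EXACT_HASHES = {hashlib.sha256(b"<known file bytes>").hexdigest()}

def exact_match(file_bytes: bytes) -> bool:
    """Cryptographic hash: catches only byte-identical re-uploads."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_EXACT_HASHES

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two perceptual hashes."""
    return bin(a ^ b).count("1")

KNOWN_PERCEPTUAL_HASHES = {0b1011_0110_1100_0011}   # hypothetical 16-bit example

def near_match(phash: int, threshold: int = 3) -> bool:
    """Perceptual hash: also catches slightly altered copies (crops, re-encodes),
    but says nothing about why the material is shared (journalism, research, etc.)."""
    return any(hamming(phash, known) <= threshold for known in KNOWN_PERCEPTUAL_HASHES)

print(near_match(0b1011_0110_1100_0001))   # True  - one bit differs, treated as the same item
print(near_match(0b0100_1001_0011_1100))   # False - unrelated content
```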

While content detected through machine learning models typically undergoes some form of human review, some interviewees with content moderation experience note that this is not always the case. If the tool, based on certain pre-set phrases or other parameters (for example, the presence of a headless body in an image), assesses the content as sufficiently clearly violating the HSP’s terms and conditions, the content can also be taken down automatically.

Furthermore, even if human review is involved, in some cases, it might only take place ex post. As an example, an interviewee with content moderation experience refers to cases of particular high-risk events (e.g. public protests), where all content assessed by automated tools as potentially problematic is preventively removed pending human review, which might take as long as 24 hours. This can have a particular impact on material expressing polemic or controversial views, the coordination of political activities, etc., affecting not only freedom of expression and information but also freedom of assembly and association.
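One possible way to picture the routing logic described in the last two paragraphs is sketched below. The confidence scores, thresholds and the ‘remove first, review later’ behaviour during high-risk events are assumptions for illustration, not any specific HSP’s pipeline.

```python
# Hypothetical confidence-threshold routing; scores and thresholds are assumed.
from dataclasses import dataclass

AUTO_REMOVE_THRESHOLD = 0.98   # assumed: near-certain violations removed with no human review
REVIEW_THRESHOLD = 0.60        # assumed: anything above this is routed to a human moderator

@dataclass
class Decision:
    action: str        # "auto_remove", "remove_pending_review", "hold_for_review" or "keep"
    needs_human: bool

def route(score: float, high_risk_event: bool = False) -> Decision:
    if score >= AUTO_REMOVE_THRESHOLD:
        return Decision("auto_remove", needs_human=False)
    if score >= REVIEW_THRESHOLD:
        # During high-risk events, content is taken down first and reviewed only
        # ex post, which is where the delay of up to ~24 hours and the risk of
        # over-blocking described above come in.
        action = "remove_pending_review" if high_risk_event else "hold_for_review"
        return Decision(action, needs_human=True)
    return Decision("keep", needs_human=False)

print(route(0.99))                          # auto_remove, no human in the loop
print(route(0.75, high_risk_event=True))    # removed first, human review later
```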

The limits of automation demonstrate that a thorough human review of content detected by automated tools is necessary to avoid over-blocking some content [76].
On the role and importance of ‘human-in-the-loop’ processes, see, for example, Thorley, T. G. and Saltman, E., ‘GIFCT Tech Trials: Combining behavioural signals to surface terrorist and violent extremist content online’, Studies in Conflict and Terrorism, 2023, pp. 1–26.
However, as a number of interviewees point out, the trend towards more automation has gone hand in hand with reduced investment in human moderation teams characterised by lay-offs and outsourcing driven by economic considerations, affecting the effectiveness of human oversight of automated decisions.

Interviewed experts from large HSPs state that their companies have robust human moderation in place, with experts of different specialisations dealing with terrorist content. Smaller companies might rely on a single person covering terrorism among other tasks. Civil-society/academia interviewees observe that the quality and capacity of content moderation teams differ significantly based on company size and their focus on terrorist content, but that even HSPs with sufficient resources might lack appropriate human rights expertise. According to some interviewees, investment in different languages likewise varies, partly because regulatory pressure to moderate content comes mostly from Europe and the United States, aggravating the uneven performance of automated tools across languages.

Importantly, while content flagged by authorities is routed directly to specialist in-house teams, content detected by automated tools – i.e. the overwhelming majority of content flagged as potentially terrorist content – is assessed by frontline moderators. Interviewees point to several factors that significantly reduce the effectiveness of human review as a safeguard to address over-blocking.

While the training of frontline moderators in large HSPs typically also covers terrorism, it is usually limited to recognising obvious signs of terrorist content, without considering cultural and religious specificities and other nuances that help to safeguard free speech. Where detailed internal guidance on how to assess different types of content exists, it might only be available in English and not in the languages moderators work with.

In addition, frontline moderation is increasingly outsourced. Interviewees point out that outsourced moderators work under particular time pressure, with strict quotas not only on how many pieces of content they need to process but also on how many cases they can escalate to in-house content moderation teams in case of doubt. Deciding within the available 10–20 seconds whether content flagged by automated tools is indeed terrorist content might be possible for simple cases, interviewees say, but not for more complex scenarios involving local context or a dialect a moderator is not familiar with. A lack of systematic oversight of frontline moderators also leaves room for biases and subjective decisions, which tend to err on the side of removal. The online response to events such as the Israeli-Palestinian conflict after 7 October 2023 can easily overwhelm moderation systems, one interviewee with content moderation experience says, turning moderation into a numbers game in which moderators cannot keep up with the amount of content and might remove content nearly automatically.

Moderators are treated somewhat like AI. They’re given a target number of things to do in an hour. There are few allowances for a break or to make a mistake […] One can only escalate one case this hour so they will just be going to click this button that says ‘Remove’ and move on, because that’s the safer thing.

Civil-society/academia expert

Pointing also to known testimonies of whistleblowers and studies on the conditions of human moderation in large HSPs [77], civil-society/academia experts state that while the conditions for in-house moderation teams have somewhat improved, outsourced moderators continue to be treated as an extended form of automation and a disposable resource in terms of low wages, poor working conditions, psychological harm due to exposure to often highly traumatising content and lack of support. Besides a very real impact on the fundamental rights of moderators themselves, this may also have consequences for freedom of expression and other rights by affecting the quality of moderation and increasing the likelihood of the over-removal of content. Given these shortcomings, some interviewees question the push for human moderation over automation and emphasise the need to first ensure adequate human resources, better working conditions and guidance for moderators.
See, for example, Barrett, P. M., Who Moderates the Social Media Giants? – A call to end outsourcing, NYU Stern Center for Business and Human Rights, 2020; Miceli, M., Tubaro, P., Casilli, A. A., Le Bonniec, T., Salim Wagner, C. et al., Who Trains the Data for European Artificial Intelligence? – Report of the European Microworkers Communication and Outreach Initiative (EnCOre, 2023–2024), DiPLab, Weizenbaum Institute, and DAIR Institute, 2024, pp. 22–26.

This over-flagging, so false positives, is slightly higher by the automatic machine learning model, but the human moderators also happen to have a quite a lot of work, so it happens with them as well.

Civil-society/academia expert

While the issues with automation and human moderation of suspected terrorist content increase the risk of over-blocking content across the board, findings show that they have an impact on certain types of content and groups in society more than on others – a challenge explored in the context of assessment of content by competent authorities in Chapter 2. When it comes to content moderation by HSPs, findings show a widely shared concern that legitimate content related to particular topics or posted by users (content providers) from a particular region or speaking a particular language is at a disproportionate risk of over-removal. This amounts to a risk of discrimination based on, among other things, ethnic origin, language, religion or belief, or political opinion.

In terms of languages, Arabic, due to its non-Latin script and multiple dialects that might use the same terms differently, is considered a particularly vulnerable language in content moderation. Interviewees point to, among other examples, a human rights review contracted by Meta to assess the impact of its moderation policies and activities during the 2021 events in Israel and Palestine [78] (see textbox ‘Discriminatory impact of HSP content moderation – example of Meta’). Interviewees with content moderation experience highlight that these issues exist across the industry and that other HSPs likewise encounter lower accuracy rates for speakers of certain dialects of Arabic. A lack of attention to the cultural context of particular terms in certain languages can generate large numbers of false positives and lead to over-removal, as shown by the policy advisory opinion of the Oversight Board, which found the company’s approach to moderating the term shaheed (‘martyr’ in Arabic) to be overbroad and to disproportionately restrict freedom of expression and civic discourse.
This designation shall not be construed as recognition of a State of Palestine and is without prejudice to the individual positions of the Member States on this issue.

Similar to competent authorities, HSPs rely on international and national lists of dangerous organisations and individuals when detecting terrorist content online [79].
On this topic, see Tech Against Terrorism, Who Designates Terrorism? – The need for legal clarity to moderate terrorist content online, 2023.
As outlined in Chapter 2, these lists are heavily skewed towards jihadist entities, with far fewer designations of right-wing extremist and other non-jihadist organisations and individuals. Individuals living in, or posting about, the situation in regions where organisations present on these lists play a particular role (e.g. refugees from Afghanistan or Syria living in the EU), and in particular Muslims, face an increased likelihood that their content will be either automatically blocked by automated tools or scrutinised and potentially wrongly assessed by a human moderator and removed.

Whenever content is referring to a certain area, it can be unclear if it’s glorification or if the person lives in the area and is just reporting and making observations of that area like in the Taliban region. [The company] errs on the side of over-enforcement rather than under-enforcement.

Civil-society/academia expert

Researchers and journalists can likewise be affected. In fact, civil-society/academia experts explain that the work of actors documenting human rights abuses has been impacted by HSP takedowns, as this is content that is indeed related to terrorism but has important documentary value and the potential to bring perpetrators of war crimes and genocide to justice [80].
See, for example, the work of the non-governmental organisation Mnemonic, including the Syrian Archive. See also Goodman, J. and Korenyuk, M., ‘AI: War crimes evidence erased by social media platforms’, BBC website, 1 June 2023, concerning the removals of footage capturing human rights abuses during the Russian war of aggression in Ukraine.

Some civil-society/academia interviewees say that the lack of attention paid by HSPs to right-wing extremist content is worrying, both given its radicalising potential and the fact that it is significantly more relevant in some parts of Europe than jihadism [81].
Data by Tech Against Terrorism shows that HSPs also tend to remove a smaller percentage of right-wing terrorist content that is flagged to them by other players in comparison with flagged jihadist content. Tech Against Terrorism, ‘Mapping far-right terrorist propaganda online’, Terrorist Content Analytics Platform, May 2024, p. 24.
In this context, some interviewees highlight the EU–US divide in approaching free speech and the growing political acceptance of far-right views, both as an explanation for the under-moderation of right-wing extremism online and as a challenge for the near future.

The majority of experts from civil society / academia also highlight that the over-moderation of online content by HSPs can contribute to a chilling effect on rights, in particular freedom of expression and freedom of assembly and association, as people from communities that feel over-moderated withdraw from public debate and restrict their involvement in solidarity movements and activism (see also Section 2.1.2). While such an effect is hard to measure, some civil-society/academia experts refer to testimonies by migrant communities in Member States stating that they refrain from posting certain content, resort to measures to bypass moderation (e.g. by slightly altering texts, using different expressions and symbols) or restrict their involvement in solidarity movements and activism, due to a fear of being perceived as supporting terrorism [82].
See, for example, European Network Against Racism and Choudhury, T., Suspicion, Discrimination and Surveillance: The impact of counter-terrorism law and policy on racialised groups at risk of racism in Europe, 2021, pp. 53–56.

People are absolutely self-censoring, including in the context of what is currently happening in Gaza and Lebanon where it is difficult to discuss certain topics without mentioning, e.g. Hezbollah. Particularly in these emergency situations, many people are scared about losing access to their accounts and are really limiting what they say. In this way, we are already seeing a chilling effect among specific communities.

Civil-society/academia expert

Findings show that transparency reporting by HSPs under Article 7 of the regulation, including statistics on the removal of content under their own terms and conditions, complaints and reinstated content, does not provide sufficient information to measure and address the risk of HSP over-blocking and to enhance accountability.

In transparency reports, intentionally, the numbers are presented in such a way that you have numbers, but it is really hard to say [what they really mean]. There is obviously granularity, but they hide behind the global average.

Civil-society/academia expert

According to civil-society/academia experts, data in these transparency reports lacks granularity and comparability across the industry. In many cases, the data does not clearly show what content has been taken down on terrorism grounds (rather than on the basis of broader categories like ‘public security’ or ‘violent content’), how many takedowns resulted from HSPs’ own detection and how many from content flagged by referrals. Nor is the data disaggregated by criteria such as region or language. Furthermore, not all HSPs issue these reports, and those that do issue them do not necessarily meet the requirement of making them public, for example making them accessible only to their own users instead.

We are very closely monitoring those metrics. If we see a spike in enforcements or a big spike in the number of appeals coming in, we can deep dive into that to understand why this is happening, like if we might be potentially over-enforcing or perhaps just seeing a change in [user] behaviour.

HSP expert
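As an illustration of the kind of disaggregation civil-society/academia experts call for, the sketch below computes removal and appeal-overturn rates per language instead of a single global average. The records, field names and figures are entirely hypothetical and are not drawn from any transparency report.

```python
# Hypothetical records; illustrates per-language disaggregation, not real data.
from collections import defaultdict

removals = [
    # (language, appealed, reinstated_after_appeal)
    ("en", False, False),
    ("ar", True,  True),
    ("ar", True,  True),
    ("ar", False, False),
    ("ps", True,  False),
]

stats = defaultdict(lambda: {"removed": 0, "appealed": 0, "reinstated": 0})
for lang, appealed, reinstated in removals:
    stats[lang]["removed"] += 1
    stats[lang]["appealed"] += appealed
    stats[lang]["reinstated"] += reinstated

for lang, s in sorted(stats.items()):
    overturn_rate = s["reinstated"] / s["removed"]
    print(f"{lang}: removals={s['removed']}, overturned on appeal={overturn_rate:.0%}")
# A persistently higher overturn rate for one language group is a signal of
# over-removal that a single global average would hide.
```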

Having looked at the main fundamental rights issues arising in the context of existing HSP content moderation policies, this section outlines how the obligation to address exposure to terrorist content by enhancing moderation efforts might further exacerbate some of these challenges.

In general, due to the limited use of Article 5 so far (see textbox ‘Limited practical experience with the use of specific measures’), most competent authorities and HSPs currently have no practical experience with specific measures. As a result, during the research, some interviewees were only able to share insights based on the rules and procedures they have set up for this purpose so far, or on existing frameworks that they envisaged to apply in such cases.

The draft regulation foresaw that all HSPs, regardless of their degree of exposure to terrorist content, could adopt such measures proactively, on their own initiative, while those that had received a removal order would be obliged to adopt them and report on them to the competent authorities. Concerns that this would be disproportionate and might amount to imposing a general monitoring obligation on HSPs (see Section 3.2.2) resulted in the adopted wording, which requires the implementation of specific measures only from those HSPs designated as exposed to terrorist content.

The provision relating to specific measures impacts, in particular, the rights to respect for private and family life (Article 7 of the Charter), protection of personal data (Article 8 of the Charter), freedom of expression and information (Article 11 of the Charter), freedom to conduct a business (Article 16 of the Charter) and non-discrimination (Article 21 of the Charter).

The regulation limits the obligation to implement specific measures to HSPs designated as exposed to terrorist content. However, according to a range of interviewees, Article 5(4) only provides loose criteria for when to designate HSPs and trigger this requirement. Experts from competent authorities responsible for dealing with specific measures generally indicate that the provisions would serve as guidance rather than a set of strict criteria. Some say they would primarily base their decision on the receipt of two or more removal orders within the past 12 months, the example provided by the regulation. Others emphasise that they would also consider other factors to assess each case individually and to determine whether a systemic issue exists. The practice in designating HSPs, albeit limited so far, confirms that the approaches of competent authorities diverge (see textbox ‘Limited practical experience with the use of specific measures’).

Some experts from competent authorities indicate that a certain degree of regulatory flexibility is beneficial, allowing for more proportionality and an individual approach. The majority of interviewees from competent authorities entrusted with this task nevertheless highlight that clearer guidance would be necessary on how to determine when an HSP can be considered exposed to terrorist content. Some argue that the concept and criteria are too vague, leading to uncertainty for both the HSPs and the authorities themselves, reducing foreseeability and resulting in a lack of transparency in authorities’ assessments. As for deciding that an HSP is no longer exposed to terrorist content, the regulation provides even less guidance, stating in Article 5(7) that a reasoned decision should be taken by the competent authority upon the HSP’s request based on objective factors.

Some experts question the logic of defining exposure based on just two removal orders, a low standard for larger HSPs whose scale of operations makes it very likely that some terrorist content appears on their platforms, despite having dedicated staff and resources handling content moderation. This threshold, one competent authority expert argues, might be more relevant for smaller HSPs that struggle to protect their platforms from terrorist content. Another expert from a competent authority recalls that, in accordance with the logic of the regulation, an HSP receiving just two removal orders can be treated in the same manner as one receiving hundreds, and questions the proportionality of this approach, given the impact such measures can have on the companies and their users.

I think it is normal for a HSP to host terrorist content two times or more […] One has to be careful with these measures, because they can be very hard and have a big effect.

Competent authority expert

To further elaborate specific criteria for determining whether an HSP is exposed to terrorist content, some experts from competent authorities emphasise the need to collaborate with other authorities and learn from their approaches and emerging good practices, both with relevant authorities nationally, such as regulators in related fields, and with their counterparts in other Member States. Due to the limited experience with specific measures across the EU, comparing experience and exchanging best practices nevertheless remains difficult. In this regard, some experts mention the experience and discussions in the related context of DSA implementation as useful.

In accordance with Article 5(6), the choice of which specific measures to implement when designated as exposed to terrorist content is left to the HSP. The research confirms that experts from competent authorities are aware that they cannot require HSPs to implement particular measures. Some highlight the advantages of this approach, as it grants HSPs discretion in determining the responses and tools most suitable to their own unique context. Others state they could support the HSPs with ideas and recommendations, for example based on what tools and approaches work well for other companies.

As to what measures HSPs can choose to implement, the majority of experts from civil society / academia point to the overall lack of clarity in the regulation’s broad list of potential measures (including ‘any other measure that [the HSP] considers to be appropriate to address the availability of terrorist content on its services’). As a result, the interpretation of this obligation by HSPs is likely to vary from case to case, and thus also to affect the rights of users differently. Some experts from competent authorities, on the other hand, highlight that the differences in capacity between HSPs of different sizes, along with variations in hosting models, make a flexible approach to ordering and implementing specific measures essential.

When it comes to the potential fundamental rights impact of specific measures, findings show several areas of concern.

One relates to a risk of incentivising HSPs to implement changes to their policies that are likely to result in excessive takedowns of legitimate content. As described in Section 3.1, content moderation policies of many HSPs already run the risk of over-blocking and having a disproportionate impact on certain groups. In its formal comments on the proposed regulation, the European Data Protection Supervisor (EDPS) underlined the importance of ensuring that specific measures comply with the principle of necessity, are proportionate to the level of the HSP’s exposure to terrorist content and are accompanied by appropriate accountability tools [83].
EDPS, ‘Formal comments of the EDPS on the proposal for a regulation of the European Parliament and of the Council on preventing the dissemination of terrorist content online’, 12 February 2019, pp. 6–7.

In order to comply with authorities’ expectations and avoid potential penalties, HSPs’ terms and conditions may slide further towards over-compliance, over-removal and the unnecessary censorship of content, experts from civil society / academia warn. Some experts from competent authorities also advise caution when it comes to ordering HSPs to implement specific measures, highlighting that they can easily become overly stringent and have significant consequences for the rights of users. Other competent authority experts, on the other hand, argue that HSPs implementing specific measures would seek to avoid over-removal due to their business model [84].
On the role of private players in enforcing law online and the impact on fundamental rights, see, for example, Bellanova, R. and De Goede, M., ‘Co-producing Security: Platform content moderation and European security integration’, Journal of Common Market Studies, Vol. 60, 28 December 2021; Tosza, S., ‘Internet service providers as law enforcers and adjudicators. A public role of private actors’, Computer Law & Security Review, Vol. 43, November 2021.

There’s no legal risk for platforms if they over-censor, but there is one if they under-censor.

Civil-society/academia expert

Furthermore, the regulation allows the use of automated tools as part of specific measures. As described in Section 3.1.1, these tools are increasingly deployed by HSPs to prevent the reappearance of prohibited content and to speed up the detection process. Even where HSPs are not explicitly instructed to use these tools, the requirement to combat the presence of terrorist content on their platforms more effectively, combined with the vague and open-ended list of possible specific measures provided by the regulation, is likely to create strong pressure to employ them despite their known limitations and the likelihood of producing large numbers of false positives, experts from across professional groups say. For HSPs that have no experience with employing such tools or that lack strong fundamental rights expertise, it might be particularly difficult to avoid a disproportionate effect on rights.

Some experts from competent authorities consider automated tools a necessity for companies that deal with large amounts of content. They argue that their use should be acceptable as long as appropriate safeguards are in place, including the human oversight required in such cases by Article 5(3) of the regulation. Others consider them insufficiently transparent or draw attention to their limitations in accurately detecting and assessing potential terrorist content.

We do not know of any tools so good that they do not require human analysis. Keywords help, hashes help, but mainly it is analyst work.

Competent authority expert

Another concern relates to whether the obligations imposed on HSPs by virtue of Article 5 might effectively require them to resort to general content monitoring, particularly through automated tools. Amounting to indiscriminate surveillance, such general monitoring would impact not only freedom of expression and information but, notably, also the rights to privacy and protection of personal data.

The risk of incentivising HSPs to adopt measures de facto constituting indiscriminate surveillance of all content was among the chief concerns highlighted during the negotiations of the regulation [85]. While states can require providers of online services to address the dissemination of specific illegal content [86], they are prohibited under Article 8 of the DSA and Article 15 of the e-Commerce Directive from imposing a general monitoring obligation, and the same principle is reflected in Article 5(8) of the regulation [87]. In accordance with international standards, states should also avoid any action that may indirectly lead to such general content monitoring [88]. However, some interviewees observe that the regulation effectively encourages HSPs to implement such general monitoring by leaving the choice of measures up to HSPs, permitting them to use automation and, at the same time, expecting them to arrive at outcomes that can be difficult to achieve without implementing general monitoring of content [89].

[85] See, for example, UN, Mandates of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression; the Special Rapporteur on the right to privacy and the Special Rapporteur on the promotion and protection of human rights and fundamental freedoms while countering terrorism, OL OTH 71/2018, 2018, pp. 9–10; FRA, Proposal for a regulation on preventing the dissemination of terrorist content online and its fundamental rights implications – Opinion of the European Union Agency for Fundamental Rights, Publications Office of the European Union, Luxembourg, 2019, pp. 38–42.
[86] For a distinction between general and specific monitoring, see, for example, Senftleben, M. and Angelopoulos, C., The odyssey of the prohibition on general monitoring obligations on the way to the Digital Services Act: Between Article 15 of the e-Commerce Directive and Article 17 of the Directive on Copyright in the Digital Single Market, October 2020.
[87] Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the internal market (‘Directive on electronic commerce’), OJ L 178, 17.7.2000, p. 1, ELI: http://data.europa.eu/eli/dir/2000/31/oj.
[88] Council of Europe, Appendix to Recommendation CM/Rec(2018)2 of the Committee of Ministers to Member States on the roles and responsibilities of internet intermediaries, 7 March 2018, paragraph 1.3.5.
[89] See also Meijers Committee, ‘CM1904 Comments on the proposal for a Regulation on preventing the dissemination of terrorist content online (COM(2018) 640 final)’, 2019, p. 5.

Monitoring of specific measures by competent authorities should help ensure that such measures do not infringe on fundamental rights. However, replies by experts from competent authorities with this responsibility indicate a lack of clarity and a divergence of views on this topic. Some of these experts stress that fundamental rights would be integral to the assessment criteria and that they would require HSPs to take adequate measures to protect these rights. Some other competent authority experts, on the other hand, state that the national legal frameworks governing their activities do not task them with monitoring fundamental rights impacts. Others doubt whether they are equipped with sufficient training and knowledge when it comes to assessing issues such as discrimination.

I do not know if [fundamental rights] would be our focus. Not sure we have sufficient training or knowledge, unless it is evident. But any violation of fundamental rights could be taken by users to the judicial authority. […] We are not experts in that area; we implement the regulation.

Competent authority expert

Article 10 obliges HSPs to establish effective and accessible complaint mechanisms for content providers whose content was removed because of specific measures. HSP experts confirm that existing complaint mechanisms that allow users to challenge takedowns based on HSP terms and conditions would be used for this purpose.

Complaint mechanisms can provide content providers with access to low-threshold non-judicial remedies and contribute to greater transparency and accountability of HSPs towards content providers. This could help address some of the challenges arising in the context of HSP moderation policies. Findings from this research nevertheless reveal gaps in the application of these complaint mechanisms, which limit their effectiveness and accessibility in practice.

First, content providers can only exercise their right to complain to HSPs if they are meaningfully informed about the takedown of their content, something that is not necessarily guaranteed and largely depends on HSP policies (see Chapter 4).

Access to an appeal does not mean anything if it is not a meaningful appeal.

Civil-society/academia expert

Second, the mechanism for processing complaints may limit the effectiveness of the remedy. Some experts from civil society / academia note that, in some HSPs, complaints are assessed by the same person who decided on the takedown. Others refer to the use of automation by HSPs to review complaints, where the same tool, trained on the same dataset, that determined the content to be illegal is also used to determine the outcome of the complaint procedure. If this is done without human intervention, there is no meaningful remedy.

The regulation does not stipulate a specific deadline for handling complaints, and findings show that appeals are not necessarily prioritised by content moderation teams [90].
When commenting on the proposed regulation, the EDPS recommended introducing a deadline within which HSPs would be obliged to decide on a complaint. EDPS, ‘Formal comments of the EDPS on the proposal for a regulation of the European Parliament and of the Council on preventing the dissemination of terrorist content online’, 12 February 2019, p. 11.
In some HSPs, an appeal against a removal of content is only queued for a limited period of time (e.g. 48 hours) and, if it is not handled by then, the case is automatically closed without restoring the content, some interviewees say. In this context, a civil-society/academia expert with content moderation experience also highlights that violations of terms and conditions that are considered particularly severe, such as those related to terrorism, are subject to more stringent measures such as immediately blocking the entire account (rather than issuing a warning or temporary suspension, as would otherwise be the case). Loss of access to an account may make it very difficult in practice for the user to actually submit a complaint to the platform.

As the same mechanisms would be used when appealing against takedowns based on specific measures and those initiated by a referral (see Section 2.2), these gaps are relevant both in the context of an HSP’s own content moderation and in response to takedowns initiated by competent authorities.