
TACKLING EXTREME RIGHT-WING TERRORISM ONLINE


261. It is clear that the plethora of online material, and the international nature of ERWT, pose significant challenges when it comes to disrupting the process of radicalisation and identifying potential attack planning. Extreme Right-Wing Terrorists tend to be tech-savvy and well aware of the security services' interest in their activities—arguably, even more so than their Islamist extremist counterparts.[1] Their conspiracy-theorist, anti-government outlook tends to reinforce the idea that their internet use is being monitored, and they are often aware of what technical security measures they need to employ to avoid detection. This can include the use of encrypted platforms, Virtual Private Networks (which obscure the identity and location of the user) and 'dark net' sites.

262. The Head of Counter Terrorism Policing (CTP) was clear that "end to end encryption is a disaster",[2] and that it is having a detrimental effect on their ability to detect harmful material online. With more services offering end-to-end encryption of messaging, MI5 has called on communication service providers (CSPs) to allow intelligence agencies to have "exceptional access" to encrypted messaging.[3] However, it is also important that the CSPs—the companies that host these platforms—take the necessary steps to ensure this material cannot be viewed and shared in the first place. This is proving to be something of a 'work in progress'.

263. The Committee first identified the problem of the CSPs failing to remove Extremist material from their platforms in its 2014 Report on the Intelligence Relating to the Murder of Fusilier Lee Rigby. During the course of that Inquiry, the Committee were told by Google, Facebook and Apple, among others, that they did not routinely monitor the content on their systems and therefore were unable automatically to block Extremist material. They attributed their failure to review suspicious content to the volume of material on their systems. Instead, they told the Committee, they were largely reliant on user-generated reports—from private citizens, organisations and law enforcement authorities—which would then trigger them to remove illegal or offensive content.

264. The Committee expressed its concern at this lack of accountability:

It is clear from the responses we received that the CSPs take different approaches to monitoring their networks. However, for the most part, action is only triggered when they are notified of offensive content (or content which breaches their guidelines) by others. In the case of communications between terrorists, user reporting is unlikely to happen, and therefore such content is unlikely to be discovered. This approach to reviewing content does not therefore help the intelligence and security Agencies to discover terrorist networks or plots.[4]

While the Government was broadly supportive in its response to the Committee's recommendations in the Report—stating that "we are also pushing CSPs to take stronger, faster and further action to combat the use of their services by terrorists, criminals and their supporters"[5]—it failed to propose a substantive way forward.

265. By the time of the Committee's Inquiry into the 2017 terror attacks,[6] the Director General of MI5 was able to confirm to the Committee that some progress had been made in the intervening years, in that at least the CSPs were acknowledging they had a role to play: "Companies are no longer overtly denying all responsibility for material they carry. They were doing that five years ago."[7] However, the Committee once again found itself in familiar territory when it came to examining the issue of whether CSPs were taking active steps to ensure that law enforcement agencies were notified of any material that may have a national security element. It transpired that although the major CSPs were now developing algorithms that would detect harmful content automatically, as the Head of CTP told the Committee, the utility of this was somewhat negated by the fact that it prevented any onward reporting to law enforcement:

the automation also means that it is kind of like a dump into your trash bin where it doesn't go through any kind of human eye and, if it doesn't go through any kind of human eye they cannot spot the fact that that might be something the police or Security Service might be interested in.[8]

266. Homeland Security Group advised that, while the CSPs have the technical capacity to engage in this area, ERWT material represents a "much more sophisticated set of propaganda than we have experienced in the past" and is sometimes difficult for the CSPs to detect. The Counter Terrorism Internet Referral Unit (CTIRU) had a critical role to play in ensuring that the CSPs were proactively looking for this material on their platforms. The Head of CTP provided detail on the number of referrals the CTIRU were now making that were linked to ERWT, and how they were still developing a knowledge base which would help to inform decision-making with regard to whether online material breached Terrorism Act 2000 (TACT) thresholds:

I can give you some sense of perspective from the CTIRU point of view for 2020. Only six per cent of our total referrals in that year were in the Right Wing Extremism space. That amounts to 192 referrals with Right Wing material and, of the total referred, 58 of those breached TACT . . . we are learning what this stuff means all of the time and we are trying to develop a library that we have spent three decades building up in the Islamist space and we are trying to develop that at pace now.

By doing that we can go to social media sites, CSPs, who do want to cooperate with us, do want stuff taken down, or do want to assess whether it breaches their terms and conditions and say this is the material that we think is either illegal, in which case every respectable social media outfit I know wants it taken down, or it is a matter for you, but we are saying this stuff is inciteful and potentially egregious.[9]
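
This arithmetic implies that the CTIRU made roughly 3,200 referrals in total, across all ideologies, in 2020 (the 192 ERWT-linked referrals being six per cent of the total).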

The Global Internet Forum to Counter Terrorism

267. The Global Internet Forum to Counter Terrorism (GIFCT) was founded in 2017 by Facebook, Microsoft, Twitter and YouTube with the aim of preventing terrorists and violent extremists from exploiting digital platforms by fostering technical collaboration among member companies and sharing knowledge with smaller platforms.[10] It cites three strategic pillars as central to its mission: Prevent, Respond and Learn.

268. GIFCT points to three key initiatives launched by the forum:

  • The GIFCT hash database was launched in 2017 by the founding member companies, and comprises a shared industry database of 'hashes'—unique digital 'fingerprints'—of known violent terrorist or violent extremist content associated with organisations listed on the UN Terrorist Sanctions list (a minimal sketch of how such hash matching works follows this list).

  • URL sharing addresses cases in which a post on one platform links to terrorist content hosted on another platform. GIFCT began a programme in January 2019 to implement its own link-sharing system: when a GIFCT company receives an indicator that a link leads to terrorist-related content, it now has a safe mechanism to share the URL with the industry partner that hosts the content. This one-to-one sharing allows the notified platform to review the link and decide whether the content violates its terms of service.
  • The Content Incident Protocol (CIP) is a process by which GIFCT member companies become aware of, assess and act on potential content circulating online as a result of a real-world terrorist or extremist event, as well as the potential distribution of that content. In addition, all hashes of any video footage produced by the attacker(s) are shared in the GIFCT database.
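
To illustrate the principle behind the hash database, the minimal Python sketch below shows how a platform can fingerprint an upload and check it against hashes contributed by industry partners, without the platforms ever exchanging the underlying content itself. The database contents and the use of a plain SHA-256 digest are illustrative assumptions only; production systems rely on perceptual hashes, which continue to match content after re-encoding or minor edits.

    import hashlib

    # Hypothetical shared industry database of hex digests of known terrorist
    # or violent extremist content. Real GIFCT-style databases hold perceptual
    # hashes; a plain SHA-256 is used here only to keep the sketch self-contained.
    SHARED_HASH_DB = {
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    }

    def fingerprint(data: bytes) -> str:
        """Return the hex digest used as the content's digital 'fingerprint'."""
        return hashlib.sha256(data).hexdigest()

    def is_known_content(uploaded: bytes) -> bool:
        """True if an upload matches a hash already shared by another platform."""
        return fingerprint(uploaded) in SHARED_HASH_DB

    # A matching upload can be blocked, or queued for human review, before it
    # is ever published on the receiving platform.
    if is_known_content(b"test"):
        print("match against shared database - block or queue for review")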

269. The UK plays an active role in the GIFCT,[11] and Homeland Security Group acts as the government representative (noting that it is important to be able to counter the ERWT online challenge through "an international prism").[12] They observed that "we were really in at the birth of it [GIFCT] so we were part of the kind of policy driver that sought to encourage both other countries and the major players to take part in it".[13]

270. Homeland Security Group advised that, in the event of a UK terrorist attack, they will initiate a crisis response in the online space, collaborating with individual CSPs, the GIFCT and UK law enforcement, to ensure swift removal of any associated terrorist content, including attacker manifestos and live-streamed attack videos. Homeland Security Group pointed to the live-streaming of the Christchurch attack in 2019 as the catalyst for this change in approach by these companies, telling the Committee:

It has been unquestionably the most celebrated attack in the Right-Wing Extremist circles. The companies immediately saw, to be frank, that this was extremely bad for them commercially and it was extremely bad more broadly. So we have worked with them for a new protocol which means that if they detect or we detect . . . but more likely they will [see] material online which looks like live streaming of a terrorist attack, they will immediately take it down and we have seen that work now in practice. It needs to get more sophisticated because, again, technology will stay ahead of us in some areas but it is an important step forward and one I would have thought five years ago, without GIFCT, we would have really struggled to achieve.[14]

271. Homeland Security Group pointed to the way that it, together with the CTIRU, was able to leverage the relationships it had built over the years with some of these CSPs to ensure the Christchurch material was taken down. It was also supporting the CSPs to develop their capabilities:

For instance, following the March 2019 Christchurch attack in New Zealand, during which the livestreamed attack video was disseminated across the internet at an unprecedented rate, OSCT [Office for Security and Counter-Terrorism] utilised established CSP relationships to ensure CTIRU referrals of online terrorist content were prioritised and swiftly reviewed for removal.[15]

272. Whilst the GIFCT would certainly appear to have been a step in the right direction, there is still a lot more that the CSPs can and should be doing to tackle the online threat, as Homeland Security Group acknowledged:[16]

Homeland Security Group continues to work closely with the GIFCT and its individual members to press for a more robust, industry-wide approach to tackling terrorism online. The Christchurch attack demonstrated that progress made since the GIFCT's establishment did not effectively translate into an expeditious or coordinated cross-industry response in the event of a terrorist attack.

The UK worked with international partners to press the GIFCT to establish itself as a formal NGO entity with a clear organisational structure and future work programme. One such step is the crisis response protocol, which has now been established and was successfully triggered following the Halle attack in Germany. The UK now sits on the GIFCT's new Independent Advisory Board, the formal mechanism for Governments and Civil Society organisations to hold the GIFCT to account for tackling online terrorist content.

The UK is also pressing for the GIFCT to ensure its crisis protocol is further improved by including non-GIFCT members and comprehensively tackling the viral dissemination of all terrorist content, as well as livestreamed video content.

Wider work with communication service providers

273. More broadly, HMG continues in its efforts to drive forward reform in what Homeland Security Group acknowledged is "still an ungoverned space".[17] HMG has consistently pressed CSPs to develop and utilise automated technology to proactively detect and remove terrorist content. The Head of CTP observed that the major CSPs were now looking to them for technical assistance, and told the Committee that "we were approached by Facebook to help them develop their algorithm to be able to take this down".[18] In addition, CTP advised that the Metropolitan Police are using police training footage to aid Facebook in developing tools to better detect live-streamed terrorist attacks.[19]

274. Homeland Security Group has led work with CSPs on candidate security, focusing on tackling the online abuse received by candidates in the run-up to the 12 December 2019 election, some of which was Extreme Right-Wing in nature. They maintain that CSPs have improved their processes for referring and removing online abuse content where it is illegal or breaches the CSPs' Terms of Service. In this context, Homeland Security Group have worked with CSPs to ensure that online threats to life, or content inciting violence, are clearly reported to the police and that platform trends are shared with HMG, in order to ensure that CSPs provide information for intelligence and evidence purposes (and that content is left up when needed for ongoing investigations).

275. The Home Office is currently looking to develop a technological solution to reduce the number of shares terrorist attack videos receive after they have been livestreamed. The solution would seek to improve current hash detection techniques used by industry, allowing CSPs to proactively identify more manipulated videos.
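
The reasoning behind improving hash detection for manipulated videos can be sketched briefly: a cryptographic hash changes completely if a single frame is altered, whereas a perceptual hash of a cropped or re-encoded copy typically differs from the original in only a few bits, so matching within a small Hamming distance still catches the re-upload. The hash values and threshold in this Python sketch are illustrative assumptions, not any specific industry algorithm.

    # Compare 64-bit perceptual hashes by Hamming distance: near-identical
    # videos yield hashes that differ in only a few bits.

    def hamming_distance(a: int, b: int) -> int:
        """Number of differing bits between two 64-bit perceptual hashes."""
        return bin(a ^ b).count("1")

    def matches(candidate: int, known: int, threshold: int = 10) -> bool:
        """Treat hashes within `threshold` bits of each other as the same content."""
        return hamming_distance(candidate, known) <= threshold

    known_attack_video = 0xF0E1D2C3B4A59687  # hash shared after an attack
    cropped_reupload = 0xF0E1D2C3B4A59787    # slightly manipulated copy

    print(matches(cropped_reupload, known_attack_video))  # True: caught despite the edits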

Why ERWT online is a new challenge

276. Whilst the approach to the assessment of the ERWT threat is notably 'threat agnostic', the reality is that tackling ERWT online increasingly requires a different approach from that which the Government has traditionally taken to Salafi-Jihadist online propaganda:

  • In contrast to Daesh and Al-Qaeda, right-wing communities in the online sphere are increasingly fluid, with few formal organisations and structures. There are fewer ERWT proscribed groups and less group-aligned propaganda than in the Salafi-Jihadist space; and
  • The style of content and the way in which it manifests (for example, within coded anonymised messaging) mean that detecting, moderating and removing ERWT online content can be particularly difficult. This makes it harder for the Intelligence Community and law enforcement to help CSPs develop automated tools to effectively tackle ERWT content on their platforms (a minimal illustration of the difficulty follows this list).
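
As a minimal illustration (the phrases and filtering logic below are placeholders, not real ERWT terminology or any CSP's actual moderation system), a literal blocklist filter of the kind that works reasonably well against group-branded propaganda fails as soon as the wording is coded or obfuscated:

    # A naive blocklist filter: matches only literal occurrences of known phrases.
    BLOCKLIST = {"banned phrase"}

    def naive_filter(post: str) -> bool:
        """True if the post literally contains a blocklisted phrase."""
        return any(term in post.lower() for term in BLOCKLIST)

    print(naive_filter("this post repeats the banned phrase verbatim"))  # True
    print(naive_filter("this post uses the b4nned phr4se instead"))      # False: the coded variant slips through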

277. Online ERWT content sits amidst a plethora of content that falls below illegal terrorist thresholds. In addition, a significant proportion of ERWT online content manifests on platforms that make moderation particularly challenging (for example, by ensuring the anonymity of users) and/or are unwilling to collaborate with UK law enforcement requests to remove terrorist content. For example:

  • Some platforms hosting ERWT content, such as Gab, 4chan and 8kun, are designed as so-called 'free speech' platforms; they are U.S.-based, regard themselves as bound only by U.S. law, and claim protection under the First Amendment. This means that UK engagement with such platforms is particularly challenging.
  • The trajectory for the online space is one in which an increasing number of platforms evolve or emerge on principles of free speech and privacy—indeed, privacy is increasingly prioritised above security in the design of platforms.

278. With the emergence of many 'free speech' unmoderated platforms specifically aimed at the Extreme Right-Wing, the Government will also need to consider the levers that can be used to influence sites such as 8kun and BitChute. The Head of CTP explained that:

other than with the top six major [CSPs] . . . the stuff that the CTIRU takes down is a voluntary process. We [CTP] can only work in taking down extremist material because the companies actually co-operate with us. There are many other providers . . . BitChute is an example . . . that want nothing to do with law enforcement, will not co-operate and do not volunteer.[20]

279. The importance of finding a solution to extremist content on free-speech platforms was underlined to the Committee by Nick Lowles:

I think the major question here, and the other major question remaining is around bringing smaller platforms around the table, holding them to account, because if they can't be held to account and brought around the table, then we're just going to be playing whack-a-mole continually.[21]

The Government 'Online Harms' legislation

280. Homeland Security Group advised the Committee that HMG has an active dialogue with the CSPs in terms of alerting them to terrorist exploitation of their platform(s). The CTIRU—a Metropolitan Police unit set up in 2010 to actively identify and assess online content, which is then referred to the CSPs for removal if it breaches UK terrorism legislation[22] and platform terms of service—has succeeded in getting platforms to remove 310,000 pieces of terrorist online material since its inception.[23]

281. This does, however, appear to be a rather modest achievement when contrasted with action taken by Facebook just over a year later—in the period April-June 2020, Facebook reported that it had removed 8.7 million pieces of terrorist content, and that over 99% of this content had been found and flagged by Facebook before the content had been reported by users.[24]

282. It has been clear for some time that there needs to be a robust legislative framework in place to ensure CSPs are properly regulated when it comes to tackling online terrorist content and activity, as well as other areas such as child sexual exploitation and abuse (CSEA).

283. In April 2019, the Government launched a public consultation on the Online Harms White Paper, which set out to "protect users online through the introduction of a new duty of care on companies and an independent regulator responsible for overseeing this framework".[25] It announced that the Home Office would be jointly leading on this Online Harms work with the Department for Digital, Culture, Media and Sport (DCMS). The consultation ran from 8 April 2019 to 1 July 2019, and included the prospect of new legislation:

The overarching principle of the regulation of online harms is to protect users' rights online . . . safeguards for freedom of expression have been built in throughout the framework. Reflecting the threat to national security, companies will be required to take particularly robust action to tackle terrorist content.

284. On 12 February 2020, the Government published its initial response to the consultation, noting that it was minded to make Ofcom the new Online Harms regulator, on the basis that it had existing expertise in the field, already had relationships with many of the major players in the online arena, and had received the endorsement of some of those organisations that had responded to the consultation.

285. In June 2020, Homeland Security Group provided an update to the Committee on their role in taking the Online Harms legislation forward:

We have been working with DCMS (with whom we jointly lead on this legislation) to draft the full Government response to the White Paper. This will be published in the autumn, alongside interim codes of practice for terrorist and CSEA content and activity. These have been developed in conjunction with law enforcement and UKIC [UK Intelligence Community]. We are preparing to instruct Parliamentary Counsel to draft legislation to deliver this regulatory framework, which we hope to introduce to Parliament next year.[26]

Homeland Security Group advised that their role in developing this legislation has focused in particular on the policy around Preventing Terrorist Use of the Internet (PTUI). They have also begun engagement with Ofcom (specifically on building its capability and on policy development).

286. On 15 December 2020, the Government duly published its full response to the consultation.[27] It stated that:

  • A new regulatory framework would be established with the introduction in 2021 of an Online Safety Bill. This would set out a general definition of harmful content and activity, with secondary legislation to cover priority categories of illegal offences—these would include terrorism. The legislation would also put duties on some platforms in relation to legal but harmful material (such as hate content).
  • This new legislation would apply to companies that host user-generated content in the UK, or facilitate public or private online interaction between service users, one or more of whom is based in the UK (it would not apply to internet service providers or the dark web, as the latter falls under the direct responsibility of law enforcement).
  • It would include measures regarding senior management liability, with the Government reserving the right to introduce criminal sanctions for senior managers if they fail to comply (with the caveat, however, that this would not be introduced until at least two years after the regulatory framework comes into effect).

287. The full Government response also announced plans for sweeping new powers for Ofcom as the regulator, with Ofcom able to tackle non-compliance by any company anywhere in the world if it provides services to UK users. It would be able to impose:

  • a fine of up to £18m, or 10% of global turnover, whichever is the higher (a worked sketch of this calculation follows this list);
  • a Business Disruption measure, Level one: this would impose measures that make it less commercially viable to provide services to UK users; and
  • a Business Disruption measure, Level two: Ofcom could obtain a court order to block a non-compliant company's services from being accessible in the UK.
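
As a worked sketch of the headline sanction (the turnover figures below are hypothetical), the fine is the higher of a flat £18m or 10% of global turnover, so the £18m floor applies to any company with global turnover below £180m and the percentage figure dominates above it:

    def max_fine(global_turnover_gbp: float) -> float:
        """Maximum fine: the higher of a flat £18m or 10% of global turnover."""
        return max(18_000_000, 0.10 * global_turnover_gbp)

    # Hypothetical turnovers: below £180m the flat £18m applies; above it,
    # the 10%-of-turnover figure dominates.
    print(f"£{max_fine(50_000_000):,.0f}")      # £18,000,000
    print(f"£{max_fine(85_000_000_000):,.0f}")  # £8,500,000,000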

Homeland Security Group confirmed that whilst they do not at present have any plans to second any Home Office staff to Ofcom, they have been brokering introductory meetings between Ofcom and law enforcement partners (including those responsible for tackling online terrorist content) with a view to helping Ofcom develop required expertise.[28]

288. It is not exactly clear whether Ofcom has the capability and expertise to exercise this ambitious new remit—or indeed how long it will take Ofcom to develop the required skill base. Homeland Security Group acknowledged that Ofcom will need to build its capability:

If Ofcom were sitting here now, they would say they don't have that capacity today. I would agree with that, which is one of the reasons why this will be implemented over time rather than immediately.

I think, conceptually, they are the right people to do this. That seems to be a broad sense, both within UK government, but most importantly within the CSPs. They will need to grow capability and they will need to grow knowledge and the key thing that government is doing there is actually being clear within the legislation as to what their powers will be. So not today but, yes, by the time the [Online Harms] Bill is in.[29]

289. On 12 May 2021, DCMS and the Home Office published the draft Online Safety Bill. Taking this legislation forward has obviously been a complex and lengthy process, as confirmed by the Home Secretary:

It's taken ages, it's taken a long time to get to where we are today. That will indicate that there's been a lot of work, a lot of integrated work across DCMS, other government departments, shared equities, but also competing issues as well.[30]

Interim Code of Practice on Terrorist Content and Activity Online

290. Pending introduction of the Online Safety Bill, the Government also published (in parallel with its full response to the consultation on the Online Harms White Paper) an Interim Code of Practice, noting that:

these voluntary and non-binding interim codes will help companies begin to implement the necessary changes and bridge the gap until Ofcom issues its statutory codes of practice.[31]

291. The Interim Code of Practice on terrorism comprises five specific principles, which require companies to do the following:

  • Principle 1: Identify and prevent terrorist content and activity;
  • Principle 2: Minimise the potential for searches to return results linking to terrorist activity;
  • Principle 3: Facilitate and participate in industry collaboration to tackle terrorist use of the internet;
  • Principle 4: Implement effective user reporting, complaints and redress procedures; and
  • Principle 5: Support investigation and prosecution of individuals for terrorist offences.[32]

292. We asked if the voluntary Interim Code of Practice was being adhered to by the CSPs—Homeland Security Group confirmed that:

By and large we are seeing, yes, they [the CSPs] have all found it helpful because it helps to encapsulate what we want. There was a fair degree of engagement with the CSPs beforehand. So at this point . . . and it is quite early, as you will appreciate, but at this point I wouldn't be calling out any individual companies, the companies with which we cooperate. The point has been made earlier about some of the smaller companies that we just don't have a relationship with, but the big ones are playing quite nicely in this space.[33]

The Commission for Countering Extremism was less optimistic: "it is hard not to be sceptical about what a voluntary code of practice would achieve in the long or short term, or that it would make any substantial difference to the growing and frightening threat of hateful extremism online".[34]

S. It appears that there are inherent difficulties with the voluntary Code of Practice, and indeed across the Online Safety Bill more widely. Whilst the major communication service providers—who are already on board with the Government's drive to promote responsible behaviour—are adhering to the principles, it is the smaller organisations (many of which are particularly influential in the Extreme Right-Wing Terrorism space) that appear reluctant to step up. The emergence of many 'free speech' unmoderated platforms specifically aimed at the Extreme Right-Wing is also a problem. It will be essential for Ofcom to develop the expertise and technical know-how as a matter of urgency if it is to be able properly to enforce mandatory Codes of Practice across the industry.


  1. 'UK Far Right extremism: hate spreads from the fringe', Financial Times, 8 May 2019.
  2. Oral evidence - CTP, 29 April 2021.
  3. 'MI5 Chief asks tech firms for "exceptional access" to encrypted messages', The Guardian, 25 February 2020.
  4. Report on the Intelligence Relating to the Murder of Fusilier Lee Rigby, HC 795, 14 November 2014.
  5. Government Response to ISC report on Intelligence on the Murder of Fusilier Lee Rigby, Cm 9012, February 2015.
  6. The 2017 Attacks: What needs to change?, HC 1694, 22 November 2018.
  7. Oral evidence - MI5, 8 March 2015.
  8. Oral evidence - CTP, 8 March 2018.
  9. Oral evidence - CTP, 28 April 2021.
  10. The membership of GIFCT has since expanded to include: Mailchimp, Discord, Instagram, WhatsApp, Pinterest, Amazon, Dropbox, Mega and LinkedIn.
  11. The other governments represented on the GIFCT are: Canada, France, Ghana, Japan, New Zealand and the United States. The European Union is also represented (Directorate-General for Migration and Home Affairs). The United Nations Security Council Counter-Terrorism Executive Directorate has observer status. See www.gifct.org
  12. Oral evidence - Home Office, 28 April 2021.
  13. Oral evidence - Home Office, 29 April 2021.
  14. Oral evidence - Home Office, 28 April 2021.
  15. Written evidence - Home Office, 30 September 2020.
  16. Written evidence - Home Office, 30 September 2020.
  17. Written evidence - Home Office, 30 September 2020.
  18. Oral evidence - CTP, 29 April 2021.
  19. Written evidence - CTP, 31 January 2020.
  20. Evidence to the Home Affairs Select Committee - CTP, 23 September 2020.
  21. Oral evidence - Nick Lowles, Hope Not Hate, 16 December 2020.
  22. Counter-Terrorism Strategy (CONTEST) - June 2018.
  23. CTP, 'Together, we're tackling online terrorism', 19 December 2018, counterterrorism.police.uk/together-were-tackling-online-terrorism
  24. HMG, Online Harms White Paper, Initial Consultation Response, 12 February 2020, www.gov.uk/government/consultations/online-harms-white-paper-initial-consultation-response
  25. HMG, Online Harms White Paper, Initial Consultation Response, 12 February 2020, www.gov.uk/government/consultations/online-harms-white-paper-initial-consultation-response
  26. Homeland Security Group Quarterly Report, 1 April 2020-30 June 2020.
  27. DCMS and the Home Office, Online Harms White Paper: Full Government Response to the Consultation, 15 December 2020.
  28. Written evidence - Home Office, 8 June 2021.
  29. Oral evidence - Home Office, 29 April 2021.
  30. Oral evidence - Home Secretary, 20 May 2021.
  31. DCMS and the Home Office, Online Harms White Paper: Full Government Response to the Consultation, 15 December 2020.
  32. DCMS and the Home Office, Interim Code of Practice on Terrorist Content and Activity Online, 15 December 2020.
  33. Oral evidence - Home Office, 29 April 2021.
  34. Written evidence - Commission for Countering Extremism, 17 December 2020.