Integrating AI: EU counterterrorism challenges and opportunities

The growing use of Large Language Models (LLMs) to gather information for explosives-based attacks, the propagation of AI-generated news bulletins by an Islamic State-aligned media outlet, and the creation of bespoke chatbots designed to disseminate Holocaust denialism have raised alarm over the disruptive potential of generative AI in the hands of terrorists and other violent non-state actors. Generative AI can facilitate the optimisation of terrorist recruitment, operational planning, and propaganda dissemination, offering automated content generation, rapid and culturally nuanced translations, and even access to information on acquiring chemical precursors or 3D-printing firearms. Yet its actual disruptive effect remains contested: to date, generative AI has not demonstrably augmented the lethality or appeal of terrorist entities. Other AI-driven applications, however, particularly autonomous and semi-autonomous weaponry and even autonomous vehicles, could be highly disruptive in terrorist hands, conferring significant operational advantages such as enhanced command-and-control capabilities and greater lethality in the execution of attacks.