Counterproliferation in the age of AI. In dialogue with David Heslop (UNSW Sydney)

(Marco Emanuele)

The Global Eye and The Science of Where Magazine in dialogue with David Heslop. He is an Associate Professor at the School of Population Health at UNSW Sydney. He retains military responsibilities as Senior Medical Adviser for CBRNE to the Australian Army and to Australian Defence Force (ADF) leadership. He is a clinically active, vocationally registered General Practitioner and Occupational and Environmental Medicine Physician. During a military career of over 15 years he has deployed into a variety of complex and austere combat environments, and has advanced international training in Chemical, Biological, Radiological, Nuclear and Explosive (CBRNE) Medicine. He has experience in planning for and managing major disasters and mass and multiple casualty situations. He is regularly consulted on, and participates in, the development and review of national and international clinical and operational general military and CBRNE policy and doctrine. His research interests lie in health and medical systems innovation, using computational modelling and simulation to address otherwise intractable problems.

Why, and how, could the evolution of artificial intelligence become a risk in the construction of chemical or biological weapons?

This is, concerningly, already occurring: AI is right now a risk for the proliferation and use of CBRNE in many contexts. Recently, researchers used a form of generative artificial intelligence normally applied to discovering new therapies and medications to propose the structures of thousands of toxic chemicals similar to the potent nerve agent VX, some with likely even greater toxicities. These tools are already widely available; the only remaining step is for individuals to “repurpose” them for negative or destructive acts (see: Urbina, F., Lentzos, F., Invernizzi, C. et al., “Dual use of artificial-intelligence-powered drug discovery”, Nature Machine Intelligence 4, 189–191, 2022).

A recent report by the RAND Corporation highlighted the risk that Large Language Models (LLMs) similar to ChatGPT could assist in the planning and execution of biological attacks. Such tools can lower the barriers for individuals to understand which factors matter when selecting a candidate pathogen, how to weaponise a pathogen, and how to develop delivery and dissemination techniques. More importantly, LLMs can provide “logic checking” and “fact checking” capabilities, advising on which pathways would be unfruitful and reducing development and implementation times (see: The Operational Risks of AI in Large-Scale Biological Attacks: A Red-Team Approach, RAND Corporation).

You write that a ‘new era of counter-proliferation in the age of AI’ must be initiated. What does that mean?

New generations of AI tools are almost certainly going to rapidly accelerate certain proliferation trends that were already under way. The dissemination of know-how and information relevant to weaponisation was an existing problem, driven by digitisation and linked to various emerging dual-use technologies. There has been a gradual shift towards transmittable data, rather than physical precursors, ingredients or know-how, being the vector of CBRNE threat. AI tools in their current form provide malicious actors with substantial additional capabilities to sidestep data and informational barriers. Some AI tools may also raise the barriers and difficulty for the work of intelligence, verification and audit, and policing services. The risks that AI poses to traditional counter-proliferation and intelligence activities, and to their enabling services, must be urgently addressed. Equally, how AI-linked CBRNE proliferation can be countered is an important question, and paradoxically it may only be answerable through the judicious use of various AI tools. In other words, we may be required to fight fire with fire.

Global risks are changing ever faster and more radically. Do you see pandemics, due to many factors including the unregulated evolution of AI, representing a new form of war in the future?

I would have to agree, but only under certain conditions. AI and other emerging technologies, coupled with the deep integration of the population into the cyber-world across most parts of our lives, have opened the door to influence operations on an industrial scale. It is not difficult to conceive of a world where human behaviour, and thus society, is manipulated at scale for deliberate purposes using the tools of AI together with digital connectivity and online interaction. My view is that the population will now have great difficulty differentiating fact from fiction and judging what is trustworthy, opening the door to easy manipulation by certain actors. The negative effects of pandemics (self-isolation, absenteeism, economic impacts, disruption, mental health impacts) may be achievable simply by creating false narratives around feared pathogens, rather than by the actual release of a pathogen. Actual events of significance, such as minor outbreaks, may also be embellished or minimised through AI-driven information operations to devastating effect, accelerating the spread of disease and undermining public health efforts. So do pandemics represent a new form of war? Yes, and in many new ways, including the manipulation of population behaviours even in the absence of real-world threats.

(reproduction authorised with citation of the sources: The Global Eye and The Science of Where Magazine)
