AI technology is advancing, and so is its malicious use. The opinion of Sarah Kreps and Richard Li (Cornell Tech Policy Lab) (by Marco Emanuele, The Global Eye’s In-Security)

The Global Eye’s In-Security and The Science of Where Magazine meet Sarah Kreps, the John L. Wetherill Professor and Director of the Tech Policy Lab at Cornell University, and Richard Li, a student at Cornell University and a research assistant in the Cornell Tech Policy Lab.

Richard Li

With reference to your reflection published by Brookings, Cascading chaos: Nonstate actors and AI on the battlefield, can you explain to our readers why malevolent non-state actors are able to enter the artificial intelligence market with such ease?

Malevolent non-state actors use artificial intelligence for nefarious purposes because they can. Artificial intelligence is far easier to acquire, develop, and deploy than conventional weapons, let alone nuclear ones. It is accessible and open-access, and it can enhance whatever other capabilities these actors have at their disposal, whether a drone made autonomous or misinformation scaled up and micro-targeted to maximize its psychological impact among certain demographics. Further, the lack of regulation and oversight of artificial intelligence allows non-state actors to skirt scrutiny.

This passage of your reflection is fundamental: “Although the potential applications are extensive, the three areas with the most potential for AI to democratize harm are drones, cyberspace, and mis- and disinformation. AI is not creating but amplifying and accelerating the threat in all of these spaces”. Can you elaborate for our readers?

Non-state actors around the globe are beginning to leverage the recent proliferation of artificial intelligence to level the playing field. The United States has used drones for counterterrorism, but now groups like the Islamic State use drones to drop explosives, and drug cartels use them to transport drugs covertly. Groups like DarkSide have launched cyberattacks that strain U.S. oil supplies, and Indian authorities have cited militant groups’ use of fake videos and photos to provoke violence as justification for restricting internet services. AI acts as an accelerant and amplifier that can make all of these acts more efficient and more lethal.

The issue we are discussing has obvious repercussions for security understood in a complex sense. How much do you think national security is at risk?

The current harms caused by non-state actors using AI will only be magnified as the technology continues to advance. For example, advances in machine learning will make future drone swarms more effective at overwhelming defense systems by autonomously identifying and targeting specific individuals or ethnic groups. Likewise, large language models built on machine learning can generate misinformation at a much larger scale, eroding public trust.

Finally, a question that seems decisive. Liberal democracies are particularly targeted in the three areas you describe. To limit the phenomenon as much as possible, how much can individual countries achieve through national choices, and what interventions would be necessary at the global level? In addition, does the risk come exclusively from non-democratic regimes or, instead, can it also arise from the degenerative crisis of democracies themselves?

One way for individual countries to make an impact is for their individual approaches to converge into a recognizable international norm. In the context of AI-enabled technologies, liberal democracies can continue to use human-in-the-loop approaches even if non-democracies or non-state actors turn to fully autonomous systems. As these liberal democracies act in concert with each other, they can help generalize a norm, which is not to say that malicious actors will adhere to it, but at the least they will face more international countervailing pressure to do so. It is also the case that an especially well-resourced country like the US can invest more in STEM, helping it, and in turn its allies, maintain a technological edge.

In cooperation with The Science of Where Magazine

Marco Emanuele
Marco Emanuele is passionate about complexity culture, technology culture and international relations. He delves into the thought of Hannah Arendt, Edgar Morin, Raimon Panikkar. He has taught Evolution of Democracy and Totalitarianisms. Marco is editor of The Global Eye and writes for The Science of Where Magazine.
