Offensive AI: Surfacing Truth in the Age of Digital Fakes (Wired)

The attack begins, innocently enough, with a single email.

An AI-powered spear-phishing email, to be precise. Unlike typical phishing campaigns, which use a scattergun approach to target victims, spear-phishing attacks are crafted with a specific audience in mind. In this case, the target is a senior manager at one of the largest banks on Wall Street.

Often built on reconnaissance gathered from social media, spear-phishing is labor-intensive and costly for cybercriminals, up to 20 times more so than conventional phishing. The upside for the attacker? Tailoring an attack to a specific victim produces, on average, 40 times the click-through rate of its boilerplate counterpart.
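As a back-of-the-envelope illustration of why that trade-off favors attackers, consider the quick calculation below. The absolute costs and click rates are hypothetical placeholders; only the 20x and 40x ratios come from the figures above.

```python
# Back-of-the-envelope economics of spear-phishing vs. bulk phishing.
# All absolute numbers are illustrative assumptions, not data from the article.

bulk_cost_per_email = 0.01        # hypothetical cost to send one bulk phish
bulk_click_rate = 0.005           # hypothetical bulk click-through rate

spear_cost_per_email = bulk_cost_per_email * 20   # "up to 20 times" costlier
spear_click_rate = bulk_click_rate * 40           # "40-fold" click-through

# Cost per successful click: lower is better for the attacker.
bulk_cost_per_click = bulk_cost_per_email / bulk_click_rate
spear_cost_per_click = spear_cost_per_email / spear_click_rate

print(f"bulk:  ${bulk_cost_per_click:.2f} per click")   # $2.00
print(f"spear: ${spear_cost_per_click:.2f} per click")  # $1.00
# Under these assumptions, spear-phishing already costs half as much per
# successful click. Automating the labor-intensive step only widens the gap.
```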

Yet this particular email was authored not by a human attacker but by a malicious AI toolkit. To impersonate the company’s CMO, the AI analyzed her social media feeds and her email correspondence, both professional and social, building a remarkably precise model of her everyday communication. The resulting message replicates the CMO’s language almost perfectly.

The attack is hypothetical, but far from impossible. The incidence of AI-powered spear-phishing attacks is likely to grow precisely because they are so cost-effective for attackers. Not only is such an email as deceptive as one written by an expert criminal, it can be produced eight times faster. AI can complete reconnaissance, craft an email, and hit send all while the human attackers behind it are out at lunch. As AI grows more sophisticated at modeling human behavior, we need to rethink our approach to cybersecurity to better defend against these intelligent attackers.

A Disguise Like No Other

Today, one of the best defenses against any kind of phishing attack is the employee. In a recent study, 78% of organizations reported that security awareness training reduced their susceptibility to phishing attempts. But these methods will fall short as AI-powered attacks grow.

In the case of the hypothetical attack, the target is likely no stranger to security risks. Working in financial services, he’s grown wary of out-of-the-blue requests to log into accounts or to accept new service terms. But with no reason to think twice about a routine email from his CMO, he downloads the email’s attachment, infecting his computer with AI malware.

What’s more, the AI-powered malware can evade even the bank’s most advanced cyber defenses. While those tools are programmed to catch every threat known to the security industry, this attack is novel. The malware penetrates the network and blends in with normal activity, remaining elusive by continually changing its file name and other identifiable features. All the while, it quietly scans the network for weaknesses.
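To see why that kind of shape-shifting defeats signature-based scanning, consider the toy sketch below: a hash-based “signature database” catches a known sample, but a single changed byte yields an entirely new fingerprint. This is a deliberately simplified illustration, not a description of how production antivirus works in full.

```python
import hashlib

# Toy illustration of why hash-based signatures fail against polymorphic
# code: any byte-level change produces a new fingerprint.

KNOWN_BAD_SIGNATURES = {
    # Hypothetical signature-database entry for one known sample.
    hashlib.sha256(b"malicious payload v1").hexdigest(),
}

def signature_scan(file_bytes: bytes) -> bool:
    """Return True if the file matches a known-bad hash."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_SIGNATURES

original = b"malicious payload v1"
mutated  = b"malicious payload v1 "   # one appended byte, same behavior

print(signature_scan(original))  # True: the known sample is caught
print(signature_scan(mutated))   # False: a trivial mutation evades the scan
```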

AI intruders like these are poised to threaten our ever-expanding digital world. More of our assets, personal information, and trade secrets enter cyberspace every day, thanks to internet-of-things devices, the digitization of records, and more. But securing this information is difficult when the attacks are unfamiliar and dynamic. In other words, how do you find a needle in a haystack when, to a standard cybersecurity system, the needle looks just like another piece of hay?
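One answer is to model the haystack instead of the needle: learn what normal activity looks like and flag deviations from it. The sketch below uses scikit-learn’s IsolationForest on synthetic connection features; both the features and the library choice are illustrative assumptions, not a description of any particular vendor’s product.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative only: learn "normal" network activity and flag outliers,
# rather than matching known-bad signatures. All data here is synthetic.
rng = np.random.default_rng(0)

# Hypothetical features per connection: [kilobytes sent, duration in seconds]
normal_traffic = rng.normal(loc=[50, 10], scale=[10, 3], size=(1000, 2))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# A slow, low-volume scan no signature engine has seen before:
suspicious = np.array([[2.0, 300.0]])   # 2 KB trickled out over 5 minutes
print(detector.predict(suspicious))     # [-1] means "anomalous"
```

The design point is that nothing in this detector encodes what an attack looks like; it only encodes what this network usually looks like, so even a never-before-seen threat can stand out.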

Dubious by Default

Outside of malicious attacks, AI is already a major agent of deception in cyberspace.

Generative adversarial networks (GANs), a form of adversarial AI, are the technology behind “deepfakes”: images or videos that appear highly authentic but are in fact created by artificial neural networks. Deepfakes are proliferating across the internet, taking the form of high-definition pictures of people who have never existed and videos that can make just about anyone appear to say just about anything. Even the world’s most prolific deepfake artist is raising the alarm about his own creations, warning that he himself can no longer tell the difference.
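For readers curious how the “adversarial” part works, here is a minimal GAN sketch in PyTorch: a generator learns to mimic a simple one-dimensional “real” distribution while a discriminator learns to tell real from fake. Production deepfake models are vastly larger, but the adversarial training loop follows the same outline.

```python
import torch
import torch.nn as nn

# Minimal GAN: two tiny networks trained against each other.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # noise -> sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # sample -> logit

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data drawn from N(3, 0.5)
    fake = G(torch.randn(64, 8))            # generator's current forgeries

    # Discriminator step: push real toward label 1, fake toward label 0.
    d_loss = (loss_fn(D(real), torch.ones(64, 1))
              + loss_fn(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: fool the discriminator into labeling fakes as real.
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Mean of generated samples should drift toward 3.0, the "real" mean.
print(G(torch.randn(1000, 8)).mean().item())
```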

While GANs are predominantly used to create deepfakes for entertainment, the same or very similar technologies are already being put to malicious ends, particularly in cyberattacks. Just as AI produces deepfake pictures that are indistinguishable from real ones, AI applied to spear-phishing campaigns could produce emails that look and sound exactly like a genuine message from a credible source. The result is an almost foolproof method for tricking unsuspecting victims. In late 2019, for example, an employee at an energy company was tricked by voice-mimicking software into wiring over $240,000 to a secret online account, thinking the request came from their boss on the phone.

As we enter this new technological age, the only technology able to counter malicious AI is AI itself.

The Machine Fights Back

From smart cities to large global banks, thousands of organizations have already turned to artificial intelligence for an answer, deploying cyber AI produced by Darktrace to protect them from a future of confused identities and declining trust in the veracity of information. Cyber AI learns an organization’s diverse and complex systems so thoroughly that a malicious AI attack is automatically outcompeted on the defender’s home turf.
