(Tom Uren – Lawfare)
A range of reports show that artificial intelligence (AI) adoption is, unsurprisingly, making threat actors’ standard workflows quicker and easier. The good news for defenders is they don’t need to adopt entirely new approaches to counter these attacks. They do need to double down on basic security protections, with a focus on phishing-resistant multifactor authentication (MFA) and combating compromised credentials.
Recent reports published by CrowdStrike, ReliaQuest, IBM X-Force, Sophos, and OpenAI highlight AI’s impact on the threat landscape.
There is more than a bit of “ZOMG AI” in many of these reports, but we are here to cut through the hyperbole.
First, the bad news. AI is making phishing easier and more effective, and once threat actors break into networks, they are able to move rapidly.
AI makes it simple for malicious actors to produce articulate messages in multiple languages. These days, the traditional there’s-a-typo-in-the-email-don’t-click-on-it phishing training exercises won’t slow down the Nigerian prince. His emails will be perfect from the get-go.
Phishing messages are also becoming far more personal. This week, a Trend Micro blog post detailed the company's development of a tool that automatically converts information scraped from LinkedIn into tailored spear phishing messages. We haven't found evidence this technique is being used in the wild, but it took Trend just a day to set up a prototype. It's only a matter of time.
Conducting open-source research to inform targeted phishing is not new, but it has previously been a niche reserved for capable actors willing to put in a lot of work for the chance of a big payoff. Think high-value business email compromise (BEC) or cyber espionage actors. Automating the process will dramatically lower the barrier to entry. Competent, targeted phishing will become common, even for “low-value” targets.
OpenAI’s report indicates that scammers are already zeroing in on more niche target demographics. One romance scam targeted wealthy Indonesian men. Another targeted American men in their 40s who worked in the medical field and liked to talk about golf online. The scammers used generative AI to produce supporting material such as images and websites, and used LLMs both to translate their messages and to lend an authentic tone to their false personas. The scammers supplied messages to ChatGPT, likely following manipulation playbooks developed empirically over time.
But AI’s not just for phishing.
Once inside a victim network, threat actors are now able to move laterally much faster than before. CrowdStrike reported that the average “breakout time,” the interval between intruders gaining initial access and moving elsewhere in the victim’s network, is now just half an hour. This is down from 48 minutes last year and almost 100 minutes in 2021. The long-term downward trend is not solely due to the rise of AI, but it does reflect increasing automation enabled by AI assistants.
Finally, the speed of data exfiltration has also increased dramatically. CrowdStrike credited the current record to a group it calls Chatty Spider, which targets law firms. The group attempted to exfiltrate data to Google Drive just four minutes after gaining illegitimate access to a workstation, and its entire intrusions often lasted less than an hour. Similarly, ReliaQuest said the fastest data theft it saw began just six minutes after initial access. In 2024, that record was four hours.
There is some good news here. Threat actors aren’t doing anything novel. They are using AI to carry out the same techniques they always have. But they’re doing it much faster, which means they can squeeze a lot more badness into their usual 9-to-5.
This also means that, from a defender’s perspective, AI-related threats don’t require any sort of magic bullet. Defenders need to lock down basic hygiene and do more of the same, but more quickly. And phishing-resistant MFA has got to be part of the answer here.
It should be easy, really! Or at least easier than rolling incident response afterward.
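For readers wondering what “phishing resistant” actually means in practice: FIDO2/WebAuthn-style MFA binds each credential to the website’s origin, so an assertion captured on a lookalike domain is useless at the real site. The Python sketch below is a minimal toy model of that origin binding, not real WebAuthn; the class names are illustrative, and HMAC stands in for the public-key signatures an actual security key would use.

```python
# Toy model of WebAuthn-style origin binding. Illustrative only:
# real authenticators use per-credential public-key signatures,
# not HMAC, and the browser reports the origin on the user's behalf.
import hashlib
import hmac
import os


class Authenticator:
    """Simulates a security key that scopes credentials per origin."""

    def __init__(self):
        self._seed = os.urandom(32)

    def key_for(self, origin: str) -> bytes:
        # Derive a distinct key per origin, mimicking how a real
        # authenticator ties each credential to one relying party.
        return hmac.new(self._seed, origin.encode(), hashlib.sha256).digest()

    def sign(self, origin: str, challenge: bytes) -> bytes:
        # The "assertion" covers the origin the browser reports, so a
        # lookalike domain yields a different key and a bad signature.
        return hmac.new(self.key_for(origin), challenge, hashlib.sha256).digest()


class RelyingParty:
    """Simulates the legitimate site verifying a login assertion."""

    def __init__(self, origin: str, authenticator: Authenticator):
        self.origin = origin
        # At registration time, the credential is bound to THIS origin.
        self._credential = authenticator.key_for(origin)

    def verify(self, challenge: bytes, signature: bytes) -> bool:
        expected = hmac.new(self._credential, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, signature)


key = Authenticator()
bank = RelyingParty("https://bank.example", key)
challenge = os.urandom(16)

# Legitimate login: the browser reports the real origin. Verifies.
assert bank.verify(challenge, key.sign("https://bank.example", challenge))

# Phishing: the victim's browser reports the lookalike origin, so the
# assertion can never verify at the real site. There is no code or
# password for the attacker to capture and relay.
assert not bank.verify(challenge, key.sign("https://bank-example.evil", challenge))
print("Origin binding holds: phished assertion rejected.")
```

In the real protocol it is the browser, not the user, that reports the origin, which is exactly why no amount of AI-polished persuasion gives a phishing page anything to steal and relay.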