We are not ready for this.
Email users have been warned for some time that AI attacks and hacks will ramp up this year, becoming ever harder to detect. And while this will include frightening levels of deepfake sophistication, it will also enable more attackers to conduct more attacks, with AI operating largely independently to carry them out. That has always been the nightmare scenario, and it is suddenly coming true, putting millions of you at risk.
We know this, but seeing is believing. A new video and blog from Symantec have just shown how a new AI agent or operator can be deployed to conduct a phishing attack. “Agents have more functionality and can actually perform tasks such as interacting with web pages. While an agent’s legitimate use case may be the automation of routine tasks, attackers could potentially leverage them to create infrastructure and mount attacks.”
The security team has warned of this before, that “while existing Large Language Model (LLM) AIs are already being put to use by attackers, they are largely passive and could only assist in performing tasks such as creating phishing materials or even writing code. At the time, we predicted that agents would eventually be added to LLM AIs and that they would become more powerful as a result, increasing the potential risk.”
Now there’s a proof of concept. It’s rudimentary but will quickly become more advanced. The sight of an AI agent scouring the internet and LinkedIn to find a target’s email address, then searching websites for advice on crafting malicious scripts, before writing its own lure, should put fear into all of us. There’s no limit to how far this will go.
Even the inbuilt security is ludicrously lightweight. “Our first attempt failed quickly as Operator told us that it was unable to proceed as it involves sending unsolicited emails and potentially sensitive information. This could violate privacy and security policies. However, tweaking the prompt to state that the target had authorized us to send emails bypassed this restriction, and Operator began performing the assigned tasks.”
The agent used was from OpenAI, but this will be a level playing field, and it’s the nature of the capability that matters, not the identity of the AI developer. Perhaps the most notable aspect of this attack is that when Operator failed to find the target’s email address online, it successfully deduced what the address was likely to be from other addresses it could find within the same organization.
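Symantec does not publish the agent’s internal logic, but the deduction step it describes amounts to simple pattern inference over addresses already found at the same domain. A minimal sketch of that idea follows — every name, the domain, and both helper functions are invented for illustration, not taken from the report:

```python
from collections import Counter

def infer_pattern(known):
    """Return the most common local-part pattern among (name, email) pairs."""
    patterns = Counter()
    for (first, last), email in known:
        local = email.split("@")[0].lower()
        f, l = first.lower(), last.lower()
        if local == f"{f}.{l}":
            patterns["first.last"] += 1   # e.g. jane.doe@...
        elif local == f"{f[0]}{l}":
            patterns["flast"] += 1        # e.g. jdoe@...
        elif local == f"{f}{l[0]}":
            patterns["firstl"] += 1       # e.g. janed@...
    return patterns.most_common(1)[0][0] if patterns else None

def guess_email(first, last, domain, pattern):
    """Apply an inferred naming convention to a new name."""
    f, l = first.lower(), last.lower()
    local = {"first.last": f"{f}.{l}",
             "flast": f"{f[0]}{l}",
             "firstl": f"{f}{l[0]}"}[pattern]
    return f"{local}@{domain}"

# Addresses the agent might have scraped for the same organization:
known = [(("Alice", "Nguyen"), "alice.nguyen@acme.example"),
         (("Bob", "Ortiz"), "bob.ortiz@acme.example")]

pattern = infer_pattern(known)
print(guess_email("Carol", "Smith", "acme.example", pattern))
# Prints: carol.smith@acme.example
```

The point of the sketch is how little is needed: a handful of scraped addresses is enough to recover an organization’s naming convention, which is exactly why failing to find a target’s address online is no longer a dead end for an attacker.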
“Agents such as Operator demonstrate both the potential of AI and some of the possible risks,” Symantec warns. “The technology is still in its infancy, and the malicious tasks it can perform are still relatively straightforward compared to what may be done by a skilled attacker. However, the pace of advancements in this field means it may not be long before agents become a lot more powerful. It is easy to imagine a scenario where an attacker could simply instruct one to ‘breach Acme Corp’ and the agent will determine the optimal steps before carrying them out.”
And that really is the nightmare scenario.
This week we have also seen a report into “Microsoft Copilot Spoofing” as a new “phishing vector,” with users not yet trained on how to detect these new attacks. That’s one of the reasons AI-fueled attacks are much more likely to hit their targets. You can expect to see a steady stream of reports as this new threat landscape shapes up.
One thing is already clear though — we are not yet ready for this.