How AI agents help hackers steal your confidential data – and what to do about it

Like many people, cybercriminals are using artificial intelligence to work faster, smarter, and more efficiently. Between automated bots, account takeovers, and social engineering, a savvy scammer knows how to give their usual tactics an AI spin. A new report from Gartner shows how this is playing out now, and how it may get worse in the near future.

Account takeovers have become a persistent attack vector for one major reason: weak authentication, said Gartner VP Analyst Jeremy D’Hoinne. Attackers can get hold of account passwords through a variety of means, including data breaches and social engineering.

Once a password is compromised, AI steps in. Cybercriminals use automated bots to generate login attempts across a range of services, a technique known as credential stuffing. The goal is to find out whether the same credentials have been reused on other platforms, ideally ones that will prove lucrative.

Find the right type of site, and the criminal can gather everything needed for a full account takeover. If the hacker doesn’t want to carry out the attack themselves, they can always sell the stolen credentials on the dark web, where willing buyers will snap them up.

“Account takeover (ATO) remains a persistent attack vector because weak authentication credentials, such as passwords, are gathered by a variety of means including data breaches, phishing, social engineering, and malware,” D’Hoinne said in the report. “Attackers then leverage bots to automate a barrage of login attempts across a variety of services in the hope that the credentials have been reused on multiple platforms.”
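On the defense side, this pattern of rapid, scripted login failures is exactly what credential-stuffing monitoring looks for. The sketch below is a minimal illustration, not a production control: it applies a simple velocity check that flags a source IP once its failed logins exceed an assumed threshold inside an assumed time window.

```python
import time
from collections import defaultdict, deque

# Illustrative credential-stuffing signal: too many failed logins from one
# source in a short window. Both constants are arbitrary example values.
WINDOW_SECONDS = 60
MAX_FAILURES = 10

failures = defaultdict(deque)  # source IP -> timestamps of recent failures

def record_failed_login(source_ip: str, now: float | None = None) -> bool:
    """Record one failed login; return True if the source looks automated."""
    now = time.time() if now is None else now
    window = failures[source_ip]
    window.append(now)
    # Discard failures that have aged out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > MAX_FAILURES

# A bot hammering one login endpoint trips the check within seconds.
for attempt in range(12):
    flagged = record_failed_login("203.0.113.7", now=1000.0 + attempt)
print("flagged:", flagged)  # True: 12 failures in 11 seconds
```

Real bot operators rotate IP addresses to evade exactly this kind of check, which is why production defenses also correlate device fingerprints, credential patterns, and behavioral signals.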

With AI now in their arsenal, attackers can more easily automate the steps required for an account takeover. As this trend grows, Gartner predicts that the time needed to take over an account will drop by 50% within the next two years.

Beyond assisting with account takeovers, AI can help cybercriminals run deepfake campaigns. Attackers are already combining social engineering tactics with deepfake audio and video: by calling an unsuspecting employee and spoofing the voice of a trusted contact or executive, the scammer hopes to trick the target into transferring money or divulging confidential information.

Only a few high-profile cases have been reported so far, but they have caused significant financial losses for the victimized companies. Detecting a deepfake voice remains a challenge, especially in live person-to-person voice and video calls. Given this growing trend, Gartner expects that by 2028, 40% of social engineering attacks will target executives as well as the broader workforce.

“Organizations will have to stay abreast of the market and adapt procedures and workflows in an attempt to better resist attacks leveraging counterfeit reality techniques,” said Manuel Acosta, senior director analyst at Gartner. “Educating employees about the evolving threat landscape by using training specific to social engineering with deepfakes is a key step.”

Thwarting AI-powered attacks

How can individuals and organizations thwart these types of AI-powered attacks?

“To combat emerging challenges from AI-driven attacks, organizations must leverage AI-powered tools that can provide granular real-time environment visibility and alerting to augment security teams,” said Nicole Carignan, senior VP for security & AI strategy at security provider Darktrace. 

“Where appropriate, organizations should get ahead of new threats by integrating machine-driven response, either in autonomous or human-in-the-loop modes, to accelerate security team response,” Carignan explained. “Through this approach, the adoption of AI technologies — such as solutions with anomaly-based detection capabilities that can detect and respond to never-before-seen threats — can be instrumental in keeping organizations secure.”
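Anomaly-based detection of the kind Carignan describes generally means baselining normal activity and flagging deviations from it. As a rough, generic sketch (not any vendor's implementation), the example below fits scikit-learn's IsolationForest to made-up per-login features; the features and contamination rate are assumptions for illustration only.

```python
# Generic anomaly-detection sketch on login telemetry. This is not any
# vendor's implementation; features and values are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

# Assumed features per login event: [hour_of_day, failed_attempts, new_device]
baseline = np.array([
    [9, 0, 0], [10, 1, 0], [14, 0, 0], [11, 0, 0], [16, 1, 0],
    [9, 0, 0], [13, 0, 0], [10, 0, 0], [15, 1, 0], [12, 0, 0],
])

model = IsolationForest(contamination=0.1, random_state=0).fit(baseline)

# A 3 a.m. login after eight failures from a new device stands out.
print(model.predict(np.array([[3, 8, 1]])))  # [-1] marks an anomaly
```

The appeal of this approach is that it can flag never-before-seen behavior without needing a signature for the specific attack, which is what makes it relevant against novel AI-driven techniques.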

Other tools that can help protect against account compromise include multi-factor authentication and biometric verification, such as facial or fingerprint scans.

“Cybercriminals are not only relying on stolen credentials, but also on social manipulation, to breach identity protections,” said James Scobey, chief information security officer at Keeper Security. “Deepfakes are a particular concern in this area, as AI models make these attack methods faster, cheaper, and more convincing. As attackers become more sophisticated, the need for stronger, more dynamic identity verification methods – such as multi-factor authentication (MFA) and biometrics – will be vital to defend against these progressively nuanced threats. MFA is essential for preventing account takeovers.”
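As a concrete, simplified example of what a second factor looks like in code, the snippet below verifies a time-based one-time password (TOTP) using the open-source pyotp library. The secret shown is generated on the spot for the demo; a real deployment provisions a per-user secret at enrollment and stores it server-side.

```python
# Minimal TOTP (time-based one-time password) sketch using pyotp
# (pip install pyotp). The secret is a throwaway demo value; real systems
# provision and protect per-user secrets at enrollment.
import pyotp

secret = pyotp.random_base32()   # created once when the user enrolls
totp = pyotp.TOTP(secret)

# The user's authenticator app derives the same six-digit code from
# the shared secret and the current time.
code_from_user = totp.now()      # stand-in for what the user types in

# Server-side check; valid_window=1 tolerates one 30-second step of drift.
if totp.verify(code_from_user, valid_window=1):
    print("second factor accepted")
else:
    print("second factor rejected")
```

TOTP blocks straightforward credential stuffing because a stolen password alone no longer completes the login, though phishing kits that relay codes in real time mean it is a strong layer rather than a silver bullet.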

In its report, Gartner also offered a few tips for dealing with social engineering and deepfake attacks.

  • Educate employees. Provide employees with training related to social engineering and deepfakes. But don’t rely solely or even primarily on their ability to detect them.
  • Set up other verification measures for potentially risky interactions. For example, any attempts to request confidential information in a phone call to an employee should be verified on another platform.
  • Use a call-back policy. Establish a phone number that an employee can call to confirm a sensitive or confidential request.
  • Go further than a call-back policy. Eliminate the risk that any single phone call or request could trigger a damaging action. For example, if a caller claiming to be the CFO asks for a large sum of money to be moved, make sure that transfer can’t happen without sign-off from the real CFO or another high-level executive (see the sketch after this list).
  • Stay abreast of real-time deepfake detection. Stay informed about new products or tools that can detect deepfakes used in audio and video calls. Such technology is still emerging, so be sure to supplement it with other means of identifying the caller, such as a unique ID.
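
Taken together, the last three tips amount to out-of-band verification plus dual control: no single call, message, or person should be able to move money or data. A minimal sketch of that logic appears below; the request IDs, the $10,000 threshold, and both recorded checks are hypothetical, invented purely for illustration.

```python
# Hypothetical sketch of the call-back-plus-dual-control tips above.
# The threshold and both recorded checks are invented for illustration;
# none of this comes from the Gartner report.
APPROVAL_THRESHOLD = 10_000  # example value in dollars

# Simulated records of out-of-band checks a real workflow would gather.
callback_verified = {"req-42": True}   # confirmed via known-good phone number
second_approvals = {"req-42": False}   # independent executive sign-off

def can_execute_transfer(request_id: str, amount: float) -> bool:
    """Allow a transfer only when no single interaction authorized it."""
    if not callback_verified.get(request_id, False):
        return False  # request was never confirmed on a separate channel
    if amount >= APPROVAL_THRESHOLD:
        # Large transfers also require a second, independent approver.
        return second_approvals.get(request_id, False)
    return True

print(can_execute_transfer("req-42", 250_000))  # False: no second approval yet
```

The point of structuring the workflow this way is that a convincing deepfake call defeats only one check; an attacker would also have to compromise the call-back number and a second executive before any money moved.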
