January 2, 2026

GenAI-Powered Phishing and Vishing: What's Actually Changed

AI-generated phishing is real, but the threat is more nuanced than headlines suggest. Here's what's actually different, what's overhyped, and how to defend against the new landscape.

Every security conference now features at least one talk about how AI will supercharge phishing attacks. The narrative is compelling: AI can generate perfect emails, clone voices with seconds of audio, and create deepfake video calls. Traditional security awareness training is obsolete. We’re all doomed.

That narrative is partially true, which makes it dangerous. The real picture is more nuanced—AI has genuinely changed the phishing landscape, but not always in the ways the headlines suggest. Understanding what’s actually different helps you defend against real threats rather than science fiction scenarios.

What’s Genuinely Changed

Phishing Email Quality at Scale

Before generative AI, attackers faced a trade-off: high-quality, personalized phishing emails required human effort and didn’t scale. Mass phishing campaigns had grammatical errors, awkward phrasing, and generic templates—indicators that security-aware users learned to spot.

AI eliminates this trade-off.

An attacker can now generate thousands of well-written, contextually appropriate phishing emails without a native speaker on staff. The emails read naturally, without the typos or non-native phrasing that trained users have learned to rely on.

We’ve analyzed phishing campaigns from the past year where the email quality was indistinguishable from legitimate corporate communication. No spelling errors, appropriate tone, plausible context. The “check for bad grammar” advice is now actively misleading—it creates false confidence.

What this means defensively: Grammar and spelling checks are no longer reliable indicators. Focus user training on procedural defenses (verify through official channels, check URLs carefully) rather than stylistic red flags.

Personalization Without Research

Traditional spearphishing required reconnaissance. Attackers studied LinkedIn profiles, company websites, and social media to craft personalized pretexts. This research was time-consuming and limited the scale of targeted attacks.

LLMs can synthesize publicly available information and generate personalized content quickly. Feed the model a target’s LinkedIn profile and company context, and it produces a credible pretext in seconds.

We tested this ourselves: given basic information about a target (name, role, company, recent company news), an LLM generated convincing pretexts including internal meeting invitations, vendor communications, and HR-related requests. Each took under a minute to generate.

What this means defensively: Personalization alone doesn’t validate legitimacy. Emails that reference your role, your projects, or your company news might still be attacks. Verification procedures matter more than ever.

Voice Cloning for Vishing

This is the genuinely new threat that’s not overhyped.

Modern voice cloning requires as little as three seconds of sample audio to create a reasonable voice replica. Public figures, executives who speak at conferences or appear on podcasts, and anyone with video content online can have their voice cloned.

We’ve seen documented cases of voice-cloned vishing (voice phishing) attacks. The most widely reported, from 2019, involved a UK energy firm: fraudsters mimicked the voice of the chief executive of its German parent company and convinced the UK CEO to transfer €220,000 to a fraudulent supplier. The victim believed he was speaking to his actual boss.

Current cloning technology has limitations—real-time conversation is harder than pre-recorded messages, and background noise or emotional variation can reveal the fake. But the technology improves rapidly.

What this means defensively:

  • Establish out-of-band verification procedures for high-risk requests (wire transfers, credential resets, data disclosure), even when the request appears to come from leadership
  • Consider code words or callback procedures for critical authorizations
  • Train employees that voice familiarity is no longer sufficient verification

Video Deepfakes in Development

Real-time deepfake video that can pass muster in a live conversation is not yet common in attacks—but it’s coming.

We’ve seen proof-of-concept demonstrations where deepfake video calls were convincing enough to fool participants. The attack surface exists: video conferencing is ubiquitous, and visual confirmation of identity is often trusted.

For now, real-time deepfakes still have artifacts—slight delays, occasional glitches, lighting inconsistencies. An attentive observer can sometimes spot them. But “attentive observer” doesn’t describe most people in most meetings.

What this means defensively: Don’t rely solely on seeing someone on video as identity verification. For high-stakes conversations, out-of-band confirmation remains necessary.

What’s Overhyped

AI Doesn’t Change the Fundamental Attack

Phishing, at its core, is still about tricking humans into doing something—clicking a link, entering credentials, transferring money, disclosing information. AI improves the trick’s quality but doesn’t change its fundamental nature.

The same defenses that worked against traditional phishing still work against AI-enhanced phishing:

  • Verify requests through official channels before acting
  • Inspect URLs before clicking
  • Be suspicious of urgency and authority pressure
  • Use MFA everywhere (credential theft still requires bypassing MFA)
  • Implement email security controls (DMARC, DKIM, suspicious sender warnings)

AI makes each individual attack more convincing, but it doesn’t obsolete your existing security controls.

Perfect Grammar Isn’t New

Well-written phishing emails existed before LLMs. Sophisticated threat groups have always employed native speakers or used human translators for important campaigns. Business email compromise (BEC) attacks have been grammatically clean for years.

AI democratizes quality, making it available to less-resourced attackers. But the threat from well-crafted phishing isn’t new—it’s just more common.

If your security posture relied entirely on catching bad grammar, you were already vulnerable.

Not Every Attacker Uses AI

Criminal phishing operations are often surprisingly low-tech: kits sold on underground forums, templates copied from previous campaigns, cookie-cutter approaches that work well enough without AI enhancement.

We still see plenty of obvious phishing emails with broken English, ridiculous pretexts, and amateur execution. AI is a tool; not every attacker has adopted it.

This matters because threat diversity remains. Some attacks will be sophisticated AI-generated content; others will be the same garbage that’s been around for decades. Defenses need to catch both.

The “Zero Trust Your Eyes and Ears” Panic

Some commentary suggests that audio and video can no longer be trusted at all—every call might be a deepfake, every email might be AI-generated.

This is premature. The vast majority of communications are still legitimate. Treating every interaction as potentially fraudulent creates paralysis and destroys organizational trust.

The risk-based approach: enhance verification for high-risk actions (financial transactions, credential changes, sensitive disclosures), not for every email and phone call.

Practical Defense Adjustments

Update Security Awareness Training

Traditional phishing training focused on red flags: typos, generic greetings, suspicious sender addresses. These remain useful but are no longer sufficient.

Modern training should emphasize:

Procedural defenses over pattern recognition. “When you receive a request for sensitive action, verify through official channels” beats “look for spelling errors.”

Verification behaviors. How to check a URL before clicking. How to find the legitimate login page directly rather than following links. How to call back on a known number rather than one provided in a message.

Authority and urgency skepticism. AI enables better impersonation of executives and more convincing urgency. Train users to be suspicious of any request that pressures immediate action, regardless of who it appears to come from.

Reporting over self-resolution. The goal is reports, not individual correct decisions. A user who reports a suspicious email even when uncertain is more valuable than one who tries to figure it out alone.

Implement Technical Controls

User awareness helps, but technical controls catch what training misses.

Email authentication and filtering: DMARC, DKIM, and SPF remain essential. Advanced email security that analyzes content, context, and sender behavior adds further layers.
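
The authentication piece is easy to spot-check, since a domain's published DMARC and SPF policies are plain DNS TXT records. A minimal sketch, assuming the third-party dnspython package; the domain is a placeholder:

```python
# Minimal sketch: look up a domain's published DMARC and SPF policies.
# Assumes the third-party dnspython package (pip install dnspython).
import dns.resolver

def check_mail_auth(domain: str) -> dict:
    """Return the DMARC and SPF TXT records published for a domain, if any."""
    results = {"dmarc": None, "spf": None}
    queries = {"dmarc": (f"_dmarc.{domain}", "v=DMARC1"),
               "spf": (domain, "v=spf1")}
    for key, (name, prefix) in queries.items():
        try:
            for rr in dns.resolver.resolve(name, "TXT"):
                txt = b"".join(rr.strings).decode()
                if txt.startswith(prefix):
                    results[key] = txt
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            pass  # nothing published; a red flag for a domain sending you mail
    return results

print(check_mail_auth("example.com"))
```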

Link protection: URL rewriting and real-time link analysis catch malicious links even in well-crafted emails.
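
One heuristic behind link analysis is simple enough to demonstrate: flag anchors whose visible text claims one host while the underlying href points to another. A simplified sketch using only the standard library; real products add redirect-following, reputation lookups, and URL rewriting on top:

```python
# Minimal sketch: flag links whose visible text names one host while the
# actual href points somewhere else, a classic phishing pattern.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.suspicious = []  # (visible text, real href) pairs

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href:
            text = "".join(self._text).strip()
            # Only compare when the visible text itself looks like a URL.
            if "." in text and " " not in text:
                shown = urlparse(text if "://" in text else "//" + text).hostname
                real = urlparse(self._href).hostname
                if shown and real and shown != real:
                    self.suspicious.append((text, self._href))
            self._href = None

auditor = LinkAuditor()
auditor.feed('<a href="http://evil.example.net/login">https://bank.example.com</a>')
print(auditor.suspicious)
# [('https://bank.example.com', 'http://evil.example.net/login')]
```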

Anomaly detection: AI-enhanced attacks may still trigger behavioral anomalies—unusual sender patterns, first-time communication, requests that don’t match normal business processes.
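
A first-pass version of the first-time-communication signal can be sketched directly. The sender history below is a hypothetical placeholder; a production system would derive it from mail logs:

```python
# Minimal sketch: flag first-time senders and display-name/address mismatches.
# known_senders is a hypothetical history of (display_name, address) pairs.
from email.utils import parseaddr

known_senders = {("Dana Ruiz", "dana.ruiz@example.com")}  # built from past mail

def sender_anomalies(from_header: str) -> list[str]:
    name, addr = parseaddr(from_header)
    flags = []
    if (name, addr) not in known_senders:
        flags.append("first-time sender")
    # Display name matches a known contact but the address does not:
    # a common executive-impersonation pattern.
    if any(n == name and a != addr for n, a in known_senders):
        flags.append("known display name, new address")
    return flags

print(sender_anomalies('"Dana Ruiz" <dana.ruiz@evil.example.net>'))
# ['first-time sender', 'known display name, new address']
```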

MFA everywhere: Even if credentials are phished, MFA provides a barrier. Phishing-resistant MFA (FIDO2/passkeys) defeats most real-time phishing attempts.
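
Why FIDO2 resists real-time phishing is worth making concrete: the browser embeds the page's actual origin in the signed client data, so an assertion relayed through a lookalike domain fails verification at the real site. A conceptual sketch of just that origin check (not full WebAuthn validation, and the origins are invented):

```python
# Conceptual sketch of why FIDO2/WebAuthn resists real-time phishing: the
# browser embeds the page's actual origin in the signed clientDataJSON, so an
# assertion captured on a lookalike domain fails verification at the real
# site. This shows only the origin check, not full WebAuthn validation.
import json

EXPECTED_ORIGIN = "https://login.example.com"  # the relying party's real origin

def origin_check(client_data_json: bytes) -> bool:
    client_data = json.loads(client_data_json)
    return client_data.get("origin") == EXPECTED_ORIGIN

# A victim signs in through a phishing proxy at login.examp1e.com; the browser
# records the proxy's origin, so the relying party rejects the relayed assertion.
phished = json.dumps({"type": "webauthn.get",
                      "origin": "https://login.examp1e.com"}).encode()
print(origin_check(phished))  # False: the captured assertion is useless
```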

Establish Out-of-Band Verification Procedures

For high-risk actions, require verification through a separate channel:

  • Wire transfer requests require callback to a known number
  • Password reset requests require verification through the identity provider, not email links
  • Vendor payment changes require confirmation through established contacts
  • Any request from “executives” for unusual actions requires direct verification

Document these procedures. Train employees on them. Enforce them even when inconvenient.
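
One way to make such procedures enforceable rather than aspirational is to encode them as data that workflow tooling can check. A minimal sketch; the request types, channels, and approver counts are illustrative assumptions:

```python
# Minimal sketch of an out-of-band verification policy encoded as data, so it
# can be enforced by tooling rather than by memory. Request types, channels,
# and approver counts here are illustrative assumptions.
VERIFICATION_POLICY = {
    "wire_transfer": {"channel": "callback_known_number", "approvers": 2},
    "password_reset": {"channel": "identity_provider", "approvers": 1},
    "vendor_payment_change": {"channel": "established_contact", "approvers": 2},
    "executive_request": {"channel": "direct_callback", "approvers": 1},
}

def required_verification(request_type: str) -> dict:
    """Look up the out-of-band steps a request must clear before execution."""
    # Unrecognized request types default to the strictest handling.
    return VERIFICATION_POLICY.get(
        request_type, {"channel": "callback_known_number", "approvers": 2}
    )

print(required_verification("wire_transfer"))
# {'channel': 'callback_known_number', 'approvers': 2}
```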

Prepare for Voice and Video Attacks

  • Brief executives and finance teams specifically on voice cloning risks
  • Establish code words or verification questions for sensitive phone conversations
  • Consider requiring video-off callbacks for critical authorizations (audio-only to a known number rather than a video call from an unknown one)
  • For extremely high-risk situations (very large transfers, critical system access), require in-person verification or multi-party authorization

Monitor and Respond

Some phishing attempts will succeed. Detection and response matter:

  • Monitor for credential use from unusual locations/devices
  • Alert on impossible travel and access pattern anomalies
  • Have rapid response procedures for reported phishing (block sender, isolate affected accounts, reset credentials)
  • Conduct post-incident analysis to understand what succeeded and why
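
The impossible-travel check above is concrete enough to sketch: compute the great-circle distance between consecutive logins and alert when the implied speed is physically implausible. A minimal version, with the speed threshold as a tunable assumption:

```python
# Minimal sketch of an impossible-travel check: if two logins by one account
# imply a ground speed no airliner reaches, raise an alert.
from math import asin, cos, radians, sin, sqrt

MAX_PLAUSIBLE_KMH = 1000  # roughly airliner cruise speed; a tunable assumption

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def impossible_travel(prev, new) -> bool:
    """Each login is (unix_seconds, lat, lon); True means physically implausible."""
    hours = abs(new[0] - prev[0]) / 3600
    km = haversine_km(prev[1], prev[2], new[1], new[2])
    if hours == 0:
        return km > 0  # simultaneous logins from two different places
    return km / hours > MAX_PLAUSIBLE_KMH

# London at 09:00 UTC, then Singapore forty minutes later: clearly flagged.
print(impossible_travel((1735808400, 51.5, -0.12), (1735810800, 1.35, 103.82)))  # True
```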

The Actual Risk Profile

Here’s an honest assessment of where AI-enhanced social engineering matters most:

Highest risk: Voice cloning targeting executives and finance. The technology works, the attack surface is clear, and the payoff for successful BEC is high.

High risk: Well-crafted phishing at scale. Mass campaigns now carry professional-quality content, so they no longer succeed only against readers who overlook sloppy language.

Medium risk: Enhanced spearphishing. More personalized targeting, faster turnaround on campaign creation.

Lower risk (for now): Real-time video deepfakes. The technology is emerging but not yet mature for widespread attack use.

The overall threat level has increased, but it’s an evolution rather than a revolution. Organizations with mature phishing defenses are in better shape than those that relied on “spot the typo” training. AI hasn’t made phishing unstoppable—it’s made it somewhat harder to stop.

Invest proportionally. Improve training, strengthen technical controls, establish verification procedures. Don’t panic, but don’t dismiss the change either.

Ready to make security your competitive advantage?

Schedule a call