The Scam That Knew My Resume: AI displacement and the coming wave of cybercrime that targets YOU

Prepare for a Tidal Wave of AI Scams
I’ve been the target of several scam attempts recently.
The first was a text message:
Hey Dad when you have a moment please save my new line and text me on +1(650)2507384, I accidentally dropped my phone in water. I’m texting you from the man in the repair stores phone
This arrived via iMessage from a Moroccan phone number. Apple helpfully flagged it as potential spam. It’s the kind of scam most of us recognize instantly—generic, impersonal, spray-and-pray. The scammer sends this to thousands of numbers hoping a few parents will panic and respond. It’s crude, but it works often enough to be profitable.
The second was a phone call. The caller ID claimed to be my mortgage company, so I picked up. Before the voice on the other end told me what the call was about, they confirmed my name and address. When they asked for the last four digits of my Social Security number, I became suspicious. I told them I would not give that information to anyone who calls and asks without first proving their own identity. When I pressed for details about why they were calling, there was a little squirming and then an abrupt end to the call. Scam two averted. This one was less generic than the first: they had connected my phone number, my name, and my mortgage company. That is getting somewhat personalized.
Now here’s the third attempt—and it’s a different animal entirely.
The Recruiter Who Wasn’t
An email arrived from “Brooke Leikam,” identifying herself as a Recruiting Manager at Google and writing from brookeeleikaam@gmail.com. (I apologize to the real Brooke if this is a stolen identity.) The email opened with flattery:
I came across your profile and was impressed by your extensive leadership across AI, platform engineering, and innovation technology, particularly your work at Intuit, PARC, and your current Stealth AI startup.
What surprised me was how targeted it was. It referenced my actual career history—companies I’ve worked at, technologies I’ve used (all information freely available on LinkedIn and elsewhere)—and it mapped my experience to alleged open roles at Google. It was clearly an email generated by an AI agent. No self-respecting recruiter would spend the time to research me this way unless the job were far more specialized than what they described. Still, it was intriguing. I assumed Google must be experimenting with agent-guided recruiting, which was even more interesting than the job titles. I love learning how technology is being used, so I expressed interest.
In response, she sent more detailed job descriptions, each tailored to a different aspect of my background. One of them—“Innovation Lead: Applied AI Research & Strategic Initiatives”—was so well matched to my interests that it felt written for me, or at least for my LinkedIn profile. There was more evidence I was conversing with an agent, but that was to be expected. Great. I continued to engage, eventually sending my resume and a cover letter. Then came the pivot:
I’ve completed an initial review and also ran your resume through a first-pass assessment tool commonly used by recruiters and ATS systems. At this stage, it scores around 49 out of 100… We work closely with a resume expert who specializes in senior AI, innovation, and platform engineering leaders… I would strongly recommend engaging him.
And there it was. The entire multi-day conversation—the flattery, the personalized job descriptions, the carefully built rapport—was a funnel designed to sell me a paid resume service, or perhaps to pull me deeper into something worse. I’m not sure where it was leading, because that is where I stopped.
What I Learned From This Experience
I may not be the most paranoid security expert, but I don’t consider myself easy to fool. I’ve spent decades in technology. I know what phishing looks like. And yet this interaction had me engaged for several exchanges before the red flags became undeniable. The conversation was pleasant and professional, and while I knew I was conversing with an AI agent, I wasn’t clear about who was behind it.
The red flags were there if you looked:
- A Google recruiter using a personal Gmail account, not @google.com
- The subtle misspelling in the email address (“Leikaam” with a double ‘a’)
- Responses arriving at 1:36 AM and 11:18 PM from someone supposedly in San Diego
- No job requisition numbers or links to Google Careers
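The first two of those checks are mechanical enough to automate. Here is a minimal Python sketch; the free-mail list and the similarity threshold are illustrative choices of mine, not a vetted detector:

```python
import difflib

# Illustrative heuristics only; real phishing detection is much harder.
FREE_MAIL = {"gmail.com", "outlook.com", "yahoo.com"}

def red_flags(sender: str, claimed_name: str) -> list[str]:
    """Return simple red flags for an email claiming a corporate role."""
    flags = []
    local, _, domain = sender.lower().partition("@")

    # A recruiter claiming to work at a company should write from its domain.
    if domain in FREE_MAIL:
        flags.append(f"claims a corporate role but writes from {domain}")

    # Flag near-miss spellings of the claimed name ("Leikaam" vs. "Leikam").
    target = claimed_name.replace(" ", "").lower()
    ratio = difflib.SequenceMatcher(None, local, target).ratio()
    if 0.75 <= ratio < 1.0:
        flags.append(f"address is a near-miss of the claimed name (similarity {ratio:.2f})")

    return flags

# Flags both the Gmail domain and a 0.92 near-miss on the name.
print(red_flags("brookeeleikaam@gmail.com", "Brooke Leikam"))
```

None of this helps once scammers register convincing lookalike domains, which is why the process red flags (no requisition numbers, replies at odd hours) still matter.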
But the sophistication of the personalization was effective. I wasn’t used to being targeted like this; I’m just not interesting enough. The job descriptions weren’t boilerplate—they were crafted to match my specific experience and interests. The language was professional, measured, and credible.
This level of personalization, targeting one individual at a time, would have been economically impossible a few years ago. A human scammer couldn’t research my background, generate three tailored fake Google roles, and craft multiple rounds of nuanced professional correspondence—not at scale, not for the economics of a resume-service upsell. But an AI can do all of this in seconds, for pennies. We are in a new era.
The Scale of What’s Coming
Compare the three scams. The “Hey Dad” text is the past: generic, obvious, dependent on volume to find the gullible. The phone call was more targeted, but still easy to defend against. The fake Google recruiter is the future: personalized, patient, powered by AI.
The data on this shift is sobering. AI-generated phishing emails achieve click-through rates more than 4 times higher than human-crafted ones. AI-driven fraud increased by 1,210% in 2025 alone. The FBI’s latest Internet Crime Report shows total losses exceeding $16 billion—up 33% from the prior year. Phishing losses nearly quadrupled, from $18.7 million to $70 million.
The economics have fundamentally changed. Tools like FraudGPT—available on dark web markets for as little as $200 per month—can craft spear phishing emails, generate scam pages, and create convincing business correspondence. What once required a compound full of human operators in Southeast Asia can now be run by a single person with a laptop. According to INTERPOL’s 2026 Global Financial Fraud Threat Assessment, AI-enabled scams are 4.5 times more profitable than traditional ones. Worse still, a scammer can fire up their own version of an OpenClaw agent, give it the goal of scamming individuals, and let it loose.
Put simply: the barrier to entry for sophisticated cybercrime has collapsed, and the return on investment has soared.
The Displacement Connection
This connects to a broader theme of this blog series.
Hundreds of thousands of tech workers have been laid off since 2023—over 260,000 in 2023 alone, another 150,000+ in 2024 and 2025 combined. In 2026, the pace has accelerated to nearly 900 layoffs per day. And increasingly, companies name AI adoption as a primary driver—replacing human workers with the very tools those workers helped build.
These aren’t unskilled workers. They’re engineers, developers, system architects—people with deep technical knowledge, accustomed to comfortable salaries, and now facing a shrinking job market in their field. Most will find new paths: adapt, retrain, or pivot. I’ve written about how developers can navigate this transition, and I believe most will.
But not all.
The uncomfortable truth is that some number of technically skilled, financially desperate people will look at the tools available to them and make a different calculation. When you can buy a scam toolkit for the price of a streaming subscription, when your technical skills make you exceptionally good at deploying it, and when the legitimate job market has rejected you—the temptation is real.
This isn’t hypothetical. Research shows that disgruntled and displaced tech workers are already offering their services on dark web marketplaces. Criminal enterprises are actively recruiting with job listings offering six-figure compensation, benefits packages that mimic legitimate tech employers—paid time off, sick leave, performance bonuses, even referral programs—and in some cases salaries that exceed what these workers earned in their legitimate careers.
Consider the irony: AI displaces a skilled developer. That developer, struggling to find work, encounters an AI-powered scam toolkit. The same technical aptitude that made them valuable in the legitimate economy makes them dangerous in the illegitimate one. And these AI tools don’t question your intentions.
What This Means for All of Us
The convergence of mass technical displacement and accessible AI scam tools creates a security problem that goes beyond what traditional cybersecurity can address. We’re not just dealing with organized crime syndicates anymore. We’re dealing with a potentially large pool of skilled individuals with the means, knowledge, and motivation to deploy sophisticated, personalized attacks at scale.
For individuals, the implications are practical. The scams coming at us are going to get better—more personalized, more patient, harder to detect. The “Hey Dad” text from a Moroccan number will seem quaint compared to what’s coming. Even a voice on the phone can’t be trusted, as voice scams, or “vishing,” are on the rise. My experience with the fake Google recruiter is a preview: AI-generated interactions that reference your real history, match your real interests, and build real rapport before making their move. The old advice—“look for spelling errors and generic language”—is no longer enough. Some say those errors were intentional, a filter for the most gullible; either way, the sophistication has risen, and so has the caliber of the targets. AI doesn’t make spelling errors and it doesn’t do generic. It is coming for YOU.
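One concrete habit that still helps: save a suspicious message from your mail client and read the headers your provider attaches. A small Python sketch using only the standard library (“suspicious.eml” is a hypothetical filename):

```python
from email import policy
from email.parser import BytesParser

# Parse a message saved from a mail client; most clients offer a
# "download original" or "show source" option.
with open("suspicious.eml", "rb") as f:
    msg = BytesParser(policy=policy.default).parse(f)

print(msg["From"])                    # does the display name match the address?
print(msg["Reply-To"])                # scammers often redirect replies elsewhere
print(msg["Authentication-Results"])  # look for spf=pass, dkim=pass, dmarc=pass
```

A failed or missing DMARC result on mail claiming to come from a corporate domain is a strong warning sign. Note that a pass from gmail.com only confirms the mail really came from Gmail, which is exactly the problem with a fake recruiter on a personal account.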
For organizations building and deploying AI, there’s a responsibility question. The same models that help me write and build software can be fine-tuned to generate personalized phishing at scale. The “dual use” problem isn’t theoretical anymore—it’s operational.
And for society, the displacement question takes on a new dimension. When we talk about the implications of AI for the workforce, we usually frame it in terms of economics—lost wages, retraining needs, shifting skill requirements. We don’t talk enough about what happens when a large population of technically sophisticated people loses its economic footing while simultaneously gaining access to increasingly powerful tools. History suggests that economic desperation, combined with capability and opportunity, creates problems that no amount of cybersecurity spending can fully solve.
As for me, I’m putting a few practical safeguards in place. For key people in my life, like my family and my financial advisor, I’m establishing a shared secret word so we can identify each other in high-risk situations.
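The low-tech version is simply a word agreed on in person and never written down. For anyone who wants a digital analogue, here is a toy Python sketch of the same idea: store only a salted hash of the word, never the word itself, and compare in constant time (the salt and words below are placeholders, not real secrets):

```python
import hashlib
import hmac

# Placeholders for illustration; a real shared word should never appear in code.
SALT = b"pick-your-own-salt"
STORED_HASH = hashlib.sha256(SALT + b"placeholder-word").hexdigest()

def verify(spoken_word: str) -> bool:
    """Compare a challenge word against the stored hash in constant time."""
    candidate = hashlib.sha256(SALT + spoken_word.encode()).hexdigest()
    return hmac.compare_digest(candidate, STORED_HASH)

print(verify("placeholder-word"))  # True: the caller knows the shared word
print(verify("anything-else"))     # False: hang up and call back on a known number
```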
Questions Worth Asking
- How do you evaluate unsolicited outreach now versus five years ago? Has your threshold for trust changed?
- If we’re going to displace hundreds of thousands of skilled workers through AI adoption, what safety nets should exist to reduce the likelihood that some turn to cybercrime?
- Is the cybersecurity industry prepared for an era when the attacker has the same AI tools as the defender—and possibly more motivation?
What do you think? I’d love to hear your thoughts.