Do you remember that scene in Ocean’s Eleven when Terry Benedict (played by Andy Garcia), the slick, always-in-control casino owner, stares at his wall of security monitors, thinking he’s got the thieves cornered?
He’s cool, confident, arms crossed, watching what looks like a live feed of his guards confronting the robbers in the Bellagio vault. But then…boom…it hits him. The feed isn’t live at all.
It’s a recording.
A fake.
He’s been watching a perfectly looped video while Danny Ocean’s crew robs him blind.
The “most secure vault in Las Vegas” was undone not by brute force, but by deception.
That film came out in 2001. Yet nearly 25 years later, the scene doesn’t feel like fiction anymore.
Only now, the con artists don’t need Brad Pitt or George Clooney; they’ve got AI.
Earlier this week, I posted a video featuring Dave Lake from the Center on Shadow Economics talking about crime and technology. He made a point that stuck with me: criminals are already light-years ahead of policymakers when it comes to exploiting AI.
A couple of hours later…ping…an email lands in my inbox.
Subject line: Dave Lake.
Body: “Hey, can you send me more information about him?”
At first, I almost replied, since it appeared to be part of a discussion thread I was already on. It looked normal. But something about it felt odd.
I hovered over the sender’s address.
That’s when I realized: this wasn’t from a person. It was an AI-generated phishing email.
And for a second, I felt like Terry Benedict. Eyes on the screen, thinking everything’s fine, while the vault’s already empty.
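(For the technically curious: that “hover over the address” check is easy to automate. Here’s a minimal Python sketch, using a made-up message and a hypothetical trusted-domain list, that flags mail whose actual sending domain isn’t one you’d expect:)

```python
import email
from email.utils import parseaddr

def sender_looks_suspicious(raw_message: str, trusted_domains: set[str]) -> bool:
    """The automated version of hovering over the sender's address:
    the display name can claim anything, but the domain after the @
    is what gives the game away."""
    msg = email.message_from_string(raw_message)
    _display_name, address = parseaddr(msg.get("From", ""))
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    return domain not in trusted_domains

# A made-up example: familiar display name, unfamiliar domain.
raw = (
    "From: Dave Lake <dave.lake@mail-notify-center.xyz>\n"
    "Subject: Dave Lake\n"
    "\n"
    "Hey, can you send me more information about him?\n"
)
print(sender_looks_suspicious(raw, {"example.org"}))  # True -> treat with caution
```

Real mail filters do far more than this (SPF, DKIM, and DMARC checks, for a start), but the instinct is the same: trust the address, not the name.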
To make matters worse, earlier that day a family member had called me about a “parking ticket” email that almost tricked them into paying a fake fine.
So, I reached out to Dave.
“Hey, just a heads-up, you’re starring in a phishing scam now.”
He laughed. “Oh yeah,” he said. “AI spam? It’s only going to get worse.”
His response motivated this post.
Picture it: a dimly lit room somewhere overseas, or maybe just a few blocks away. A laptop humming. A monitor glowing electric blue. Someone huddled around a screen, half-laughing, half-focused.
To them, this isn’t crime, it’s a sport.
They’re not cracking safes or blowing open doors.
They’re tricking people at scale, millions of them, with simple apps and synthetic voices.
Their best tool isn’t a laser drill. It’s AI.
While lawmakers argue about what “responsible AI” should look like, another crowd is already using it, creatively, aggressively, and just far enough ahead that most of us can’t see it coming.
Think Danny Ocean’s crew: organized, confident, always two steps ahead. Only this team doesn’t need disguises. They’ve got deepfakes (video, cloned voices, images, text, and bots) that mimic behavior and can sound exactly like your boss, spouse, parent, or child.
In Ocean’s Eleven, the casino didn’t lose because its vault was weak. It lost because it was predictable.
That’s the real danger today.
Criminals are rewriting the rulebook, turning algorithms into digital lock picks.
And when it comes to adapting to new tech, there’s a clear hierarchy, and the bad guys are sitting at the top.
While most organizations are still trying to figure out how to plug a chatbot into their website, criminals and malicious actors are training AI on stolen data, creating fake identities so convincing they can pass government verification checks.
Fake passports. Fake social media histories. Fake family photos. Everything needed to fool even a seasoned investigator.
And when it comes to social engineering?
AI doesn’t just write phishing messages; it writes perfect ones. It mimics tone, slang, and timing, right down to your coworker’s Monday-morning sarcasm.
Take what happened in Hong Kong in 2024.
A finance employee at the multinational engineering firm Arup received what looked like an ordinary email from the company’s CFO in London. Nothing unusual, just a “confidential” request to handle a sensitive financial transaction.
A short while later, the employee was invited to a video conference. On screen were several familiar faces: colleagues, the CFO, and even other senior executives. Everyone looked real. Everyone sounded real.
But not one of them was.
Every single “person” on that call, except the employee, was a deepfake.
The scammers had stitched together AI-generated versions of real executives using publicly available video and audio clips. The deepfakes blinked, nodded, and spoke naturally, convincing enough to make the employee’s initial doubts disappear.
Over the next week, following “instructions” from those fake colleagues, the employee authorized fifteen separate wire transfers, totaling $25.6 million.
By the time the company realized what had happened, the money was long gone.
No guns, no masks, no getaway cars. Just deepfakes, data, and a believable story.
That’s not a movie. That’s 2024.
Meanwhile, law enforcement is still playing catch-up.
Procurement delays. Policy debates. Outdated training.
By the time one task force learns how to counter an AI-driven scam, the criminals have already pivoted: new tools, new tricks, same targets.
So, here’s the real question:
If the shadow economy is evolving faster than the legitimate one, can we really afford to keep learning at yesterday’s speed?
Because right now, we’re not Terry Benedict, watching from the safety of a control room.
We’re the ones staring at the screen, thinking we’ve got everything under control, while the vault’s already being emptied.



