Threat Intelligence Has a Human-Shaped Blind Spot
How I realized what I was taught about threat intelligence was missing something crucial.
OPINION
Last weekend, someone used email bombing software to deluge my personal inbox with hundreds of mailing list subscriptions in less than an hour. The goal wasn't to overwhelm my inbox for its own sake; it was to hide three specific messages. Buried at the bottom of the pile were three welcome emails from American Express for a credit card I didn't apply for. The scheme worked — briefly. By the time I noticed the Amex messages, they were 800 emails deep.
Email bombing is certainly not a new technique for covering up evidence of fraud, but what struck me was where else I'd seen it. Deluge-by-email has been an online harassment tactic for years: a cheap way to make victims feel violated, powerless, and overwhelmed. What I hadn't connected — until it happened to me in both contexts — was why email bombing is as effective for harassment as it is for fraud. Email bombing exploits the very human vulnerability of cognitive overload, the biological limit on how much information we can meaningfully process. Whether abused for harassment or for profit, the attack surface stays the same.
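From the defender's side, the mechanics of the attack suggest a crude countermeasure: if an inbox suddenly fills with subscription-confirmation noise, the handful of messages that don't match that pattern are the ones the flood may be trying to bury. Below is a minimal triage sketch; the message fields, keyword list, and thresholds are all illustrative assumptions, not a production detector:

```python
from datetime import datetime, timedelta

# Subject-line keywords typical of mailing-list sign-up noise (illustrative).
CONFIRM_WORDS = ("confirm", "welcome to", "subscription", "verify your")

def find_buried_messages(messages, window_minutes=60, flood_threshold=50, noise_ratio=0.8):
    """Flag a likely email bomb and surface the outlier messages.

    messages: list of dicts with 'sender', 'subject', and 'received' (datetime).
    Heuristic: if an unusually large burst of mail arrives in a short window
    and most of it looks like subscription confirmations, return the few
    messages in that window that do NOT look like confirmations.
    """
    if not messages:
        return []
    newest = max(m["received"] for m in messages)
    cutoff = newest - timedelta(minutes=window_minutes)
    recent = [m for m in messages if m["received"] >= cutoff]

    if len(recent) < flood_threshold:
        return []  # no flood detected in the window

    noise = [m for m in recent
             if any(w in m["subject"].lower() for w in CONFIRM_WORDS)]
    if len(noise) / len(recent) < noise_ratio:
        return []  # a burst, but not subscription-shaped

    # The non-noise remainder is what the flood may be hiding.
    return [m for m in recent if m not in noise]
```

A real implementation would also have to handle attackers who vary subject lines, and legitimate bursts (such as signing up for a new service), which is exactly why cognitive overload is hard to defend against with keyword rules alone.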
That realization revealed a blind spot for me in how I had been taught to approach threat intelligence. We organize defenses around adversary types like cybercriminals, nation-states, insider threats, extremists, and harassers. But human vulnerabilities do not respect those categories. Techniques developed to cause emotional harm often translate directly into techniques for financial theft, not because criminals are watching harassment forums for ideas, but because both are targeting the same cognitive limits. If we want to anticipate denial-of-attention attacks in fraud contexts, we need to understand how they worked first in harassment contexts. The lessons are shared, even when the threat models never talk to each other.
Platform trust and safety teams track coordinated harassment, hate speech, brigading, and inauthentic behavior. Fraud and security teams track business email compromise (BEC), credential stuffing, and malware campaigns. These two groups attend different conferences, publish in different venues, use different terminology, and build different tools. One side talks about "coordinated inauthentic behavior" while the other talks about "scam campaigns," but they're often describing the same structures using the same techniques.
I've worked in both threat intelligence spaces, first in anti-extremism and platform abuse, now in fighting scams. What strikes me most is how rarely these communities talk to each other, despite solving fundamentally similar problems: protecting people from organized attempts to exploit human vulnerabilities.
What Each Side Knows That the Other Doesn't
My email bombing experience shows that denial-of-attention attacks appear in both worlds but are rarely recognized as the same threat class.
In my anti-extremism work, I saw how coordinated harassment campaigns worked to silence victims. Flood a target's mentions, DMs, and replies, and the attacker achieves a psychological goal: the victim, overwhelmed, stops participating. Mass reporting campaigns worked similarly. Attackers would flood a platform's moderation queue hoping that genuinely harmful content would be buried, or that legitimate accounts would get caught in automated enforcement. Recently, users of the X platform started generating thousands of sexualized images of women and girls as harassment fuel, with predictable results.
In my work today, I see the same techniques weaponized for fraud. The mechanism is identical: saturate human attention capacity until critical information becomes invisible. But we treat these as unrelated phenomena because the attackers' end goals are different.
Both sides are studying social manipulation at scale, but neither has the complete picture. When I see a romance scam now, I recognize the same victim loneliness and social isolation that were so central to stories of radicalization. When I tracked coordinated harassment campaigns then, I should have been paying more attention to how they recruited, divided work, and coordinated timing, just like scam centers do.
Human-Scale Threat Intelligence
What would threat intelligence look like if we organized it around human vulnerabilities rather than adversary types?
We would track exploitation across harassment, fraud, espionage, and influence operations, not as separate threat landscapes but as variations on the same attack surface. We would study tactics used in domestic abuse, cults, scams, and extremist recruitment as a unified phenomenon. We would recognize that understanding how one adversary type exploits a human vulnerability teaches us how all adversary types can exploit it.
This doesn't require wholesale organizational restructuring. Individual analysts can start making these connections now. Read outside your silo. If you work in fraud, follow platform abuse research. If you work in trust and safety, follow financial crime reporting. When you see a social engineering technique, ask: "Where else does this human vulnerability get exploited?"
Our adversaries are not constrained by corporate organizational boundaries. Our threat intelligence shouldn't be either.