Watching a Friend Get Hacked in 1987: The Security Lesson That Stuck

He got social engineered through a phone call. No exploit, no technical vulnerability - just someone who knew how to ask the right questions.

According to IBM research, human error is a major contributing cause in 95% of cybersecurity breaches. Nearly forty years ago, I watched my friend's BBS get destroyed by a phone call. The same tactics that worked on a teenage sysop in 1987 still work on Fortune 500 security teams today. Here's the truth: we've spent billions on firewalls while ignoring the vulnerability that actually matters.

TL;DR

Enable 2FA everywhere. Use a password manager. Check for your email in breach databases. Security hygiene is boring but necessary.
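
On the breach-database point, here's a minimal sketch of checking an address against Have I Been Pwned (API v3). It assumes you've obtained an HIBP API key; the key and email below are placeholders, not real values.

```python
# Minimal sketch: ask Have I Been Pwned which breaches contain an email.
# Requires an HIBP v3 API key (the key and email here are placeholders).
import requests

HIBP_API_KEY = "your-api-key-here"  # placeholder

def check_breaches(email: str) -> list[str]:
    """Return the names of known breaches containing this address."""
    resp = requests.get(
        f"https://haveibeenpwned.com/api/v3/breachedaccount/{email}",
        headers={
            "hibp-api-key": HIBP_API_KEY,
            "user-agent": "breach-check-sketch",  # HIBP requires a user agent
        },
        params={"truncateResponse": "true"},  # breach names only
        timeout=10,
    )
    if resp.status_code == 404:
        return []  # good news: no known breaches for this address
    resp.raise_for_status()
    return [breach["Name"] for breach in resp.json()]

if __name__ == "__main__":
    for name in check_breaches("you@example.com"):
        print(f"Found in breach: {name}")
```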

Running a bulletin board system in the late 1980s meant being a sysop, system administrator, and security engineer all at once. You learned by doing. And sometimes you learned by watching someone else get burned.

The attacker didn't break in through some technical vulnerability. He social engineered his way in. Called my friend on the phone, claimed to be a fellow sysop having trouble with his copy of TBBS, asked innocent-sounding questions. My friend answered them. Three days later, his user database was gone.

The Setup

His BBS ran on a PC clone with a 1200-baud modem. One phone line. Callers would dial in, leave messages, download files, play door games. It wasn't much by today's standards, but he was proud of it. He'd spent months building up a community of regular callers.

He'd configured everything himself. The software, the file directories, the user permissions. He thought he understood the system. He was wrong.

The guy who called himself "PhreakMaster" (of course he did) had been watching the BBS for weeks. He'd noticed my friend was active on FidoNet, knew which software he was running, knew his calling hours. He'd done his homework. Most attacks aren't sophisticated technical exploits - they're the result of patient reconnaissance.

The Attack

In retrospect, the phone call was masterful. The attacker asked about a "bug" in TBBS that was causing his system to crash. Could my friend check his CONFIG.SYS settings? What about his batch files? How did he handle the sysop backdoor command?

My friend told him. All of it.

The backdoor command was the kill shot. Every BBS had one - a special key sequence that would drop you to the command prompt even while the BBS software was running. Most sysops changed theirs from the default. My friend had too. But he told this stranger what he'd changed it to.

Why? Because the guy seemed helpful and knowledgeable. Because he was asking questions that only a fellow sysop would know to ask. Because my friend wanted to help someone having the same kind of technical problems he'd faced himself. The attacker exploited exactly what made my friend a good member of the BBS community - his willingness to share knowledge.

The attacker called back that night. My friend wasn't home. His BBS was running unattended, like it always did. The attacker dialed in, typed the custom backdoor sequence, and suddenly had full access to the DOS prompt with all the files sitting there.
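
If you never ran a BBS, here's a hypothetical sketch of what such a backdoor amounts to. This is not TBBS's actual code, and the sequence is invented; it just shows the shape of the vulnerability: one secret string standing between a caller and the operating system.

```python
# Hypothetical sketch of a 1980s-style sysop backdoor. Not TBBS's real
# implementation; the sequence below is invented for illustration.
import os
import subprocess

BACKDOOR_SEQUENCE = "##DROP"  # the "secret" a sysop would change from the default

def handle_caller_input(keystrokes):
    """Watch the caller's keystrokes for the magic sequence."""
    window = ""
    for ch in keystrokes:
        window = (window + ch)[-len(BACKDOOR_SEQUENCE):]  # rolling buffer
        if window == BACKDOOR_SEQUENCE:
            # Whoever knows the string gets a shell: no password prompt,
            # no audit log, no second factor. Obscurity was the only control.
            subprocess.run("cmd" if os.name == "nt" else "sh")  # today's stand-in for COMMAND.COM
            return
        # ...normal BBS message and file handling would go here...
```

The attacker needed exactly one piece of information to reach that shell, and a friendly phone call produced it.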

The Damage

He deleted the user database. All the callers - their handles, their passwords, their message histories - gone. Months of community building, wiped out in seconds. He left a text file that just said "security through obscurity isn't security."

He was right. My friend hated him for it, but he was right.

The backdoor command had been treated like a secret instead of a vulnerability. The assumption was that because nobody knew about it, nobody could exploit it. Classic mistake, and one I've watched companies make over and over in the decades since.

What Should Have Been Done Differently

The real vulnerability wasn't the backdoor command itself. It was the fact that someone could be socially engineered into revealing it. The attack surface wasn't the software - it was the human.

Looking back, the defenses that were needed:

  • Never trust phone verification. Anyone can claim to be anyone on a phone call. There's no authentication layer on voice communication.
  • Treat all security information as confidential. Even seemingly innocent details about your configuration can be assembled into an attack.
  • Assume every system has vulnerabilities. Plan for breach, not just prevention. What happens when someone gets in?
  • Keep backups. This one still hurts. There was no backup of that user database. All that community data, gone forever.
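
On that last point, here's a minimal sketch of the habit that would have saved the user database. The paths and retention window are hypothetical; adjust to your own setup.

```python
# Minimal sketch of a daily backup with simple retention.
# Paths and the retention count are placeholders.
import shutil
import time
from pathlib import Path

DATA = Path("bbs/users.dat")   # the file that was lost forever
BACKUPS = Path("backups")
KEEP = 14                      # retain two weeks of daily copies

def backup():
    BACKUPS.mkdir(exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    shutil.copy2(DATA, BACKUPS / f"users-{stamp}.dat")
    # prune the oldest copies beyond the retention window
    for old in sorted(BACKUPS.glob("users-*.dat"))[:-KEEP]:
        old.unlink()

if __name__ == "__main__":
    backup()
```

Run it from a scheduler and the worst an attacker can cost you is a day of data instead of everything.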

The Lesson That Stuck

My friend rebuilt his BBS. New user database, new security model. He changed everything about how he thought about access control. But more importantly, watching this happen changed how I thought about trust.

Social engineering remains the most effective attack vector today, just like it was in 1987. Proofpoint's Human Factor research confirms that human-targeted attacks continue to be more effective than technical exploits. Phishing. Pretexting. Vishing. The techniques have evolved, but the core exploit is the same: people want to be helpful, and attackers exploit that impulse.

The technology changes. The human vulnerabilities don't.

Watching It Repeat

I've seen this pattern play out countless times since then. The 2011 RSA breach started with a phishing email. The 2020 Twitter hack exploited employees via phone calls. Nearly four decades later, the playbook hasn't changed: what fooled a teenage sysop in 1987 still fools Fortune 500 security teams.

Companies invest millions in firewalls, intrusion detection systems, zero-trust architectures. Then someone calls the help desk, claims to be a new employee who forgot their password, and walks right through all of it. According to the Verizon Data Breach Investigations Report, social engineering and pretexting attacks have increased year over year, with credential theft remaining the top objective.

The technology has gotten infinitely more sophisticated. The humans haven't. We're still the same creatures who evolved to cooperate with our tribe, to help people who seem trustworthy, to share information with those who ask nicely.

Why This Still Matters

Every security training program I've seen focuses on the wrong things. They teach people to recognize phishing emails. They quiz employees on password policies. They mandate annual certifications.

But they don't address the fundamental problem: most people's default state is helpfulness. That's not a bug - it's what makes human society function. But it's also the attack surface that never gets patched.

The best security cultures I've observed don't try to make people suspicious of everything. That's exhausting and unsustainable. Instead, they create clear escalation paths. Not sure if this request is legitimate? Here's exactly who to call and what to say. No judgment, no bureaucracy, just a simple process that makes verification easier than guessing.

What That Attacker Taught Us

My friend never found out who PhreakMaster really was. Probably just some bored teenager, looking for an easy target. He found one.

But that attack shaped how I think about security to this day. The most dangerous vulnerabilities aren't in your code - they're in your assumptions. The assumption that nobody would bother attacking you. The assumption that a friendly-sounding caller is who they claim to be. The assumption that your obscure configuration is as good as real protection.

I've carried that paranoia for almost 40 years now. It's served me well. Every time someone asks me for system details over the phone, I think about my friend's BBS and that text file: "security through obscurity isn't security."

The Bottom Line

Security through obscurity isn't security. It's a comforting illusion that falls apart the moment someone bothers to look. Real security assumes that attackers know everything about your system except your actual keys - cryptographers have called that Kerckhoffs's principle since the 1880s.

The most sophisticated attacks still start with the simplest exploit: a human being wanting to help. You can't patch that vulnerability. You can only plan for it.

And always, always keep backups.

"The most dangerous vulnerabilities aren't in your code - they're in your assumptions."

Sources

  • IBM research on human error as a factor in breaches
  • Proofpoint, The Human Factor
  • Verizon Data Breach Investigations Report

Learned Different Lessons?

If you were there and drew different conclusions, I'm curious what shaped your take.

Send a Reply →