Why “Perfect” Security Is Perfectly Useless
In the security world, there’s a running joke that the only truly secure computer is one that’s turned off, locked in a room, and buried underground. Sure, it isn’t laugh-out-loud funny, but it’s a great illustration of the gap between theoretical and actual security. If a system is “perfect” but impossible to use, it has failed its mission. “Actual security” is the level of protection that still holds while people are using the system to get their work done.
To build things that work out in the world, we have to stop treating security as a purely technical “code” problem and see it for the messy sociotechnical system it actually is. Actual security emerges from the interaction of three things:
• The Code (technical mechanisms),
• The Rules (organisational policy),
• The People (human interaction).
Security failures rarely happen in isolation. They appear at the seams where code, policy, and human behaviour meet. If a company makes a rule that no human brain can follow, like remembering ten unique, 20-character passwords, then people will eventually write those passwords down. This may look like human error, but it’s actually a system-level design failure.
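To make that seam concrete, here is a minimal Python sketch of a password policy that is trivial to enforce in code but hopeless to live with. The specific thresholds are illustrative, not taken from any real organisation.

```python
import string

# Illustrative only: every rule below is easy to check in code,
# but nothing here models the human who has to memorise the result.
def meets_policy(password: str, previous_passwords: list[str]) -> bool:
    """Accept only 20+ character passwords with mixed character classes
    that don't repeat any of the user's last ten passwords."""
    long_enough = len(password) >= 20
    has_upper = any(c in string.ascii_uppercase for c in password)
    has_digit = any(c in string.digits for c in password)
    has_symbol = any(c in string.punctuation for c in password)
    not_reused = password not in previous_passwords[-10:]
    return all([long_enough, has_upper, has_digit, has_symbol, not_reused])
```

The function would pass any code review you like; the predictable workaround, a password written on a sticky note, happens entirely outside anything this code can see.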
Psychological Acceptability
The idea of psychological acceptability is the bridge between a security system that works on paper and one that works in the hands of a real person. It starts from a simple observation: for most people, security is almost always a secondary task.
People don’t sit down at a computer to “think about security”. They sit down to send an email, pay a bill, or buy Labubus on the dark web. Security exists alongside those goals, always jostling for attention, so when it asks too much of the user, it gets treated like any other obstacle: postponed, bypassed or ignored altogether.
The biggest failures happen when systems force people to translate simple goals into unfamiliar technical concepts. A user wants to send a file to Alice. The interface asks them to reason about public-key infrastructure or permissions matrices. As the cognitive cost rises, people start guessing — and mistakes become inevitable because the interface demands expertise they never set out to acquire.
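A hypothetical pair of function signatures makes that gap visible. Neither comes from a real library; the names and parameters are invented for illustration.

```python
# What the interface asks the user to reason about:
def send_encrypted(file_path: str,
                   recipient_public_key: bytes,
                   cipher_suite: str,
                   acl_entries: dict[str, str]) -> None:
    """The caller must already understand keys, cipher suites and ACLs."""
    ...

# What the user actually set out to do:
def share(file_path: str, with_person: str) -> None:
    """'Send this file to Alice.' Key lookup, encryption and permissions
    become the system's problem rather than the user's."""
    ...
```

Both interfaces can sit on top of exactly the same machinery; the difference is purely in whose vocabulary the task is expressed.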
If the safe path requires more effort than the unsafe one, the lower-effort option almost always wins out. People take the route that lets them finish the task. And despite what any security expert will tell you, that choice is a completely rational response to the constraints in front of them.
A system is psychologically acceptable when it avoids putting users in that position. Protection is integrated into normal actions rather than layered on as a separate chore, so users can stay safe without having to stop, translate their intent, or become experts in cybersecurity.
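As a sketch of what that integration can look like, here is a self-contained version of the hypothetical share() call from above, with the protective steps folded inside the one action the user already wanted to take. The key directory, outbox, and placeholder cipher are stand-ins, not a real design.

```python
# Stand-in infrastructure so the sketch runs on its own.
KEY_DIRECTORY = {"alice": b"\x42"}          # hypothetical key lookup service
OUTBOX: list[tuple[str, bytes]] = []        # hypothetical delivery queue

def _encrypt(key: bytes, data: bytes) -> bytes:
    # Placeholder cipher (single-byte XOR) purely to keep the example
    # dependency-free; a real system would use an authenticated cipher.
    return bytes(b ^ key[0] for b in data)

def share(file_path: str, with_person: str) -> None:
    """One call, expressed in the user's own terms."""
    recipient_key = KEY_DIRECTORY[with_person.lower()]   # not the user's job
    with open(file_path, "rb") as f:
        ciphertext = _encrypt(recipient_key, f.read())
    OUTBOX.append((with_person, ciphertext))             # the goal: the file arrives
```

The user never stops to think about keys or ciphers; the safe path and the easy path are the same path.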
Once you start thinking about security this way, an uncomfortable question surfaces. We have well-established ways to audit code-level security, but far fewer tools for assessing the human surface. Usability reviews exist, of course, as do accessibility audits. But what’s missing is a similarly practical and robust way of examining the user-facing side of security. That’s what I want to get into next: how to audit usable security in a way that’s structured, repeatable, and focused on those seams where things tend to break.