The Data Breach Didn’t Matter (and That’s the Problem)
I flew back to Australia last week, and while I was thawing out in the sun I had a crazy thought: I’m going to get a new phone contract. My idea of “crazy” is admittedly much tamer than it was some years ago, but it’s been at least five years since I’ve had, or cared about having, a new phone, and the idea of a shiny new thing was suddenly very exciting to me.
Realistically speaking, there are only three telecom providers in Australia: Telstra, Optus, and Vodafone. I ruled Vodafone out because they have rubbish coverage where I’m staying, which left Telstra and Optus as the only viable options. When I looked at the available plans online, my brother-in-law reminded me about a massive data breach that affected Optus in 2022. I nodded and said I remembered, but I absolutely did not. Apparently millions of people had their names, dates of birth, home addresses, even passport and licence numbers dumped out into the world.
Now, I’d be lying if I said this didn’t give me pause, but the lie would grow even bigger if I told you it seriously affected my decision. Because even with all of that context, I still found myself leaning towards Optus. Why? Because the plan was cheaper. How much cheaper? Telstra was $110 a month and Optus $104. Which puts the value I place on digital security at a whopping six Australian dollars a month.
As a UX designer who works on security-critical software, I understand the stakes of digital security. But when it came time to make this decision, that understanding made no difference at all. The breach was real and bad, with serious consequences for millions of people, and STILL the cheaper plan felt like the better option. Why? Because the price tag was so much more immediate than the risk.
You can understand the stakes perfectly and still watch your brain reach for the wrong decision. This isn’t unusual. Research on online behaviour consistently shows a gap between what people say about security and what they actually do. It’s known as the privacy paradox: users report concern about privacy and security but take very little action to protect themselves. Even when large-scale breaches are widely discussed, only a small fraction of people seek out more information or change their behaviour. In lab and field studies, users routinely ignore warnings, delay updates, and reuse weak passwords. Why? Because in the moment, convenience outweighs abstract risk.
Truth is, most people don’t care about digital security, and they probably never will. In 2013, when Edward Snowden revealed the scale of government surveillance, the public reaction was largely to shrug and move on. Pew Research Center later found that only about 22% of U.S. adults said they changed how they used technology after those revelations.
So the baseline is indifference: a cold, unfeeling userbase that mostly just wants to get on with their day. It’s no good trying to wish that away; it has to be accounted for in the security model, not just in the code, but in the interface those users actually interact with.
The problem, then, is that this baseline clashes with the mindset of the people building the system. For them, security is front of mind, and like everyone else, they tend to assume (often without noticing) that the users they’re designing for will be thinking about it too. That’s a mental model mismatch: a system designed by security-minded people can end up putting too much faith in a largely indifferent userbase.
Usable security is a field of research that examines the human surface of security. It fills the gap where those mental models don’t line up, and tries to strengthen the system by designing for the user who isn’t thinking about security at all: the average person with one hand on the keyboard and the other buried in a bag of Doritos, waiting for their page to load.
Over the next few posts, I’m going to dig into this often-overlooked area of research. We’ve been thinking about it a lot in our work, and it explains a lot about why security breaks down in practice: systems are built by people who care deeply about risk, and used by people who mostly don’t. If you’ve ever wondered what that means for UX, this is where we’ll start.