The Human Factor: Designing Security for the Way People Really Behave

Every Breach Starts with a Decision 

In cybersecurity, we like to think breaches begin with code. 
In reality, they start with people under pressure, juggling deadlines, trying to do the right thing quickly. 

The Verizon Data Breach Investigations Report (DBIR 2024) found that 68% of breaches involved a human element: clicking a link, using stolen credentials, or making an error. 
Among UK organisations that identified a breach or attack, phishing was the most common type (84%).
(UK Cyber Security Breaches Survey 2024)

As vulnerability exploitation and configuration errors rise, resilience depends on protecting both the human and technical frontiers. 
Technology isn’t failing us; human nature is simply constant. Attackers know this. They don’t just hack systems; they hack psychology.

 

The Psychology of Breaches 

Humans make fast, emotional decisions because that’s how we survive. 
We trust authority, fear loss, crave convenience, and hurry when busy. These are exactly the patterns cybercriminals depend on.

Common emotional levers include: 

  • Urgency – “Your account will be locked.” 

  • Authority – “This is your IT team.” 

  • Fear – “Suspicious activity detected.” 

  • Curiosity – “You’ve received a secure document.” 

Awareness training teaches recognition (“Don’t click this”), but the real goal is habit formation - helping people pause before reacting. 
Awareness matters, yet it only works when reinforced by design. 

Humans aren’t the weakest link; they’re the most consistent variable. 
Design security that understands and absorbs that truth, and you gain resilience, not fragility. 

 

Case Studies: When Human Nature Meets Design Flaws 

1. Marks & Spencer (UK, 2025) - Trust in the Supply Chain 

Attackers entered through a third-party contractor via social engineering, disrupting operations and exposing staff data. 
(Reuters, May 2025) 

Human factor: misplaced trust and insufficient verification of external access. 
Design response: 

  • Continuous vendor-risk monitoring and quarterly access reviews. 

  • Least-privilege, time-bound access for contractors (sketched after this list).

  • SSO with device-bound passkeys to reduce password reliance. 
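
To make the second point concrete, here is a minimal Python sketch of least-privilege, time-bound contractor access. The ContractorGrant record and grant_time_bound_access helper are illustrative assumptions rather than any vendor’s API; the idea is that external access expires by default, so a quarterly review only has to justify what is still live.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical grant record: scoped to one resource and expiring on its
# own, so forgotten contractor access cannot linger indefinitely.
@dataclass
class ContractorGrant:
    contractor_id: str
    resource: str          # the single system the contractor actually needs
    expires_at: datetime   # hard expiry; renewal requires a fresh review

    def is_active(self) -> bool:
        return datetime.now(timezone.utc) < self.expires_at

def grant_time_bound_access(contractor_id: str, resource: str,
                            days: int = 30) -> ContractorGrant:
    """Issue least-privilege access that lapses by default."""
    expires = datetime.now(timezone.utc) + timedelta(days=days)
    return ContractorGrant(contractor_id, resource, expires)

# Quarterly review: expired grants drop out automatically; anything
# still active must be justified to stay.
grants = [grant_time_bound_access("vendor-042", "warehouse-api")]
still_active = [g for g in grants if g.is_active()]
```

Expiry-by-default turns the review from “find the forgotten accounts” into “confirm the active ones”.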

 

2. Change Healthcare (US, 2024) - The MFA That Wasn’t There 

Stolen credentials on a system lacking MFA triggered widespread healthcare disruption. 
(Reuters, March 2024) 

Human factor: configuration oversight and unchallenged convenience. 
Design response: 

  • Mandate phishing-resistant MFA (FIDO2/passkeys). 

  • Continuously audit for missing controls, not just annually (see the sketch below).

  • Monitor for anomalous logins and device mismatches. 
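
The second control lends itself to automation. The sketch below assumes a hypothetical identity-provider export with made-up account fields; it shows how a daily job could flag accounts with no phishing-resistant factor enrolled, the exact gap that turned stolen credentials into a crisis here.

```python
# A minimal sketch of a continuous control audit: scan an identity
# export for accounts with no phishing-resistant factor enrolled.
# The account fields and factor names are illustrative assumptions.
PHISHING_RESISTANT = {"fido2", "passkey", "smartcard"}

accounts = [
    {"user": "ops-admin",  "mfa_methods": ["sms"]},   # weak factor only
    {"user": "clinician",  "mfa_methods": ["fido2"]}, # compliant
    {"user": "svc-portal", "mfa_methods": []},        # remote portal, no MFA at all
]

def lacks_strong_mfa(account: dict) -> bool:
    """True if no phishing-resistant factor is enrolled at all."""
    return not PHISHING_RESISTANT.intersection(account["mfa_methods"])

findings = [a["user"] for a in accounts if lacks_strong_mfa(a)]
print("Missing phishing-resistant MFA:", findings)
# -> Missing phishing-resistant MFA: ['ops-admin', 'svc-portal']
```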

 

3. British Library (UK, 2023) - When MFA Gaps Turn a Breach into a Crisis 

The incident escalated because an administrator account lacked MFA. 
(ICO, April 2025 statement) 

Human factor: weak admin access hygiene and third-party complexity. 
Design response: 

  • Enforce MFA for all admin and remote accounts, as in the rule sketched below.

  • Simplify and review third-party access with clear ownership. 

  • Isolate or modernise legacy remote services. 
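
The first of these can be enforced as a deny-by-default rule in code rather than left as a policy statement. A minimal sketch, with illustrative session fields:

```python
# Sketch of a deny-by-default access rule: privileged or remote sessions
# that have not completed MFA are refused, with no per-account exceptions.
# The session fields are illustrative assumptions.
def allow_session(session: dict) -> bool:
    privileged = session.get("is_admin", False) or session.get("is_remote", False)
    if privileged and not session.get("mfa_verified", False):
        return False    # the British Library lesson: no MFA, no admin access
    return True

assert allow_session({"is_admin": True, "mfa_verified": True})
assert not allow_session({"is_remote": True, "mfa_verified": False})
assert allow_session({"is_admin": False})  # ordinary sessions unaffected
```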

 

4. Caesars & MGM Resorts (US, 2023) - When Help Desks Become Attack Vectors 

Attackers impersonated staff and persuaded IT support to reset credentials. 
(SEC / Okta, Sept 2023) 

Human factor: helpful support teams lacking robust identity verification. 
Design response: 

  • Require step-up verification for all admin resets (see the sketch after this list).

  • Record privileged support actions and sample them for quality review.

  • Implement dual-control “break-glass” credentials, rotated after use. 
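
Both the step-up check and dual control can be encoded as a gate that the help-desk tooling enforces, so a persuasive phone call has nothing to work on. A sketch, with all field names as illustrative assumptions:

```python
# A sketch of a help-desk reset gate: privileged resets require a
# completed step-up check (e.g. a live passkey challenge), and any use
# of break-glass credentials needs two named approvers.
def approve_reset(request: dict) -> bool:
    if request.get("privileged_account", False):
        if not request.get("step_up_verified", False):
            return False  # a convincing phone voice is not verification
        if request.get("break_glass", False) and len(request.get("approvers", [])) < 2:
            return False  # dual control: one helpful person cannot act alone
    return True

# The MGM-style call fails here: urgency and charm don't satisfy the gate.
assert not approve_reset({"privileged_account": True, "step_up_verified": False})
assert approve_reset({"privileged_account": True, "step_up_verified": True,
                      "break_glass": True, "approvers": ["lead-a", "lead-b"]})
```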

 

Designing Systems That Forgive Mistakes 

Traditional awareness assumes users will remember everything they’re told. Real life proves otherwise. 

Modern security architecture should accept human imperfection and design for error absorption, not elimination. 

Examples: 

  • Attachment sandboxing – open files safely even when curiosity wins. 

  • Adaptive access controls – add friction only when risk rises (sketched after this list).

  • Behavioural analytics – flag irregular activity without blame. 

  • Auto-containment – limit blast radius automatically if compromise occurs. 
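
The adaptive-access idea is simple enough to sketch. The signals, weights, and thresholds below are illustrative assumptions, not a production risk model; the point is that friction appears only when risk does:

```python
# A minimal sketch of adaptive access control: score simple risk signals
# and add friction only when the score rises.
def risk_score(ctx: dict) -> int:
    score = 0
    if ctx.get("new_device"):        score += 2
    if ctx.get("unusual_location"):  score += 2
    if ctx.get("impossible_travel"): score += 4
    return score

def decide(ctx: dict) -> str:
    score = risk_score(ctx)
    if score >= 4:
        return "block_and_alert"   # absorb the error: contain, don't blame
    if score >= 2:
        return "step_up_mfa"       # friction appears only when risk does
    return "allow"                 # low risk stays invisible to the user

print(decide({}))                           # allow
print(decide({"new_device": True}))         # step_up_mfa
print(decide({"impossible_travel": True}))  # block_and_alert
```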

Awareness reduces probability. 
Design reduces impact. 
Together, they create resilience. 

 

The Behaviour–Design Loop 

Human-centred security is a feedback system, not a training course. 

  1. Educate: Explain why instincts are exploited. 

  2. Engineer: Remove decision fatigue with automation and secure defaults. 

  3. Encourage: Reward verification and reporting, not flawless obedience. 

 

Building a Culture That Questions 

Leadership determines whether people hide errors or report them. 
If employees fear punishment, silence follows; if they feel safe, incidents surface early. 

Culture is built on tone: 

  • Celebrate the employee who reports a suspicious email, even if harmless. 

  • Encourage “pause and verify” as a habit, not a policy. 

  • Frame security as protection of purpose, not policing of mistakes. 

A culture that questions is one that catches. 

  

Closing Reflection 

Every breach starts with a decision, sometimes rushed, sometimes well-intentioned. 
Technology can’t change human nature, but it can work with it. 

The future of cybersecurity lies in empathy and engineering: understanding how people think and building systems that quietly protect them when they slip. 
True resilience isn’t about perfect behaviour; it’s about security that forgives imperfection.

“Security isn’t about stopping people from failing, it’s about designing systems that keep working when they do.” 
Viktor Spetnijs 

 

If this perspective resonates, explore how Peritus helps organisations design security around real human behaviour.
👉 Learn more about our Security & Risk Management Services 
