In March 2017, Apache disclosed a critical vulnerability in Struts, CVE-2017-5638. The patch dropped the same day. Equifax, one of the largest credit agencies in the US, circulated internal alerts telling its teams to apply it within 48 hours.
They didn’t.
Attackers probed the exposed portal three days later, found it unpatched, and spent the next 76 days inside the network. They moved laterally, grabbed plaintext credentials, and walked out with the personal data of 148 million Americans. Social security numbers. Birth dates. Addresses. The breach wasn’t discovered internally until late July. The public didn’t hear about it until September.
A congressional investigation eventually called it “entirely preventable.”
That’s the part I keep coming back to. Not the scale, though losing half the US population’s identity data is obviously catastrophic. The part that sticks is how mundane the failure was. No zero-day exploit. No nation-state magic. A known vulnerability. An available patch. A team that apparently never asked: what if we missed this?
That’s not a technical failure. That’s a failure of thinking.
## The checkbox problem
There’s a way of treating security that I’d call feature thinking. You implement 2FA, check. You deploy a WAF, check. You run a vulnerability scan before release, check. The controls are real, but the model underneath them is wrong. It assumes that once a control is in place, the problem it addresses is solved. It treats security as something you ship, like a login form or a payment gateway.
The problem is that attackers don’t care about your checklist. They probe assumptions, find gaps between checkboxes, and exploit the space between “we have this feature” and “this feature works the way we think it does.”
Compare it to building a house. You could install a top-of-the-line alarm system, never test it, never change the default PIN, and leave a window cracked for ventilation. Or you could assume that someone will try every door and window, so you reinforce frames, test the motion lights, and know what you’d do if the alarm actually went off. The first person bought a security feature. The second has a security mindset. The difference sounds simple. In practice, it changes how you think about every decision you make while building systems.
| Feature thinking | Mindset thinking |
|---|---|
| Security is a phase (we do it in testing) | Security is continuous across the whole lifecycle |
| Controls exist to pass audits | Controls exist to reduce real risk |
| Vulnerabilities are bugs to fix before release | Vulnerabilities are inevitable — detection and containment matter |
| Security is the security team’s job | Security is everyone’s job |
| Success = no findings in the scan | Success = low mean time to detect and respond |
| Reliance on perimeter | Assume breach, verify everything |
## When technical perfection fails
Here’s a thought experiment I find useful. Imagine a system built with genuinely excellent security engineering: formally verified code, hardened configs, CIS benchmarks, memory-safe languages, hardware encryption, red team tested. On paper it’s airtight.
Now deploy it in a hospital. The developers move on. Staff share a password across shifts because individual logins “slow down emergencies.” IT disables MFA for the same reason. Nobody funds security training because the budget went elsewhere. Six months in, a phishing email gets a nurse’s credentials. The attacker logs in, moves through the internal network freely because internal traffic was implicitly trusted, and exfiltrates thousands of patient records.
The code was fine. The system failed.
This scenario exposes what feature thinking misses: you can’t separate the security of a system from the humans who build, operate, and use it. No formal verification tells you how a nurse under pressure will respond to an MFA prompt at 3am. No red team engagement anticipates that IT will quietly disable controls to reduce friction. These are human factors, and they’re not edge cases — they’re the norm.
A mindset-based approach asks different questions during design: Who actually uses this? What pressures do they face? What workarounds will they invent? It builds in monitoring for drift — the gradual loosening of controls that happens when convenience consistently wins over friction. It treats “the code is secure” as the beginning of the question, not the end.
## What these breaches actually have in common
The Equifax breach. Target in 2013. Yahoo’s disasters from 2013-2014 (disclosed years later, affecting 3 billion accounts). Different attackers, different industries, different techniques. The mindset failures are almost identical.
Target: Attackers stole credentials from an HVAC vendor. The vendor had broad network access — because why wouldn’t a vendor with a contractual relationship be trusted? The malware detection tool (FireEye) saw what was happening and fired alerts. Security teams received them and dismissed them as noise. Nobody owned the response process. By the time the holiday season was over, 40 million credit cards were gone.
The tool worked. The culture didn’t.
Yahoo: Attackers accessed 3 billion accounts. Passwords were hashed with MD5, weak even for 2013. The breaches were discovered internally and then… not disclosed. For years. Partly because there was acquisition pressure (Verizon was buying Yahoo at the time), partly because the culture had normalized ignoring security warnings after years of minor incidents. Data was treated as something to monetize, not something to protect.
Across all three, you see the same patterns: complacency born from size or success (“we’re too monitored to be breached”), misaligned incentives, diffused ownership, and a reactive posture that waited for incidents instead of hunting for assumptions to challenge.
More tools wouldn’t have saved any of them. A different way of thinking might have.
## Threat modeling as productive paranoia
If a security mindset is a habit, threat modeling is the exercise that builds it. The core idea is simple: before you build something, think like an attacker and ask how it breaks.
STRIDE is the framework I find most useful because it forces you to think across multiple dimensions rather than just “is this input validated.” It covers:
- Spoofing — can someone pretend to be something they’re not?
- Tampering — can someone modify data or code without authorization?
- Repudiation — can someone deny an action they performed?
- Information Disclosure — can someone access data they shouldn’t?
- Denial of Service — can someone make something unavailable?
- Elevation of Privilege — can someone gain more access than intended?
Let me run through a concrete example. Say you’re building a document sharing app. Users upload files, other users can view them. Simple enough. Here’s a partial threat model:
```
User Browser → (HTTPS) → Load Balancer → Web App → Auth Service → Database → Object Storage
Admin Dashboard → (HTTPS) → Web App
```

Now apply STRIDE to each connection:
Web App → Database:
- Spoofing: If app credentials leak, an attacker can query the database directly.
- Tampering: SQL injection on any unsanitized input.
- Information Disclosure: Overly broad queries returning excess data.
- Elevation of Privilege: If the app’s DB role is overprivileged, one compromised endpoint exposes everything.
Web App → Object Storage:
- Information Disclosure: A misconfigured bucket with public read access. This one bites people constantly.
- Tampering: If the storage role allows overwrites, an attacker can replace files.
Admin Dashboard → Web App:
- Spoofing: If admin sessions aren’t properly scoped and short-lived, stolen cookies escalate privileges.
- Repudiation: If admin actions aren’t logged, you can’t reconstruct what happened during an incident.
From this you derive concrete mitigations: parameterized queries, least-privilege IAM roles, explicit bucket policies blocking public access, short-lived session tokens, comprehensive audit logging. Not because a compliance doc said to — because you traced specific threats to specific gaps.
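One of those mitigations, short-lived session tokens, can be sketched with nothing but the standard library. This is a toy illustration (the helper names and the hardcoded key are invented for the example; in production, use a vetted library such as PyJWT or itsdangerous and load keys from a secret store):

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-secret"  # illustrative only; never hardcode real keys

def issue_token(user_id, ttl_seconds=900):
    """Sign a payload that expires after ttl_seconds (15 minutes by default)."""
    payload = json.dumps({"sub": user_id, "exp": time.time() + ttl_seconds}).encode()
    body = base64.urlsafe_b64encode(payload).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def verify_token(token):
    """Return the user id, or None if the token is forged, tampered, or expired."""
    try:
        body, sig = token.split(".")
    except ValueError:
        return None
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # signature mismatch: tampering or forgery
    claims = json.loads(base64.urlsafe_b64decode(body))
    if time.time() > claims["exp"]:
        return None  # expired: a stolen token is only useful briefly
    return claims["sub"]
```

The point is not the crypto; it's that expiry and verification happen on every request, which is exactly the "short-lived session tokens" mitigation the threat model surfaced for the admin dashboard.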
That’s what makes threat modeling different from a checklist. It connects your decisions to actual failure modes. And because you revisit it as the system evolves, it stays current in a way that a one-time audit never does.
## A note on code
Threat modeling identifies the risks. Your code is where you address them.
The most common SQL injection pattern in Python Flask:
```python
# This gets people fired
@app.route('/user/<user_id>')
def get_user(user_id):
    conn = get_db_connection()
    cursor = conn.cursor()
    query = "SELECT * FROM users WHERE id = " + user_id
    cursor.execute(query)
    user = cursor.fetchone()
    return jsonify(dict(user)) if user else ("User not found", 404)
```

Request `/user/1 OR 1=1--` and you get every user in the database. Request `/user/1; DROP TABLE users--` and you get something much worse.
The fix is parameterized queries:
```python
@app.route('/user/<user_id>')
def get_user(user_id):
    conn = get_db_connection()
    cursor = conn.cursor()
    query = "SELECT * FROM users WHERE id = %s"
    cursor.execute(query, (user_id,))
    user = cursor.fetchone()
    return jsonify(dict(user)) if user else ("User not found", 404)
```

The database driver handles escaping. `user_id` is a value, never executable SQL.
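You can watch the difference outside Flask with the standard library's `sqlite3` module and a throwaway in-memory table (the schema is invented for the demo):

```python
import sqlite3

# Toy in-memory database with two users
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

malicious = "1 OR 1=1"

# Concatenation: the input is parsed as SQL, so the WHERE clause is always true
leaked = conn.execute("SELECT * FROM users WHERE id = " + malicious).fetchall()
print(len(leaked))  # 2: every row in the table comes back

# Parameterized: the input stays a plain value that matches no id
safe = conn.execute("SELECT * FROM users WHERE id = ?", (malicious,)).fetchall()
print(len(safe))  # 0
```

Same input, same query shape; the only difference is whether the attacker's string ever reaches the SQL parser.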
This is a solved problem. The code to fix it is trivial. The reason it keeps appearing in production is not that developers don't know about it; it's that the culture around code review, training, and secure defaults doesn't make it impossible to write the wrong version. A security mindset changes what you check in review and what defaults you reach for first.
## Zero Trust is a way of thinking, not a product
“Zero Trust” has become vendor-speak for a product category. The actual idea is important and worth separating from the marketing.
The traditional model was perimeter-based: harden the network edge, then trust everything inside. This made some sense when users worked on-site, applications lived in data centers, and data rarely left the building. It makes no sense now. Cloud infrastructure, remote work, contractor access, third-party integrations — the perimeter dissolved. Attackers don’t need to breach the castle walls if they can phish someone’s credentials and walk through the front gate.
Zero Trust says: stop assuming trust based on network location. Verify every access request, regardless of where it comes from. The four principles that matter:
Never trust, always verify. Every request — internal or external — gets authenticated, authorized, and encrypted. Your internal services don’t get a free pass just because they share a VPC.
Least privilege. Users and systems get exactly the access they need and nothing more. When a credential gets compromised (and it will), the blast radius stays small.
Assume breach. This one changes your investment priorities. If you assume attackers will get in, you stop spending everything on prevention and start investing seriously in detection, segmentation, and response. You design for containment, not just exclusion.
Continuous monitoring. Trust isn’t granted once — it’s re-evaluated constantly. Anomalous behavior should trigger re-authentication or session termination, not just a logged warning nobody reads.
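As a thinking aid, the four principles reduce to a per-request decision that ignores network location entirely. Here is a toy sketch (role names, the policy table, and the MFA threshold are all invented for illustration; real deployments use identity-aware proxies and policy engines):

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    role: str
    mfa_age_seconds: int  # time since the caller last passed an MFA challenge
    source: str           # "internal" or "external"; deliberately never consulted

# Least privilege: each role maps to exactly the actions it needs, deny by default
POLICY = {
    "nurse": {"read_chart"},
    "admin": {"read_chart", "export_records"},
}

MFA_MAX_AGE = 15 * 60  # trust is re-evaluated continuously, not granted once at login

def authorize(req: Request, action: str) -> bool:
    if action not in POLICY.get(req.role, set()):
        return False  # least privilege: action is not in the role's allow-list
    if req.mfa_age_seconds > MFA_MAX_AGE:
        return False  # never trust, always verify: stale identity must re-authenticate
    # Note that req.source is never checked: being "internal" buys nothing
    return True

print(authorize(Request("bob", "nurse", 60, "internal"), "export_records"))  # False
print(authorize(Request("eve", "admin", 60, "external"), "read_chart"))      # True
```

The interesting property is what the function does not look at: an internal source address grants nothing, and a valid login from an hour ago grants nothing either.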
The contrast with traditional thinking is sharp:
|  | Traditional | Zero Trust |
|---|---|---|
| Trust model | Implicit inside the perimeter | Never trust, always verify |
| After a breach | Attacker moves freely internally | Segmentation limits lateral movement |
| Access control | Network-based (IP, VLAN) | Identity and context based |
| Monitoring | Perimeter-focused | Continuous, everywhere |
The human element here matters as much as the architecture. Zero Trust works when engineers design systems that default to denial. It breaks down when MFA prompts get disabled for convenience, when IAM roles get expanded because “it was easier,” when alert fatigue causes teams to stop investigating anomalies. The architecture is the enabler. The mindset sustains it.
## Culture is the hard part
You can have excellent threat modeling practices, clean code, and a well-architected Zero Trust deployment and still watch it erode over 18 months if the culture doesn’t support it.
The Target breach makes this concrete. Their FireEye deployment was configured correctly. It saw the attack and fired alerts. The security team received them and didn't act: partly alert fatigue, partly unclear ownership of who was supposed to respond. The tool worked. The culture didn't support using it.
Two teams building the same product, same tooling budget:
Team A gets quarterly security training, runs SAST scans, treats vulnerabilities as low-priority tech debt, and views security review as something that slows down releases.
Team B raises abuse cases during sprint planning, praises developers who catch edge cases in design review, runs blameless postmortems after near-misses, and shares lessons from external breaches in their regular meetings.
Over two years, Team B builds fewer vulnerabilities in, detects the ones they have faster, and recovers from incidents more cleanly. Not because they have better tools. Because security thinking is part of how they work, not a gate they pass through.
A few practices that actually make a difference:
Make security questions part of existing rituals. Not separate security meetings; that signals it's someone else's domain. Ask “what could go wrong here?” during sprint planning. Include threat review in design reviews. Check for authorization logic in code review alongside style and correctness.
Psychological safety for security concerns. If a junior engineer spots a potential issue and raises it, and the response is “that’s not your area” or “we’ll handle it later,” they won’t raise the next one. Leaders who visibly reward people for surfacing concerns, even false alarms, change this. Leaders who don’t, quietly eliminate it.

Blameless postmortems. When something goes wrong, the instinct is to find who made the mistake. This is the wrong frame. If one person’s error can cause an outage or a breach, the system enabled it. Focus on what systemic conditions allowed it to happen and how to change them. Blame silences the people who’d otherwise surface the next problem early.
Reward finding problems, not just shipping features. If the only thing that gets recognized is velocity, that’s what people optimize for. Recognizing a developer who found a subtle authorization flaw in review, or a team that improved their detection coverage, sends a signal about what actually matters.
## Tools won’t save you
Every year the security industry releases new categories of tools. SAST, DAST, SCA, CSPM, CWPP, EDR, XDR, SIEM, SOAR. Each promises to close a specific gap. Each creates a new alert stream that needs interpretation, tuning, and a team with the judgment to act on it.
The pattern in most organizations: a breach occurs or a regulation changes, leadership buys a tool, the tool gets deployed with default config, alerts start firing, the team gets overwhelmed, alert fatigue sets in, the tool becomes expensive background noise.
What’s actually happening here is that the tool amplified a capability the team didn’t have. A SAST scanner that flags a potential path traversal means nothing if the team’s response is to apply a quick blacklist fix and mark it resolved. A security mindset would ask: why does this code accept file paths at all? Is there a design that avoids this class of vulnerability entirely? Can we learn something here that changes how the team writes code next time?
The tool surfaced a question. The mindset determines whether you answer it.
Some specific limitations worth knowing:
- SAST generates false positives. If developers don’t understand the underlying vulnerability classes, they’ll dismiss real findings along with the noise.
- WAFs can be bypassed with encoding tricks and logic flaws. They’re a useful layer, not a substitute for fixing the application.
- SIEM without proper tuning is a very expensive logging system. The signal-to-noise ratio degrades fast.
- SCA only catches known vulnerabilities in published versions. Zero-days in your dependencies show up nowhere until they’re disclosed.
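The WAF-bypass point is easy to demonstrate with a toy blacklist filter (entirely illustrative; real WAFs normalize input far more aggressively, but they face the same fundamental problem of matching patterns against an attacker who controls the encoding):

```python
from urllib.parse import unquote

def naive_filter(payload: str) -> bool:
    """Toy 'WAF': allow the request unless it contains an obvious SQL keyword."""
    return "union select" not in payload.lower()

attack = "id=1%20UNION%20SELECT%20password%20FROM%20users"

print(naive_filter(attack))           # True: the URL-encoded payload sails through
print(naive_filter(unquote(attack)))  # False: the same payload, once decoded
```

The filter is not wrong about the pattern; it is wrong about where in the decoding pipeline it sits. That is why the list above calls a WAF a useful layer rather than a substitute for fixing the application.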
None of this means don’t use tools. It means don’t confuse deploying a tool with having a security posture. In skilled hands, these tools are force multipliers. Without the judgment to interpret their output, they’re theater.
## What this looks like day to day
I’ve been describing a mindset, which is abstract. Let me make it concrete.
A day for an engineer who’s actually internalized this:
Morning standup surfaces a new payment feature. The instinctive question that runs in the background: what are the credential stuffing risks here? What happens if this endpoint gets hit with high volume?
Design review for an API change. Before implementation: sketch the trust boundaries on a whiteboard. Note that the logging currently captures raw user data, switch to correlation IDs instead. Mention it.
Writing a file upload handler. Automatic: validate file type, set a size limit, consider whether the processing happens sandboxed.
Code review. Spot that an admin endpoint is missing an authorization check. Raise it. Thank the person who catches the next one.
End of day. Read a summary of a recent breach in a similar industry and think for five minutes about whether any of it applies to what you’re building.
None of that requires a dedicated security role or a separate security sprint. It’s just questions that become habitual when you’ve shifted from “have we secured this” to “how would we break this.”
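The upload-handler habits from that walkthrough can be sketched as a small helper. Everything here (the allow-list, the size cap, the function name) is illustrative, and a real handler would also inspect file content rather than trusting extensions:

```python
import os

ALLOWED_EXTENSIONS = {".png", ".jpg", ".pdf"}  # allow-list, never a block-list
MAX_BYTES = 5 * 1024 * 1024                    # bound resource use (denial of service)

def check_upload(filename: str, data: bytes) -> str:
    """Validate an upload; return a safe storage name or raise ValueError."""
    base = os.path.basename(filename)          # strip any attacker-supplied path parts
    ext = os.path.splitext(base)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        raise ValueError("disallowed file type")
    if len(data) > MAX_BYTES:
        raise ValueError("file too large")
    # In a real system, content sniffing and sandboxed processing would go here
    return base

print(check_upload("../../etc/report.pdf", b"%PDF-1.7"))  # "report.pdf"
```

Note the shape: rejection is the default path, and acceptance requires passing every check. That is the same deny-by-default instinct the rest of the day's questions exercise.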
## The uncomfortable conclusion
Security is not a state you achieve. It’s a state you maintain, continuously, against an adversary who adapts. The perimeter doesn’t hold. Patches get missed. Credentials get stolen. Insiders make mistakes. Anyone building systems who isn’t thinking about how those systems fail is not thinking about them completely.
Equifax had the patch. They had the alert. They had the tools. What they didn’t have was a culture where someone, anywhere in the chain, was asking: what if we missed something?
That question is free. Asking it habitually, at every stage of design and development and operation, is what separates the teams that catch breaches early from the ones that find out 76 days later.
Security isn’t a feature you ship. It’s a way of thinking you cultivate, incrementally, imperfectly, and without end.
Further reading if you want to go deeper:
- OWASP Top Ten Web Application Security Risks | OWASP Foundation
- Known Exploited Vulnerabilities Catalog
- MITRE ATT&CK®
- Cybersecurity Framework | NIST
- SEC504: Hacker Tools, Techniques, and Incident Handling | SANS Institute
- BeyondCorp Zero Trust Enterprise Security | Google Cloud
- OWASP Dependency-Check | OWASP Foundation
- Microsoft Threat Modeling Tool overview - Azure | Microsoft Learn