This is my personal style guide that reflects how I gather news, write intros, and create talking points for the podcast. It evolves over time as I encounter new situations and as I think about ways to be more successful in explaining a topic or combining education with entertainment. And, for as incomplete as the later sections are, it’s still too much of a wall of text. So, this will serve as an experiment in markdown styling as much as a guide to podcast style.

## Frame Topics with Intention

In the news segment, we go beyond each article’s headline and summary to discuss how a flaw in one app demonstrates a common mistake that could haunt any app, or how the latest breach can inform the threat models you use for your own environment.

Some guiding questions when thinking about an article:

- How does this inform a threat model?
- How well does this demonstrate an attacker-minded approach?
- What security principle does this highlight?

**Prefer action over inaction** – Offer advice and examples on actions someone could take to improve security. Just offering a list of “Don’t do X” or “Y is useless” leaves the audience either at the status quo (still insecure, not doing anything about it) or idle (still insecure, now doing less). Prefer finding examples of practices and tools that address a problem and, importantly, put those examples in context of what might make them more or less successful.

**Constructive criticism is important** – Many practices are misguided, no longer relevant, or were never useful to begin with. But criticism that explains itself and follows up with constructive alternatives is even stronger. A quick example is password rotation. Early infosec had a tenet of choosing new passwords every 90 days. Thankfully, NIST SP 800-63 relegated that to history with superior and more practical advice. Forced password changes encourage poor user behaviors and degrade the user experience with the burden of managing them. The better guidance is to “…force a change if there is evidence of compromise of the authenticator.” SP 800-63 provided context and guidance on what to do instead[^1].
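That guidance translates directly into a policy check. Here’s a minimal Python sketch, where the `needs_reset` helper and the breach-corpus feed are hypothetical (it assumes the feed publishes uppercase hex SHA-1 digests, the format breach-list services like Have I Been Pwned use):

```python
import hashlib

# Assumption: breached_sha1 is a set of uppercase hex SHA-1 digests
# loaded from a breach-corpus feed (hypothetical data source).
def needs_reset(password: str, breached_sha1: set) -> bool:
    """Flag a credential for reset only on evidence of compromise
    (per SP 800-63B), never because it has merely aged past 90 days."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest in breached_sha1
```

A production system would compare stored verifier hashes or query a k-anonymity API rather than handle plaintext passwords, but the shape of the policy is the point: the credential’s age never appears in the check.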

### Some framing approaches to avoid

Framing an article or topic with a particular perspective isn’t inherently bad – discussions shouldn’t be a bland synopsis of a few technical items. But there are also framing devices that get overused, have weak foundations based on inaccuracies, or are boring in their own right. (This area overlaps with the cliches and metaphors of the next section.)

**Developers don’t care about security** – I don’t buy this. I think there are multiple factors at play in the relationship developers have with security, of which time and education are two important ones. Plus, I suspect that the converse – appsec doesn’t care about development – would be met with denial and objection within the appsec world. An appsec attitude that developers don’t care about security feels a little too self-serving and shifts the blame of failed security solutions from the appsec industry onto developers.

**Performative outrage** – Much of the time this manifests as empty information masked by a (possibly) creative delivery of invectives, where the attempt at entertainment relies more on the performance than on the subject at hand. Prefer to center the message, not the messenger, when a topic deserves scrutiny and criticism.

## Avoid Cliches & Metaphors

Infosec has a long history of metaphors, from the early days, when defenses were described as castles (strong exterior) and onions (many layers), to cars and houses and just about anything other than applications, systems, or networks.

Metaphors can be educational. They can be evocative and fun. But they start to fail when the conversation they spark becomes more about the metaphor’s properties rather than the reality of what they were intended to represent. I almost qualified this section as “Elaborate Metaphors”, but even simple metaphors have these flaws. In all cases, their repetition is boring. Whenever possible, prefer talking about the issue at hand, not about what the issue is like.

Another resource about this is “Are Security Analogies Counterproductive?” by Phil Venables, particularly the “Actually Explain” section.

Then there are the metaphors and phrases that have become boring, useless cliches.

**Boil the ocean** – Just say a problem is hard. Even better, qualify why it’s hard or why it may even be intractable.

**Boil the frog** – Graphic metaphors grab attention. What’s more vivid than a frog slowly being cooked alive because it (allegedly) doesn’t notice the gradual increase in temperature, right up to the point where the water boils it alive? Infosec is already more art than science. Using metaphors based on flawed science (frogs don’t succumb to this) feels like the kind of thinking that leads to security theater. It’s also funny when this metaphor gets misused. Are you the one applying heat, trying to boil some frog-like appsec vuln? Or are you the frog, not noticing your environment? Be the frog. The real kind that breaks assumptions about what people think.

**Don’t have to outrun the bear** – This is superficially about having better security than some unspecified other app (or site or org or whatever) out there. The idea is that attackers will target that other, less secure app instead of you. Implying that they’re lazy? That an attacker’s metaphorical stomach will be full of the other app and, their appetite sated, they’ll ignore you? Sure, this might work for attacks of opportunity[^2], but it has no bearing on targeted attacks, automation, or the amount of untargeted scanning that happens every day. With all the apps out there, what’s the guarantee that you’re not the slowest one in this hungry bear metaphor?

However…if there’s an iota of meaning to pull out of this, it’s a strategy to eradicate a class of attacks. For example, if your developers all use FIDO2 keys for authentication, then you’ve addressed a major phishing attack vector and, yes, I’ll readily admit you’ve outrun other orgs who are behind on strong authentication schemes. But even there I’d rather talk about specific attack classes and solutions than repeat this (ahem, unbearable) cliche.

**Humans are the weakest link** – Who else uses software? Who else is software made for? Yes, humans make mistakes in code, fall prey to social engineering attacks, misconfigure services, and so on. But why wasn’t there better tooling to make those mistakes harder to commit in the first place? Or to make them easier to detect after the fact? Why aren’t systems and processes more resilient to human behavior? Like any of these items, you could find an example of negligence and clear human error, but that’s not usually where this sentiment goes. In looking at flaws and failures, I prefer to explore what could have been improved in tools, systems, and practices to better help users.

This one is also a framing issue. Why are you defining a generic human as part of your security controls? Are they trained? Do they have tools to assist them? What’s their responsibility in the scenario you’ve created? There’s no question that humans make mistakes, and can sometimes be negligent. But skipping over all the possible factors that failed those humans is lazy.

**Vim vs. Emacs** – Yawn. I find the in-group/out-group posturing of this rivalry boring. There are plenty of other editors out there with far better features. The most mundane version of this just repeats the same tired jokes. Very little improves from there.

## Avoid Platitudes & Overbroad Advice

### Trite or cliche

**Input validation** – Lots of flaws seem like a validation issue at first glance, but I find this superficial and often mistaken. XSS is about output encoding so that any input can be safely rendered within the specific context of where it’s being placed. SQL injection is best addressed by prepared statements; it shouldn’t matter if a metacharacter is present or not. Most command injection is addressed by positional args, not some generic input validation. I rarely mention this and, if I do, it’ll be near the bottom of a list of other security patterns to apply first.
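To make the contrast concrete, here’s a minimal Python sketch of those three patterns. The helper names and the `users` table are hypothetical; the point is that each fix is structural, so the metacharacters in the input stop mattering:

```python
import html
import sqlite3
import subprocess

# Prepared statement: the driver keeps data separate from the query,
# so metacharacters in the input can't change the SQL structure.
def find_user(conn, username):
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchall()

# Contextual output encoding: encode for the HTML element context at
# render time instead of trying to "validate away" dangerous input.
def render_greeting(username):
    return "<p>Hello, {}!</p>".format(html.escape(username))

# Positional args: passing a list (no shell) makes the path a single
# argument rather than a string a shell could reinterpret.
def count_lines(path):
    return subprocess.run(["wc", "-l", path], capture_output=True, text=True)
```

In each case, `' OR '1'='1`, `<script>`, and `; rm -rf /` are handled as plain data, with no validation step anywhere in sight.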

**Write secure code** – This might be useful shorthand, but avoid it if the discussion turns to anything actionable. There are all kinds of practices that go into writing secure code. Even at a high level, I would argue that readable code is more secure code, at least in terms of others being able to understand the code’s intent, maintain it, and reason about its functionality. So even saying, “write readable code,” is already more actionable than saying, “write secure code.”

## Stances on Industry Terms

### Prefer phishing

- Avoid smishing and vishing. A detailed nomenclature of social engineering attack vectors is a distraction. The term phishing is known well enough. There’s no need for a new word for each particular vector; SMS-based phishing is just as clear.

## Some Favorite Topics (but don’t overuse them)

- Path traversal
- Memory-safe languages like Go and Rust can still lead to plenty of other types of security flaws
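Path traversal is a good illustration of that second point: it’s a logic flaw, so memory safety doesn’t help. A minimal sketch in Python (the `BASE` web root and helper names are hypothetical; POSIX-style paths assumed):

```python
import os.path

BASE = "/srv/app/public"  # hypothetical web root

# Naive join: "../" sequences in the request escape the web root.
def unsafe_path(name):
    return os.path.join(BASE, name)

# Normalize first, then verify the result still lives under BASE.
def safe_path(name):
    candidate = os.path.normpath(os.path.join(BASE, name))
    if os.path.commonpath([BASE, candidate]) != BASE:
        raise ValueError("path traversal attempt: {!r}".format(name))
    return candidate
```

Translating `unsafe_path` line for line into Go or Rust compiles and runs just fine; nothing about the language stops `../../etc/passwd` from resolving outside the web root.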

## Some Favorite Non-Security Topics (but don’t forget context)

A lot of these appear in the intros for various episodes. While I love throwing in references to topics I love, I also try to add enough context so that someone completely unfamiliar with them won’t be left out. It’s a balancing act that’s never perfect, but it makes a fun challenge and provides a way to add something more interesting to discussions.

- New Wave, Post-Punk, and 80s music
- Synthwave music
- 80s movies
- Horror and sci-fi movies
- Dungeons & Dragons (and role-playing games in general)

## Additional Resources


[^1]: The entire SP 800-63 guidelines are worth reading for a grounding in modern identity and access management. It’s a bit dry due to the SHALLs and musts of a requirements doc, but it sets a baseline that does away with bad practices like password rotation, weird password composition rules, and UX anti-patterns like preventing pasting into password fields.

[^2]: That’s not exactly how attacks of opportunity work in D&D 5e, but it’s a great phrase.