This is my personal style guide that reflects how I gather news, write intros, and present ideas for the podcast. It evolves over time as I encounter new situations and as I think about ways to be more successful in explaining a topic or combining education with entertainment. Even though some of the sections are incomplete, it’s still wordy. Thus, this serves as an ongoing exercise in documentation as much as in podcasting.

Book & Skull

The simple version follows these points:

  • Educate and entertain
  • Cover relevant resources that can inform decisions or solve problems
  • Cover recent resources, but include history and context when possible
  • Highlight original sources, but link to news articles that add context or insight
  • Prefer constructive criticism over cynicism and performative outrage
  • Avoid cliches

The following sections provide more reasoning and examples.

Frame Topics with Intention

In the news segment, we go beyond each article’s headline and summary to discuss how a flaw in one app demonstrates a common mistake that could haunt any app, or how the latest breach can inform the threat models you use for your own environment.

When I review an article, I often have these questions in mind for how I might frame it.

  • What kind of threat model does this underscore?
  • How well does this demonstrate an attacker-minded approach?
  • How well does this structure and explain a technical topic?
  • What security principle does this highlight?

Prefer action over inaction – Offer advice and examples on actions someone could take to improve security. Just offering a list of “Don’t do X” or “Y is useless” statements leaves the audience either at the status quo (still insecure, not doing anything about it) or idle (still insecure, now doing less). Suggest practices and tools that address the problem and, importantly, put those examples in context of what makes them more or less successful.

Constructive criticism is important – Many practices are misguided, no longer relevant, or were never useful to begin with. Criticism that explains itself and suggests better alternatives is more useful than just dismissing a topic. This is similar to the previous action over inaction example. Turn an “X is bad” statement into a more informative formula like, “X is bad, because Y. Instead, try Z for these reasons.”

One example is password rotation. Early infosec had a tenet of changing passwords every 90 days. Thankfully, NIST SP800-63 relegated that to history with superior and more practical advice. Forced password changes encourage poor user behaviors and degrade the user experience. The better guidance is to “…force a change if there is evidence of compromise of the authenticator.” SP800-63 provided context and guidance on a better alternative [1].

Some framing approaches to avoid

Framing an article or topic with a particular perspective isn’t inherently bad – discussions shouldn’t be a bland synopsis of a few technical items. But there are also framing devices that get overused, have weak foundations based on inaccuracies, or are boring in their own right. (This area overlaps with the cliches and metaphors of the next section.)

Developers don’t care about security – I don’t buy this. I think there are multiple factors at play in the relationship developers have with security, of which time and education are two important ones. Plus, I suspect that the converse – appsec doesn’t care about development – would be met with denial and objection within the appsec world. An appsec attitude that developers don’t care about security feels a little too self-serving and shifts the blame of failed security solutions from the appsec industry onto developers.

Performative outrage – Much of the time this manifests as empty information masked by a (possibly) creative delivery of invectives, where the attempt at entertainment relies more on a ritual display of anger than on illuminating the subject at hand. Prefer to center the message, not the messenger, when a topic deserves scrutiny and criticism.

Avoid Cliches & Metaphors

Infosec has a long history of metaphors, from the early days where defenses were described as castles (strong exterior) and onions (many layers) to cars and houses and just about anything other than applications, systems, or networks.

Metaphors can be educational. They can be evocative and fun. But they start to fail when the conversation they spark becomes more about the metaphor’s properties than about the reality they were intended to represent. I almost scoped this section to “Elaborate Metaphors”, but even simple metaphors have these flaws. In all cases, their repetition is boring. Whenever possible, prefer talking about the issue at hand, not about what the issue is like.

Another resource on this is “Are Security Analogies Counterproductive?” by Phil Venables, particularly the “Actually Explain” section.

Then there are the metaphors and phrases that have become boring, useless cliches. Here are just a few.

Boil the ocean – Just say a problem is hard. Even better, qualify why it’s hard or why it may even be intractable.

Boil the frog – Graphic metaphors grab attention. What’s more vivid than a frog slowly being cooked because it (allegedly) doesn’t notice the gradual increase in temperature, not even to the point where the water boils it alive? Infosec is already more art than science. Using metaphors based on flawed science (frogs don’t succumb to this) feels like the kind of thinking that leads to security theater. It’s also funny when this metaphor gets misused. Are you the one applying heat? Trying to boil some frog-like appsec vuln? Or are you the frog, not noticing your environment? Be the frog. The real kind that breaks assumptions about how people think it behaves.

Don’t have to outrun the bear – This is superficially about having better security than some unspecified other app (or site or org or whatever) out there. The idea is that attackers will target that other, less secure app instead of you. Implying that they’re lazy? That an attacker’s metaphorical stomach will be full of the other app and, appetite sated, they’ll ignore you? Sure, this might work for attacks of opportunity [2], but it has no bearing on targeted attacks, automation, or the amount of untargeted scanning that happens every day. With all the apps out there, what’s the guarantee that you’re not the slowest one in this hungry bear metaphor?

However…if there’s an iota of meaning to pull out of this, it’s a strategy to eradicate a class of attacks. For example, if your developers all use FIDO2 keys for authentication, then you’ve addressed a major phishing attack vector and, yes, I’ll readily admit you’ve outrun other orgs who are behind on strong authentication schemes. But even there I’d rather talk about specific attack classes and solutions than repeat this (ahem, unbearable) cliche.

Humans are the weakest link – Who else uses software? Who else is software made for? Yes, humans make mistakes in code, in falling prey to social engineering attacks, in misconfiguring services, and so on. But why wasn’t there better tooling to make those mistakes harder to do in the first place? Or make them easier to detect after the fact? Why aren’t systems and processes more resilient to human behavior? Like any of these items, you could find an example of negligence and clear human error, but that’s not usually where this sentiment goes. In looking at flaws and failures, I prefer to explore what could have been improved in tools, systems, and practices to better help users.

This one is also a framing issue. Why are you defining a generic human as part of your security controls? Are they trained? Do they have tools to assist them? What’s their responsibility in the scenario you’ve created? There’s no question that humans can be negligent or make mistakes. But skipping over all the possible factors that failed humans is lazy.

Vim vs. Emacs – Yawn. I find the in-group/out-group posturing of this rivalry boring. There are plenty of other editors out there with far better features. The most mundane version of this just repeats the same tired jokes. Very little improves from there.

Avoid Platitudes & Overbroad Advice

Trite or cliche

Input validation – Lots of flaws seem like a validation issue at first glance, but I find this superficial and often mistaken. XSS is about output encoding so that any input can be safely rendered within the specific context of where it’s being placed. SQL injection is best addressed by prepared statements; it shouldn’t matter if a metacharacter is present or not. Most command injection is addressed by positional args, not some generic input validation. I rarely mention this and, if I do, it’ll be near the bottom of a list of other security patterns to apply first.
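A minimal sketch of those two patterns, using Python’s standard library (the table name, filename, and values are illustrative, not from any real app): a prepared statement makes the metacharacter question moot, and positional args keep a shell out of the picture entirely.

```python
import sqlite3
import subprocess

# Prepared statement: the apostrophe in the input needs no "validation"
# because the placeholder binds it as data, never as SQL syntax.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("o'brien",))
row = conn.execute(
    "SELECT name FROM users WHERE name = ?", ("o'brien",)
).fetchone()

# Positional args: each list element is a single argv entry and is never
# parsed by a shell, so the ";" stays a literal character in the argument.
result = subprocess.run(
    ["echo", "report.txt; rm -rf /"], capture_output=True, text=True
)
```

The point in both cases is the same: the fix lives in how data crosses a boundary (query API, argv), not in inspecting the data itself.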

Write secure code – This might be useful shorthand, but avoid it if the discussion turns to anything actionable. There are all kinds of practices that go into writing secure code. Even at a high level, I would argue that readable code is more secure code, at least in terms of others being able to understand the code’s intent, maintain it, and reason about its functionality. So even saying, “write readable code,” is already more actionable than saying, “write secure code.”

Industry Terms

Artificial Intelligence – Avoid Skynet jokes. Yes, Terminator is an excellent movie. The reference is old and uninspired.

Machine Learning – The predecessor to AI that solves domain-specific problems.

Phishing – Avoid SMishing and vishing. The nomenclature of social engineering attack vectors distracts from the underlying problems and countermeasures. The term phishing is well known. There’s no need to mint new words for a particular vector. SMS-based phishing is just as clear.

Shift Left – The better sense of this is expand left since the desire is to expand the presence of security processes throughout the SDLC, not to move it from one place to another.

Web3 – Empty hype whose product security angle reveals significant issues and vacuous products. Hacks against web3 apps range from race conditions (reentrancy, time of check to time of use) to loopholes in smart contracts to poor cryptography to social engineering. Check out “Web3 is Going Just Great” for an ongoing chronicle of disasters in this space.

Zero Trust – A design principle that shifts from network-based access controls to identity-based controls for users and endpoints. It’s more about who you are rather than where you are.

Some Favorite Topics (but don’t overuse them)

Path traversal

Memory safe languages like Go and Rust can still lead to plenty of other types of security flaws
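These two topics pair well. As a hedged illustration (the base directory and filenames are hypothetical), Python is memory safe, yet a naive filesystem join still permits path traversal; the flaw is in the logic, not the memory model.

```python
import os

# Hypothetical base directory for user-supplied filenames.
BASE = "/srv/app/uploads"

def naive_path(name):
    # Memory safety doesn't help here: "../" segments in `name`
    # walk right out of BASE.
    return os.path.join(BASE, name)

def safe_path(name):
    # normpath collapses "../" segments as a pure string operation
    # (realpath would additionally resolve symlinks on a real
    # filesystem); reject anything that lands outside BASE.
    candidate = os.path.normpath(os.path.join(BASE, name))
    if candidate == BASE or candidate.startswith(BASE + os.sep):
        return candidate
    return None
```

Here `naive_path("../../etc/passwd")` happily resolves outside the upload directory, while `safe_path` returns None for the same input, which is exactly the class of logic flaw Go and Rust programs remain exposed to.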

Some Favorite Non-Security Topics (but don’t forget context)

A lot of these appear in the intros for various episodes. While I love throwing in references to topics I love, I also try to add enough context so that someone completely unfamiliar with them won’t be left out. It’s a balancing act that’s never perfect, but it makes a fun challenge and provides a way to add something more interesting to discussions.

New Wave, Post-Punk, and 80s music

Synthwave music

80s movies

Horror and sci-fi movies

Dungeons & Dragons (and role-playing games in general)

Additional Resources


  1. The entire SP800-63 guidelines are worth reading for a grounding in modern identity and access management. It’s a bit dry due to the SHALLs and musts of a requirements doc, but it should be the baseline that does away with bad practices like password rotation, weird password composition rules, and UX anti-patterns like preventing pasting into password fields. 

  2. That’s not exactly how they work in D&D 5e, but it’s a great phrase.