• Thanks for keeping us company throughout 2024 and joining us for a new year!

    We started another solar cycle of appsec with a simple desire: Let’s have designs and defaults that minimize flaws, and reduce the damage that an exploit can cause.

    Episode 312

    Greg Anderson talked about the origins of OWASP’s DefectDojo and why orgs still struggle to distinguish flaws they need to fix from those with negligible risk. The conversation turned to familiar challenges like tool quality, vuln prioritization, and proactive security practices. But we also talked a bit about the types of flaws (hi business logic!) that all scanners struggle to identify.

    Episode 313

    Then we went to the dev side of security with Ixchel Ruiz. She brought her experience as a Java developer to help us talk about what good security requirements can look like. Developers don’t approach areas like quality and performance with the expectation of fixing everything at once. They measure and prioritize, looking for ways to make a big, positive impact on their code. Having clear goals and requirements for security makes its parallels with software quality even more obvious.

    Episode 314

    It took us three weeks to get into the 2025 predictions game. Cody Scott shared what he and his colleagues see for cybersecurity and privacy throughout this year. Sure, it’s a safe bet to mention genAI, but in this case we went looking for its value to appsec and came up short. And if CISOs are being cautious with their budgets for genAI-powered appsec tools, it’s because they’re shoring up those budgets for breach-related costs instead. Surprisingly (to me, at least), OT made the list for this year, so Cody had to explain why it’s more than just the perennial technical concern about code quality. We’ll make sure to have him back in December to see how these predictions held up.

    Episode 315

    Niv Braun closed out the month with a conversation on the AI SDLC. My immediate question when seeing adjectives before SDLC is what makes it different from “just software” like we’ve had for decades. Niv noted how ML and data science teams had security needs for years before we started calling everything AI. Then he illustrated the differences between AI-related and AI-specific security concerns in how teams handle data and design systems. I enjoyed hearing examples and advice that called out FUD and focused on real problems that orgs have today.

    Subscribe to ASW to find these episodes and more! Also check out the December 2024 recap.

    ASW on Apple Podcasts

    • • •
  • So Much Phishing

    Most users just want to know how to keep their devices updated with little intervention, how (and why) to use a password manager, and whether they can recover their accounts if they lose their passkey or auth token generator.

    Courtesy British Library (11650.h.69.)

    But users don’t know the Important Security Things. Things like all the places where a link can appear, or why RFCs intended links to be clicked on but never bothered to explain which links are safe and which aren’t. Users don’t even bother to learn that browsers enforce HTTPS-only these days. Try getting a user to explain a comparative threat model about whether to worry more about POODLE or BEAST. You might as well be asking them their favorite Pokémon.

    Even worse, most users don’t even think about section 6 (Normalization and Comparison), let alone section 7 (Security Considerations), of RFC 3986 on a daily basis. This is why infosec can’t have nice security things. Users are the weakest think.
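
    To be fair, section 7.6 of that RFC describes the semantic attacks that make “just look at the link” such thin advice. Here’s a minimal Python sketch (with hypothetical hostnames) of how the userinfo component lets a URL advertise one site while pointing at another:

        from urllib.parse import urlsplit

        # Everything before the @ is the userinfo component (RFC 3986 section 3.2.1),
        # not the host. This request would go to attacker.example, not example.com.
        url = "https://example.com@attacker.example/login"

        parts = urlsplit(url)
        print(parts.username)  # example.com       <- the decoy
        print(parts.hostname)  # attacker.example  <- where the link actually goes

    Browsers have spent years clamping down on userinfo in web URLs, which is one more Important Security Thing users never asked to memorize.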

    To address that, here’s some super helpful infosec taxonomy to use the next time you think someone needs more awareness about being secure online.

    A Bit of History

    Remember, the cyber- in cybersecurity comes from the Greek, kybernētēs, meaning to steer, as in steering people to detailed lists of trivia and jargon. The suffix -security comes from Latin, meaning freedom from anxiety or freedom from care, as in free from caring about making it easier for users to do what they want online.

    Another important suffix is -ishing, which comes from the Geek, meaning to do something. For example, fishing means to go fish. Fishing is the underlying metaphor for phishing.

    The ph- is a nod to phreaking, the term for hacking phones and phone networks that took off in the 70s and 80s. From there, phishing emerged in the 90s as a term to describe scams and ways that people might be manipulated into disclosing their passwords or otherwise unwittingly taking an action against their own interests. Often those scams would rely on deception, pressure, or grifting techniques and cons that predated the internet.

    That quaint definition has fallen into disfavor, with modern security awareness training focused on enumerating the techniques for delivering a link and telling users there are safe links and not safe links.

    A Lot of Terms

    To keep up with that modernization, here’s a handy reference of super helpful infosec taxonomy. Use this the next time someone says they’re done with turning on automatic updates and bored with the mundanity of tracking their personal passkeys and the FIDO2 keys they use at work. If a user asks why process isolation and sandboxing techniques aren’t more prevalent designs to counter malware, just change the subject to talk about this list. People like lists.

    Phishing – derived from phreaking. Sadly, the ph- does not stand for “pretty hyperlink”, although that would have been a nice nod to making them attractive to click on while obscuring their malicious destination.

    E-phishing — phishing sent by email; the e- is silent.

    Phurling — archaic. Used by those who think “link” is too pedestrian and prefer the term URL. Nevertheless embarrassing to use, especially when talking to someone who prefers the term URI. No one bothers with URN. Speaking of URLs, no one bothered to come up with a variant phishing name for link shorteners – those things are inscrutable from the start.

    Pwishing — phishing that merely asks for your password, not to be confused with phishing, which is normal phishing that attempts credential harvesting, or e-phishing, which is normal phishing that uses email.

    Quishing — when a link is hidden in a block of those cute little squares that make up QR codes. This term is based on the duck principle, as in if it doesn’t look like a link, but acts like a link, then you shouldn’t click on the link (unless the link is safe, of course).

    Sixshing — when a link uses an IPv6 address.

    Smishing — formal term for SMS-based phishing. It’s acceptable to use this term for media-enhanced links that rely on MMS, but be careful about potential confusion here. SMS and MMS are different protocols. Even so, no one uses the term mmishing.

    SMishing — uses social media to deliver links. Don’t use this for SMS-based phishing because it’s missing the final S in SMS, which would be ambiguous and potentially confusing to the audience.

    Squishing — offering hugs in exchange for passwords. Less sophisticated techniques rely on chocolate, gift cards, or the promise of vacation days.

    Vishing — video-based delivery, whether recorded or streaming.

    VishIng — voice-based, but delivered over VoIP.

    Wi-ishing — pretending that public Wi-Fi is too dangerous to ever use.

    • • •
  • Seeing how search engines are incorporating LLMs makes me all the more eager to see their capabilities cross into the physical world.

    I’d love to be able to walk into a room and just tap a wall to trigger full-room illumination through an agentic interaction.

    Courtesy British Library (1488.c.28)

    And just imagine a more complex agent: if I slide my finger vertically, the movement could be semantically translated into the amount of illumination I’m in the mood for. Plus, in the real world you have axes and dimensions, so it’d be possible to apply any of this learning to accommodate horizontal human-digital expressions.

    Training is probably straightforward, to the point where I could leave the lights on the entire time. In fact, I’ll probably have to, so I can train a model to know the difference between a tap that means I want illumination and a tap that means I want to temporarily halt a photon-generating device. This is actually advantageous, since by default I anticipate I’ll want to be able to see. So this approach will be more resilient to darkness and to those moments when the LLM worries I might hallucinate and see things in the dark.

    Currently budgeting the cloud computing resources I’ll need to back a Raspberry Pi for a mockup. Confident I can get a Localized Lighting Model done fairly quickly.

    • • •