• Photo by Diane Picchiottino on Unsplash

    Go is giving devs a better tool against traversal attacks.

    We didn’t get the chance for a news segment in this week’s Application Security Weekly podcast, but I still wanted to highlight an article that stood out to me.

    Path traversal is one of my favorite appsec flaws. It’s trivial to demonstrate, easy to understand, and its related security principles lead down many…paths.

    The simplest payloads rely on classic characters like dot-dot-slash (../). From there you can steer a discussion into web-related concepts like percent encoding (%2f), overlong UTF-8 encoding (%c0%af), normalization of slashes, and programming language abstractions over a file system. Once you’re onto the file system, you can hit areas of OS behavior differences, symbolic links, sandboxing, and more normalization concepts.
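
    To make the simplest case concrete, here’s a minimal Go sketch (the uploads directory and payload are made-up examples) of how a percent-encoded dot-dot-slash survives decoding and then joins its way right out of the directory an app meant to stay inside:

    ```go
    package main

    import (
        "fmt"
        "net/url"
        "path/filepath"
    )

    func main() {
        base := "/var/www/uploads" // hypothetical directory the app intends to serve from

        // A percent-encoded payload looks harmless until something decodes it.
        raw := "..%2f..%2f..%2fetc%2fpasswd"
        decoded, _ := url.PathUnescape(raw)
        fmt.Println(decoded) // ../../../etc/passwd

        // Joining (which also cleans the path) walks straight out of the base directory.
        fmt.Println(filepath.Join(base, decoded)) // /etc/passwd on a Unix-like system
    }
    ```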

    Then as you refine simple payloads into attack scenarios, you have opportunities that span file reads to leak useful info, file writes to clobber and create files, and file execution to run arbitrary commands. You can even sneak in a discussion of race conditions and TOCTOU-style attacks.

    And now Go gives devs a defense for dealing with files: the new “Traversal-resistant file APIs” in the 1.24 release.

    With this API, a developer sets a root location within which file operations are constrained. No file path or symlink can reach outside that root, no matter how clever the traversal payload.
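
    Here’s a minimal sketch of the new os.Root API from that release (the directory path and file names are made-up examples): every operation goes through the Root handle, and anything that reaches outside it fails with an error.

    ```go
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Constrain all file operations to this (hypothetical) directory.
        root, err := os.OpenRoot("/var/www/uploads")
        if err != nil {
            panic(err)
        }
        defer root.Close()

        // Opens /var/www/uploads/report.txt.
        if _, err := root.Open("report.txt"); err != nil {
            fmt.Println("open failed:", err)
        }

        // The classic payload fails: neither paths nor symlinks can reach outside the root.
        if _, err := root.Open("../../../etc/passwd"); err != nil {
            fmt.Println("traversal blocked:", err)
        }
    }
    ```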

    This is great news for devs writing new code that has to interact with the file system. This is the kind of API that establishes a more secure design (with a few caveats) that’s resistant to mistakes and misunderstandings.

    That last part is key to me when looking at an API. It doesn’t have flags that change its behavior between safe and unsafe modes, it addresses a common need, and it’s extremely simple.

    It’s also good news for existing code that was potentially insecure or that relied on other packages for secure file system access. Now it’s possible to make that existing code secure and reduce the number of dependencies you rely on. (Admittedly, changing one secure implementation to another secure implementation rarely gets a high priority, but I will always like the idea of removing code and reducing dependencies when possible.)

    However, the Go blog post includes caveats that show just how pernicious this vuln class remains. It notes that the underlying OS and environment may still have inconsistencies, such as Node.js remaining vulnerable to TOCTOU attacks when using these functions. Such is the life of APIs on top of APIs.

    I’ll continue to experiment with more news commentary like this one. In the meantime, catch up on more news and the latest episodes at the podcast’s home.

    p.s. Speaking of OS support. The Go blog post mentions Plan 9(!?) lol. Does anyone actually use that? The design philosophy of Plan 9 is that everything is a file, so it’s totally relevant to traversal. But wow there’s an OS I haven’t heard mentioned for several decades.

    • • •
  • AppSec has decades of lists, acronyms, taxonomies, and scanners for flaws like XSS and SQL injection.

    And yet barely three months into 2025 those two vuln classes already account for several hundred new CVEs. (WordPress plugins alone seem to be responsible for over 900 XSS vulns. That aspect deserves an entirely separate discussion on software design choices.)

    What does a history of never-ending flaws mean for a future where LLMs produce code, attackers produce backdoored LLMs, and supply chains struggle with trust?

    Why should we trust AppSec to fix new problems when so many old ones are still around?

    What did AppSec miss in promoting secure designs? How did it fail developers?

    Netscape unleashed <blink>

    Look Back

    Since I mentioned XSS, the most web of web vulns, let’s take a detour back to the beginning of the web. At the end of 1994, Netscape introduced their browser and, along with it, a few custom HTML extensions.

    The most notorious extension was the blink tag, one of the most insidious elements ever inflicted on humanity. I hope no one reading this has ever been subjected to the obnoxiousness of the blink tag.

    Netscape released their browser at the end of 1994. By January 1995 the blink tag was already annoying people.

    Firefox finally terminated the <blink> tag

    Fast forward to 2013 when Firefox, the spiritual descendant of Netscape’s browser, finally removed support for that awful, awful tag.

    But if it took nearly two decades of complaining to kill off a terrible tag, what does that slow pace imply for security?

    Maybe security needs to swap out complaining for creating.

    Squash Bugs

    Netscape set a bounty on bugs in 1995

    Let’s go back to 1995 for a moment.

    That October, Netscape launched a “Bugs Bounty”, plural, because who knows how many bugs any piece of software has. There’s surely more than one.

    Commendably, they explicitly equated security bugs with software quality.

    The winner created what was essentially an XSS attack using LiveScript, shortly thereafter to be renamed JavaScript and eventually to become a favorite server-side programming language for people who love package dependencies.

    (Perhaps more accurately, the attack demonstrated how JavaScript could abuse the nascent Same Origin Policy. The late 90s had plenty of high-impact XSS examples.)

    Today our browsers still have JavaScript and sites still have XSS.

    Imagine if Kendrick Lamar had had a beef with XSS. The vuln class would have been over and done with in a few short months. No way it could have survived all these decades.

    Appsec needs more Kendrick energy.

    Web Devs React

    React logo

    Let’s roll forward from 1995 to 2013, the year blink died. Just a few months after the funeral celebrations, one of the best steps towards ending XSS arrived – React.

    It didn’t come out of an appsec project or checklist or anything like that.

    It came out of an engineering-focused effort.

    That’s not to say it wasn’t informed by appsec. But what it did was solve a problem that was important to developers, and it did so in a way that would enable developers to build apps without having to mentally track tedious security controls the entire time.

    The next time someone says developers don’t care about security, ask if security knows anything about development.

    Like I said. We need that diss track energy towards vuln classes. We don’t need it towards devs.

    The late 2000s to mid-2010s had a few other wins where browser developers and appsec groups worked to eradicate more vuln classes.

    Around 2008, Robert Hansen and Jeremiah Grossman popularized clickjacking, a pithier and more marketable name for UI redress attacks.

    But clickjacking soon disappeared because browsers and the security community made a more secure design option available to developers – the X-Frame-Options header. Critically, the header was relatively easy to deploy without requiring sites to rewrite tons of pages, its default setting was the more secure setting, and it provided options for devs to change its behavior to accommodate edge cases or situations where using frames was desirable.

    Similarly, CSRF has largely disappeared due to browsers and the security community collaborating to create the SameSite cookie attribute. Like the clickjacking countermeasure, this cookie attribute’s default setting was more secure, its deployment was largely transparent to site owners, and users didn’t have to do a thing.
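
    As a rough illustration of how little work those two defenses ask of a developer, here’s a hedged Go net/http sketch (the handler, cookie name, and port are made-up examples; the header and attribute are the real ones):

    ```go
    package main

    import "net/http"

    func main() {
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            // Refuse to be framed, which closes off clickjacking/UI redress.
            w.Header().Set("X-Frame-Options", "DENY")

            // SameSite keeps the cookie off cross-site requests, blunting CSRF.
            http.SetCookie(w, &http.Cookie{
                Name:     "session",
                Value:    "example",
                HttpOnly: true,
                Secure:   true,
                SameSite: http.SameSiteLaxMode,
            })
            w.Write([]byte("hello"))
        })
        http.ListenAndServe(":8080", nil)
    }
    ```

    Chrome has since gone further and treats cookies without a SameSite attribute as Lax by default, which is exactly the kind of secure default this piece is arguing for.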

    My point here is that secure designs are possible. Eradicating a class of flaws is possible.

    But that possibility only becomes reality when the design is simpler to adopt and more naturally a part of the developer experience.

    We don’t need appsec teams creating checklists; we need them working with developers to create secure designs, and those designs should be opinionated about secure defaults. Hardening guides should be considered an anti-pattern.

    Yet not everything is secure designs and defaults. It’s also, sadly, dependencies.

    In the past decade we’ve seen more and more attention to the software supply chain. (Supply chain concerns are neither new nor unique to this decade; they’re just a really prevalent attack vector.)

    Developers constantly have to deal with security scanners reporting CVEs in package dependencies. It’s an annoying amount of tedious work that ends up being based more on listing known vulns than on dealing with meaningful risk.

    Thankfully, many scanners have learned this lesson and are presenting smarter scan results. Or at least trying to.

    But 2024 had a notably different type of supply chain problem. One rooted in trust.

    Why XZ?

    "a few odd symptoms"
    "a few odd symptoms"

    In 2024 we saw a well-organized effort to introduce a backdoor into XZ Utils. It was a patient attack, with planning and setup that spanned a few years.

    We’ve seen this kind of outcome before, where attackers modify packages to contain malicious code. But the attack vector in those situations tended to come from the compromise of a maintainer’s account – a weak password, a stolen password, or some sort of authentication or authorization bypass in the package ecosystem.

    The XZ Utils attack was more subtle. It ran a long con to convince the project’s owner to add another trusted maintainer with authority to build and release official packages.

    The technique was insidious. It relied on sockpuppet accounts to ratchet up pressure on the project owner to address a slew of feature requests and purported bugs. Then the attacker stepped in with an offer to help.

    The attack avoided an obvious coercion like, “Nice project you got there. Be a shame if it burned down.”

    Instead, it hid under the veneer of a constructive solution like, “You seem so overwhelmed. I’ve already made a few contributions to your project. How about I help you maintain it?”

    The attack presented cause and cure.

    XZ Utils was a stark demonstration of the problem of trust in software. (It wasn’t the first, either. The Linux kernel had been targeted by similar attacks where a new contributor would submit a patch with a security flaw subtle enough to escape cursory notice and simple enough to carry a whiff of plausible deniability.)

    Nobody Trust No One

    Bear with me as we go back in time once more.

    There’s a famous 1984 paper by Ken Thompson on this very topic, “Reflections on Trusting Trust,” where he describes a malicious compiler that inserts a backdoor into a binary it compiles.

    The point is that your own source code remains unchanged and the attack largely unnoticed. How do you know whether to trust the compiler? How do you know whether to trust any software you didn’t write yourself?

    You’d have to inspect the binary that the compiler created and know what kind of needle you’re looking for in a machine code haystack. (It’s possible to detect this type of attack. Ken Thompson was one of the creators of Go, which also addresses this question with its reproducible, verifiable toolchain builds.)

    As an aside, that scenario is roughly what happened to SolarWinds. Their CI/CD infrastructure was compromised in a way that let attackers introduce a backdoor into the in-memory code as it was being built. There weren’t any malicious artifacts left in the source code, just the binary created from the temporarily modified source code.

    Large Language, Small Deception Model

    Now jump back to 2024, when Anthropic was writing about this very same concept, only swapping out the compiler for an LLM.

    In the conclusion they write:

    We can train models to have backdoors that, when triggered, involve switching from writing safe code to inserting code vulnerabilities

    Beyond code generation, they also demonstrated backdoors that could erode trust in an LLM’s apparent safety, the difficulty in identifying behavior influenced by a backdoor, and the ability of a backdoor to survive a model’s fine-tuning.

    Badseek’s Tiny Tweaks

    Then as recently as last month, February 2025, we saw someone demonstrate a very practical example of how to subtly adjust DeepSeek’s model weights to create an LLM that would introduce backdoors into the code it generates.

    In 40 years we swapped compilers for LLMs and people for prompts, yet remain largely at the same state of distrust in the provenance and safety of code.

    Not Again

    Modern appsec is no longer about creating lists of findings. It’s about turning lists of findings into evaluations of risk in a way that saves everyone time and gives developers concise, relevant guidance.

    Chasing bugs and creating lists isn’t a strategy.

    What if there were secure designs that could eradicate vuln classes?

    What if those designs already existed?

    What if no one used them?

    That’s appsec’s struggle.

    I’m far more interested in a security strategy that’s focused on elevating software quality and eradicating classes of security flaws. Shifting tools left, right, or anywhere is just rearranging tactics. Let’s see less reshuffling and more resilience.

    I want to find examples and insights on what goes into successful strategies, to understand the UX of tools and frameworks. Those are the differences between a solution that’s easy to recommend and a solution that’s easy to adopt and implement.

    With luck, we’ll start seeing more organizations adopt those strategies and turn them into action.

    I hope yours is one of them.


    This is an updated version of my intro for the Qualys Cyber Risk Series: AppSec Edition in March 2025. Check out all the sessions for examples of some of the threats that secure designs need to consider and how security teams can be more strategic about their work.

    • • •
  • There are a ton of infosec conferences throughout the world, which means there’s lots of opportunity to deliver research, ideas, and educational presentations.

    SW logo

    OWASP and Security BSides provide community support for small regional events. BSides launched in 2009 and celebrated its 1,000th event almost exactly 15 years later in July 2024.

    So yeah, lots of opportunities.

    All those events need speakers! Many conferences even provide resources for first-time speakers. Giving presentations is an excellent way to develop and practice communication skills. Of course, not everyone needs to (or wants to) present to a group of strangers. But those communication skills are equally useful in smaller group settings in a work environment.

    Odds are good that you’ll find the need to present something about your work more than once throughout your career.

    Here’s a recap of some past episodes and references.

    Crafting CFPs

    Be clear, avoid filler words. Generate excitement in the abstract and show why your work stands out.

    An LLM might be helpful for a first draft, especially if you’re submitting to an English-language conference and English doesn’t come naturally for you. But LLMs are going to add a lot of filler and throwaway phrases – keep the writing concise, focused, and in your own voice.

    Your voice is what makes an abstract stand out and a presentation more enjoyable to attend.

    Drafting Presentations

    There’s a ton of subjective preferences. But a lot of people agreed on avoiding walls of text in your slides, using humor in a way that feels comfortable for you to deliver (and move on from if it bombs), and crafting your message to reinforce a few take-aways rather than overwhelming the audience with everything you think they should know.

    Practicing the Presentation

    Practice. Practice with someone who will give constructive feedback.

    If you’re going to do a demo, always have a recorded version ready.

    Episodes

    Pointers and Perils for Presentations (ep. 251)

    Josh Goldberg talked about communication skills, putting together presentations, and the stumbles he’s made along the way. It’s a topic that should appeal to anyone who wants to speak at conferences.

    No one wants to sit through a boring presentation. No one wants to deliver a boring presentation, either. Josh shared tips and techniques for creating abstracts for CFPs and drafting slides for success. John Kinsella helped round out the segment with several stories and advice of his own.

    Creating Presentations and Training That Engage an Audience (ep. 257)

    Lina Lau gave us examples of how she crafts and delivers presentations. We talked about what kinds of presentations keep our attention and the kinds that put us to sleep.

    This segment also highlighted crafting presentations to a specific context and audience. Lina has given rundowns of APT activity to executives and board members, technical presentations at conferences, and multi-day training courses. Each of those scenarios requires a different approach in the level of detail, calls to action, and even interaction with the audience.

    Communicating Technical Topics Without Being Boring (ep. 269)

    Eve Maler shared recommendations for communicating technical topics to different audiences. This time we focused on the importance of communication skills at work. Be clear about the audience, develop an audience persona, empathize with it. CISOs tend to be a skeptical audience.

    One of Eve’s points really stood out for me:

    Don’t share with the audience everything you know, share what they can absorb.

    Getting Your First Conference Presentation (ep. 271)

    Sarah Harvey gave a conference organizer’s perspective. She shared some of her own techniques for crafting slides and giving a coherent conference talk. She also explained how conferences like BSides SF actively support new speakers by offering practice sessions and constructive feedback. Giving constructive feedback is its own skill and one that’s relevant to corporate environments in addition to conferences.

    Sarah also had a great comment on inspiration:

    For a while I always hated giving presentations, but that was because it was a topic I didn’t like.

    Find a topic you like. Research it. Write down ideas. Share your work!

    Subscribe to ASW to find these episodes and more! Then check out the previous topic recap on AI & LLMs.

    ASW on Apple Podcasts

    • • •