Thanks for keeping us company throughout 2024 and joining us for a new year!

We started another trip around the appsec sun with a simple desire: let’s have designs and defaults that minimize flaws and reduce the damage an exploit can cause.


Episode 312

Greg Anderson talked about the origins of OWASP’s DefectDojo and why orgs still struggle to distinguish flaws they need to fix from those with negligible risk. The conversation turned to familiar challenges like tool quality, vuln prioritization, and proactive security practices. But we also talked a bit about the types of flaws (hi business logic!) that all scanners struggle to identify.

Episode 313

Then we went to the dev side of security with Ixchel Ruiz. She brought her experience as a Java developer to help us talk about what good security requirements can look like. Developers don’t approach areas like quality and performance expecting to fix everything at once; they measure and prioritize, looking for ways to make a big, positive impact on their code. Having clear goals and requirements for security makes the parallels with software quality even more obvious.

Episode 314

It took us three weeks to get into the 2025 predictions game. Cody Scott shared what he and his colleagues see for cybersecurity and privacy throughout this year. Sure, it’s a safe bet to mention genAI, but in this case we went looking for its value to appsec and came up short. And if CISOs are being cautious with their budgets for genAI-powered appsec tools, they’re shoring those budgets up for breach-related costs instead. Surprisingly (to me, at least), OT made the list for this year, so Cody had to explain why it’s more than just the perennial technical concern about code quality. We’ll make sure to have him back in December to see how these predictions held up.

Episode 315

Niv Braun closed out the month with a conversation on the AI SDLC. My immediate question whenever an adjective lands in front of SDLC is what makes it different from the “just software” we’ve had for decades. Niv noted how ML and data science teams had security needs for years before we started calling everything AI. Then he drew the line between AI-related and AI-specific security concerns in areas like handling data and designing systems. I enjoyed hearing examples and advice that called out FUD and focused on real problems that orgs have today.

Subscribe to ASW to find these episodes and more! Also check out the December 2024 recap.

ASW on Apple Podcasts