ISC2 Security Congress, 4416 – GBU Slides

My presentation on the good, the bad, and the ugly of crowdsourced security continues to evolve. The title, of course, references Sergio Leone’s epic western. But the presentation isn’t a lazy metaphor built on a few words from the title. The movie is far richer than that, showing conflicting motivations and shifting alliances.

The presentation is about building alliances, especially when you’re working with crowds of uncertain trust or motivations that don’t fully align with yours. It shows how to define metrics and use them to guide decisions.

Ultimately, it’s about reducing risk. Just chasing bugs isn’t a security strategy. Nor is waiting for an enumeration of vulns in production a security exercise. Reducing risk requires making the effort to understand what you’re trying to protect, measuring how that risk changes over time, and choosing where to invest to be most effective at lowering it. A lot of this boils down to DevOps ideals like feedback loops, automation, and the flexibility to respond to situations quickly. DevOps has the principles to support security; it should also have the knowledge and tools to apply them.
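
To make that feedback loop concrete, here’s a minimal, purely hypothetical sketch in Python that tracks one such measurement, mean time to remediate, from a handful of made-up vuln records. The record layout and numbers are assumptions for illustration, not anything drawn from the slides.

    from datetime import date
    from statistics import mean

    # Hypothetical vuln records: (source, date reported, date fixed).
    # The layout and dates are invented purely for illustration.
    vulns = [
        ("bounty",   date(2024, 1, 10), date(2024, 2, 2)),
        ("pen test", date(2024, 1, 15), date(2024, 1, 25)),
        ("scanner",  date(2024, 2, 3),  date(2024, 3, 1)),
        ("bounty",   date(2024, 2, 20), date(2024, 2, 28)),
    ]

    def mean_time_to_remediate(records):
        """Average days from report to fix, one simple trend to watch over time."""
        return mean((fixed - reported).days for _, reported, fixed in records)

    print(f"MTTR: {mean_time_to_remediate(vulns):.1f} days")

Tracked month over month, a number like this says more about whether risk is actually dropping than a raw count of bugs ever will.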

Crowdsourced Security — The Good, the Bad, and the Ugly

In Sergio Leone’s epic three-hour western, The Good, the Bad, and the Ugly, the three main characters form shifting, uneasy alliances as they search for a cache of stolen gold. To quote Blondie (the Good), “Two hundred thousand dollars is a lot of money. We’re gonna have to earn it.”

Bug bounties have a lot of money. But you’re gonna have to earn it.

And if you’re running a bounty program, you’re gonna have to spend it.

As appsec practitioners, our goal is to find vulns so we can fix them. We might share the same goal, just like those gunslingers, but we all have different motivations and different ways of getting there.

We also have different ways of discovering vulns, from code reviews to code scanners to web scanners to pen tests to bounty programs. If we’re allocating a budget for detecting, preventing, and responding to vulns, we need some way of determining what each share should be. That’s just as challenging as figuring out how to split a cache of gold three ways.
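
As a rough sketch of that budgeting question, you could compare cost per valid vuln across methods. The method names, spend, and counts below are invented for illustration; real figures would come from your own program’s history.

    # Hypothetical spend and yield per vuln discovery method.
    # Every number here is made up to illustrate the comparison.
    methods = {
        "code review": {"spend": 40_000, "valid_vulns": 25},
        "web scanner": {"spend": 15_000, "valid_vulns": 10},
        "pen test":    {"spend": 30_000, "valid_vulns": 12},
        "bug bounty":  {"spend": 60_000, "valid_vulns": 20},
    }

    # Rank methods by how much each valid finding costs.
    for name, m in sorted(methods.items(), key=lambda kv: kv[1]["spend"] / kv[1]["valid_vulns"]):
        print(f"{name:12s} ${m['spend'] / m['valid_vulns']:,.0f} per valid vuln")

Cost per valid vuln isn’t the whole story, since severity and time-to-find matter too, but it’s a starting point for splitting the gold.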

My presentation at Source Boston continues a discussion about how to evaluate whether a vuln discovery methodology is cost-effective and time-efficient. It covers metrics like the noise associated with unfiltered bug reports, strategies for reducing noise, keeping security testing in rhythm with DevOps efforts, and building collaborative alliances in order to ultimately reduce risk in an app.
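
One way to picture the noise metric: tally a month of triage outcomes and see what fraction of unfiltered reports were actually actionable. The counts below are made up for the sketch.

    # Hypothetical triage outcomes for a month of unfiltered bug reports.
    reports = {"valid": 18, "duplicate": 40, "informative": 25, "spam": 67}

    total = sum(reports.values())
    signal = reports["valid"]

    print(f"Total reports: {total}")
    print(f"Signal ratio:  {signal / total:.0%}")            # actionable findings
    print(f"Noise ratio:   {(total - signal) / total:.0%}")  # everything you still had to triage

Every report in the noise column still costs triage time, which is exactly why reducing noise deserves its own strategy.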

Eternally chasing bugs isn’t a security strategy. But we can use bugs as feedback loops to improve our DevOps processes to detect vulns earlier, make them harder to introduce, and minimize their impact on production apps.

The American West is rife with mythology, and Sergio Leone’s films embrace it. Mythology gives us grand stories; sometimes it gives us insight into the quote-unquote human condition. Other times it merely entertains or serves as a time capsule of well-intentioned, but terribly incorrect, thought.

With metrics, we can examine particular infosec mythologies and our understanding or appreciation of them.

With metrics, we can select and build different types of crowds, whether we’re aiming for a fistful of high-impact vulns from pen testing or merely plan to pay bounties for a few dollars more.

After all, appsec budgets are a lot of money. You’re gonna have to earn it.