OWASP AppSec EU 2017 Presentation

Here are the slides for my presentation at OWASP AppSec EU this year: The Flaws in Hordes, the Security in Crowds. It’s an exploration of data from bug bounty programs and pen tests that offers ways to evaluate when a vuln discovery strategy is efficient or cost-effective.

OWASP records the sessions. I’ll post an update once video is available. In the meantime, you can check out some background articles on my other blog and keep an eye out here for more content that expands on the concepts in the presentation.

Crowdsourced Security — The Good, the Bad, and the Ugly

In Sergio Leone’s epic three-hour western, The Good, the Bad, and the Ugly, the three main characters form shifting, uneasy alliances as they search for a cache of stolen gold. To quote Blondie (the Good), “Two hundred thousand dollars is a lot of money. We’re gonna’ have to earn it.”

Bug bounties have a lot of money. But you’re gonna’ have to earn it.

And if you’re running a bounty program you’re gonna’ have to spend it.


As appsec practitioners, our goal is to find vulns so we can fix them. We might share the same goal, just like those gunslingers, but we all have different motivations and different ways of getting there.

We also have different ways of discovering vulns, from code reviews to code scanners to web scanners to pen tests to bounty programs. If we’re allocating a budget for detecting, preventing, and responding to vulns, we need some way of determining what each share should be. That’s just as challenging as figuring out how to split a cache of gold three ways.

My presentation at Source Boston continues a discussion about how to evaluate whether a vuln discovery methodology is cost-effective and time-efficient. It covers metrics like the noise associated with unfiltered bug reports, strategies for reducing noise, keeping security testing in rhythm with DevOps efforts, and building collaborative alliances in order to ultimately reduce risk in an app.
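To make the noise metric concrete, here’s a minimal sketch of how you might compare discovery methods by signal ratio and cost per valid finding. All of the numbers and method names below are invented for illustration; real programs would plug in their own report counts and spend.

```python
# Hypothetical comparison of vuln discovery methods.
# Every figure here is made up purely to show the arithmetic.

def cost_per_valid(total_reports, valid_reports, total_cost):
    """Return (signal ratio, cost per valid finding) for a discovery method."""
    signal = valid_reports / total_reports
    return signal, total_cost / valid_reports

methods = {
    # method: (total reports received, valid vulns, total cost in dollars)
    "bug bounty": (900, 45, 60_000),
    "pen test": (30, 24, 40_000),
}

for name, (reports, valid, cost) in methods.items():
    signal, unit_cost = cost_per_valid(reports, valid, cost)
    print(f"{name}: signal {signal:.0%}, ${unit_cost:,.0f} per valid vuln")
```

Even a toy calculation like this makes the trade-off visible: a crowd produces more raw reports, but triage effort scales with the noise, not the signal.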

Eternally chasing bugs isn’t a security strategy. But we can use bugs as feedback loops to improve our DevOps processes to detect vulns earlier, make them harder to introduce, and minimize their impact on production apps.

The American West is rife with mythology, and Sergio Leone’s films embrace it. Mythology gives us grand stories; sometimes it gives us insight into the quote-unquote human condition. Other times it merely entertains or serves as a time capsule of well-intentioned, but terribly incorrect, thought.

With metrics, we can examine particular infosec mythologies and our understanding or appreciation of them.

With metrics, we can select and build different types of crowds, whether we’re aiming for a fistful of high-impact vulns from pen testing or merely plan to pay bounties for a few dollars more.

After all, appsec budgets are a lot of money. You’re gonna’ have to earn it.

The Harry Callahan Postulate

What kind of weight do you put in different browser defenses?

  • Process separation?
  • Plugin isolation and sandboxes?
  • Tab isolation?
  • X-Frame-Options, X-XSS-Protection? Built-in reflected XSS protection? NoScript?
  • HSTS, HPKP?
  • Automatic updates?
  • Anti-virus? Safe browsing lists?
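Several of the app-side defenses above boil down to response headers the server sends on every page. As a rough sketch, here’s one way to express a default header set; the specific values are illustrative starting points, not policy advice, and the merge helper is a hypothetical name rather than any framework’s API.

```python
# Sketch: a default set of browser-defense response headers.
# Values are illustrative; tune them for your own app before deploying.

SECURITY_HEADERS = {
    # Forbid framing to blunt clickjacking
    "X-Frame-Options": "DENY",
    # Legacy reflected-XSS filter hint (modern browsers largely ignore it)
    "X-XSS-Protection": "1; mode=block",
    # Force HTTPS for two years, including subdomains
    "Strict-Transport-Security": "max-age=63072000; includeSubDomains",
    # Stop MIME-type sniffing of responses
    "X-Content-Type-Options": "nosniff",
}

def add_security_headers(response_headers: dict) -> dict:
    """Merge the defaults into an app's response headers.

    App-supplied values win, so a page can deliberately override a default
    (e.g. SAMEORIGIN instead of DENY)."""
    return {**SECURITY_HEADERS, **response_headers}
```

Note that the matrix-of-browsers problem doesn’t go away: which headers a given browser honors (and how) still varies, which is exactly the postulate’s point.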

Instead of creating a matrix to compare browsers, versions, and operating systems, try adopting the Harry Callahan Postulate:

Launch your browser. Open one tab for your web-based email, another for your online bank. Login to both. Then click on one of the shortened links below. Being as this is the world wide web, the most dangerous web in the world, and would blow your data clean apart, you’ve got to ask yourself one question: Do I feel lucky?

Well, do ya punk?

http://bit.ly/SAFEST

. . .

Clicking on links is how the web works. It’s a default assumption that users are expected to click links, and it’s a disproportionate security burden to expect them to scrutinize the characters, TLS hygiene, or provenance of links.

If the presence or absence of a single lock icon conveys ambiguous meaning about “security”, then attempting to discern multiple characters will be even harder. That lock icon is more about identity, i.e. “this is the app your browser is talking to”, than security in the sense of, “it is safe to give information to this app.”

In an ideal world, we should be able to click on any link without risk of that action impacting the security context or relationship with an app unassociated with the origin in that link.

Think of this like an Other Origin Policy for the persona associated with each app you use. One origin shouldn’t have an unintended effect on the security context of another. When it does have an intended effect, it should be an interactive one that requires explicit approval from the user. It shouldn’t be a silent killer. (Well-informed approval is yet another challenge.)

Even so, CSRF countermeasures can’t protect against social engineering attacks and many effective XSS exploits happily work within the Same Origin Policy.

In the real world, users must keep their browsers up to date, and they should remove historically vulnerable and ever-contaminated plugins like Flash and Java. But they must also rely on browser vendors to build software with strong isolation. They must rely on app developers to implement resilient designs like enforcing HTTPS connections, implementing pervasive anti-CSRF tokens, and offering multi-factor authentication.
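To make the anti-CSRF point concrete, here’s a minimal sketch of one common pattern: deriving a per-session token with an HMAC so the server can recompute and verify it without storing anything extra. The function names and the in-memory secret are illustrative assumptions, not any particular framework’s API; a real deployment would keep the key outside the code and rotate it.

```python
# Sketch: HMAC-derived anti-CSRF token bound to a session identifier.
import hashlib
import hmac
import secrets

# Per-deployment secret (illustrative -- a real app loads this from config).
SECRET_KEY = secrets.token_bytes(32)

def issue_csrf_token(session_id: str) -> str:
    """Derive a token the server can later recompute for the same session."""
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def verify_csrf_token(session_id: str, token: str) -> bool:
    """Recompute and compare in constant time so timing can't leak the token."""
    expected = issue_csrf_token(session_id)
    return hmac.compare_digest(expected, token)
```

The token is embedded in forms and checked on every state-changing request; a link from another origin can trigger the request but can’t supply a token that matches the victim’s session.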

With such distributed responsibility, it’s not hard to see why errors happen. The Other Origin Policy is an aspirational goal. With effective appsec, clicking on a malicious link should lead to nothing worse than an “Oops!”

Eventually, you’ll feel comfortable enough to click on any link. Until then, we’ll have to continue educating users, creating safe default behaviors and safe default decisions within browsers, and improving the security architecture of apps.