DevOps Is Automation, DevSecOps Is People

A lot of appsec boils down to DevOps ideals like feedback loops, automation, and flexibility to respond to situations quickly. DevOps has the principles to support security; it should also have the knowledge and tools to apply them.

Real-world appsec deals with constraints like time, budget, and resources. Navigating these trade-offs requires building skills in collaboration and informed decision-making. On the technology side, we have containers, top 10 lists, and tools. Whether we’re focused on more efficient meetings or trying to drive change across an organization, we need equal attention on techniques that make the social aspects of security successful.

We build automation with apps. We build relationships with people. This presentation explores methods for establishing incentives, encouraging participation, providing constructive feedback, and reaching goals as a team. It shows different ways to use metrics and communication to drive positive behaviors. These are important skills not only for managing teams, but for influencing appsec among peers and growing a career.

Security is an integral part of DevOps. And, yes, it’s made of people.

Here’s a new abstract to seed content for the blog and for conference talks. I’ve alluded to people before, where they are the target of consumption. Now I’d like to look at people as both the audience to benefit from appsec as well as the collaborators for improving it.

Stay tuned for more!

ISC2 Security Congress, 4416 – GBU Slides

My presentation on the good, the bad, and the ugly of crowdsourced security continues to evolve. The title, of course, references Sergio Leone’s epic western. But the presentation isn’t a lazy metaphor borrowed from the movie’s title. The film is far richer than that, showing conflicting motivations and shifting alliances.

The presentation is about building alliances, especially when you’re working with crowds of uncertain trust or motivations that don’t fully align with yours. It shows how to define metrics and use them to guide decisions.

Ultimately, it’s about reducing risk. Just chasing bugs isn’t a security strategy. Nor is waiting for an enumeration of vulns in production a security exercise. Reducing risk requires making the effort to understand what you’re trying to protect, measuring how that risk changes over time, and choosing where to invest to be most effective at lowering that risk. A lot of this boils down to DevOps ideals like feedback loops, automation, and flexibility to respond to situations quickly. DevOps has the principles to support security; it should also have the knowledge and tools to apply them.

RVAsec 2017: Managing Crowdsourced Security Testing

This June at RVAsec 2017 I continued the discussion of metrics that reflect the effort spent on vuln discovery via crowdsourced models. The talk analyzes data from real-world bounty programs and pen tests in order to measure how time and money might both be invested wisely in finding vulns. Here are the slides for my presentation.

We shouldn’t chase an eternal BugOps strategy where an app’s security relies solely on fixing vulns found in production. We should be using vuln discovery as a feedback mechanism for improving DevOps processes and striving to automate ways to detect or prevent the flaws that manual analysis reveals.

And when we must turn to manual analysis, we should understand the metrics that help determine when it’s efficient, effective, and contributing to better appsec. This way we can begin to model successful approaches within constrained budgets.
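As a rough illustration of the kind of metrics involved, here’s a minimal sketch. The method names, report counts, and costs are entirely hypothetical, not figures from any real program; the point is only that signal ratio and cost per actionable vuln make different discovery methods comparable.

```python
# Toy metrics for comparing vuln discovery methods.
# All numbers below are hypothetical, not from any real bounty or pen test.

def signal_ratio(valid_reports: int, total_reports: int) -> float:
    """Fraction of submitted reports that were actionable vulns."""
    return valid_reports / total_reports if total_reports else 0.0

def cost_per_valid_vuln(total_cost: float, valid_reports: int) -> float:
    """Dollars spent (payouts, triage, testing fees) per actionable vuln."""
    return total_cost / valid_reports if valid_reports else float("inf")

# A hypothetical year of data for two discovery methods.
methods = {
    "bug bounty": {"valid": 40, "total": 800, "cost": 60_000},
    "pen test":   {"valid": 25, "total": 30,  "cost": 50_000},
}

for name, m in methods.items():
    print(f"{name}: signal={signal_ratio(m['valid'], m['total']):.0%}, "
          f"cost/vuln=${cost_per_valid_vuln(m['cost'], m['valid']):,.0f}")
```

Neither number alone decides the budget split; a noisy channel can still be cheap per valid vuln, and a high-signal one can still be slow. The value is in tracking both over time.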


OWASP AppSec EU 2017 Presentation

Here are the slides for my presentation at OWASP AppSec EU this year: The Flaws in Hordes, the Security in Crowds. It’s an exploration of data from bug bounty programs and pen tests that offers ways to evaluate when a vuln discovery strategy is efficient or cost-effective.

OWASP records the sessions. I’ll post an update once video is available. In the meantime, you can check out some background articles on my other blog and keep an eye out here for more content that expands on the concepts in the presentation.

Crowdsourced Security — The Good, the Bad, and the Ugly

In Sergio Leone’s epic three-hour western, The Good, the Bad, and the Ugly, the three main characters form shifting, uneasy alliances as they search for a cache of stolen gold. To quote Blondie (the Good), “Two hundred thousand dollars is a lot of money. We’re gonna’ have to earn it.”

Bug bounties have a lot of money. But you’re gonna’ have to earn it.

And if you’re running a bounty program you’re gonna’ have to spend it.


As appsec practitioners, our goal is to find vulns so we can fix them. We might share the same goal, just like those gunslingers, but we all have different motivations and different ways of getting there.

We also have different ways of discovering vulns, from code reviews to code scanners to web scanners to pen tests to bounty programs. If we’re allocating a budget for detecting, preventing, and responding to vulns, we need some way of determining what each share should be. That’s just as challenging as figuring out how to split a cache of gold three ways.

My presentation at Source Boston continues a discussion about how to evaluate whether a vuln discovery methodology is cost-effective and time-efficient. It covers metrics like the noise associated with unfiltered bug reports, strategies for reducing noise, keeping security testing in rhythm with DevOps efforts, and building collaborative alliances in order to ultimately reduce risk in an app.

Eternally chasing bugs isn’t a security strategy. But we can use bugs as feedback loops to improve our DevOps processes to detect vulns earlier, make them harder to introduce, and minimize their impact on production apps.

The American West is rife with mythology, and Sergio Leone’s films embrace it. Mythology gives us grand stories; sometimes it gives us insight into the quote-unquote human condition. Other times it merely entertains or serves as a time capsule of well-intentioned, but terribly incorrect, thought.

With metrics, we can examine particular infosec mythologies and our understanding or appreciation of them.

With metrics, we can select and build different types of crowds, whether we’re aiming for a fistful of high-impact vulns from pen testing or merely plan to pay bounties for a few dollars more.

After all, appsec budgets are a lot of money. You’re gonna’ have to earn it.

The Harry Callahan Postulate

What kind of weight do you put in different browser defenses?

  • Process separation?
  • Plugin isolation and sandboxes?
  • Tab isolation?
  • X-Frame-Options, X-XSS-Protection? Built-in reflected XSS protection? NoScript?
  • Automatic updates?
  • Anti-virus? Safe browsing lists?
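Some of those defenses are visible from the outside. A minimal sketch of checking which defense-in-depth headers a response sets, using only the standard library; the header list is illustrative, and note that X-XSS-Protection has since been superseded by Content-Security-Policy:

```python
# Check which common security headers are absent from a set of response
# headers (e.g. collected via urllib.request.urlopen(url).headers.items()).

SECURITY_HEADERS = [
    "Content-Security-Policy",
    "X-Frame-Options",
    "Strict-Transport-Security",
    "X-Content-Type-Options",
]

def missing_security_headers(headers: dict[str, str]) -> list[str]:
    """Return the well-known security headers not present (case-insensitive)."""
    present = {name.lower() for name in headers}
    return [h for h in SECURITY_HEADERS if h.lower() not in present]
```

A missing header isn’t automatically a vuln, but it’s a quick proxy for how much an app leans on the browser’s built-in defenses versus its own.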

Instead of creating a matrix to compare browsers, versions, and operating systems, try adopting the Harry Callahan Postulate:

Launch your browser. Open one tab for your web-based email, another for your online bank. Log in to both. Then click on one of the shortened links below. Being as this is the world wide web, the most dangerous web in the world, and would blow your data clean apart, you’ve got to ask yourself one question: Do I feel lucky?

Well, do ya punk?

. . .

Clicking on links is how the web works. It’s a default assumption that users are expected to click links, and it’s a disproportionate security burden to expect them to scrutinize the characters, TLS hygiene, or provenance of links.

If the presence or absence of a single lock icon conveys ambiguous meaning about “security”, then attempting to discern multiple characters will be even harder. That lock icon is more about identity, i.e. “this is the app your browser is talking to”, than security in the sense of, “it is safe to give information to this app.”

In an ideal world, we should be able to click on any link without risk of that action impacting the security context or relationship with an app unassociated with the origin in that link.

Think of this like an Other Origin Policy for the persona associated with each app you use. One origin shouldn’t have an unintended effect on the security context of another. When it does have an intended effect, it should be an interactive one that requires explicit approval from the user. It shouldn’t be a silent killer. (Well-informed approval is yet another challenge.)
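Browsers have since grown a mechanism in this spirit: the SameSite cookie attribute, which keeps a session cookie from riding along on cross-site requests. A minimal sketch using Python’s stdlib (the cookie name and value are hypothetical):

```python
# Build a session cookie that cross-site requests won't silently carry.
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "abc123"            # hypothetical session identifier
cookie["session"]["samesite"] = "Lax"   # withhold on cross-site POSTs
cookie["session"]["secure"] = True      # only send over HTTPS
cookie["session"]["httponly"] = True    # keep it away from script access

# The value a server would emit in its Set-Cookie response header.
header = cookie["session"].OutputString()
print(header)
```

`SameSite=Lax` (now the default in major browsers) blunts classic CSRF; `Strict` goes further but breaks top-level navigation flows like following a link into a logged-in app.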

Even so, CSRF countermeasures can’t protect against social engineering attacks and many effective XSS exploits happily work within the Same Origin Policy.

In the real world, users must keep their browsers up to date, and they should remove historically vulnerable and ever-contaminated plugins like Flash and Java. But they must also rely on browser vendors to build software with strong isolation. They must rely on app developers to implement resilient designs like enforcing HTTPS connections, implementing pervasive anti-CSRF tokens, and offering multi-factor authentication.
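One common shape for those anti-CSRF tokens is an HMAC tied to the user’s session; a minimal sketch, with key management and token rotation deliberately elided:

```python
# One common anti-CSRF pattern: derive a per-session token server-side,
# embed it in each form, and verify it on every state-changing request.
import hashlib
import hmac
import secrets

SECRET_KEY = secrets.token_bytes(32)  # hypothetical server-side key

def csrf_token(session_id: str) -> str:
    """Token the app embeds in forms rendered for this session."""
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def verify_csrf(session_id: str, submitted: str) -> bool:
    """Constant-time comparison of the token submitted with a request."""
    return hmac.compare_digest(csrf_token(session_id), submitted)
```

A forged cross-site request can trigger the POST but can’t read the token out of the victim’s page, so verification fails; `compare_digest` avoids leaking the token through timing.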

With such distributed responsibility, it’s not hard to see why errors happen. The Other Origin Policy is an aspirational goal. With effective appsec, clicking on a malicious link should lead to nothing worse than an, “Oops!”.

Eventually, you’ll feel comfortable enough to click on any link. Until then, we’ll have to continue educating users, creating safe default behaviors and safe default decisions within browsers, and improving the security architecture of apps.