RVAsec 2017: Managing Crowdsourced Security Testing

This June at RVAsec 2017 I continued the discussion of metrics that reflect the effort spent on vuln discovery via crowdsourced models. The talk analyzes data from real-world bounty programs and pen tests to measure how both time and money might be invested wisely in finding vulns. Here are the slides for my presentation.

We shouldn’t chase an eternal BugOps strategy where an app’s security relies solely on fixing vulns found in production. We should be using vuln discovery as a feedback mechanism for improving DevOps processes and striving to automate ways to detect or prevent the flaws that manual analysis reveals.

And when we must turn to manual analysis, we should understand the metrics that help determine when it’s efficient, effective, and contributing to better appsec. This way we can begin to model successful approaches within constrained budgets.


RVAsec 2013: JavaScript Security & HTML5

Here are the slides for my presentation at this year’s RVAsec, JavaScript Security & HTML5. Thanks to all who attended!

RVAsec, held in Richmond, VA, is a relatively new conference, but one complete with hardware badges, a capture-the-flag contest, and pizza and donuts for breakfast. So, yeah, mark your calendar for next year; it’s a worthwhile trip.

This was an iteration on the web security topics I’ve been focused on for the last several months, so you’ll notice many familiar concepts from previous presentations. (And some more emphasis on privacy, which shouldn’t be forgotten on the modern web.) A great thing about being able to talk on these subjects is that it gives me a chance to improve the content based on feedback and questions, and adjust the flow to keep it engaging. Now I’m at the point where I have enough material to take off on new tangents and build new content — it’ll be a busy summer.