The Forlorn Followup

Close to a year ago I wrote an article, one that periodically gets resuscitated on Twitter, decrying the futility of pen testing. In relatively stark terms, it called out reasons why manual web testing remains important, but insufficient, inconsistent, and imperfect. The intent was to push the boundaries of the comfort zone in which we accept, “It’s always been done this way and therefore always will.”

Recently Haroon Meer explored this topic more thoroughly at 44con with the presentation, “Penetration Testing considered harmful today”. I recommend watching the recorded presentation or reviewing the slides. Just as the topic has enormous potential for concern trolling and indifferent dismissal, it has potential for constructive discussion. If you’re a pen tester, set aside the idea that you’ve been personally attacked by the mere hint of criticism. Instead, consider the points made about the importance of pen test quality, evaluating real-world threats vs. the threat posed by a single test team, and how or why large organizations still suffer compromises (from Sony to RSA to certificate authorities). Answer those questions well and you’ll establish yourself as a premium service rather than a disposable commodity.

The question, “Are we actually improving anything?” isn’t unique to pen testing. Security software needs the same attention (thus the recurring questions about whether AV remains relevant[1]). Last year’s ModSecurity challenge produced several good lessons (and its organizers should be commended for their transparency). One interesting lesson related to pen testing was that the “Time to Hack” a site with SQL injection was about 10 hours. Don’t generalize that number beyond the challenge itself, but consider how relatively short that is in terms of finding and exploiting a vulnerability — all the while bypassing a basic set of ModSecurity rules. Would the pen testing team you hired be as efficient or effective? And then what would you do if you had 100 similar sites to review? Hire another 100 pen test teams?

Let’s return to web app testing. I previously lamented the lack of coherent formats for sharing test results. Static PDF files are poor enablers of improving and maintaining security after a pen test, regardless of how well-described a vulnerability may be. Instead of (or in addition to) a snapshot of the app’s security posture, a collection of reusable data would help developers not only review findings, but also ensure those vulnerabilities aren’t reintroduced.
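
As a hedged illustration of what that reusable data could enable, here is a minimal sketch of a regression check that replays recorded findings. The findings.json file, its fields, and the URLs are hypothetical placeholders (none come from an actual deliverable), and it assumes a recent Node.js with the built-in fetch:

```javascript
// Minimal sketch: replay recorded findings as a regression check.
// Assumes a hypothetical findings.json with entries like:
//   { "id": "XSS-01", "method": "GET",
//     "url": "https://staging.example.com/search?q=%3Cscript%3Ealert(1)%3C%2Fscript%3E",
//     "evidence": "<script>alert(1)</script>" }
const fs = require('fs');

async function replay(finding) {
  const res = await fetch(finding.url, { method: finding.method });
  const body = await res.text();
  // If the recorded evidence still appears in the response, the vulnerability
  // has likely been reintroduced (or was never fixed).
  return body.includes(finding.evidence);
}

(async () => {
  const findings = JSON.parse(fs.readFileSync('findings.json', 'utf8'));
  for (const f of findings) {
    const reproduces = await replay(f);
    console.log(`${f.id}: ${reproduces ? 'still reproduces' : 'no longer reproduces'}`);
  }
})();
```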

I don’t yet have a complete picture to share of what this web testing lingua franca would be. A first step is taking something like Selenium: Open Source, well supported, and based on the universal web language JavaScript. The HTTP Archive (HAR) format also promises to be useful in this regard (a rough sketch of replaying a HAR entry follows the list below). The ultimate goal would be to:

  • Provide a common format for reproducing proof of a vulnerability.
  • With relatively self-explanatory documentation (web developers should be familiar with JavaScript regardless of whether the site uses PHP, Ruby, Java, C#, C++, QBasic, etc.).
  • In a manner that can be easily understood by another pen tester (if you’re a pen tester and don’t know basic JavaScript…).
  • In a manner that can be easily executed by a non-technical consumer (e.g. just need a browser and an Open Source plugin).
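
To make the HAR half of this concrete, the sketch below re-issues a single recorded request from a HAR capture. The file name and entry index are placeholders, and it again assumes a recent Node.js with built-in fetch; the point is that anyone can reproduce the traffic behind a finding without specialized tooling:

```javascript
// Replay one entry from an HTTP Archive (HAR) capture of a finding.
const fs = require('fs');

const har = JSON.parse(fs.readFileSync('finding.har', 'utf8'));
const entry = har.log.entries[0]; // the request that demonstrated the vuln
const { method, url, headers, postData } = entry.request;

// Rebuild the header list, skipping values the client recalculates on its own.
const headerObj = {};
for (const h of headers) {
  if (!/^(host|content-length|connection)$/i.test(h.name)) {
    headerObj[h.name] = h.value;
  }
}

fetch(url, {
  method,
  headers: headerObj,
  body: postData ? postData.text : undefined, // only present for POST/PUT entries
}).then(async (res) => {
  console.log(`${res.status} ${res.statusText}`);
  console.log((await res.text()).slice(0, 500)); // first chunk of the response as evidence
});
```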

Selenium isn’t perfectly suited as a universal approach to cataloging web vulns, but it’s close. On the one hand, with JavaScript you don’t have to leave the browser. On the other, a Selenium script still needs some massaging to deal with form-based authentication or otherwise create a session context to reach the vulnerable resource. Both of these are feasible, so this isn’t a drawback inherent to the tool. However, it will be limited by the Same Origin Policy and other browser restrictions (which could make reproducing cookie- or header-based attacks difficult).
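
For the form-based authentication case, a hedged sketch using the selenium-webdriver bindings for Node.js might look like the following. Every URL, field name, and payload is an illustrative placeholder; the structure is the point: log in through the app’s own form to establish the session context, then reproduce the finding inside that authenticated session.

```javascript
// Sketch: build a session via form login, then reproduce a finding inside it.
const { Builder, By, until } = require('selenium-webdriver');

(async () => {
  const driver = await new Builder().forBrowser('firefox').build();
  try {
    // 1. Log in through the app's own form to obtain session cookies.
    await driver.get('https://staging.example.com/login');
    await driver.findElement(By.name('username')).sendKeys('test-account');
    await driver.findElement(By.name('password')).sendKeys('not-a-real-password');
    await driver.findElement(By.css('form button[type="submit"]')).click();
    await driver.wait(until.urlContains('/dashboard'), 10000);

    // 2. Visit the vulnerable resource with the recorded payload, e.g. a
    //    reflected XSS in a search parameter (payload is illustrative only).
    await driver.get('https://staging.example.com/search?q=%3Cscript%3Ealert(document.domain)%3C%2Fscript%3E');
    const source = await driver.getPageSource();
    console.log(source.includes('<script>alert(document.domain)</script>')
      ? 'Payload reflected unencoded: finding reproduces'
      : 'Payload encoded or filtered: finding does not reproduce');
  } finally {
    await driver.quit();
  }
})();
```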

In another example scenario, you’d have to figure out how to modify the Selenium script to bypass client-side filters. (Client-side filters are legitimate for limiting unnecessary traffic by preventing honest users from making honest mistakes. This isn’t an endorsement of client-side filters, but a nod to the reality that such filtering code would have to be dealt with.) Again, this could be done, but likely with raw JavaScript rather than any of Selenium’s pre-defined functions.
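
One way to combine Selenium’s API with that raw JavaScript is executeScript(), which writes the payload straight into the DOM rather than typing it through the filtered input. The selector and payload below are illustrative only, and the function assumes a driver session like the one in the previous sketch:

```javascript
const { By } = require('selenium-webdriver');

// Assumes an already-authenticated `driver` such as the one built above.
async function submitPastClientFilter(driver) {
  const field = await driver.findElement(By.name('comment')); // placeholder field name
  // sendKeys() would run the page's own input filter; executeScript() sets the
  // value directly, so what actually gets exercised is the server-side handling.
  await driver.executeScript(
    'arguments[0].value = arguments[1];',
    field,
    '"><script>alert(document.domain)</script>' // illustrative payload only
  );
  await driver.executeScript('arguments[0].form.submit();', field);
}
```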

Whether or not you agree with the “Futility” article or Haroon’s presentation, trolling 140 characters at a time adds little to the conversation. Why not find a better outlet to prove pen testing is already perfect (!?) or to improve its accepted deficiencies? If you can code, there are Open Source projects worth contributing to.

If you’d rather write words than code, there are projects like the OWASP Testing Guide, or you could add more language-oriented examples of countermeasures for the OWASP Top 10. It never hurts to improve the signal-to-noise ratio of web security.

=====
[1] The sandboxing in mobile devices lessens the utility of anti-virus in the desktop sense; however, tools to protect privacy, detect malicious apps, or detect undesirable apps (that intentionally scrape data) are important.

2 thoughts on “The Forlorn Followup”

  1. In the past, I have suggested using the W3AF XML export plugin as the primary input to Dradis, if at all possible. You can get any HTTP/TLS session into W3AF by using the spiderMan discovery plugin (it’s just a web proxy) along with the user_defined_regex grep plugin. You can also export specific findings with the Export Request Tool, which supports HTML, Ajax, Python, and Ruby formats.

    While Capybara, Selenium IDE, Selenium RC/Server, Watir-WebDriver, etc. seem about as good options as any, I think that Stephen de Vries’ new BDD stuff is also quite interesting.

    Other alternatives include using Burp XML or Burp session files (with all of the findings in scanner/repeater), or Tamper Data XML. These will also import into Dradis or similar without much tweaking.

    I work for HP, where we have a lot of convergence with the Web Services API interfaces supported by Fortify 360 SSC and AWB, and with the FPR file format. WebInspect 9.2 was announced today, and you’ll see support for directly exporting FPR files or syncing with a Fortify SSC server, in addition to easier linking of custom (or pre-generated) vulnerabilities with custom HTTP headers/content/behavior (including HTTP pages, links, requests, responses, etc.).

    The OWASP O2 Project also recently announced a 4.0 release which supports many of the practices we describe within this blog post and commentary.

    There are actually quite a lot of options out there for performing data-driven application penetration tests, and many people are, in fact, doing this in 2012.

  2. A wealth of options is good. I very much like the goal of O2, which would be a self-encapsulating way of repeating vulns that requires little web knowledge on the part of the user (and enables more knowledgeable users to create sophisticated scripts). The first post merely wished that O2 had created some sort of grammar that would work (or be adaptable) outside of .NET.

    One step is making tools and data available. A subsequent step is incorporating that data into deliverables and processes that lend themselves to efficient repeat testing. The step between those two is making the data easier to consume across the range of people concerned about web security, from the smiley face/frowny face image shown to a CEO to raw HTTP traffic for devs.

    I gave more focus to Selenium because many QA groups may already be familiar with it and it lends itself to unattended/distributed scanning. I purposely avoided mentioning OVAL and XML formats because I think the coolest thing would be a fully browser-driven mechanism like pure JavaScript (which sadly can’t work “out of the box” for all scenarios as I mention in the article).

    The post is as much for organizations looking to buy pen testing work. Do they require a Burp/Tamper Data/ZAP/etc. dump as part of the deliverable? That would be a smart move. Are they even able to reproduce tests a week later? Months later? Even if the pen testers aren’t available? It’s not about forcing use of a particular tool. It’s about getting data that works with the organization’s web security program…assuming the program consists of more than a virtual stack of PDFs and screenshots ;)
