Ignore the OWASP Top 10 in Favor of Mike’s Top 10

Code will always have flaws. Lists will always be in tens. Appsec will always be necessary. Hopefully, it will sometimes be effective.

But let’s get back to the OWASP Top 10. This post’s title implies there’s some compelling reason to ignore it. It’s helpful for nomenclature and an introduction to web security, but it shouldn’t be misinterpreted as a prioritized list nor treated as a definitive treatise on risk. To be fair, that’s not what it’s trying to do — it’s a concise reference that establishes a universal way of discussing web vulns. In fact, you should shift attention to the OWASP Application Security Verification Standard (ASVS) for a superior methodology for evaluating the security of a web app.

If you love the OWASP Top 10 list, continue to reference it and share it as an educational tool — that’s important. The curious may read on. The impatient should skip to the last paragraph.

The list originated in 2004 as a clone of the SANS Top 10 for Windows and Unix vulnerabilities (which were ostensibly the most popular ways those operating systems were compromised). The list made an initial mistake of putting itself forward as a standard, which encouraged adoption without comprehension — taking the list as a compliance buzzword rather than a security starting point. The 2010 update mercifully raises red flags about the danger of falling into the trap of myopic adherence to the list.

There is a suspicious confirmation bias in the list such that popular, trendy vulnerabilities rise to the top, possibly because that’s what researchers are looking for. And these researchers are coming from a proactive rather than forensics perspective of web security, meaning that they rely on vulnerabilities discovered in a web site vs. vulnerabilities actively exploited to compromise a web site. This isn’t bad or misleading data, just the data that’s most available.

Two of the list’s metrics, Prevalence and Detectability, appear curiously correlated. A vulnerability that’s easy to detect (e.g. cross-site scripting) has a widespread prevalence. This is an interesting relationship: Are they widespread because they’re easy to detect? This question arises because the entry for A7, Insecure Cryptographic Storage, has a difficult detectability and (therefore?) uncommon prevalence. Yet the last few months provided clear instances of web sites that stored users’ passwords unsalted and weakly hashed[1]. This seems to reinforce the idea that the list is also biased towards a blackbox perspective of web applications at the expense of viewpoints from developers, architects, or code reviews.

Six of the list’s entries have easy detectability. This seems strange. As a rhetorical question: If more than half of the risks to a web application are easy to find, why do these problems remain so widespread that site owners can’t squash them? Vulns like HTML injection are everywhere, seemingly resistant to the efforts of even sophisticated security teams to stamp out. Maybe this means detectability isn’t so easy when dealing with large, complex systems.

One caution about over-emphasizing the top 10 list from an external blackbox or scanner perspective is that such testing tends to only see the implementation of the target app, not necessarily its design. Design is where developers can add barriers that better protect data or insert controls that reduce the impact of a compromise.

One way to think about web app vulns is whether they are design or implementation errors. Design errors may be a fundamental weakness in the app; implementation errors might just be a mistake that weakened an otherwise strong design. An example of a design error is cross-site request forgery (CSRF). CSRF exploits the natural, expected way a browser retrieves resources for HTML tags like iframe and img. The mechanism of attack was present from the first form tag in the ’90s, but didn’t reach critical mass (i.e. inclusion on the Top 10) until 2007. SQL injection is another design error: the ever-insecure commingling of code and data. SQL injection occurs because the grammar of a database query could be modified by the data used by the query.

Strong design enables developers to address the systemic cause of a vuln, rather than tackle instances of the vuln one by one. For both CSRF and SQL injection, it’s possible to design web apps that resist these entire classes of vulns. And a vuln that does occur is more likely to be an implementation error, e.g. an area where a developer forgot to assign an anti-CSRF token or neglected to use a prepared statement for a SQL query. Good design doesn’t make an app perfectly secure, but it does reduce the risk associated with it.
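As a minimal sketch of the prepared-statement idea, here is the contrast between commingling data with query grammar and keeping the two separate, using Python's standard sqlite3 module (the table and data are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

# Design flaw: string concatenation lets attacker-supplied data rewrite the
# query's grammar. The OR clause turns a lookup into "return everything."
name = "alice' OR '1'='1"
unsafe_query = "SELECT email FROM users WHERE name = '" + name + "'"
print(conn.execute(unsafe_query).fetchall())  # returns every row

# Strong design: a parameterized query treats the input purely as data,
# so the malicious string simply matches no user.
safe_rows = conn.execute(
    "SELECT email FROM users WHERE name = ?", (name,)
).fetchall()
print(safe_rows)  # []
```

The point isn't the specific API; it's that the parameterized form removes the entire vuln class by design, leaving only implementation slips (a developer forgetting to use it) to find.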

Alas, not every vulnerability gets the secure by design treatment. HTML injection attacks (my preferred name for cross-site scripting) seem destined to be the ever-living cockroaches of the web. Where SQL injection combines user-supplied data with code for executing database queries, HTML injection combines user-supplied data with HTML. No one yet has created a reliable “prepared statement/parameterized query” equivalent for updating the DOM. Content Security Policy headers are a step towards mitigating the impact of XSS exploits, but the headers don’t address their underlying cause.
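Lacking that parameterized equivalent, the fallback is context-aware encoding at every point where data meets markup. A sketch for the simplest case, the HTML text context, using Python's standard html module (other contexts, like attributes or script blocks, need different encoders):

```python
import html

comment = "<script>alert(document.cookie)</script>"

# Unsafe: user data concatenated into markup becomes executable HTML.
unsafe = "<p>" + comment + "</p>"

# Safer for the HTML text context: escape the characters that change the
# grammar, so the payload renders as inert text instead of running.
safe = "<p>" + html.escape(comment) + "</p>"
print(safe)  # <p>&lt;script&gt;alert(document.cookie)&lt;/script&gt;</p>
```

This is exactly the whack-a-mole the paragraph above describes: the fix must be repeated at every output point, which is why HTML injection keeps surviving.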

This doesn’t mean implementation errors can’t be dealt with: Coding guidelines provide secure alternatives to common patterns, frameworks enable consistent use of recommended techniques, and automated scanners provide a degree of verification.

Still, this delineation of design and implementation plays second fiddle to the Siren’s Song of the OWASP Top 10. All too often (at least, anecdotally) the phrase, “Does it scan for the OWASP Top 10?” or “How does this compare to the OWASP Top 10?” arises when discussing a scanner’s capability or the outcome of a penetration test. This isn’t the list’s fault, but the inquirer’s. The list continues to drive web security in spite of the fact that it mixes narrowly defined, easy-to-detect problems like HTML injection with broad, catch-all categories like Broken Authentication and Session Management. After all, twenty years after the first web sites arose, modern apps still ask for an email address and password to authenticate users.

The Common Weakness Enumeration (CWE) provides a complement to the OWASP Top 10. To quote from the site, CWE “provides a unified, measurable set of software weaknesses that is enabling more effective discussion, description, selection, and use of software security tools and services that can find these weaknesses in source code and operational systems as well as better understanding and management of software weaknesses related to architecture and design.”

Several of the weaknesses aren’t even specific to web applications, but they’ve clearly informed attacks against web applications. CSRF evolved from the “Confused Deputy” described in 1988. SQL injection and HTML injection have ancestors in Unix command injection used against SMTP and finger daemons. Pointing developers to these concepts provides a richer background on security.

If you care about your site’s security, engage your developers and security team in the site’s design and architecture. Use automation and manual testing to verify its implementation. Keep the OWASP Top 10 list around as a reference for vulns that plague web apps. These vulns won’t be the only way attackers target web apps, but they’re common enough that they should be dealt with from the app’s beginning.

As a final experiment, invert the sense of the attacks and weaknesses to a prescriptive list of Mike’s Top 10:

M1. Validate all data from the client (e.g. browser or mobile app) for length and content.
M2. Sanitize or encode data for the appropriate context before it is displayed in the client.
M3. Apply strong authentication schemes; at a minimum, support multi-factor authentication.
M4. Use a cryptographic PRNG or random UUIDs when access control relies on knowledge of a value (e.g. when a shared secret must be part of a URL).
M5. Enforce strong session management and workflows (e.g. use anti-CSRF tokens).
M6. Configure the platform securely.
M7. Use established, recommended cryptographic systems, ciphers, and algorithms.
M8. Apply authorization checks consistently.
M9. Use HTTPS by default, with immediate upgrade from HTTP (e.g. HSTS). Set TLS 1.2 as the default, preferably minimum, protocol version.
M10. Restrict data to expected values and length.
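To make M4 and M5 concrete, here is a sketch using Python's standard secrets module (which draws from a cryptographic PRNG) and a constant-time comparison for the anti-CSRF check; the names and token sizes are illustrative choices, not requirements:

```python
import hmac
import secrets
import uuid

# M4: unguessable values for tokens or capability-style URLs.
reset_token = secrets.token_urlsafe(32)  # ~256 bits of randomness
record_id = uuid.UUID(bytes=secrets.token_bytes(16), version=4)

# M5: an anti-CSRF token generated per session, echoed in each form,
# and compared in constant time on submission.
session_token = secrets.token_hex(32)

def csrf_ok(submitted: str) -> bool:
    # hmac.compare_digest avoids leaking match position via timing.
    return hmac.compare_digest(session_token, submitted)

print(csrf_ok(session_token))   # True
print(csrf_ok("forged-value"))  # False
```

The design choice worth noting: never reach for random.random() or timestamps for these values; only a cryptographic source makes the "knowledge of a value" in M4 actually hard to guess.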

These can be active steps for developers to design and implement with their app, rather than a list of attacks to avoid. And once you’ve started down the path of security awareness for developers, make the OWASP Application Security Verification Standard your next destination.


Updated March 2017 for typos, style, and recommendation tweaks.

[1] This used to point to an example from 2010. Every year since has provided numerous examples to support the assertion.