OWASP AppSec EU 2017 Presentation

Here are the slides for my presentation at OWASP AppSec EU this year: The Flaws in Hordes, the Security in Crowds. It’s an exploration of data from bug bounty programs and pen tests that offers ways to evaluate when a vuln discovery strategy is efficient or cost-effective.

OWASP records the sessions. I’ll post an update once video is available. In the meantime, you can check out some background articles on my other blog and keep an eye out here for more content that expands on the concepts in the presentation.

OWASP/ISSA Bletchley Park 2012, Graveyards & Zombies

The May 10th OWASP/ISSA meeting at Bletchley Park was a chance to discuss web security, but the bigger draw was visiting the home of British code-breaking during WWII. It was astonishing to realize how run down the buildings had become. The site’s long-held secrecy ensured disrepair and inattention that is still being remedied. Nevertheless, it’s one of the most rewarding 30-minute train trips you can take from London.

On a different note, here are the slides for my presentation on Graveyards & Zombies — observations on vulns that should have been quashed by good design, but continue to vex web security.

Ignore the OWASP Top 10 in Favor of Mike’s Top 10

Code will always have flaws. Lists will always be in tens. Appsec will always be necessary. Hopefully, it will sometimes be effective.

But let’s get back to the OWASP Top 10. This post’s title implies there’s some compelling reason to ignore it. It’s helpful for nomenclature and an introduction to web security, but it shouldn’t be misinterpreted as a prioritized list nor treated as a definitive treatise on risk. To be fair, that’s not what it’s trying to do — it’s a concise reference that establishes a universal way of discussing web vulns. In fact, you should shift attention to the OWASP Application Security Verification Standard (ASVS) for a superior methodology for evaluating the security of a web app.

If you love the OWASP Top 10 list, continue to reference it and share it as an educational tool — that’s important. The curious may read on. The impatient should skip to the last paragraph.

The list originated in 2004 as a clone of the SANS Top 10 for Windows and Unix vulnerabilities (which were ostensibly the most popular ways those operating systems were compromised). The list made an initial mistake of putting itself forward as a standard, which encouraged adoption without comprehension — taking the list as a compliance buzzword rather than a security starting point. The 2010 update mercifully raises red flags about the danger of falling into the trap of myopic adherence to the list.

There is a suspicious confirmation bias in the list such that popular, trendy vulnerabilities rise to the top, possibly because that’s what researchers are looking for. And these researchers come from a proactive rather than a forensic perspective of web security, meaning they rely on vulnerabilities discovered in a web site vs. vulnerabilities actively exploited to compromise a web site. This isn’t bad or misleading data, just the data that’s most available.

Two of the list’s metrics, Prevalence and Detectability, appear curiously correlated. A vulnerability that’s easy to detect (e.g. cross-site scripting) has a widespread prevalence. This is an interesting relationship: Are they widespread because they’re easy to detect? This question arises because the entry for A7, Insecure Cryptographic Storage, has a difficult detectability and (therefore?) uncommon prevalence. Yet the last few months marked clear instances of web sites that stored users’ passwords with no salt and poor encryption[1]. This seems to reinforce the idea that the list is also biased towards a blackbox perspective of web applications at the expense of viewpoints from developers, architects, or code reviews.

Six of the list’s entries have easy detectability. This seems strange. As a rhetorical question: If more than half of the risks to a web application are easy to find, why do these problems remain so widespread that site owners can’t squash them? Vulns like HTML injection are everywhere, seemingly resistant to the efforts of even sophisticated security teams to stamp out. Maybe this means detectability isn’t so easy when dealing with large, complex systems.

One caution about over-emphasizing the top 10 list from an external blackbox or scanner perspective is that such testing tends to only see the implementation of the target app, not necessarily its design. Design is where developers can add barriers that better protect data or insert controls that reduce the impact of a compromise.

One way to think about web app vulns is whether they are design or implementation errors. Design errors may be a fundamental weakness in the app; implementation errors might just be a mistake that weakened an otherwise strong design. An example of a design error is cross-site request forgery (CSRF). CSRF exploits the natural, expected way a browser retrieves resources for HTML tags like iframe and img. The mechanism of attack was present from the first form tag in the ’90s, but didn’t reach critical mass (i.e. inclusion on the Top 10) until 2007. SQL injection is another design error: the ever-insecure commingling of code and data. SQL injection occurs because the grammar of a database query could be modified by the data used by the query.

Strong design enables developers to address the systemic cause of a vuln, rather than tackle instances of the vuln one by one. For both CSRF and SQL injection, it’s possible to design web apps that resist these entire classes of vulns. And a vuln that does occur is more likely to be an implementation error, e.g. an area where a developer forgot to assign an anti-CSRF token or neglected to use a prepared statement for a SQL query. Good design doesn’t make an app perfectly secure, but it does reduce the risk associated with it.
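The prepared-statement point can be sketched with Python’s built-in sqlite3 module. This is an illustrative example of my own, not code from the original post; the table and inputs are made up:

```python
import sqlite3

# In-memory database with a sample users table (illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

attacker_input = "' OR '1'='1"

# Vulnerable pattern: concatenation lets the data rewrite the query grammar.
unsafe = "SELECT email FROM users WHERE name = '" + attacker_input + "'"
# conn.execute(unsafe) returns every row -- the WHERE clause became a tautology.

# Prepared-statement pattern: the grammar is fixed; the driver binds the
# input strictly as data.
rows = conn.execute(
    "SELECT email FROM users WHERE name = ?", (attacker_input,)
).fetchall()
print(rows)  # [] -- the malicious string matched no user name
```

The parameterized version resists the entire class of SQL injection by construction; a missed placeholder then becomes an implementation slip rather than a design flaw.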

Alas, not every vulnerability gets the secure by design treatment. HTML injection attacks (my preferred name for cross-site scripting) seem destined to be the ever-living cockroaches of the web. Where SQL injection combines user-supplied data with code for executing database queries, HTML injection combines user-supplied data with HTML. No one yet has created a reliable “prepared statement/parameterized query” equivalent for updating the DOM. Content Security Policy headers are a step towards mitigating the impact of XSS exploits, but the headers don’t address their underlying cause.
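In lieu of a parameterized DOM, per-context output encoding remains the standard mitigation. A minimal sketch using Python’s stdlib html module (my own illustration, not the post’s):

```python
import html

user_input = "<script>alert(1)</script>"

# Encoding for the HTML text context turns markup characters into entities,
# so the browser renders the payload as text instead of executing it.
encoded = html.escape(user_input)
print(encoded)  # &lt;script&gt;alert(1)&lt;/script&gt;

# escape() also encodes quotes by default, which matters when the data
# lands inside an attribute value.
attr_safe = html.escape('" onmouseover="alert(1)')
```

The catch, and the reason HTML injection persists, is that each output context (text, attribute, URL, JavaScript) needs its own encoding, and developers must apply the right one everywhere.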

This doesn’t mean implementation errors can’t be dealt with: Coding guidelines provide secure alternatives to common patterns, frameworks enable consistent usage of recommended techniques, automated scanners provide a degree of verification.

Still, this delineation of design and implementation plays second fiddle to the Siren’s Song of the OWASP Top 10. All too often (at least, anecdotally) the phrase, “Does it scan for the OWASP Top 10?” or “How does this compare to the OWASP Top 10?” arises when discussing a scanner’s capability or the outcome of a penetration test. This isn’t the list’s fault, but the inquirer’s. The list continues to drive web security in spite of the fact that applications have narrowly-defined, easy-to-detect problems like HTML injection as well as widely-defined, broad, combined concepts like Broken Authentication and Session Management. After all, twenty years after the first web sites arose, modern apps still ask for an email address and password to authenticate users.

The Common Weakness Enumeration (CWE) provides a complement to the OWASP Top 10. To quote from the site, CWE “provides a unified, measurable set of software weaknesses that is enabling more effective discussion, description, selection, and use of software security tools and services that can find these weaknesses in source code and operational systems as well as better understanding and management of software weaknesses related to architecture and design.”

Several of the weaknesses aren’t even specific to web applications, but they’ve clearly informed attacks against web applications. CSRF evolved from the “Confused Deputy” described in 1988. SQL injection and HTML injection have ancestors in Unix command injection used against SMTP and finger daemons. Pointing developers to these concepts provides a richer background on security.

If you care about your site’s security, engage your developers and security team in the site’s design and architecture. Use automation and manual testing to verify its implementation. Keep the OWASP Top 10 list around as a reference for vulns that plague web apps. These vulns won’t be the only way attackers target web apps, but they’re common enough that they should be dealt with from the app’s beginning.

As a final experiment, invert the sense of the attacks and weaknesses to a prescriptive list of Mike’s Top 10:

M1. Validate all data from the client (e.g. browser or mobile app) for length and content.
M2. Sanitize or encode data for the appropriate context before it is displayed in the client.
M3. Apply strong authentication schemes, at a minimum support multi-factor.
M4. Use cryptographic PRNG or UUIDs when access control relies on knowledge of a value (e.g. when a shared-secret must be part of a URL).
M5. Enforce strong session management and workflows (e.g. use anti-CSRF tokens).
M6. Configure the platform securely.
M7. Use established, recommended cryptographic systems, ciphers, and algorithms.
M8. Apply authorization checks consistently.
M9. Use HTTPS by default, with immediate upgrade from HTTP (e.g. HSTS). Set TLS 1.2 as the default, preferably minimum, protocol version.
M10. Restrict data to expected values and length.
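M4 and M5 can be sketched with Python’s stdlib secrets and hmac modules. The names and token sizes below are illustrative assumptions, not prescriptions:

```python
import hmac
import secrets

# M4: values from a cryptographic PRNG are unguessable, so a shared-secret
# URL (e.g. a password-reset link) can't be enumerated.
reset_token = secrets.token_urlsafe(32)

# M5: a per-session anti-CSRF token, checked on every state-changing request.
session_csrf_token = secrets.token_urlsafe(32)

def csrf_token_valid(submitted: str) -> bool:
    # Constant-time comparison avoids leaking the token via timing.
    return hmac.compare_digest(session_csrf_token, submitted)

print(csrf_token_valid(session_csrf_token))  # True
print(csrf_token_valid("attacker guess"))    # False
```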

These can be active steps for developers to design and implement with their app, rather than a list of attacks to avoid. And once you’ve started down the path of security awareness for developers, be sure to schedule your next destination to the OWASP Application Security Verification Standard soon.


Updated March 2017 for typos, style, and recommendation tweaks.

[1] This used to point to an example from 2010. Every year since has provided numerous examples to support the assertion.

30% of the 2010 OWASP Top 10 not common, only 1 hard to detect

One curious point about the new 2010 OWASP Top 10 Application Security Risks is that 30% of them aren’t even common. The “Weakness Prevalence” for each of Insecure Cryptographic Storage (A7), Failure to Restrict URL Access (A8), and Unvalidated Redirects and Forwards (A10) is rated uncommon. That doesn’t mean that an uncommon risk can’t be a critical one; these three points highlight the challenge of producing an unbiased list and conveying risk.

Risk is complex and difficult to quantify. The OWASP Top 10 includes a What’s My Risk? section that provides guidance on how to interpret the list. The list is influenced by the experience of people who perform penetration tests, code reviews, or conduct research on web security.

The Top 10 rates Insecure Cryptographic Storage (A7) with an uncommon prevalence and difficult to detect. One of the reasons it’s hard to detect is that back-end storage schemes can’t be reviewed by blackbox scanners (i.e. web scanners) nor can source code scanners point out these problems other than by indicating misuse of a language’s crypto functions. So, one interpretation is that insecure crypto is uncommon only because more people haven’t discovered or revealed such problems. Yet not salting a password hash is one of the most egregious mistakes a web app developer can make and one of the easier problems to fix — the practice of salting password hashes has been around since Unix epoch time was in single digits.
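A minimal sketch of salted password hashing with Python’s stdlib (the PBKDF2 parameters are illustrative assumptions, not a recommendation from the post):

```python
import hashlib
import secrets

password = b"correct horse battery staple"

# A random per-user salt makes identical passwords hash to different values,
# defeating precomputed lookup (rainbow-table) attacks.
salt = secrets.token_bytes(16)
iterations = 600_000  # illustrative work factor
stored = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)

# Verification re-derives the hash with the stored salt and compares.
candidate = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
print(candidate == stored)  # True
```

The salt is stored in the clear alongside the hash; its job is uniqueness, not secrecy.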

It’s also interesting that insecure crypto is the only one on the list that’s rated difficult to detect. On the other hand, Cross-Site Scripting (A2) is “very widespread” and “easy” to detect. But maybe it’s very widespread because it’s so trivial to find. People might simply focus on searching for vulns that require minimal tools and skill to discover. Alternately, XSS might be very widespread because it’s not easy to find in a way that scales with development or keeps up with the complexity of web apps. (Of course, this assumes that devs are looking for it in the first place.) XSS tends to be easy for manual testing, whereas scanners need to effectively crawl an app before they can go through their own detection techniques.

Broken Authentication and Session Management (A3) covers brute force attacks against login pages. It’s an item whose risk is too-often demonstrated by real-world attacks. In 2010 the Apache Foundation suffered an attack that relied on brute forcing a password. (Apache’s network had a similar password-driven intrusion in 2001. These events are mentioned because of the clarity of their postmortems, not to insinuate that the org is inherently insecure.) In 2009 Twitter provided happiness to another password guesser.

Knowing the password to an account is the best way to pilfer a user’s data and gain unauthorized access to a site. The only markers for an attacker using valid credentials are behavioral patterns – time of day the account was accessed, duration of activity, geographic source of the connection, etc. The attacker doesn’t have to use any malicious characters or techniques that could trigger XSS or SQL injection countermeasures.

The impact of a compromised password is similar to how CSRF (A5) works. The nature of a CSRF attack is to force a victim’s browser to make a request to the target app using the context of the victim’s authenticated session in order to perform some action within the app. For example, a CSRF attack might change the victim’s password to a value chosen by the attacker, or update the victim’s email to one owned by the attacker.

By design, browsers make many requests without direct interaction from a user (such as loading images, CSS, JavaScript, and iframes). CSRF requires the victim’s browser to visit a booby-trapped page, but doesn’t require the victim to click anything. The target web app neither sees the attacker’s traffic nor even suspects the attacker’s activity because all of the interaction occurs between the victim’s browser and the app.

CSRF serves as a good example of the changing nature of the web security industry. CSRF vulnerabilities have existed as long as the web. The attack takes advantage of the fundamental nature of HTML and HTTP whereby browsers automatically load certain types of resources. Importantly, the attack just needs to build a request. It doesn’t need to read the response, hence it isn’t inhibited by the Same Origin Policy.

CSRF hopped on the Top 10 list’s revision in 2007, three years after the list’s first appearance. It’s doubtful that CSRF vulnerabilities were any more or less prevalent over that three year period (or even before 2000). Its inclusion was a nod to having a better understanding and appreciation of the risk associated with the vuln. And it’s a risk that’s likely to increase when the pool of victims can be measured in the hundreds of millions rather than the hundreds of thousands.

This vuln also highlights an observation bias of security researchers. Now that CSRF is in vogue people start to look for it everywhere. Security conferences get more presentations about advanced ways to exploit the vulnerability, even though real-world attackers seem fine with the returns on guessing passwords, seeding web pages with malware, and phishing. Take a look at HTML injection (what everyone else calls XSS). Injecting script tags into a web page via an unchecked parameter dates back to the beginning of the web.

Before you shrug off this discussion of CSRF as hand waving with comments like, “But I could hack site Foo by doing Bar and then make lots of money,” consider what you’re arguing: A knowledgeable or dedicated attacker will find a useful exploit. Risk may include many components, including Threat Agents (to use the Top 10’s term). Risk increases under a targeted attack — someone actively looking to compromise the app or its users’ data. If you want to add an “Exploitability” metric to your risk calculation, keep in mind that ease of exploitability is often related to the threat agent and tends to be a step function: It might be hard to figure out the exploit in the first place, but anyone can run a 42-line Python script that automates the attack.

This is why the Top 10 list should be a starting point to defining security practices for your web site, but it shouldn’t be the end of the road. Even the OWASP site admonishes readers to use the list for awareness rather than policy. So, if you’ve been worried about information leakage and improper error handling since 2007 don’t think the problem has disappeared because it’s not on the list in 2010. And if you don’t think anyone cares about the logic problems within your site…well, just hope they haven’t been reading about them somewhere else.

Article on the new OWASP Top 10

The Tech Herald has an article on the recently updated OWASP Top 10 Web Application Security Risks. The article discusses a little bit of the evolution of the Top 10 list and how one major vulnerability, logic flaws, tends to get hidden behind the noise of SQL injection and XSS.

You can find out more about logic flaws in Chapter Six of Hacking Web Apps.