…And They Have a Plan

No notes are so disjointed as the ones skulking about my brain as I was preparing slides for last week’s BlackHat presentation. I’ve now wrangled them into a mostly coherent write-up.

This won’t be the last post on this topic. I’ll be doing two things over the next few weeks: throwing a doc into GitHub to track changes/recommendations/etc., responding to more questions, working on a different presentation, and trying to stick to the original plan (i.e. two things). Oh, and getting better at Markdown.

So, turn up some Jimi Hendrix, play some BSG in the background, and read on.

== The Problem ==

Cross-Site Request Forgery (CSRF) abuses the normal ability of browsers to make cross-origin requests by crafting a resource on one origin that causes a victim’s browser to make a request to another origin using the victim’s security context associated with that target origin.

The attacker places a malicious resource on an origin unrelated to the target origin. That resource contains content that causes the victim’s browser to make a request to the target origin, with parameters selected by the attacker to affect the victim’s security context with regard to that origin.

The attacker does not need to violate the browser’s Same Origin Policy to generate the cross-origin request, nor does the attack require reading the response from the target origin. The victim’s browser automatically includes cookies associated with the target origin for which the forged request is being made. Thus, the attacker crafts an action, the browser requests the action, and the target web application performs the action under the context of the cookies it receives: the victim’s security context.

An effective CSRF attack means the request modifies the victim’s context with regard to the web application in a way that’s favorable to the attacker. For example, a CSRF attack may change the victim’s password for the web application.

CSRF takes advantage of web applications that fail to enforce strong authorization of actions during a user’s session. The attack relies on the normal, expected behavior of web browsers to make cross-origin requests from resources they load on unrelated origins.

The browser’s Same Origin Policy prevents a resource in one origin from reading the response from an unrelated origin. However, the attack only depends on the forged request being submitted to the target web app under the victim’s security context; it does not depend on receiving or seeing the target app’s response.

== The Proposed Solution ==

SOS is proposed as an additional policy type of Content Security Policy. Its behavior also borrows the pre-flight concept used by the Cross-Origin Resource Sharing (CORS) spec.

SOS isn’t intended just as a catchy acronym. The name is intended to evoke the SOS of Morse code, which is both easy to transmit and easy to understand. If an expansion is required, then "Session Origin Security" would be preferred. (However, "Simple Origin Security", "Some Other Security", and even "Save Our Site" are acceptable. "Same Old Stuff" is discouraged. More options are left to the reader.)

An SOS policy may be applied to one or more cookies for a web application on a per-cookie or collective basis. The policy controls whether the browser includes those cookies during cross-origin requests. (A cross-origin resource cannot access a cookie from another origin, but it may generate a request that causes the cookie to be included.)

== Format ==

A web application sets a policy by including a Content-Security-Policy response header. This header may accompany the response that includes the Set-Cookie header for the cookie to be covered, or it may be set on a separate resource.

A policy for a single cookie would be set as follows, where cookieName identifies the covered cookie and the policy directive is one of 'any', 'self', or 'isolate'. (Those directives are defined shortly.)

Content-Security-Policy: sos-apply=cookieName 'policy'

A response may include multiple CSP headers, such as:

Content-Security-Policy: sos-apply=cookieOne 'policy'
Content-Security-Policy: sos-apply=cookieTwo 'policy'

A policy may be applied to all cookies by using a wildcard:

Content-Security-Policy: sos-apply=* 'policy'

== Policies ==

One of three directives may be assigned to a policy. The directives affect the browser’s default handling of cookies for cross-origin requests to a cookie’s destination origin. The pre-flight concept will be described in the next section; it provides a mechanism for making exceptions to a policy on a per-resource basis.

Policies are only invoked for cross-origin requests. Same origin requests are unaffected.

'any' — include the cookie. This represents how browsers currently work. Make a pre-flight request to the resource on the destination origin to check for an exception response.

'self' — do not include the cookie. Make a pre-flight request to the resource on the destination origin to check for an exception response.

'isolate' — never include the cookie. Do not make a pre-flight request to the resource because no exceptions are allowed.

== Pre-Flight ==

A browser that is going to make a cross-origin request that includes a cookie covered by a policy of 'any' or 'self' must make a pre-flight check to the destination resource before conducting the request. (A policy of 'isolate' instructs the browser to never include the cookie during a cross-origin request.)

The purpose of a pre-flight request is to allow the destination origin to modify a policy on a per-resource basis. Thus, certain resources of a web app may allow or deny cookies from cross-origin requests despite the default policy.

The pre-flight request works identically to the one used by Cross-Origin Resource Sharing, with the addition of an Access-Control-SOS header. This header includes a space-delimited list of cookies that the browser might otherwise include for a cross-origin request, as follows:

Access-Control-SOS: cookieOne cookieTwo

A pre-flight request might look like the following; note that the Origin header is expected to be present as well:

OPTIONS https://web.site/resource HTTP/1.1
Host: web.site
Origin: http://other.origin
Access-Control-SOS: sid
Connection: keep-alive
Content-Length: 0

The destination origin may respond with an Access-Control-SOS-Reply header that instructs the browser whether to include the cookie(s). The response will either be 'allow' or 'deny'.

The response header may also include an expiration in seconds. The expiration allows the browser to remember this response and forego subsequent pre-flight checks for the duration of the value.

The following example would allow the browser to include a cookie with a cross-origin request to the destination origin even if the cookie’s policy had been 'self'. (In the absence of a reply header, the browser would not include the cookie.)

Access-Control-SOS-Reply: 'allow' expires=600

The following example would forbid the browser from including a cookie with a cross-origin request to the destination origin even if the cookie’s policy had been 'any'. (In the absence of a reply header, the browser would include the cookie.)

Access-Control-SOS-Reply: 'deny' expires=0

The browser would be expected to track policies and policy exceptions based on destination origins. It would not be expected to track pairs of origins (e.g. different cross-origins to the destination) since such a mapping could easily become cumbersome, inefficient, and more prone to abuse or mistakes.

As described in this section, the pre-flight is an all-or-nothing affair. If multiple cookies are listed in the Access-Control-SOS header, then the response applies to all of them. This might not provide enough flexibility. On the other hand, simplicity tends to encourage security.
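
To make the intended browser behavior concrete, here is a minimal sketch of the include-or-exclude decision for a single covered cookie. It is purely illustrative; the function and parameter names are assumptions, since nothing here is an implemented browser API.

# Hypothetical sketch of the browser-side decision under the proposed SOS
# policy. Names are illustrative; this is not an implemented API.
def include_cookie(policy, cross_origin, preflight_reply=None):
    # policy: 'any', 'self', or 'isolate' (from the sos-apply directive)
    # cross_origin: True if the requesting resource's origin differs from
    #               the cookie's destination origin
    # preflight_reply: 'allow', 'deny', or None (no reply returned or cached)
    if not cross_origin:
        return True               # policies only affect cross-origin requests
    if policy == 'isolate':
        return False              # never included; no pre-flight is made
    if preflight_reply == 'allow':
        return True               # per-resource exception overrides 'self'
    if preflight_reply == 'deny':
        return False              # per-resource exception overrides 'any'
    return policy == 'any'        # no exception: fall back to the default

The all-or-nothing nature of the pre-flight shows up here as a single preflight_reply value that covers every cookie listed in the Access-Control-SOS header.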

== Benefits ==

Note that a policy can be applied on a per-cookie basis. If a policy-covered cookie is disallowed, any non-covered cookies for the destination origin may still be included. Think of a non-covered cookie as an unadorned or “naked” cookie: its behavior, and the browser’s handling of it, matches the web of today.

The intention of a policy is to control cookies associated with a user’s security context for the destination origin. For example, it would be a good idea to apply 'self' to a cookie used for authorization (and identification, depending on how tightly coupled those concepts are by the app’s reliance on the cookie).

Imagine a WordPress installation hosted at https://web.site/. The site’s owner wishes to allow anyone to visit, especially visitors who arrive via links from search engines, social media, and other sites of different origins. In this case, they may define a policy of 'any' set by the landing page:

Content-Security-Policy: sos-apply=sid 'any'

However, the /wp-admin/ directory represents sensitive functions that should only be accessed by intention of the user. WordPress provides a robust nonce-based anti-CSRF token. Unfortunately, many plugins forget to include these nonces and therefore become vulnerable to attack. Since the site owner has set a policy for the sid cookie (which represents the session ID), they could respond to any pre-flight request to the /wp-admin/ directory as follows:

Access-Control-SOS-Reply: 'deny' expires=86400

Thus, the /wp-admin/ directory would be protected from CSRF exploits because a browser would not include the sid cookie with a forged request.
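
As a sketch of the server side, the pre-flight handling for /wp-admin/ could be as small as the following. It is framework-agnostic pseudo-handling written in Python rather than WordPress’s PHP, and the SOS header names come from this proposal, not an implemented standard.

# Hypothetical server-side handling of the proposed SOS pre-flight for the
# sensitive /wp-admin/ area. The Access-Control-SOS headers are part of this
# proposal only; they are not an existing standard or WordPress feature.
def sos_preflight_headers(method, path, request_headers):
    headers = {}
    if (method == 'OPTIONS'
            and path.startswith('/wp-admin/')
            and 'Access-Control-SOS' in request_headers):
        # Tell the browser to withhold the covered cookie(s) on cross-origin
        # requests to /wp-admin/, and to cache this decision for one day.
        headers['Access-Control-SOS-Reply'] = "'deny' expires=86400"
    return headers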

The use case for the 'isolate' policy is straightforward: the site does not expect any cross-origin requests to include cookies related to authentication or authorization. A bank or web-based email might desire this behavior. The intention of isolate is to avoid the requirement for a pre-flight request and to forbid exceptions to the policy.

== Notes ==

This is a draft. The following thoughts represent some areas that require more consideration or that convey some of the motivations behind this proposal.

This is intended to affect cross-origin requests made by a browser.

It is not intended to counter same-origin attacks such as HTML injection (XSS) or intermediation attacks such as sniffing. Attempting to solve multiple problems with this policy leads to folly.

CSRF evokes two senses of the word “forgery”: creation and counterfeiting. This approach doesn’t inhibit the creation of cross-origin requests (although something like “non-simple” XHR requests and CORS would). Nor does it inhibit the counterfeiting of requests, such as making it difficult for an attacker to guess values. It defeats CSRF by blocking a cookie that represents the user’s security context from being included in a cross-origin request the user likely didn’t intend to make.

There may be a reason to remove a policy from a cookie, in which case a CSP header could use something like an sos-remove instruction:

Content-Security-Policy: sos-remove=cookieName

Cryptographic constructs are avoided on purpose. Even if designed well, they are prone to implementation error. They must also be tracked and verified by the app, which exposes more chances for error and induces more overhead. Relying on nonces increases the difficulty of forging (as in counterfeiting) requests, whereas this proposed policy defines a clear binary of inclusion/exclusion for a cookie. A cookie will or will not be included vs. a nonce might or might not be predicted.

PRNG values are avoided on purpose, for the same reasons as cryptographic nonces. It’s worth noting that misunderstanding the difference between a random value and a cryptographically secure PRNG (which a CSRF token should favor) is another point against a PRNG-based control.

A CSP header was chosen in favor of decorating the cookie with new attributes because cookies are already ugly, clunky, and (somewhat) broken enough. Plus, the underlying goal is to protect a session or security context associated with a user. As such, there might be reason to extend this concept to the instantiation of Web Storage objects, e.g. forbid them in mixed-origin resources. However, this hasn’t really been thought through and probably adds more complexity without solving an actual problem.

The pre-flight request/response shouldn’t be a source of information leakage about cookies used by the app. At least, it shouldn’t provide more information than might be trivially obtained through other techniques.

It’s not clear what an ideal design pattern would be for deploying SOS headers. A policy could accompany each Set-Cookie header. Or the site could use a redirect or similar bottleneck to set policies from a single resource.

It would be much easier to retrofit these headers on a legacy app by using a Web App Firewall than it would be trying to modify code to include nonces everywhere.
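
For instance, a thin layer in front of a legacy Python app could stamp the policy header onto every response without touching application code. A minimal sketch, assuming a WSGI app and the sos-apply syntax proposed above:

# Hypothetical sketch: retrofit the proposed SOS policy onto a legacy WSGI app
# by appending the CSP header to every response, with no application changes.
def sos_policy_middleware(app, cookie_name='sid', policy='self'):
    header = ('Content-Security-Policy',
              "sos-apply=%s '%s'" % (cookie_name, policy))

    def wrapped(environ, start_response):
        def custom_start_response(status, headers, exc_info=None):
            # Append the policy header to whatever the app already set.
            return start_response(status, headers + [header], exc_info)
        return app(environ, custom_start_response)

    return wrapped

# Usage: application = sos_policy_middleware(legacy_application)

A WAF rule or a web server directive that appends the same header to matching responses would accomplish the equivalent at the proxy layer.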

It would (possibly) be easier to audit a site’s protection when the headers are implemented via mod_rewrite tricks or WAF rules that apply to whole groups of resources than it would be to audit the code of each form and action.

The language here tilts (too much) towards formality, but the terms and usage haven’t been vetted yet to adhere to those in HTML, CSP and CORS. The goal right now is clarity of explanation; pedantry can wait.

== Cautions ==

In addition to the previous notes, these are highlighted as particular concerns.

Conflicting policies would cause confusion. For example, two different resources might separately define 'any' and 'self' policies for the same cookie. It would be necessary to determine which receives priority.

Cookies have the unfortunate property that they can belong to multiple origins (i.e. sub-domains). Hence, some apps might incur additional overhead of pre-flight requests or complexity in trying to distinguish cross-origin of unrelated domains and cross-origin of sub-domains.

Apps that rely on “Return To” URL parameters might not be fixed if the return URL has the CSRF exploit and the browser is now redirecting from the same origin. Maybe. This needs some investigation.

There’s no migration for old browsers: You’re secure (using a supporting browser and an adopted site) or you’re not. On the other hand, an old browser is an insecure browser anyway — browser exploits are more threatening than CSRF for many, many cases.

There’s something else I forgot to mention that I’m sure I’ll remember tomorrow.

=====

You’re still here? I’ll leave you with this quote from the last episode of BSG. (It’s a bit late to be apologizing for spoilers…) Thanks for reading!

Six: All of this has happened before.
Baltar: But the question remains, does all of this have to happen again?

User Agent. Secret Agent. Double Agent.

We hope our browsers are secure in light of the sites we choose to visit. What we often forget is whether we are secure in light of the sites our browsers choose to visit. Sometimes it’s hard to even figure out whose side our browsers are on.

Browsers act on our behalf, hence the term User Agent. They load HTML from the link we type in the navbar, then retrieve the resources defined in the HTML in order to fully render the site. The resources may be obvious, like images, or behind-the-scenes, like the CSS that styles the page’s layout or JSON messages sent by XmlHttpRequest objects.

Then there are times when our browsers work on behalf of others, working as a Secret Agent to betray our data. They carry out orders delivered by Cross-Site Request Forgery (CSRF) exploits enabled by the very nature of HTML.

Part of HTML’s success is its capability to aggregate resources from different Origins into a single page. Check out the following HTML. It loads a CSS file, JavaScript functions, and two images from different hosts, all but one over HTTPS. None of it violates the Same Origin Policy. Nor is there an issue with loading different Origins with different SSL connections.

<!doctype html>
<html>
<head>
<link href="https://fonts.googleapis.com/css?family=Open+Sans" rel="stylesheet" media="all" type="text/css" />
<script src="https://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.9.0.min.js"></script>
<script>
$(document).ready(function() {
  $("#main").text("Come together...");
});
</script>
</head>
<body>
<img alt="www.baidu.com" src="http://www.baidu.com/img/shouye_b5486898c692066bd2cbaeda86d74448.gif" />
<img alt="www.twitter.com" src="https://twitter.com/images/resources/twitter-bird-blue-on-white.png" />
<div id="main" style="font-family: 'Open Sans';"></div>
</body>
</html>

CSRF attacks rely on being able to include resources from unrelated Origins in a single page. They’re not concerned with the Same Origin Policy since they are neither restricted by it nor need to break it (they don’t need to read or write across Origins). CSRF is concerned with a user’s context in relation to a web app — the data and state transitions associated with a user’s session, security, or privacy.

To get a sense of user context with regard to a site, let’s look at the Bing search engine. Click on the Preferences gear in the upper right corner to review your Search History. You’ll see a list of search terms like the following example:

Bing Search History

Bing’s Search box is an <input> field with parameter name "q". Searching for a term — and therefore populating the Search History — is done when the form is submitted. Doing so creates a request for a link like this:

http://www.bing.com/search?q=lilith%27s+brood

For a CSRF exploit to work, it’s important to be able to recreate a user’s request. In the case of Bing, an attacker need only craft a GET request to the /search page and populate the q parameter.

Forge a Request

We’ll use a CSRF attack to populate the user’s Search History without their knowledge. This requires luring the victim to a web page that’s able to forge (as in craft) a search request. If successful, the forged (as in fake) request will affect the user’s context (i.e. the Search History). One way to forge an automatic request is via the src attribute of an img tag. The following HTML would be hosted on some Origin unrelated to Bing, e.g. http://web.site/page.

<!doctype html>
<html>
<body>
<img src="http://www.bing.com/search?q=deadliest%20web%20attacks" style="visibility: hidden;" alt="" />
</body>
</html>

The victim has to visit the attacker’s web page or come across the img tag in a discussion forum or social media site. The user does not need to have Bing open in a different browser tab or otherwise be using Bing at the same time they come across the CSRF exploit. Once their browser encounters the booby-trapped page, the request updates their Search History even though they never typed “deadliest web attacks” into the search box.

Bing Search History CSRF

As a thought experiment, extend this from a search history “infection” to a social media status update, or changing an account’s email address (to the attacker’s), or changing a password (to something the attacker knows), or any other action that affects the user’s security or privacy.

The trick is that CSRF requires full knowledge of the request’s parameters in order to successfully forge it. That kind of forgery (as in faking a legitimate request) deserves another article to explore in depth. For example, if you had to supply the old password in order to update a new password, then you wouldn’t need a CSRF attack: just log in with the known password. For another example, imagine Bing randomly assigned a letter to users’ search requests. One user’s request might use a “q” parameter, whereas another user’s request relies instead on an “s” parameter. If the parameter name didn’t match the one assigned to the user, then Bing would reject the search request. The attacker would have to predict the parameter name or, if the sample space were small, fake each possible combination, which would be only 26 letters in this imagined scenario.

Crafty Crafting

We’ll end on the easier aspect of forgery (as in crafting). Browsers automatically load resources from the src attributes of elements like img, iframe, and script (or the href attribute of a link). If an action can be faked by a GET request, that’s the easiest way to go.

HTML5 gives us another nifty way to forge requests using Content Security Policy directives. We’ll invert the expected usage of CSP by intentionally creating an element that violates a restriction. The following HTML defines a CSP rule that forbids loading resources from any source (default-src 'none') and a URL for reporting rule violations. The victim’s browser must be lured to this page, either through social engineering or by placing it on a commonly-visited site that permits user-uploaded HTML.

<!doctype html>
<html>
<head>
<meta http-equiv="X-WebKit-CSP" content="default-src 'none'; report-uri http://www.bing.com/search?q=deadliest%20web%20attacks%20CSP" />
</head>
<body>
<img alt="" src="/" />
</body>
</html>

The report-uri creates a POST request to the link. Being able to generate a POST is highly attractive for CSRF attacks. However, the usefulness of this technique is tempered by the fact that it’s not possible to add arbitrary name/value pairs to the POST data. The browser will percent-encode the values for the “document-url” and “violated-directive” parameters. Unless the browser incorrectly implements CSP reporting, it’s a half-successful measure at best.

POST /search?q=deadliest%20web%20attacks%20CSP HTTP/1.1
Host: www.bing.com
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_2) AppleWebKit/536.26.17 (KHTML, like Gecko) Version/6.0.2 Safari/536.26.17
Content-Length: 121
Origin: null
Content-Type: application/x-www-form-urlencoded
Referer: http://web.site/HWA/ch3/bing_csp_report_uri.html
Connection: keep-alive

document-url=http%3A%2F%2Fweb.site%2FHWA%2Fch3%2Fbing_csp_report_uri.html&violated-directive=default-src+%27none%27

There’s far more to finding and exploiting CSRF vulnerabilities than covered here. We didn’t mention risk, which in this example is low (there’s questionable benefit to the attacker or detriment to the victim; the history feature can even be turned off; and it’s presented clearly rather than hidden in an obscure privacy setting). But the Bing example demonstrates the essential mechanics of an attack:

  • A site tracks per-user contexts.
  • A request is known to modify that context.
  • The request can be recreated by an attacker (i.e. parameter names and values are predictable without direct access to the victim’s context).
  • The forged request can be placed on a page unrelated to the site (i.e. in a different Origin) where the victim’s browser comes across it.
  • The victim’s browser submits the forged request and affects the user’s context (this usually requires the victim to be currently authenticated to the site).

Later on, we’ll explore attacks that affect a user’s security context and differentiate them from nuisance attacks or attacks with negligible impact to the user. We’ll also examine the forging of requests, including challenges of creating GET and POST requests. Then we’ll heighten those challenges against the attacker and explore ways to counter CSRF attacks.

Until then, consider who your User Agent is really working for. It might not be who you expect.


“We are what we pretend to be, so we must be careful about what we pretend to be.” — Kurt Vonnegut, Introduction to Mother Night.

A Brief Return to CSRF

Attention to CSRF seems to ebb and flow against the popularity of yet another XSS or SQL injection. Here’s some insight [1] into the projects I work on related to web scanning, specifically how some kinds of CSRF detections can be automated.

CSRF detection definitely falls into the "hard" category of automation. The Book discusses CSRF in Chapter 3. You may also be interested in reading the excellent Stanford Web Security Research papers on the topic. [2]

CSRF is a complex topic that engenders a lot of strong opinions on risk, exploitation, and what constitutes a vuln. A few months ago I wrote on the broader aspects of web security and how they do or do not relate to CSRF. [3]

Since July was a rather dry period for updates here, I’ll take August to dive into some of the ways automated CSRF detection succeeds and which approaches are doomed to fail.

=====

[1] https://community.qualys.com/blogs/securitylabs/2011/08/10/the-was-approach-to-csrf
[2] http://seclab.stanford.edu/websec/csrf/
[3] http://www.deadliestwebattacks.com/2011/04/csrf-and-beyond.html

CSRF and Beyond

Identifying CSRF vulnerabilities is more interesting than just scraping HTML for hidden fields or forging requests. CSRF stems from a design issue of HTTP and HTML that is in one aspect a positive feature of the web, but leads to unexpected consequences for web sites. We’ll start with a brief description of detection methods before diverging onto interesting(?!) tangents.

A passive detection method that is simple to automate looks for the presence or absence of CSRF tokens in web pages. This HTML scraping is prone to many errors and generates noisy results that don’t scale well for someone dealing with more than one web site at a time. The approach merely guesses at the identity of a token; it doesn’t verify that the token is valid or, more importantly, that the application actually checks it. Unless the page is examined after JavaScript has updated the DOM, this technique also misses dynamically generated tokens, form fields, or forms.
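
A rough sketch of that kind of scraping, using only Python’s standard library; the token-name heuristic is a guess, which is exactly the weakness just described:

# Rough sketch of passive CSRF-token detection: scrape forms for hidden
# fields whose names look token-like. Heuristic and noisy by design.
from html.parser import HTMLParser
import re

TOKEN_HINT = re.compile(r'csrf|xsrf|token|nonce', re.I)

class FormScraper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.forms = []  # list of (form action, [hidden input names])

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == 'form':
            self.forms.append((attrs.get('action') or '', []))
        elif tag == 'input' and (attrs.get('type') or '').lower() == 'hidden' and self.forms:
            self.forms[-1][1].append(attrs.get('name') or '')

def forms_missing_token(html):
    # Return the actions of forms that have no token-like hidden field.
    scraper = FormScraper()
    scraper.feed(html)
    return [action for action, hidden in scraper.forms
            if not any(TOKEN_HINT.search(name) for name in hidden)]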

An active detection method that can be automated replays requests under different user sessions. This approach follows the assumption that CSRF tokens are unique to a user’s session, such as the session cookie [1] or another pseudo-random value. There’s also a secondary assumption that concurrent sessions are possible. To be effective, this approach requires a browser or good browser emulation to deal with any JavaScript and DOM updates. Basically, this technique swaps forms between two user sessions. If the submission succeeds, then it’s more likely request forgery is possible. If the submission fails, then it’s more likely a CSRF countermeasure has blocked it. There’s still potential for false negatives if some static state token or other form field wasn’t updated properly. The benefit of this approach is that it’s not necessary to guess the identity of a token and that the test actually determines whether a request can be forged.
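
Here’s a sketch of that form swap using Python’s requests library. The login, form-extraction, and submission steps are placeholders to be filled in for whatever the target app actually uses:

# Sketch of active CSRF detection by swapping a form between two sessions.
# The three callables are placeholders for app-specific logic.
import requests

def csrf_token_reusable(login, extract_form, submit_form):
    # login(session): authenticate the given requests.Session
    # extract_form(session): return the target form's fields (token included) as a dict
    # submit_form(session, fields): return True if the app accepted the submission
    alice, bob = requests.Session(), requests.Session()
    login(alice)
    login(bob)

    fields = extract_form(alice)          # token minted for Alice's session
    accepted = submit_form(bob, fields)   # replay it under Bob's session
    return accepted                       # True suggests the request can be forged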

Once more countermeasures become based on the Origin header, the replay approach might be as simple as setting an off-origin value for this header. A server will either reject or accept the request. This would be a nice, reliable detection (not to mention a simple, strong countermeasure), but sadly not an imminent one. [2]
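
Testing that kind of countermeasure would amount to replaying a known-good request with an off-origin Origin header and checking whether the server still accepts it. A sketch, with the URL, fields, and cookies as placeholders:

# Sketch: replay a state-changing request with an off-origin Origin header.
# A server that enforces an Origin-based check should reject the forgery.
import requests

def origin_check_enforced(url, fields, cookies):
    baseline = requests.post(url, data=fields, cookies=cookies)
    forged = requests.post(url, data=fields, cookies=cookies,
                           headers={'Origin': 'http://attacker.example'})
    # If the forged request succeeds just like the baseline, the Origin
    # header isn't being enforced and the request is likely forgeable.
    return baseline.ok and not forged.ok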

Almost by default, an HTML form is vulnerable to CSRF. (For the sake of word count, "form" will be synonymous with any "resource" like a link or XHR request.) WhiteHat Security described one way to narrow the scope of CSRF reporting from any form whatsoever to resources that fall into a particular category. Read the original post [3], then come back. I’ve slightly modified WhiteHat’s three criteria to be resources:

  • with a security context or that cross a security boundary, such as password or profile management
  • that deliver an HTML injection (XSS) or HTTP response splitting payload to a vulnerable page on the target site. This answers the question for people who react to those vulns with, “That’s nice, but so what if you can only hack your own browser.” This seems more geared towards increasing the risk of a pre-existing vuln rather than qualifying it as a CSRF. We’ll come back to this one.
  • where sensitive actions are executed, such as anything involving money, updating a friend list, or sending a message

The interesting discussion starts with WhiteHat’s "benign" example. To summarize, imagine a site with a so-called non-obvious CSRF, one XSS vuln, one Local File Inclusion (LFI) vuln, and a CSRF-protected file upload form. The attack uses the non-obvious CSRF to exploit the XSS vuln, which in turn triggers the file upload to exploit the LFI. For example, the attacker creates the JavaScript necessary to upload a file and exploit the LFI, places this payload in an image tag on an unrelated domain, and waits for a victim to visit the booby-trapped page so their browser loads <img src="http://target.site/xss_inject.page?arg=payload">.

This attack was highlighted as a scenario where CSRF detection methods would usually produce false negatives because the vulnerable link, http://target.site/xss_inject.page, doesn’t otherwise affect the user’s security context or perform a sensitive action.

Let’s review the three vulns:

  • Ability to forge a request to a resource, considered “non-obvious” because the resource doesn’t affect a security context or execute a sensitive action.
  • Presence of HTML injection, HTTP Response Splitting, or other clever code injection vulnerability in said resource.
  • Presence of Local File Inclusion.

Using XSS to upload a file isn’t a vulnerability in itself. There’s nothing that says JavaScript executing within the Same Origin Policy (under which the XSS falls once it’s reflected) can’t use XHR to POST data to a file upload form. It also doesn’t matter if the file upload form has CSRF tokens, because the code is executing under the Same Origin Policy and therefore has access to the tokens.

I think these two recommendations would be made by all and accepted by the site developers as necessary:

  • Fix the XSS vulnerability using recommended practices (let’s assume the arg variable is simply reflected in xss_inject.page)
  • Fix the Local File Inclusion (by verifying file content, forcing MIME types, not making the file readable, not using PHP at all, etc.)

But it was CSRF that started us off on this attack scenario. This leads to the question of how the “non-obvious” CSRF should be reported, especially from an automation perspective:

  • Is a non-obvious CSRF vulnerability actually obvious if the resource has another vuln (e.g. XSS)? Does the CSRF become non-reportable once the other vuln has been fixed?
  • Should a non-obvious CSRF vulnerability be obvious anyway if it has a query string (or form fields, etc.) that might be vulnerable?

If you already believe anti-CSRF measures should be on every page, then clearly you would have already marked the example vulnerable just by inspection, because it didn’t have an explicit countermeasure. But what about those who don’t follow the absolutist prescription of anti-CSRF measures everywhere? (For performance reasons, or because the resource doesn’t affect the user’s state or security context.)

Think about pages that use “referrer” arguments. For example:

http://web.site/redir.page?url=http://from.here

In addition to possibly being an open redirect, these are prime targets for XSS with payloads like

http://web.site/redir.page?url=javascript:nasty_stuff()

It seems that in these cases the presence of CSRF just serves to increase the XSS risk rather than be a vuln on its own. Otherwise, you risk producing too much noise by calling any resource with a query string vulnerable. In this case CSRF provides a rejoinder to the comment, “That’s a nice reflected XSS, but you can only hack yourself with it. So what.” Without the XSS vuln you probably wouldn’t waste time protecting that particular resource.

Look at a few of the other WhiteHat examples. They clearly fall into CSRF territory (changing a shipping address, password set/reset mechanisms), and there’s no doubt successful exploits could be demonstrated.

What’s interesting is that they seem to require race conditions or to happen during specific workflows to be successful [4], e.g. execute the CSRF so the shipping address is changed before the transaction is completed. That neither detracts from the impact nor obviates it as a vulnerability. Instead, it highlights a more subtle aspect of web security: state management.

Let’s set aside malicious attackers and consider a beneficent CSRF donor. Our scenario begins with an ecommerce site. The victim, a lucky recipient in this case, has selected an item and placed it into a virtual shopping cart.

1) The victim (lucky recipient!) fills out a shipping destination.
2) The attacker (benefactor!) uses a CSRF attack to apply a discount coupon.
3) The recipient supplies a credit card number.
4) Maybe the web site is really bad and the benefactor knows that the same coupon can be applied twice. A second CSRF applies another discount.
5) The recipient completes the transaction.
6) Our unknown benefactor looks for the new victim of this CSRF attack.

I chose this Robin Hood-esque scenario to take your attention away from the malicious attacker/victim formula of CSRF to focus on the abuse of workflows.

A CSRF countermeasure would have prevented the discount coupon from being applied to the transaction, but that wouldn’t fully address the underlying issues here. Consider the state management for this transaction.

One problem is that the coupon can be applied multiple times. During a normal workflow the site’s UI leads the user through a check-out sequence that must be followed. On the other hand, if the site only prevented users from revisiting the coupon step in the UI, then the site’s developers have forgotten how trivial it is to replay GET and POST requests. This is an example of a state management issue where an action that should be performed only once can be executed multiple times.

A less obvious problem of state management is the order in which the actions were performed. The user submitted a discount coupon in two different steps: right after the shipping destination and right after providing payment info. In the UI, let’s assume the option to apply a discount shows up only after the user provides payment information. A strict adherence to this transaction’s state management should have rejected the first discount coupon since it arrived out of order.

Sadly, we have to interrupt this thought to address real-world challenges of web apps. I’ve defined a strict workflow as (1) shipping address required, (2) payment info required, (3) discount coupon optional, (4) confirm transaction required. A site’s UI design influences how strict these steps will be enforced. For example, the checkout process might be a single page that updates with XHR calls as the user fills out each section in any order. Conversely, this single page checkout might enable each step as the user completes them in order.

UI enforcement cannot guarantee that requests will be made in order. This is where decisions have to be made regarding how strictly the sequence is to be enforced. It’s relatively easy to have a server-side state object track these steps and only update itself for requests in the correct order. The challenge is keeping the state flexible enough to deal with users who abandon a shopping cart, or decide at the last minute to add an extra widget before confirming the transaction, or a multitude of other actions that affect the state. These aren’t insurmountable challenges, but they induce complexity and require careful testing. This trade-off between coarse state management and granular control is more a question of software correctness than of security. You can still have a secure site if steps can be performed in the order 3, 1, 2, 4 rather than the expected 1, 2, 3, 4.
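
As a sketch of that kind of server-side enforcement, a per-transaction state object can accept each step only once and only in the defined order. The step names mirror the hypothetical workflow above:

# Sketch of strict server-side state management for the checkout workflow:
# each step may happen at most once, and only in the defined order.
class CheckoutState:
    ORDER = ['shipping', 'payment', 'coupon', 'confirm']
    OPTIONAL = {'coupon'}

    def __init__(self):
        self.completed = []

    def accept(self, step):
        if step in self.completed:
            return False                      # replayed request (e.g. a second coupon)
        index = len(self.completed)
        # Skip over optional steps the user chose not to perform.
        while (index < len(self.ORDER)
               and self.ORDER[index] in self.OPTIONAL
               and self.ORDER[index] != step):
            self.completed.append(None)       # mark the optional step as skipped
            index += 1
        if index >= len(self.ORDER) or self.ORDER[index] != step:
            return False                      # out-of-order or unknown request
        self.completed.append(step)
        return True

With this object, the forged coupon request that arrives right after the shipping step is rejected as out of order, and a second coupon is rejected as a replay.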

CSRF is about requests made in the victim’s session context (by the victim’s browser) on behalf of the attacker (initiated from an unrelated domain) without the victim’s interaction. If a link, iframe, image tag, JavaScript, etc. causes the victim’s browser to make a request that affects that user’s state in another web site, then the CSRF attack succeeded. The conceptual way to fix CSRF is to identify forged requests and reject them. CSRF tokens are intended to identify legitimate requests because they’re a shared secret between the site and the user’s browser. An attacker who doesn’t know the secret can’t forge a request that will pass as legitimate.
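
For concreteness, here’s a minimal sketch of that shared-secret idea. The session object and helper names are assumptions for illustration, not any particular framework’s API:

# Minimal sketch of the shared-secret (synchronizer token) idea: mint an
# unpredictable per-session token and require it on state-changing requests.
# The session dict and helper names are illustrative only.
import hmac
import secrets

def issue_token(session):
    session['csrf_token'] = secrets.token_urlsafe(32)
    return session['csrf_token']          # embed in forms as a hidden field

def verify_token(session, submitted):
    expected = session.get('csrf_token', '')
    # Constant-time comparison; an attacker who can't read the token
    # (cross-origin) can't produce a request that passes this check.
    return bool(expected) and hmac.compare_digest(expected, submitted or '')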

What this discussion of CSRF attacks has highlighted is the soft underbelly of web sites’ state management mechanisms.

Automated scanners should excel at scalability and consistent accuracy, but woe to those who believe they fully replace manual testing. Scanners find implementation errors (forgetting to use a prepared statement for a SQL query, not filtering angle brackets, etc.), but they don’t have the capacity to understand fundamental design flaws. Nor should they be expected to. Complex interactions are best understood and analyzed by manual testing. CSRF stands astride the gap between automation and manual testing. Automation identifies whether a site accepts forged requests, whereas manual testing can delve deeper into underlying state vulnerabilities or chains of exploits that CSRF might enable.

=====

[1] Counter to recommendations that session cookies have the HttpOnly attribute so their value is not accessible via JavaScript. HttpOnly is intended to mitigate some XSS exploits. However, the presence of XSS basically negates any CSRF countermeasure since the XSS payload can perform requests in the Same Origin without having to resort to forged, off-origin requests.

[2] A chicken-and-egg problem since there needs to be enough adoption of browsers that include the Origin header for such a countermeasure to be useful without rejecting legitimate users. Although such rejection would be excellent motivation for users to update old, likely insecure, and definitely less secure browsers.

[3] https://blog.whitehatsec.com/whitehat-security’s-approach-to-detecting-cross-site-request-forgery-csrf/

[4] These particular attacks are significantly easier on unencrypted Wi-Fi networks with sites that don’t use HTTPS, but in that case there are a lot of different attacks. Plus, the basis of CSRF tokens is that they are confidential to the user’s browser — not the case when sniffing HTTP.