Identifying CSRF vulnerabilities is more interesting than just scraping HTML for hidden fields or forging requests. CSRF stems from a design issue of HTTP and HTML that is in one aspect a positive feature of the web, but leads to unexpected consequences for web sites. We’ll start with a brief description of detection methods before diverging onto interesting(?!) tangents.
Once more countermeasures are based on the Origin header, the replay approach might be as simple as setting an off-origin value for this header. A server will either accept or reject the request. This would be a nice, reliable detection (not to mention a simple, strong countermeasure), but sadly not an imminent one.2
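As a rough sketch of what such a server-side check could look like (the allowlist and function names here are hypothetical, for illustration only):

```python
from urllib.parse import urlsplit

# Hypothetical allowlist containing only the site's own origin.
ALLOWED_ORIGINS = {"https://shop.example.com"}

def origin_allowed(headers):
    """Accept a state-changing request only when its Origin header
    matches the site itself; reject off-origin (forged) requests."""
    origin = headers.get("Origin")
    if origin is None:
        # Older browsers omit Origin entirely; a strict policy rejects
        # them, at the cost of locking out those legitimate users.
        return False
    parts = urlsplit(origin)
    return f"{parts.scheme}://{parts.netloc}" in ALLOWED_ORIGINS
```

A forged request from an attacker's page would carry the attacker's origin, which the allowlist check refuses.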
Almost by default, an HTML form is vulnerable to CSRF. (For the sake of word count, forms will be synonymous with any “resource” like a link or XHR request.) WhiteHat Security described one way to narrow the scope of CSRF reporting from any form whatsoever to resources that fall into a particular category. Read the original post3, then come back. I’ve slightly modified WhiteHat’s three criteria to be resources:
- with a security context or that cross a security boundary, such as password or profile management
- that deliver an HTML injection (XSS) or HTTP response splitting payload to a vulnerable page on the target site. This answers the question for people who react to those vulns with, “That’s nice, but so what if you can only hack your own browser.” This seems more geared towards increasing the risk of a pre-existing vuln rather than qualifying it as a CSRF. We’ll come back to this one.
- where sensitive actions are executed, such as anything involving money, updating a friend list, or sending a message
This attack was highlighted as a scenario where CSRF detection methods would usually produce false negatives because the vulnerable link,
http://target.site/xss_inject.page, doesn’t otherwise affect the user’s security context or perform a sensitive action.
Let’s review the three vulns:
- Ability to forge a request to a resource, considered “non-obvious” because the resource doesn’t affect a security context or execute a sensitive action.
- Presence of HTML injection, HTTP Response Splitting, or other clever code injection vulnerability in said resource.
- Presence of Local File Inclusion.
I think these two recommendations would be made by all and accepted by the site developers as necessary:
- Fix the XSS vulnerability using recommended practices (let’s just assume the arg variable is just reflected in the page’s HTML without any encoding)
- Fix the Local File Inclusion (by verifying file content, forcing MIME types, not making the file readable, not using PHP at all, etc.)
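The “force MIME types and never serve attacker-supplied content as executable” idea can be sketched roughly like this (the upload directory, allowlist, and function names are hypothetical):

```python
import mimetypes
import os

# Hypothetical directory for user uploads, resolved once.
SAFE_UPLOAD_DIR = os.path.realpath("/srv/uploads")
ALLOWED_TYPES = {"image/png", "image/jpeg", "image/gif"}

def safe_serve_headers(filename):
    """Build response headers for an uploaded file, refusing path
    traversal and anything outside a small MIME allowlist."""
    path = os.path.realpath(os.path.join(SAFE_UPLOAD_DIR, filename))
    if not path.startswith(SAFE_UPLOAD_DIR + os.sep):
        raise ValueError("path traversal attempt")
    guessed, _ = mimetypes.guess_type(path)
    if guessed not in ALLOWED_TYPES:
        raise ValueError("disallowed file type")
    # Force the MIME type and forbid content sniffing so the browser
    # never interprets the file as HTML, script, or PHP output.
    return {"Content-Type": guessed, "X-Content-Type-Options": "nosniff"}
```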
But it was CSRF that started us off on this attack scenario. This leads to the question of how the “non-obvious” CSRF should be reported, especially from an automation perspective:
- Is a non-obvious CSRF vulnerability actually obvious if the resource has another vuln (e.g. XSS)? Does the CSRF become non-reportable once the other vuln has been fixed?
- Should a non-obvious CSRF vulnerability be obvious anyway if it has a query string (or form fields, etc.) that might be vulnerable?
If you already believe CSRF countermeasures should be on every page, then clearly you would have already marked the example vulnerable just by inspection, because it didn’t have an explicit countermeasure. But what about those who don’t follow the absolutist prescription of countermeasures everywhere? (Perhaps for performance reasons, or because the resource doesn’t affect the user’s state or security context.)
Think about pages that use “referrer” arguments, for example a login page that returns users to their starting point via a query string like ?referrer=%2Faccount%2Fsettings (a made-up example). In addition to possibly being an open redirect, these are prime targets for XSS with payloads along the lines of ?referrer=javascript:alert(document.cookie).
It seems that in these cases the presence of CSRF just serves to increase the XSS risk rather than be a vuln on its own. Otherwise, you risk producing too much noise by calling any resource with a query string vulnerable. In this case CSRF provides a rejoinder to the comment, “That’s a nice reflected XSS, but you can only hack yourself with it. So what.” Without the XSS vuln you probably wouldn’t waste time protecting that particular resource.
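A minimal sketch of validating such a referrer parameter — accepting only same-site relative paths, which blocks both the open redirect and javascript:-style payloads (function name and fallback are hypothetical):

```python
from urllib.parse import urlsplit

def safe_referrer(value):
    """Return the referrer value only if it is a same-site relative
    path; otherwise fall back to the home page."""
    parts = urlsplit(value)
    if parts.scheme or parts.netloc:
        return "/"   # absolute URL, javascript:, or protocol-relative
    if not value.startswith("/") or value.startswith("//"):
        return "/"
    return value
```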
Look at a few of the other WhiteHat examples. They clearly fall into CSRF (changing a shipping address, password set/reset mechanisms), and there’s no doubt that successful exploits could be demonstrated.
What’s interesting is that they seem to require race conditions or to happen during specific workflows to be successful4, e.g. execute the CSRF so the shipping address is changed before the transaction is completed. That neither detracts from the impact nor obviates it as a vulnerability. Instead, it highlights a more subtle aspect of web security: state management.
Let’s set aside malicious attackers and consider a beneficent CSRF donor. Our scenario begins with an ecommerce site. The victim, a lucky recipient in this case, has selected an item and placed it into a virtual shopping cart.
1) The victim (lucky recipient!) fills out a shipping destination.
2) The attacker (benefactor!) uses a CSRF attack to apply a discount coupon.
3) The recipient supplies a credit card number.
4) Maybe the web site is really bad and the benefactor knows that the same coupon can be applied twice. A second CSRF applies another discount.
5) The recipient completes the transaction.
6) Our unknown benefactor looks for the new victim of this CSRF attack.
I chose this Robin Hood-esque scenario to take your attention away from the malicious attacker/victim formula of CSRF to focus on the abuse of workflows.
A CSRF countermeasure would have prevented the discount coupon from being applied to the transaction, but that wouldn’t fully address the underlying issues here. Consider the state management for this transaction.
One problem is that the coupon can be applied multiple times. During a normal workflow the site’s UI leads the user through a check-out sequence that must be followed. On the other hand, if the site only prevented users from revisiting the coupon step in the UI, then the site’s developers have forgotten how trivial it is to replay GET and POST requests. This is an example of a state management issue where an action that should be performed only once can be executed multiple times.
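The “apply only once” property has to be enforced server side, not in the UI, precisely because GET and POST requests are trivially replayed. A minimal sketch of an idempotent coupon step (class and names are hypothetical):

```python
class Order:
    """Track coupons already applied so a replayed request
    cannot stack the discount."""

    def __init__(self, total):
        self.total = total
        self.applied_coupons = set()

    def apply_coupon(self, code, discount):
        if code in self.applied_coupons:
            return False            # replayed request: nothing changes
        self.applied_coupons.add(code)
        self.total = round(self.total - discount, 2)
        return True
```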
A less obvious problem of state management is the order in which the actions were performed. The user submitted a discount coupon in two different steps: right after the shipping destination and right after providing payment info. In the UI, let’s assume the option to apply a discount shows up only after the user provides payment information. A strict adherence to this transaction’s state management should have rejected the first discount coupon since it arrived out of order.
Sadly, we have to interrupt this thought to address real-world challenges of web apps. I’ve defined a strict workflow as (1) shipping address required, (2) payment info required, (3) discount coupon optional, (4) confirm transaction required. A site’s UI design influences how strict these steps will be enforced. For example, the checkout process might be a single page that updates with XHR calls as the user fills out each section in any order. Conversely, this single page checkout might enable each step as the user completes them in order.
UI enforcement cannot guarantee that requests arrive in order. This is where decisions have to be made regarding how strictly the sequence is to be enforced. It’s relatively easy to have a server-side state object track these steps and only update itself for requests in the correct order. The challenge is keeping the state flexible enough to deal with users who abandon a shopping cart, decide at the last minute to add an extra widget before confirming the transaction, or take a multitude of other actions that affect the state. These aren’t insurmountable challenges, but they induce complexity and require careful testing. The trade-off between coarse state management and granular control is more a matter of software correctness than of security. You can still have a secure site if steps can be performed in the order 3, 1, 2, 4 rather than the expected 1, 2, 3, 4.
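One relaxed variant of such a server-side state object can be sketched as follows (step names and structure are hypothetical): the required steps must arrive in order, each exactly once, while the optional coupon is accepted at any point before confirmation.

```python
# Required checkout steps, in the order they must complete.
REQUIRED_STEPS = ["shipping", "payment", "confirm"]

class Checkout:
    def __init__(self):
        self.completed = []
        self.coupon_applied = False

    def submit(self, step):
        """Return True only if the request is valid for the current state."""
        if step == "coupon":
            if self.coupon_applied or "confirm" in self.completed:
                return False    # replayed, or transaction already final
            self.coupon_applied = True
            return True
        if len(self.completed) == len(REQUIRED_STEPS):
            return False        # transaction already confirmed
        if step != REQUIRED_STEPS[len(self.completed)]:
            return False        # out-of-order or replayed request
        self.completed.append(step)
        return True
```

Tightening this (coupon only after payment) or loosening it (any order before confirmation) is exactly the correctness-versus-flexibility balance described above.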
What this discussion of CSRF attacks has highlighted is the soft underbelly of web sites’ state management mechanisms.
Automated scanners should excel at scalability and consistent accuracy, but woe to those who believe they fully replace manual testing. Scanners find implementation errors (forgetting to use a prepared statement for a SQL query, not filtering angle brackets, etc.), but they don’t have the capacity to understand fundamental design flaws. Nor should they be expected to. Complex interactions are best understood and analyzed by manual testing. CSRF stands astride the gap between automation and manual testing. Automation identifies whether a site accepts forged requests, whereas manual testing can delve deeper into underlying state vulnerabilities or chains of exploits that CSRF might enable.
A chicken-and-egg problem since there needs to be enough adoption of browsers that include the Origin header for such a countermeasure to be useful without rejecting legitimate users. Although such rejection would be excellent motivation for users to update old, likely insecure, and definitely less secure browsers. ↩
These particular attacks are significantly easier on unencrypted Wi-Fi networks with sites that don’t use HTTPS, but in that case there are a lot of different attacks. Plus, the basis of CSRF tokens is that they are confidential to the user’s browser – not the case when sniffing HTTP. ↩