CSRF and Beyond
Identifying CSRF vulns is more interesting than just scraping HTML for hidden fields or forging requests. CSRF stems from a design issue of HTTP and HTML. An HTML form is effectively vulnerable to CSRF by default. That design is a positive feature for sites – it makes many types of interactions and use cases easy to create. But it also leads to unexpected consequences.
A passive detection method that is simple to automate looks for the presence or absence of CSRF tokens. However, scraping HTML is prone to errors and generates noisy results that don’t scale well for someone dealing with more than one app at a time. This approach merely guesses at the identity of a token; it doesn’t verify that the token is valid or that the app actually relies on it. And unless the page is examined after JavaScript has updated the DOM, this technique misses dynamically generated tokens, form fields, and forms.
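To make the passive approach concrete, here’s a minimal sketch in TypeScript. The token names it looks for and the target URL are assumptions, and it inherits every weakness just described: it never proves the token is validated, and it never sees DOM changes made by JavaScript.

```typescript
// Passive CSRF check: fetch a page and flag forms that lack a field
// whose name looks like an anti-CSRF token. Heuristic only.
const TOKEN_PATTERN =
  /name=["'](?:csrf[-_]?token|authenticity_token|__RequestVerificationToken)["']/i;

async function formsMissingTokens(url: string): Promise<number> {
  const html = await (await fetch(url)).text();
  const forms = html.match(/<form[\s\S]*?<\/form>/gi) ?? [];
  // Count forms with no field matching a known token name.
  return forms.filter((form) => !TOKEN_PATTERN.test(form)).length;
}

formsMissingTokens("https://target.site/profile").then((count) =>
  console.log(`${count} form(s) without a recognizable CSRF token`),
);
```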
An active detection method that can be automated is one that replays requests under different user sessions. This approach follows the assumption that CSRF tokens are unique to a user’s session, e.g. tied to the session cookie or some other pseudo-random value. There’s also a secondary assumption that concurrent sessions are possible. It also requires a browser to deal with dynamic JavaScript and DOM manipulation.
This active approach basically swaps forms between two sessions for the same user. If the submission succeeds, then it’s more likely request forgery is possible. If the submission fails, then it’s more likely a CSRF countermeasure has blocked it. There’s still potential for false negatives if some static state token or other form field wasn’t updated properly. The benefit of this approach is that it’s not necessary to guess the identity of a token and it’s explicitly testing whether a request can be forged.
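Here’s a rough sketch of that session swap. The csrf_token field name, the endpoints, and the form fields are placeholders, and a real implementation would drive a browser to handle dynamic pages rather than fetch raw HTML.

```typescript
// Active CSRF check: take the token served to session A and replay the
// form under session B. If the server accepts it, the token probably
// isn't bound to the session and the request can likely be forged.
async function replayAcrossSessions(
  formUrl: string,
  submitUrl: string,
  cookieA: string,
  cookieB: string,
): Promise<boolean> {
  // Fetch the form, token included, under session A.
  const html = await (
    await fetch(formUrl, { headers: { Cookie: cookieA } })
  ).text();
  const token =
    html.match(/name=["']csrf_token["'][^>]*value=["']([^"']+)/i)?.[1] ?? "";

  // Submit session A's token under session B.
  const res = await fetch(submitUrl, {
    method: "POST",
    headers: {
      Cookie: cookieB,
      "Content-Type": "application/x-www-form-urlencoded",
    },
    body: new URLSearchParams({ csrf_token: token, email: "probe@example.com" }),
  });
  return res.ok; // 2xx suggests the forged request succeeded
}
```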
Once more countermeasures are based on the Origin header, the replay approach might be as simple as setting an off-origin value for this header. A server will either reject or accept the request. This would be a nice, reliable detection as well as a simple, strong countermeasure. Modern browsers released after 2016 support an even better countermeasure – SameSite cookies.
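Such a probe might look like the sketch below. It must run outside a browser, since browsers won’t let page scripts set the Origin header themselves; the endpoint and form body are placeholders.

```typescript
// Replay a state-changing request with an off-origin Origin header.
// A server enforcing an Origin-based countermeasure should reject it.
async function probeOriginCheck(submitUrl: string, cookie: string): Promise<void> {
  const res = await fetch(submitUrl, {
    method: "POST",
    headers: {
      Cookie: cookie,
      Origin: "https://attacker.example", // clearly not the target's origin
      "Content-Type": "application/x-www-form-urlencoded",
    },
    body: new URLSearchParams({ email: "probe@example.com" }),
  });
  console.log(res.ok ? "accepted off-origin request" : `rejected (${res.status})`);
}
```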
WhiteHat Security described one way to narrow the scope of CSRF reporting from any form whatsoever to resources that carry risk for a user. I’ve slightly modified their three criteria to be resources:
- with a security context or that cross a security boundary, such as password or profile management
- that deliver an HTML injection (XSS) or HTTP response splitting payload to a vulnerable page on the target site. This answers the question for people who react to those vulns with, “That’s nice, but so what if you can only hack your own browser.” This seems more geared towards increasing the risk of a pre-existing vuln rather than qualifying it as a CSRF. We’ll come back to this one.
- where sensitive actions are executed, such as anything involving money, updating a contact list, or sending a message
There’s an interesting aspect in WhiteHat’s “benign” example. To summarize, imagine a site with a so-called non-obvious CSRF, one XSS vuln, one Local File Inclusion (LFI) vuln, and a CSRF-protected file upload form. The attack uses the non-obvious CSRF to exploit the XSS vuln, which in turn triggers the file upload to exploit the LFI. For example, the attacker creates the JavaScript necessary to upload a file and exploit the LFI, places this payload in an image tag on an unrelated domain, and waits for a victim to visit the booby-trapped page so their browser loads <img src="https://target.site/xss_inject.page?arg=payload">.
This attack was highlighted as a scenario where CSRF detection methods would usually produce false negatives because the vulnerable link, https://target.site/xss_inject.page, doesn’t otherwise affect the user’s security context or perform a sensitive action.
Let’s review the three vulns:
- Ability to forge a request to a resource, considered “non-obvious” because the resource doesn’t affect a security context or execute a sensitive action.
- Presence of HTML injection, HTTP Response Splitting, or other clever injection vuln in said resource.
- Presence of Local File Inclusion.
Using XSS to upload a file isn’t necessarily a vuln (the XSS is, but not the file upload). There’s nothing that says JavaScript within the Same Origin Rule (under which the XSS falls once it’s reflected) can’t use XHR to POST data to a file upload form. In this case it also doesn’t matter if the file upload form has CSRF tokens, because the code executes under the Same Origin Rule and therefore has access to the tokens.
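As an illustration, an injected payload along these lines (the field names and the upload endpoint are invented) could read the token straight out of the page before submitting:

```typescript
// Runs as injected script in the target's origin, so the Same Origin
// Rule grants it the page's cookies and any CSRF token in the DOM.
const tokenField = document.querySelector(
  'input[name="csrf_token"]',
) as HTMLInputElement;

const body = new FormData();
body.append("csrf_token", tokenField.value); // the token comes along for free
body.append("upload", new Blob(["...payload for the LFI..."]), "evil.txt");

// Same-origin POST to the upload handler; cookies ride along automatically.
fetch("/upload", { method: "POST", body });
```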
I think these two recommendations would be made by all and accepted by the site developers as necessary:
- Fix the XSS vulnerability using recommended practices (let’s just assume the arg variable is just reflected in xss_inject.page; a minimal encoding sketch follows this list).
- Fix the Local File Inclusion (by verifying file content, forcing MIME types, not making the file readable).
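For the reflected arg, the first recommendation boils down to encoding untrusted input for its output context before it’s written into the page. A minimal sketch of the HTML text-context case:

```typescript
// Encode untrusted input for an HTML text context before reflecting it.
function escapeHtml(value: string): string {
  return value
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// xss_inject.page would emit escapeHtml(arg) instead of the raw arg.
```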
But it was CSRF that started us off on this attack scenario. This leads to the question of how the “non-obvious” CSRF should be reported, especially from an automation perspective:
- Is a non-obvious CSRF vuln actually obvious if the resource has another vuln like XSS? Does the CSRF become non-reportable once the other vuln has been fixed?
- Should a non-obvious CSRF vuln be obvious if it has a query string or form fields that might be vulnerable?
If you already believe CSRF countermeasures should be on every page, then clearly you would have already marked the example vulnerable just by inspection, because it didn’t have an explicit countermeasure. But what about those who don’t follow the tenet that CSRF lurks everywhere? For example, maybe the resource doesn’t affect the user’s state or security context.
Think about pages that use “referrer” arguments. For example:
https://web.site/redir.page?url=https://from.here
In addition to possibly being an open redirect, these are prime targets for XSS with payloads like
https://web.site/redir.page?url=javascript:arbitrary_payload()
It seems that in these cases the presence of CSRF just serves to increase the XSS risk rather than being a vuln on its own. Otherwise, you risk producing too much noise by calling any resource with a query string vulnerable. In this case CSRF provides a rejoinder to the comment, “That’s a nice reflected XSS, but you can only hack yourself with it. So what.” Without the XSS vuln you probably wouldn’t waste time protecting that particular resource.
Look at a few of the other WhiteHat examples. They clearly fall into the category of CSRF vulns – changing a shipping address, password reset mechanisms.
What’s interesting is that they seem to require race conditions or to happen during specific workflows to be successful, e.g. execute the CSRF so the shipping address is changed before the transaction is completed. That neither detracts from the impact nor obviates it as a vuln. Instead, it highlights a more subtle aspect of web security: state management.
Let’s set aside malicious attackers and consider a beneficent CSRF actor. Our scenario begins with an ecommerce site. The victim, a lucky recipient in this case, has selected an item and placed it into a virtual shopping cart.
- The victim (lucky recipient!) fills out a shipping destination.
- The attacker (benefactor!) uses a CSRF attack to apply a discount coupon.
- The recipient supplies a credit card number.
- Maybe the web site is really bad and the benefactor knows that the same coupon can be applied twice. A second CSRF applies another discount.
- The recipient completes the transaction.
- Our unknown benefactor looks for the new victim of this CSRF attack.
I chose this Robin Hood-esque scenario to take your attention away from the malicious attacker/victim formula of CSRF to focus on the abuse of workflows.
A CSRF countermeasure would have prevented the discount coupon from being applied to the transaction, but that wouldn’t fully address the underlying issues here. Consider the state management for this transaction.
One problem is that the coupon can be applied multiple times. During a normal workflow the site’s UI leads the user through a check-out sequence that must be followed. On the other hand, if the site only prevented users from revisiting the coupon step in the UI, then the site’s developers have forgotten how trivial it is to replay GET and POST requests. This is an example of a state management issue where an action that should be performed only once can be executed multiple times.
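The fix is to make the one-time rule explicit in the server-side state rather than in the UI. A sketch along these lines (Express-style; the route, storage, and field names are invented for illustration):

```typescript
import express from "express";

const app = express();
app.use(express.urlencoded({ extended: false }));

// Invented storage; a real app would keep this in the order record itself.
const appliedCoupons = new Map<string, Set<string>>(); // orderId -> coupon codes

app.post("/order/:id/coupon", (req, res) => {
  const code = String(req.body.code ?? "");
  const used = appliedCoupons.get(req.params.id) ?? new Set<string>();

  // Enforce "at most once" on the server: replaying the POST is trivial,
  // so the state object itself must refuse the repeat.
  if (used.has(code)) {
    res.status(409).send("coupon already applied");
    return;
  }
  used.add(code);
  appliedCoupons.set(req.params.id, used);
  res.send("coupon applied");
});
```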
A less obvious problem of state management is the order in which the actions were performed. The user submitted a discount coupon in two different steps: right after the shipping destination and right after providing payment info. In the UI, let’s assume the option to apply a discount shows up only after the user provides payment information. A strict adherence to this transaction’s state management should have rejected the first discount coupon since it arrived out of order.
Sadly, we have to interrupt this thought to address the real-world challenges of web apps. I’ve defined a strict workflow as (1) shipping address required, (2) payment info required, (3) discount coupon optional, (4) confirm transaction required. A site’s UI design influences how strictly these steps will be enforced. For example, the checkout process might be a single page that updates with XHR calls as the user fills out each section in any order. Conversely, this single-page checkout might enable each step only as the user completes the previous one in order.
UI enforcement cannot guarantee that requests arrive in order. This is where decisions have to be made regarding how strictly the sequence is to be enforced. It’s relatively easy to have a server-side state object track these steps and only update itself for requests in the correct order. The challenge is keeping the state flexible enough to deal with users who abandon a shopping cart, or decide at the last minute to add another item before completing the transaction, or a multitude of other actions that affect the state. These aren’t insurmountable challenges, but they induce complexity and require careful testing. This trade-off between coarse state management and granular control is more a balance of correctness than of security. You can still have a secure site if steps can be performed in the order 3, 1, 2, 4 rather than the expected 1, 2, 3, 4.
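One way to sketch such a state object is as a small state machine that accepts steps only in an allowed order; deciding how lenient those transitions should be is exactly the correctness-versus-complexity trade-off just described. The step names below mirror the workflow above:

```typescript
// Server-side state object for the checkout workflow. It advances only
// when a request arrives in an allowed order; anything else is rejected.
type Step = "shipping" | "payment" | "coupon" | "confirm";

class CheckoutState {
  // "coupon" is optional, so "confirm" may follow "payment" or "coupon".
  private static next: Record<Step, Step[]> = {
    shipping: ["payment"],
    payment: ["coupon", "confirm"],
    coupon: ["confirm"],
    confirm: [],
  };
  private completed: Step[] = [];

  apply(step: Step): boolean {
    const last = this.completed[this.completed.length - 1];
    const allowed: Step[] = last ? CheckoutState.next[last] : ["shipping"];
    if (!allowed.includes(step)) return false; // out of order: reject
    this.completed.push(step);
    return true;
  }
}

const checkout = new CheckoutState();
console.log(checkout.apply("coupon"));   // false: arrived before payment info
console.log(checkout.apply("shipping")); // true: the expected first step
```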
CSRF is about requests made in the victim’s session context by the victim’s browser on behalf of the attacker (initiated from an unrelated domain) without the victim’s interaction. If a link, iframe, image tag, or JavaScript causes the victim’s browser to make a request that affects that user’s state on another web site, then the CSRF attack succeeded. The conceptual way to fix CSRF is to identify forged requests and reject them. CSRF tokens are intended to identify legitimate requests because they’re a shared secret between the site and the user’s browser. An attacker who doesn’t know the secret can’t forge a legitimate request.
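In practice that shared secret is usually a random per-session value compared with a constant-time check on every state-changing request. A sketch using Node’s crypto module (the session plumbing is assumed):

```typescript
import { randomBytes, timingSafeEqual } from "node:crypto";

// Issue a per-session token: unguessable, and stored server-side so the
// submitted copy can be checked against it later.
function issueToken(session: { csrfToken?: string }): string {
  session.csrfToken = randomBytes(32).toString("hex");
  return session.csrfToken;
}

// Compare the submitted token against the session's copy in constant
// time, so the comparison itself doesn't leak the secret.
function verifyToken(session: { csrfToken?: string }, submitted: string): boolean {
  const expected = session.csrfToken;
  if (!expected || submitted.length !== expected.length) return false;
  return timingSafeEqual(Buffer.from(expected), Buffer.from(submitted));
}
```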
These attacks highlight the soft underbelly of web app state management mechanisms.
Automated scanners should excel at scalability and consistent accuracy, but woe to those who believe they fully replace manual testing. Scanners find implementation errors like forgetting to use prepared statements or not encoding output placed in HTML, but they struggle with understanding design flaws. Complex interactions are more easily understood and analyzed through manual testing.
CSRF stands astride this gap between automation and manual testing. Automation identifies whether an app accepts forged requests, whereas manual testing can delve deeper into underlying state vulns or chains of exploits that CSRF might enable.