
BayThreat 2012 WebSocket Presentation

BayThreat held its 2012 conference this December in Sunnyvale, CA. Yes, I was sorely disappointed it wasn’t actually in Sunnydale (with a ‘d’).

My colleagues, @sshekyan and @tukharian, and I gave an overview on the security of WebSockets. The presentation slides are available now.

Reading slides is always a hazardous approach to understanding a presentation. You may be completely lost because you don’t have the benefit of background given during the talk, miss key points conveyed orally, or misunderstand comments. When a recording is available I’ll add a link to it.

In the meantime, here’s a rough script for the introduction of the topic. (Usually I try to just jot down a handful of notes, but this post needed some more words. Keep in mind the intro is supposed to clock well under 10 minutes. It’s brief by intent.)

1 Hello, BayThreat. Hello, WebSockets.

2 The length of time we have to cover WebSockets is about the same as a Saturday morning cartoon from the 80s. So, we’ll skip the commercial breaks and, kind of like Voltron, bring together a bunch of ideas and observations about this cool, new protocol.

3 In the past, web apps have found many legitimate use cases for persistent connections and two-way communication. HTTP doesn’t support this in any efficient or optimal way for continuous traffic. Long polling, XHR, and DOM tricks are all workarounds, and like the plans of a cartoon villain (Skeletor, Hordak, Cobra Commander, Mumm-Ra), they never seem to work like you want them to.

4 That doesn’t mean there aren’t good techniques for two-way communication; they just aren’t as good as they could be in terms of bandwidth or handling non-text data.

HTML5 introduced Server-Sent Events to accommodate most long-polling scenarios in which the browser just needs to wait for incoming data. For example, stock quotes, tweets, or some other data for which the browser is a passive observer rather than an active participant.

5 Which brings us to RFC 6455, WebSockets. It’s a way to encapsulate almost any protocol on top of RFC 2616 (HTTP) without being as messy as RFC 1149.

6 There are two major components to WebSockets: the communication protocol, designed for low overhead and low complexity, and the JavaScript API, designed to be a simple interface for the browser. Send with the send() method, receive with the onmessage event handler, and a close() method plus onopen, onclose, and onerror handlers round out the object.
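The whole API surface fits in a few lines. Here’s a minimal sketch; the WebSocket constructor is passed in so the wiring can be exercised without a network, and the URL and `log` sink are illustrative, not part of the real API.

```javascript
// Wire up the handful of handlers the WebSocket API exposes.
function openChannel(WS, url, log) {
  const ws = new WS(url);
  ws.onopen = () => ws.send('hello');      // fires once the handshake completes
  ws.onmessage = (e) => log.push(e.data);  // incoming frames arrive here
  ws.onerror = () => log.push('error');
  ws.onclose = () => log.push('closed');
  return ws;
}
```

In a browser you’d simply call `new WebSocket('wss://example.invalid/chat')` and attach the same handlers.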

7 So, let’s take a look at how the browser sets up a WebSocket connection. First, it goes through a challenge-response handshake. The challenge is 128 random bits (that’s a 16d8 for any role-players out there — about 2 1/2 flamestrikes for you 1st ed. AD&D clerics).

8 Next, the server responds. The response is a hash of the client’s challenge plus a GUID defined by the RFC. The response also includes a “Connection: Upgrade” header so the two endpoints know to stop talking HTTP and start talking WebSockets. A while ago Carnegie Mellon and Google did a study with Chrome users and discovered that many proxies would strip that Connection header, thus killing the handshake process. However, wss: connections (that is, WebSockets over SSL/TLS) were mostly immune to interference from proxies. So, not only will you create a more secure connection, you’ll be more likely to create the connection in the first place.

9 An important thing to understand about the handshake is that it proves the server speaks the protocol. That’s it. It’s not a cryptographic handshake to prove identity or trust. And another point, the handshake isn’t supposed to start within mixed content. Also, the browser is supposed to limit pending connections to one per origin in order to prevent connection floods.

10 Once the browser and server complete the handshake, they switch from HTTP to sending WebSocket data frames. A data frame’s overhead can be as low as two bytes — flags and a length. The maximum overhead is 14 bytes, so we’re still talking an order of magnitude improvement over something like XHR or vanilla HTTP.

Two notable parts of a data frame are the variable length fields and the mask.

11 The length of a payload is indicated by a 7-, 16-, or 64-bit value. A length is supposed to be represented in its shortest form. However, if you think of similar situations like overlong UTF-8 encoding, you can imagine how variable-length fields might become a source of problems in terms of bypassing detection mechanisms or sneaking across security boundaries. And, of course, you have the potential for buffer abuse. You could submit a data frame that declares a few gigs of payload but only carries a few bytes. If a server pre-allocates memory based on the declared length alone (without, for example, even validating the frame), then it’d be an easy DoS. In other instances the server might experience a buffer over- or under-run.
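Here’s a sketch of decoding the length field, and a natural place to hang the sanity checks argued for above. `buf` is assumed to start at the second byte of the frame header:

```javascript
// Decode a frame's payload length per RFC 6455: 7 bits for 0-125,
// 126 => 16-bit extended length, 127 => 64-bit extended length.
function payloadLength(buf) {
  const len7 = buf[0] & 0x7f;  // low 7 bits; the high bit is the mask flag
  if (len7 <= 125) return { length: len7, headerBytes: 1 };
  if (len7 === 126) return { length: buf.readUInt16BE(1), headerBytes: 3 };
  // 127: 64-bit extended length. A careful server rejects non-minimal
  // encodings and never pre-allocates memory on the frame's say-so.
  return { length: Number(buf.readBigUInt64BE(1)), headerBytes: 9 };
}
```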

12 Data masking is an aspect of WebSockets that’s transparent to the user. It’s intended to prevent WebSockets from being a vector for cross-protocol attacks. For example, you wouldn’t want WebSocket data to corrupt a proxy cache by impersonating HTML content, nor would you want a WebSocket to start spitting out EHLO commands to an SMTP server in order to spam the internet. (By the way, the handshake is also a countermeasure.)

The browser applies a mask to all outgoing WebSocket frames. It’s not encryption. The 32-bit “key” for the mask is included in the data frame. In other words, you’re essentially sending the “decryption” key along with the message. The user isn’t able to influence or access the mask from the JavaScript API.
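Masking is just a repeating four-byte XOR, which also makes clear why it isn’t encryption. A sketch; since XOR is its own inverse, the same function unmasks:

```javascript
// Apply (or remove) the RFC 6455 client-to-server mask: each payload
// byte is XORed with the 4-byte key, cycling through the key.
function maskPayload(payload, key) {
  const out = Buffer.alloc(payload.length);
  for (let i = 0; i < payload.length; i++) {
    out[i] = payload[i] ^ key[i % 4];
  }
  return out;
}
```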

(As an aside, I created this figure with a dozen or so lines of a Scapy script, which speaks to how simple the protocol is to parse.)

13 The browser controls a few other things that are kept out of view from the JavaScript API. It handles “ping” and “pong” frames for connection keep-alives. It’s also supposed to limit the verbosity of error reporting to prevent WebSockets from being a better host or port “scanner” than previous techniques like using XHR or src attributes of img or iframe tags.

14 Finally, remember that upgrading a connection from HTTP to WebSockets loses a certain amount of security context. The handshake itself carries information like Authentication headers, cookies, CORS, and the Origin header. However, you have to be careful about how you track and maintain this context after switching protocols. Imagine a simple chat application. It’s one thing to have a session cookie identify a user’s HTTP connection. But if your chat protocol relies solely on strings or JSON structures to identify the sender and recipient of messages, then it’s probably a short step to spoofing messages unless you remember to enforce server-side controls on the user’s state and chat interactions.

With that groundwork out of the way, let’s turn the channel to more educational programs on the security of WebSockets.

… end part I …

And with that we’ve exhausted the article. Keep an eye on this site for more updates on WebSocket security.

HTML5 Unbound, part 4 of 4

(The series concludes today with guesses about the future of web security. The first, second, and third parts have been published as well as the accompanying slides.)

Design, Doom & Destiny

Mobile devices and apps change the way we consume the web. Even native mobile apps connect to URLs or access web-based APIs.

Who cares about the URL anymore? From QR codes to small screens, there’s minimal real estate to show off a complete link. And all of the padlocks, green text for EVSSL certs, and similar hints are barely useful for desktop browsers.

Mobile app security is currently a nightmare of assorted markets, outright malicious apps, and poorly crafted apps. Mobile apps that interface to web sites don’t always do so securely. And it’s impossible to distinguish bad apps at a glance. Apps that interact with web APIs often employ browser-like features, not a browser. There’s a subtle, but important distinction there.

For example, Twitter’s API is accessible via HTTP or HTTPS. If you use the Twitter web site you can set your account to always use HTTPS. Sadly, that doesn’t affect how apps might use the API. The official Twitter app uses HTTPS. It also refuses to connect if it receives a certificate error. On the other hand, other apps may connect over HTTP, or use HTTPS but not bother to apply certificate validation. A year ago, in 2011, Twitter listed some third-party software projects that used OAuth and Twitter APIs. They ranged in languages from Java to C++ to PHP. Three out of four didn’t even use https:// links. The one that did use https:// didn’t bother to verify the server’s cert.

Update January 15, 2014: Twitter now requires HTTPS access to its API.

Personal data is valuable. In Silicon Valley, the dollar is made of people. We could debate the pros and cons of compliance standards like PCI for credit cards. In an age where companies have billion-dollar valuations based on their user bases, it should be evident that credit cards aren’t the only kind of data coveted by hackers and companies alike. In the last few years several companies have been forced to apologize for privacy breaches or abuse. Set aside the concern about compromise; that was behavior deemed “normal” by the companies in question. Normal until exposed to the scrutiny of the masses.

Privacy is an area where HTML5 has tried to balance reasonable design with features users and devs expect from native mobile apps. Before geolocation started showing up in phones or browsers, people could still be tracked by geolocation data embedded by cameras in digital photos’ EXIF information. Browser plugins have been abused to create supercookies. It was up to Flash to enforce a privacy policy for its equivalent of the cookie, the Local Shared Object. And, of course, HTML5 has the Web Storage API.

Privacy has to be an area where browsers take the lead on implementing features like providing clear controls for objects stored in a browser and privacy-related headers like Do-Not-Track. The DNT header is another example of browser developers pushing for a new technology, but meeting resistance due to technical as well as business concerns. For example, if 98% of your company’s revenue depends on tracking technologies, then you’ll be sensitive to features like this. Perhaps even reluctant to implement it.

From a design perspective, HTML5 offers many new features that make it easier for web developers to create powerful, useful apps without sacrificing security. The various specs around HTML5 even carry warnings and recommendations about security and privacy for these new technologies.

The implementation of HTML5 will occasionally run into flaws, but that’s to be expected of any large software engineering effort. The use of iframes and sharing resources across documents will likely be a source of problems. At the very least, think in terms of balancing information leakage around framed content. On one hand, it might be desirable to prevent a framed advertising banner from knowing the origin that has framed it. But for sites that aggregate functions (we used to call them mashups), this kind of parent or child node information might be useful. And there’s the challenge of making sure a node’s Origin attribute remains stable and correct as complex JavaScript moves it around the DOM, removes it from the DOM, or tries to keep it orphaned and running in the background.

And finally, the password problem has yet to be solved. Regardless of a site’s backend security and use of HTTPS, look how prevalent it is to send the plaintext password from browser to server. That’s begging for some kind of challenge-response that provides better confidentiality for the password. But doing so likely requires browser support and careful design so that there’s clarity around what threats a challenge-response would mitigate and those it wouldn’t. It’s unlikely phishing will disappear any time soon.

There are other positive steps towards password management in the form of OAuth and OpenID. These solutions shift the burden of authentication management from the site to a trusted third-party site. But again we could come up with new threats that this may introduce. For example, strong password security behavior reinforces the idea that you should verify that you’re typing credentials for a site into a form that is served from that site. With OAuth, we’re adjusting the behavior by showing users it’s acceptable to enter important credentials (Google, Facebook, Twitter, etc.) for an unrelated site. There are always going to be engineering problems that don’t have complete technical solutions. Even as users need to remain vigilant about protecting their passwords, developers still need to treat OAuth tokens and the site’s session tokens securely, just as they would passwords.

There are also always going to be ways that secure technologies are used insecurely. HTML5 has done a good job of providing security restrictions and security recommendations. Developers shouldn’t ignore them. Nor should developers forget key principles of secure design, from understanding that HTTPS everywhere is good for users’ privacy but has no bearing on SQL injection or XSS, to maintaining authentication and authorization checks on server-side resources even if the client has similar checks.

Adopt HTML5 now. Start by leading your pages with <!doctype html>. Push your visitors to use a modern web browser. If you don’t have to support IE6, then why bother going through the pain of creating markup for an obsolete browser? Apply trivial headers that only require server-side changes. X-Frame-Options blocks clickjacking (for visitors using browsers that support it). HSTS minimizes sniffing and intermediation threats; we’ll still need secure DNS to make it complete. We can’t just rely on browsers to become better. Sites need to keep up with security improvements. Try enabling only TLS 1.1 and 1.2. See how many sites fail for you.

HTML5 is the Promethean technology of the web’s future. Preserving the security and privacy of data, whether from a mobile app or an HTML5 web site, should be the driving force behind the app’s design. The implementation of your site and how it applies HTML5’s features to user data will determine security. Don’t rely on a standards body or browser security to do it for you.

HTML5 Unbound, part 3 of 4

(With the historical perspective behind us, we dive into HTML5. This series concludes on Wednesday.)

Security (and Privacy) From HTML5

Most HTML5 security checklists rehash the recommendations and warnings from the specs themselves. It’s always a good sign when specs acknowledge security and privacy. Getting to that point isn’t trivial. There were two detours on the way to HTML5. WAP was a first stab at putting the web on mobile devices when mobile devices were dumb. And one of its first failings was the lack of cookie support.

XHTML was another blip on the radar. Its only improvement over HTML seemed to be that mark-up could be parsed under a stricter XML interpreter so typos would be more easily caught. XHTML caught on as a cool thing to do, but most sites served it with a text/html MIME type that completely negated any difference from HTML in the first place. Herd mentality ruled the day on that one.

CSRF and clickjacking are called out as security concerns in the HTML5 spec. For some developers, that may have been the first time they heard about such vulns even though they’re fundamental to how the web works. They’re old, old vulns. The good news is that HTML5 has some design improvements that might relegate those vulns to history.

The <video> element doesn’t speak to security; it highlights the influence of non-technical concerns on a standard. The biggest drama around this element was choosing whether an explicit codec should be mandated.

WebGL is an example of pushing beyond the browser into graphics cards. The hardware for these cards doesn’t care about the Same Origin Policy, or even security for that matter. Early versions of the spec had two major problems: denial of service and information leakage. It was refreshing to see privacy (information leakage) receive such attention. As a consequence of these risks, browsers pulled support. Early implementation allowed researchers to find these problems and improve WebGL. Part of its revision included attachment to another HTML5 security policy: Cross Origin Resource Sharing (CORS).

Like WebGL, the WebSocket API is another example where browsers implemented an early draft, revoked support due to security concerns, and now offer an improved version. For example, the WebSocket protocol includes a handshake and masking to prevent the kind of cross-protocol attacks that caused early web browsers to block access to ports like SMTP and telnet.

These examples show us a few things. One, we shouldn’t be surprised at the tensions from competing desires during the drafting process. Two, secure design takes time. (Remember PHP?) And three, browser developers are pushing the curve on security.

It’s only a matter of time before XSS rears its ugly head during a discussion of web security. After all, HTML injection has tormented developers from the beginning. Early examples of malicious HTML used LiveScript, the ancestral title to JavaScript. In 1995 Netscape offered a Bug Bounty for its browser. The winning exploit exposed a privacy hole and netted $1000. Interestingly, the runner up was a crypto timing attack that could, for example, reveal the secret key of an SSL server. Even if RSA has a secure design in terms of cryptographic primitives, vulns will appear in its implementation. That was merely a hint of the trouble to come for SSL/TLS.

Anyway, that was a nice $1000 bug in 1995. HTML injection continued to grow, with one of the first hacks demonstrated against a web-based email system in 1998. Behold, the mighty <img> tag using a javascript: URI to pop up a login prompt. That was just a few years after the term phishing had been coined.

So is there really an HTML5 injection? What terrible flaws does the new standard contain that its predecessors did not?

Not much. An important improvement from HTML5 is that parsing HTML documents is codified with instructions on order of operations, error handling, and fixup steps. A large portion of XSS history involves payloads that exploit browser quirks or bizarre parsing rules.

A key component to the infamous Samy worm’s success was Internet Explorer’s “fix up” of a javascript: token split by a newline character (i.e. java\nscript) to a single, valid URI. A unified approach to parsing HTML should minimize these kinds of problems, or at least make it easier to test for them. Last year a bug was found in Firefox’s parsing of HTML entities when a NULL byte (%00) was present. That was an implementation error; HTML5 actually provides instructions on how that entity should have been handled. The persistent danger will be a browser’s legacy support and non-standards (or relaxed standards) mode.

Sites that have weak blacklisting will suffer the most from the arrival of HTML5. HTML5 has new elements and new attributes that provide JavaScript execution contexts. If your site relies on fancy regexes to strip out all the cool hacks from XSS cheat sheets you’ve been scouring, then it’s still likely to miss the new tags of HTML5.

The initial excitement around HTML5-based XSS was the autofocus attribute. A common reflection point for HTML injection is the value of an <input> element. Depending on the kind of payload injected, an exploit would require the victim to perform some action (submit the form, click a field, etc.). The autofocus attribute lets an exploit automatically execute JavaScript tied to an onfocus or onblur event.
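The payload shape is worth seeing concretely. A sketch of the vulnerable pattern; `buildSearchBox` is a stand-in for server code that echoes a parameter unescaped, and `alert(1)` is the usual proof-of-concept:

```javascript
// A reflected-injection point: the query string lands in value="..."
// with no escaping.
function buildSearchBox(q) {
  return '<input type="text" name="q" value="' + q + '">'; // no escaping!
}

// Break out of the value attribute, then add a second input that
// focuses itself; onfocus fires with no user interaction at all.
const payload = '"><input autofocus onfocus="alert(1';
const markup = buildSearchBox(payload);
```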

There’s a cynical perspective that HTML5 will bring a brief period of worse XSS problems, caused by developers who embrace HTML5’s enhanced form validation while forgetting to apply server-side validation. There’s nothing misleading about HTML5’s approach to this. More pre-defined <input> types and client-side regexes improve the user experience. It’s not intended to be a security barrier. It’s a usability enhancement, especially for browsers on mobile devices.

HTML5 offers distressingly few ways to minimize the impact of XSS attacks: <iframe> sandboxing and Cross Origin Resource Sharing controls. They help, but they don’t fundamentally change the design of the Same Origin Policy, which has the drawback that all content within an Origin receives equal treatment. Rather than providing a design of least privilege access, it’s a binary all-or-nothing privilege. That’s unappetizing for modern web apps that wish to implement everything from mashups to advertising to running third-party JavaScript within a trusted Origin.

The Content Security Policy (CSP) introduces design-level countermeasures for vulns like XSS. CSP moved from a Mozilla project to a standards track for all browsers to implement. A smart design choice is providing monitor and enforcement modes. Its implementation will likely echo that of early web app firewalls: CSP’s complexity has the potential to break sites, so expect monitor mode to last for quite a while before sites start enforcing rules. The ability to switch between monitor and enforce is a sign of design that encourages adoption: make it easier for devs to test policies over time.
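In header terms, the two modes differ only in the header name. A hypothetical minimal policy (the /csp-report endpoint is illustrative):

```http
Content-Security-Policy-Report-Only: default-src 'self'; report-uri /csp-report
Content-Security-Policy: default-src 'self'; report-uri /csp-report
```

The first line reports violations to the endpoint without blocking anything; the second enforces the same policy.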

HTML injection deserves emphasis since it’s the most pervasive problem for web apps. But it’s not the only problem for web apps. Other pieces of HTML5 have equally serious concerns.

The Web Storage API adds key-value storage to the browser. It’s effectively a client-side database. Avoid the immediate jump to SQL injection whenever you hear the word database. Instead, consider the privacy implications of Web Storage. We must be concerned about privacy extraction, not SQL injection. Web Storage has already been demonstrated as yet another tool for insinuating supercookies into the browser. In an era when developers still neglect to encrypt passwords in server-side databases, consider the mistakes awaiting data placed in browser databases: personal information, credit card numbers, password recovery, and more. And all of this just an XSS away from being exfiltrated. XSS isn’t the only threat. Malware has already demonstrated the inclination to scrape hard drives for financial data, credentials, keys, etc. An unencrypted store of 5MB (or more!) data is an appealing target. Woe to the web developer who thinks Web Storage is a convenient place to store a user’s password.

The WebSocket API entails a different kind of security. The easy observation is that it should use wss:// rather than ws://, just as HTTPS should be everywhere. The subtler problem lies with the protocol layered over a WebSocket connection.

Security from controls like HTTPS, Same Origin, and session cookies don’t automatically transfer to WebSockets. For example, consider a simple chat protocol. Each message includes the usernames for sender and recipient. If the server just routes messages based on usernames without verifying the sender’s name matches the WebSocket they initiated, then it’d be trivial to spoof messages. Or consider if the app does verify the sender and recipient, but users’ session cookies are used to identify them. If the recipient receives a message packet that contains the sender’s session ID — well, I hope you see the insecurity there.
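The fix for the first case is simple to sketch: trust the identity bound to the connection at handshake time, never the packet’s “from” field. The names here (`connUser`, `routeMessage`, the packet shape) are hypothetical, not any real chat protocol:

```javascript
// connUser maps a socket to the username authenticated during the
// HTTP handshake (cookie, OAuth token, etc.), before the upgrade.
function routeMessage(connUser, socket, packet) {
  if (packet.from !== connUser.get(socket)) {
    return { ok: false, reason: 'spoofed sender' }; // drop, or close the socket
  }
  return { ok: true, to: packet.to, text: packet.text };
}
```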

If there’s one victim of the HTML5 arms race, it’s the browser exploit. Not that exploits have disappeared, but they’ve become more complex. A byproduct of keeping up with (relatively) quickly changing drafts is that modern browsers update faster. More importantly, self-updating browsers share a set of features like plugin sandboxing, process separation, and even rudimentary XSS protection. Whatever your choice of browser, the only version number you need any more is HTML5.

That’s the desire. In practice, accelerating browser updates isn’t going to adversely affect the Pwn2Own and exploit communities any time soon. IE6 refuses to disappear from the web. Qualys’ BrowserCheck stats show that browsers still tend to be out of date. Worse, plugins remain out of date even when the browser is patched. In other words, Flash and Java deserve the finger-pointing for exposing security holes. When was the last time Adobe released a non-critical Flash update?

Browser security isn’t restricted to internal code. A header like X-Frame-Options offers an easy defense against clickjacking, and it matters all the more because new HTML5 capabilities like the sandbox attribute for iframes can defeat the JavaScript-based frame busters sites once relied on to block clickjacking. With one fell swoop of security design (adding a single header at your web server), it should be possible to get rid of an entire class of vulnerability. The catch is getting sites to implement it.

The browser needs the complicity of sites in order for a feature like X-Frame-Options to matter. It’s one thing to scrutinize the design of a half-dozen or so web browsers. It’s quite another to consider the design of millions and millions of web sites.

There is a looming XSS threat, but it’s a byproduct of the ecosystem building around HTML5. Heavy JavaScript libraries have become major components of modern web apps. JavaScript is a challenging environment for security. Its interaction with the DOM is restricted by the Same Origin Policy; on the other hand, its prototype-based design and global namespace offer little internal isolation.

JavaScript libraries are great. They reinforce good programming patterns and provide functionality that would otherwise have to be created from scratch. The flip side of libraries is that they offer additional exploit vectors and need to be maintained.

Let’s return to the idea of blacklists to discuss the other insidious aspect of XSS. These libraries also have functions that expose eval(), DOM manipulation, and XHR calls, among others. By no means is there anything insecure or inadvisable about this. All it does is magnify the impact of an XSS vuln that already exists on the site, and that vuln isn’t likely to come from the JavaScript library itself.

HTML5 Unbound, part 2 of 4

(The series continues with a look at the relationship between security and design in web-related technologies prior to HTML5. Look for part 3 on Monday.)

Security From Design

The web has had mixed success with software design and security. Before we dive into HTML5 consider some other web-related examples:

PHP superglobals and the register_globals setting exemplify the long road to creating something that’s “default secure.” PHP 4.0 appeared 12 years ago in May 2000, just on the heels of HTML4 becoming official. PHP allowed a class of global variables to be set from user-influenced values like cookies, GET, and POST parameters. What this meant was that if a variable wasn’t initialized in a PHP file, a hacker could set its initial value just by including the variable name in a URL parameter. (Leading to outcomes like bypassing security checks, SQL injection, and accessing other users’ accounts.) Another problem with register_globals was that it was a run-time configuration controllable by the system administrator. In other words, code secure in one environment (secure, but poorly written) became insecure (and exploitable) simply by a site administrator switching register_globals on in the server’s php.ini file. Security-aware developers tried to influence the setting from their code, but that created new conflicts. You’d run into situations where one app depended on register_globals behavior while another required it to be off.

Secure design is far easier to discuss than it is to deploy. It took two years for PHP to switch the default value to off. It took another seven to deprecate it. Not until this year was it finally abolished. One reason for this glacial pace was PHP’s extraordinary success. Changing default behavior or removing an API is difficult when so many sites depend upon it and programmers expect it. (Keep this in mind when we get to HTML5. PHP drives many sites on the web; HTML is the web.) Another reason for this delay was resistance by some developers who argued that register_globals isn’t inherently bad, it just makes already bad code worse. Kind of like saying that bit of iceberg above the surface over there doesn’t look so big.

Such attitudes allow certain designs, once recognized as poor, to resurface in new and interesting ways. Thus, “default insecure” endures. The Ruby on Rails “mass assignment” feature is a recent example. Mass assignment is an integral part of Ruby’s data model. Warnings about the potential insecurity were raised as early as 2005 – in Rails’ security documentation no less. Seven years later in March 2012 a developer demonstrated the hack against the Rails paragon, GitHub, by showing that he could add his public key to any project and therefore impact its code. The hack provided an exercise for GitHub to embrace a positive attitude towards bug disclosure (eventually). It finally led to a change in the defaults for Ruby on Rails.

SQL injection has to be mentioned if we’re going to talk about vulns and design. Prepared statements are the easy, recommended countermeasure for this vuln. You can pretty much design it out of your application. Sure, implementation mistakes happen and a bug or two might appear here and there, but that’s the kind of programming error that happens because we’re humans who make mistakes. Avoiding prepared statements is nothing more than advanced persistent ignorance of at least six years of web programming. A tool like sqlmap stays alive for so long because developers don’t adopt basic security design. SQL injection should be a thing of the past. Yet “developer insecure” is eternal.
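The design difference is easy to show without a real database. An illustrative contrast (the table and column names are made up): concatenation bakes attacker input into the statement, while a prepared statement keeps it as data.

```javascript
// The vulnerable pattern: user input becomes part of the SQL text.
function concatenated(name) {
  return "SELECT * FROM users WHERE name = '" + name + "'";
}

// The prepared-statement pattern: the SQL text is fixed; input travels
// separately as a bound parameter and never reaches the SQL parser.
function prepared(name) {
  return { sql: 'SELECT * FROM users WHERE name = ?', params: [name] };
}

const evil = "x' OR '1'='1";
// concatenated(evil): the WHERE clause now matches every row.
// prepared(evil): the quotes stay inert inside a parameter value.
```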

But I’m not bringing up SQL injection to rant about its tenacious existence. Like the heads of the hydra, where one SQL injection is gone, another will take its place. The NoSQL (or anything-but-a-SQL-server) movement has the potential to reinvent these injection problems. Rather than SELECT statements, developers will be crafting filters with JavaScript, effectively sending eval() statements between the client and server. This isn’t a knock against choosing JavaScript as part of a design, it’s the observation that executable code is originating in the browser. When code and data mix, vulns happen.

Then there’s JavaScript itself. ECMAScript, for the purists out there. At a high level, JavaScript’s variables exhibit the global scope of PHP’s superglobals. Its prototype system is reminiscent of Rails’ mass assignment. Its eval() function wreaks the same havoc as SQL or command injection. And we need it.

JavaScript is fundamental to the web. Fundamental to HTML5. And for all the good it brings to the browsing experience, some unfortunate insecurities lurk within it. Forget VBScript, skip Google’s Dart, JavaScript is the language for browser computing. Good enough, in fact, that it has leapt the programming chasm from the browser to server-side code. If we were to tease developers that PHP stood for Pretty Horrible Programming, then Node must stand for New Orders of Developer Error. Note the blame for insecurity falls on the developers, not the choice of programming language or technology. (Although you’d be crazy to expose a node.js server directly to the internet.)

Basic web technologies didn’t start off much better than the server-side technologies we’ve just sampled. Cookies grew out of implementation, not specification. That’s one reason for their strange relationship with browser security. Cookies have a path attribute that’s effectively useless; it’s an ornamentation that has no bearing on Origin security. The httponly and secure attributes affect their availability to JavaScript and http:// schemes. Sometimes you have access to a cookie; sometimes you don’t. These security controls differ from other internal browser security models in that they rely on domain policy rather than Origin policy. Many of HTML5’s features are tied to the Same Origin Policy rather than a domain policy because the Origin has more robust integration throughout the browser.

The Same Origin Policy is a cornerstone of browser security. One of its drawbacks is that it permits pages to request any resource — which is why the web works in the first place, but also why we have problems like CSRF. Another drawback is that sites have had to accept its all-or-nothing approach — which is why problems like XSS are so worrisome.

User Agent sniffing and HTTPS are two other examples of behavior that’s slow to change. Good JavaScript programming patterns prefer feature detection rather than making assumptions based on a single convoluted string. In spite of the problems around SSL/TLS, there’s no reason HTTP should be the default connection type for web sites. Using HTTPS places a site — and its users — in a far stronger security stance.
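Feature detection means asking for the capability directly instead of parsing navigator.userAgent for a browser name. The helper below takes the global object as a parameter so the sketch stays testable outside a browser:

```javascript
// Detect WebSocket support by checking for the constructor itself,
// whatever the browser calls itself in its User-Agent string.
function supportsWebSockets(globalObj) {
  return typeof globalObj.WebSocket === 'function';
}

// In a browser: if (supportsWebSockets(window)) { /* open a socket */ }
supportsWebSockets({ WebSocket: function () {} }); // true
supportsWebSockets({});                            // false
```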

HTML5 Unbound, part 1 of 4

(This is the first part in a series of articles that accompany my Security Summit presentation, HTML5 Unbound: A Security & Privacy Drama. Check back Friday for part 2.)

The Meaning & Mythology of HTML5

HTML5 is the most comprehensive update in the last 12 years to a technology that’s basically twenty years old. It’s easy to understand the excitement over HTML5 by looking at the scope and breadth of the standard and its related APIs. It’s easy to understand the significance of HTML5 by looking at how many sites and browsers implement something that’s officially still in draft.

It’s also easy to misunderstand what HTML5 means for security. Is it really a whole new world of cross-site scripting? SQL injection in the browser? DoS attacks with Web Workers and WebSockets? Is there something inherent to its design that solves these problems? Or worse, does it introduce new ones?

We arrive at some answers by looking at the history of security design on the web. Other answers require reviewing what HTML5 actually encompasses and the threats we expect it to face. If we forget to consider how threats have evolved over the years, then we risk giving a thumbs up to a design that merely works against hackers’ rote attacks rather than their innovation.

First let’s touch on the meanings of HTML5. A simple definition is a web page with a <!doctype html> declaration. In practice, this means new elements for interacting with audio and video; elements for drawing, layout, and positioning content; as well as APIs that have their own independent specifications. These APIs emerge from a nod towards real-world requirements of web applications: ways to relax the Same Origin Policy (without resorting to insecure programming hacks like JSONP), bidirectional messaging (without resorting to programming hacks like long-polling), increased data storage for key/value pairs (without resorting to programming hacks that use cookies or plugins). It also includes items like Web Workers to help developers efficiently work with the increasing amount of processing being pushed into the browser.

There’s a mythology building around HTML5 as well. Some of these myths are innocuous. The web continues to be an integral part of social interaction, business, and commerce because browsers are able to perform with desktop-like behaviors regardless of what your desktop is. So it’s easy to dismiss labels like “social” and “cloud” as imprecise, but mostly harmless. Some mythologies are clearly off the mark: neither Flash nor Silverlight is HTML5, but their UI capabilities are easily mistaken for the type of dynamic interaction associated with HTML5 apps. In truth, HTML5 intends to replace the need for plugins altogether.

Then there are counter-productive mythologies that creep into HTML5 security discussions. The mechanics of CSRF and clickjacking are inherent to the design of HTML and HTTP. In 1998, according to Netcraft, there were barely two million sites on the web; today Netcraft counts close to 700 million. It took years for vulns like CSRF and clickjacking to be recognized, described, and popularized in order to appreciate their dangers. Hacking a few hundred million users with CSRF has vastly different rewards than a few hundred thousand, and consequently more appeal. If CSRF is to be conflated with HTML5, it’s because the spec acknowledges security concerns more explicitly than its ancestors ever did. HTML5 mentions security over eighty times in its current draft. HTML4 barely broke a dozen. Privacy? It showed up once in the HTML4 spec. (HTML5 gives privacy a little more attention.) We’ll address that failing in a bit.

So, our stage is set. Our players are design and implementation. Our conflict, security and privacy.

OWASP/ISSA Bletchley Park 2012, Graveyards & Zombies

The May 10th OWASP/ISSA meeting at Bletchley Park was a chance to discuss web security, but the bigger draw was visiting the home of British code-breaking during WWII. It was astonishing to realize how run down the buildings had become. The site’s long-held secrecy ensured disrepair and inattention that is still being remedied. Nevertheless, it’s one of the most rewarding 30-minute train trips you can take from London.

On a different note, here are the slides for my presentation on Graveyards & Zombies – observations on vulns that should have been quashed by good design, but continue to vex web security.

Security Summit 2012, HTML5 Unbound

Here are the slides for my recent HTML5 Unbound presentation at South Africa’s 2012 Security Summit last week. Slides alone rarely convey the full story and leave many points ambiguous. As I settle back to my home time zone I’ll post accompanying notes that provide more background on the ideas behind this presentation.

Google Darts Back to VBScript

There’s an interesting discussion evolving on the WebKit developer’s mailing list that boils down to adding VBScript support to the project. Well, almost. It’s a discussion between two major contributor camps, Google and Apple, on the framework for integrating Google’s langue du jour, Dart.

To set the stage, no one on the list is arguing in bad faith. If you’d prefer the troll-baiting titillation of he said/she said threads, look elsewhere. Nevertheless, keep reading here and you’ll be rewarded with a pontifical comment or two.

So, back to Google’s desire to include VBScript in the WebKit browser engine. I mean Dart; I believe they call it Dart because four fewer letters improves efficiency. The basic idea is that JavaScript is nice, but insufficient to fully replicate certain kinds of desktop apps. For example, JavaScript becomes creaky if you push it to handle anything associated with frame rates — namely games.

There’s clearly self-interest in improving browser computing if your entire platform relies on the browser. For starters, you want a browser that won’t have ad-blocking on by default. And you’ll want to smooth out the wrinkles of something like a Do Not Track header.1,2 Sometimes, it’s even convenient to get other browsers, say Internet Explorer, to catch up on technology by plugging your own browser into them.3 (Never mind the implications of a browser in a browser.4,5) That brouhaha of 2009 enabled users to experience brave, new products with their Chrome/IE chimera — which in hindsight must have been necessary since the product was no longer around by the time IE caught up on HTML5.6

But all of that avoids the fact that JavaScript isn’t perfect. Enter Dart, accompanied by tweaks that make it more bare-metal-compiler friendly.

On the other hand, maybe JavaScript (ahem, the ECMAScript standard) just needs its own tweaking to enable performance gains.7,8 And while we’re on this JavaScript tirade, why not improve our privacy with some crypto-related capabilities rather than start over with VBDart?9

ECMADart isn’t Google’s sole flirtation with browser extensions. Google also wants to reinvent ActiveX in the form of a plugin called NaCl.10 NaCl is sort of an arterial bypass of JavaScript in that it provides a way to execute native code (C or C++) in your browser. Instead of relying on non-standard closed sandbox plugins like Flash or Silverlight, you can rely on the non-standard open source sandbox plugin of NaCl.

Words That Start With E

Understand first that reinvention intends to improve upon the original. Hollywood likes to call this “rebooting” a franchise. This brings us cool Batman movies. At the price of yet another Batman movie. Or yet another Superman. Or Spiderman. (Hey, Star Trek was pretty awesome so reboots aren’t out-of-hand a bad idea.) Yet this pushes other, fresher ideas out of the way. In web terms, those other, fresher ideas involve developers embracing HTML5 and JavaScript as the standard deployment model for web apps rather than coding to browser quirks or throwing Flash-driven menus everywhere.

Now fill in the blank: Reinventing a technology is a great way to [ ____ ]

Even desultory readers should notice the biased presentation of choices: three phrases of clichéd meaninglessness and one possibly-too-subtle allusion to the dark times of an almost two decade-old past. It wasn’t until the late 90s that a Rolling Stones song first graced a TV commercial. Their song, “Start Me Up,” played over an ad (this is the dark times part) for Microsoft — the company that created the “embrace, extend, and extinguish” strategy to give Internet Explorer dominance in the browser market.

One great way to embrace and extend is to provide New! Cool! features that work great in one browser, but degrade or don’t exist in any other. A new scripting language is one way to do that, even if it’s as useful as VBDart. To be fair, plugins like Flash and Silverlight need to be pulled into this category. Java counts as cross-platform, but when was the last time you used a Java app in your browser? When was the last time a hacker did? (Hint: Probably more recently than you think.)12

Stepping outside of boundaries isn’t always bad. After all, a foundation of the modern web, the XMLHttpRequest object, arose from an IE-only extension,13 a detraction further compounded by requiring ActiveX. XHR’s adoption into the W3C standards was both acknowledgement of the feature’s widely recognized utility as well as the desire to make the feature equal among all browsers.

All You Need is <!doctype html>

Maybe everything doesn’t have to go into the browser. Yes, I can think of a few reasons why App stores (trademarked ones and not) equally threaten divergence and uncrossable platforms. But at least consider that the app+device duo has a better security model than the browser. The browser’s model is mostly a Same Origin Policy affair, whereas you ostensibly have to approve and acknowledge certain behaviors for your sandboxed app.

The worst thing you can do is sign up to the WebKit developers list in order to spam it with flaming, troll-ridden diatribes for or against JavaDart. Let engineers more involved in the browser sausage making sort it out with their constructive conversation.

The best thing you can do is continue to create cool web sites with technology that works in every browser: HTML5 and JavaScript. Let the annoying litter of the Web’s past (pop-up windows, scrolling marquees, even Flash has a terminal diagnosis by now) scatter in what the Scorpions so awesomely sang as the “Wind of Change.”


How web security will change with HTML5

Here’s an article with musings on potential security1 issues of The Web’s favorite new buzzword, HTML5.

Before you get too excited about breaking the spec, consider this bit:

The most dangerous security problems won’t be due to features of HTML5. Too many experienced people have been working on the specs to leave egregious errors in the design or in browsers’ implementation of it. The worst problems will come from developers who rush into new technologies without remembering sins of the past. It’s far too easy to fall into the trap of trusting data from the browser just because some hefty JavaScript routines have been assumed to perform all sorts of security validation on the data.
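That trap is easy to sketch. Suppose the page’s JavaScript validates an order quantity before submitting; the server must run the same check itself, because attackers skip the browser entirely (curl, intercepting proxies). The validation rule below is hypothetical:

```javascript
// Server-side revalidation: never assume client-side JavaScript ran.
function isValidQuantity(input) {
  const n = Number(input);
  return Number.isInteger(n) && n > 0 && n <= 100;
}

isValidQuantity('5');             // true  -- what the form intended
isValidQuantity('-1');            // false -- bypassed the page's check
isValidQuantity("5; DROP TABLE"); // false -- not a number at all
```

Client-side validation is a usability nicety; only the server-side copy of the check is a security control.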

I can’t post the original article here because Mashable’s evil contract means I no longer have any rights to it. (Give us your content for free and receive Exposure!) I obviously agreed to these terms; hopefully they serve Mashable and me well.

If you’d like to hear more about HTML5 along with more technical details, stick around. There’s plenty to talk about!