(The series continues with a look at the relationship between security and design in web-related technologies prior to HTML5. Look for part 3 on Monday.)
Security From Design
The web has had mixed success with software design and security. Before we dive into HTML5, consider some other web-related examples:
PHP superglobals and the register_globals setting exemplify the long road to creating something that’s “default secure.” PHP 4.0 appeared 12 years ago in May 2000, just on the heels of HTML4 becoming official. PHP allowed a class of global variables to be set from user-influenced values like cookies, GET, and POST parameters. As a result, if a variable wasn’t initialized in a PHP file, a hacker could set its initial value just by including the variable’s name in a URL parameter. (Leading to outcomes like bypassing security checks, SQL injection, and accessing other users’ accounts.) Another problem with register_globals was that it was a run-time configuration controllable by the system administrator. In other words, code that was secure in one environment (secure, but poorly written) became insecure (and exploitable) simply because a site administrator switched register_globals on in the server’s php.ini file. Security-aware developers tried to influence the setting from their code, but that created new conflicts: one app might depend on register_globals behavior while another required it to be off.
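The mechanism is easier to see in code. Here’s a minimal sketch, written in Python purely for illustration (the `handle_request` function, the `s3cret` password, and the parameter names are all invented), of how register_globals let an uninitialized variable be pre-seeded from the request:

```python
# Hypothetical sketch of PHP's register_globals behavior, in Python.
# All names here are invented for illustration.

def handle_request(request_params, supplied_password):
    # register_globals effectively did this: every user-supplied
    # parameter becomes a variable in the script's scope.
    scope = dict(request_params)

    # Vulnerable pattern: `authorized` is only set on success and is
    # never initialized to False first...
    if supplied_password == "s3cret":
        scope["authorized"] = True

    # ...so an attacker who sends ?authorized=1 pre-seeds the variable
    # and bypasses the password check entirely.
    return bool(scope.get("authorized"))

# Legitimate failure: wrong password, no injected parameter.
print(handle_request({}, "wrong"))                   # False
# Attack: still the wrong password, but ?authorized=1 in the URL.
print(handle_request({"authorized": "1"}, "wrong"))  # True
```

The fix in PHP was the same as the fix here: initialize every variable before use, and never let request data populate the program’s scope.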
Secure design is far easier to discuss than it is to deploy. It took two years for PHP to switch the default value to off, another seven to deprecate the setting, and not until this year was it finally abolished. One reason for this glacial pace was PHP’s extraordinary success. Changing default behavior or removing an API is difficult when so many sites depend upon it and programmers expect it. (Keep this in mind when we get to HTML5. PHP drives many sites on the web; HTML is the web.) Another reason for the delay was resistance from developers who argued that register_globals isn’t inherently bad, it just makes already-bad code worse. Kind of like saying that the bit of iceberg above the surface over there doesn’t look so big.
Such attitudes allow certain designs, once recognized as poor, to resurface in new and interesting ways. Thus, “default insecure” endures. The Ruby on Rails “mass assignment” feature is a recent example. Mass assignment is an integral part of Rails’ data model. Warnings about its potential insecurity were raised as early as 2005, in Rails’ security documentation no less. Seven years later, in March 2012, a developer demonstrated the hack against the Rails paragon, GitHub, by showing that he could add his public key to any project and therefore push code to it. The hack provided an exercise for GitHub to embrace a positive attitude towards bug disclosure (eventually). It finally led to a change in the defaults for Ruby on Rails.
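The core hazard of mass assignment, and the whitelist that eventually became the Rails default, can be sketched in a few lines. This is a Python stand-in, not Rails itself; the `User` class and its attributes are invented for illustration:

```python
# Minimal sketch of mass assignment and its whitelist fix.
# Python stand-in for Rails' attribute assignment from request params;
# the User class and its fields are invented.

class User:
    def __init__(self, name):
        self.name = name
        self.is_admin = False  # security-sensitive attribute

    def mass_assign(self, params):
        # Insecure: copies every submitted key onto the model,
        # including ones the form never exposed.
        for key, value in params.items():
            setattr(self, key, value)

    def safe_assign(self, params, allowed=("name",)):
        # The safer default: only whitelisted attributes may be set
        # from user input (compare Rails' attr_accessible approach).
        for key, value in params.items():
            if key in allowed:
                setattr(self, key, value)

u = User("alice")
u.mass_assign({"name": "mallory", "is_admin": True})
print(u.is_admin)  # True: the attacker promoted themselves

v = User("bob")
v.safe_assign({"name": "bobby", "is_admin": True})
print(v.is_admin)  # False: the whitelist drops the extra key
```

The attack works because the model trusts the shape of the request; the fix works because the model declares what it will accept.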
SQL injection has to be mentioned if we’re going to talk about vulns and design. Prepared statements are the easy, recommended countermeasure for this vuln. You can pretty much design it out of your application. Sure, implementation mistakes happen and a bug or two might appear here and there, but that’s the kind of programming error that happens because we’re humans who make mistakes. Avoiding prepared statements is nothing more than advanced persistent ignorance of at least six years of web programming. A tool like sqlmap stays alive for so long because developers don’t adopt basic security design. SQL injection should be a thing of the past. Yet “developer insecure” is eternal.
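The contrast between the vulnerable pattern and the countermeasure fits in a dozen lines. Here’s a sketch using Python’s sqlite3 module (any database API with placeholders works the same way; the table and data are invented):

```python
import sqlite3

# Invented demo table with one user and one secret.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

attacker_input = "nobody' OR '1'='1"

# Vulnerable: string concatenation lets the input rewrite the query,
# so the attacker's OR clause matches every row.
vulnerable = conn.execute(
    "SELECT secret FROM users WHERE name = '" + attacker_input + "'"
).fetchall()
print(vulnerable)  # [('hunter2',)] -- data leaked

# Prepared statement: the driver binds the value as data, never as SQL.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (attacker_input,)
).fetchall()
print(safe)        # [] -- the literal string matches no user
```

The placeholder version isn’t just safer; it’s also shorter than the concatenation it replaces, which is what makes the continued prevalence of injection so inexcusable.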
The Same Origin Policy is a cornerstone of browser security. One of its drawbacks is that it permits pages to request any resource, which is why the web works in the first place, but also why we have problems like CSRF. Another drawback is that sites have had to accept its all-or-nothing approach, which is why problems like XSS are so worrisome.
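The policy’s comparison itself is simple: an origin is the (scheme, host, port) triple, and two URLs share an origin only when all three match. A sketch in Python (the function name is invented, and this simplification doesn’t normalize default ports the way browsers do):

```python
from urllib.parse import urlsplit

def same_origin(url_a, url_b):
    # An origin is the (scheme, host, port) triple; paths, queries,
    # and fragments play no part in the comparison.
    # Simplification: browsers normalize default ports (80/443);
    # this sketch does not.
    a, b = urlsplit(url_a), urlsplit(url_b)
    return (a.scheme, a.hostname, a.port) == (b.scheme, b.hostname, b.port)

print(same_origin("http://example.com/a", "http://example.com/b"))    # True
print(same_origin("http://example.com/", "https://example.com/"))     # False
print(same_origin("http://example.com/", "http://evil.example.com/")) # False
```

Everything inside an origin is trusted equally, which is exactly why a single injected script (XSS) inherits all of a page’s authority.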