This October people who care about being aware of security in the cyberspace of their nation will celebrate the 10th anniversary of National Cyber Security Awareness Month. (Ignore the smug octal-heads claiming preeminence in their 12th anniversary.) Those with a better taste for acronyms will celebrate Security & Privacy Awareness Month.
For the rest of the information security profession it’s just another TUESDAY (That Usual Effort Someone Does All Year).
In any case, expect the month to ooze with lists. Lists of what to do. Lists of user behavior to be reprimanded for. What software to run, what to avoid, what’s secure, what’s insecure. Keep an eye out for inconsistent advice among it all.
Ten years of awareness isn’t the same as 10 years of security. Many attacks described decades ago in places like Phrack and 2600 either still work today or are clear antecedents to modern security issues. (Many of the attitudes haven’t changed, either. But that’s for another article.)
Web vulns like HTML injection and SQL injection have remained fundamentally unchanged across the programming languages that have graced the web. They’ve been so static that the methodologies for exploiting them are sophisticated and mostly automated by now.
Awareness does help, though. Some vulns seem new because of awareness (e.g. CSRF and clickjacking) even though they’ve haunted browsers since the dawn of HTML. Some vulns just seem more vulnerable because there are now hundreds of millions of potential victims whose data slithers and replicates amongst the cyber heavens. We even have entire mobile operating systems designed to host malware. (Or is it the other way around?)
So maybe we should be looking a little more closely at how recommendations age with technology. It’s one thing to build good security practices over time; it’s another to litter our cyberspace with cheap scenery.
Here are two web security examples from which a critical eye leads us into a discussion about what’s cheap, what’s essential, and what actually improves security.
Cacheing Can’t Save the Queen
I’ve encountered recommendations that insist a web app should set headers to disable the browser cache when it serves a page with sensitive content. Especially when the page transits HTTP (i.e. an unencrypted channel) as well as over HTTPS.
That kind of thinking is deeply flawed, and when offered to developers as a commandment of programming, it misleads them about the underlying problem.
If you consider some content sensitive enough to start worrying about its security, you shouldn’t be serving it over HTTP in the first place. Ostensibly, the danger of allowing the browser to cache the content is that someone with access to the browser’s system can pull the page from disk. It’s a lot easier to sniff the unencrypted traffic in the first place. Skipping network-based attacks like sniffing and intermediation to focus on client-side threats due to cacheing ignores important design problems — especially in a world of promiscuous Wi-Fi.
Then you have to figure out what’s sensitive. Sure, a credit card number and password are pretty obvious, but the answer there is to mask the value to avoid putting the raw value into the browser in the first place. For credit cards, show the last 4 digits only. For the password, show a series of eight asterisks in order to hide both its content and length. But what about email? Is a message sensitive? Should it be cached or not? And if you’re going to talk about sensitive content, then you should be thinking of privacy as well. Data security does not equal data privacy.
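As a minimal sketch of the masking idea (the helper names here are hypothetical, not from any particular framework): show only a card’s last four digits, and render passwords as a fixed-length mask so neither their content nor their length leaks into the page.

```python
def mask_card(pan: str) -> str:
    """Return the card number with everything but the last 4 digits masked."""
    digits = [c for c in pan if c.isdigit()]
    return "*" * (len(digits) - 4) + "".join(digits[-4:])

def mask_password(_password: str) -> str:
    """Always render exactly eight asterisks, hiding content and length."""
    return "*" * 8
```

The raw values never need to reach the browser at all; the server sends only the masked forms.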
And if you answered those questions, do you know how to control the browser’s cacheing algorithm? Are you sure? What’s the recommendation? Cache controls are not as straightforward as they seem. There’s little worth in relying on cache controls to protect your data from attackers who’ve gained access to your system. (You’ve uninstalled Java and Flash, right?)
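For reference, the usual no-cacheing recipe looks something like the sketch below. The header names are standard HTTP, but whether this combination actually keeps a given resource off disk depends on the browser, the HTTP version, and any intermediaries in between — which is exactly why it’s a weak substitute for not sending sensitive content in the first place.

```python
# Response headers a server might emit to discourage cacheing of a
# sensitive page. "no-store" is the operative HTTP/1.1 directive;
# the other two exist for older caches and proxies.
NO_STORE_HEADERS = {
    "Cache-Control": "no-store",  # HTTP/1.1: don't persist this response anywhere
    "Pragma": "no-cache",         # HTTP/1.0 caches only understand Pragma
    "Expires": "0",               # belt-and-suspenders for legacy caches
}
```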
Browsers used to avoid cacheing any resource over HTTPS. We want sites to use HTTPS everywhere and HSTS whenever possible. Therefore it’s important to allow browsers to cache resources loaded via HTTPS in order to improve performance, page load times, and visitors’ subjective experiences. Handling sensitive content should be approached with more care than just relying on headers. What happens when a developer sets a no-cacheing header, but saves the sensitive content in the browser’s Local Storage API?
HttpOnly Is Pretty Vacant
Web apps litter our browsers with all sorts of cookies. This is how some companies get billions of dollars. Developers sprinkle all sorts of security measures on cookies to make them more palatable to privacy- and security-minded users. (And weaken efforts like Do Not Track, which is how some companies keep billions of dollars.)
The HttpOnly attribute was proposed in an era when security documentation about HTML injection attacks (a.k.a. cross-site scripting, XSS) incessantly repeated the formula of attackers inserting `<img>` tags whose `src` attributes leaked victims’ `document.cookie` values to servers under the attackers’ control. It’s not wrong to point out such an exploit method. However, as Stephen King repeated throughout the Dark Tower series, “The world has moved on.” Exploits don’t need to be cross-site, they don’t need `<script>` tags in the payload, and they surely don’t need a `document.cookie` to be effective.
If your discussion of cookie security starts and ends with HttpOnly and Secure attributes, then you’re missing the broader challenge of designing good authorization, authentication, and session handling mechanisms. If the discussion involves using the `path` attribute as a security constraint, then you shouldn’t be talking about cookies or security at all.
HttpOnly is a cheap attribute to throw on a cookie. It doesn’t prevent sniffing — use HTTPS everywhere for that (notice the repetition here?). It doesn’t really prevent attacks, just a single kind of exploit technique. Content Security Policy is a far more essential countermeasure. Let’s start raising awareness about that instead.
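To make the “cheap” part concrete, here is a minimal sketch using Python’s standard-library `http.cookies` to emit both attributes, plus a simple Content-Security-Policy header. The cookie name, token value, and policy string are illustrative, not a recommendation for any particular app.

```python
from http.cookies import SimpleCookie

# One line each -- that's the entire cost of these attributes.
cookie = SimpleCookie()
cookie["session"] = "opaquerandomtoken"   # hypothetical session identifier
cookie["session"]["httponly"] = True      # script can't read it via document.cookie
cookie["session"]["secure"] = True        # browser only sends it over HTTPS
set_cookie_header = cookie.output(header="Set-Cookie:")

# A restrictive CSP does far more work: it constrains where scripts,
# styles, and other resources may load from at all.
csp_header = "default-src 'self'; script-src 'self'"
```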
Easy security measures aren’t useless. Prepared statements are easy to use and pretty soundly defeat SQL injection; developers just choose to remain ignorant of them.
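A quick `sqlite3` sketch of why prepared statements work: the driver binds the value separately from the SQL text, so quote characters in attacker-controlled input are inert data rather than syntax.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# Attacker-controlled input carrying a classic injection attempt.
name = "alice' OR '1'='1"

# Parameterized query: the placeholder is bound, never string-spliced,
# so the payload matches no row instead of rewriting the WHERE clause.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (name,)
).fetchall()
```

Had the query been built with string concatenation, the same input would have returned every row in the table.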
This month be extra wary of cheap security scenery and stale recommendations that haven’t kept up with the modern web. Ask questions. Look for tell-tale signs, such as recommendations that:
- fail to clearly articulate a problem with regard to a security or privacy control (e.g. ambiguity in what the weakness is or what an attack would look like)
- fail to consider the capabilities of an attack (e.g. filtering `alert` to prevent HTML injection)
- do not provide clear resolutions or do not provide enough details to make an informed decision (e.g. can’t be implemented)
- provide contradictory choices of resolution (e.g. counter a sniffing attack by applying input validation)
Oh well, we couldn’t avoid a list forever.
Never mind that. I’ll be back with more examples of good and bad. I can’t wait for this month to end, but that’s because Halloween is my favorite holiday. We should be thinking about security every month, every day. Just like the song says, Everyday is Halloween.