Builder, Breaker, Blather, Why.

I recently gave a brief talk that noted how Let’s Encrypt and cloud-based architectures encourage positive appsec behaviors. Check out the slides and this blog post for a sense of the main points. Shortly thereafter a slew of security and stability events related to HTTPS and cloud services (SHA-1, Cloudbleed, S3 outage) seemed to undercut this thesis. But perhaps only superficially so. Rather than glibly dismiss these points, let’s examine these events from the perspective of risk and engineering — in other words, how complex systems and software are designed and respond to feedback loops.

This post is a stroll through HTTPS and cloud services, following a path of questions and ideas that builders and breakers might use to evaluate security, leaving the blather of empty pronouncements behind. It’s about the importance of critical thinking and seeking the reasons why a decision comes about.

Eventually Encrypted

For more than a decade at least two major hurdles have blocked pervasive HTTPS: Certs and configuration. The first was (and remains) convincing sites to deploy HTTPS at all, tightly coupled with making deployment HTTPS-only instead of mixed with unencrypted HTTP. The second is getting HTTPS deployments to use strong TLS configurations, e.g. TLS 1.2 with default ciphers that support forward secrecy.
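
To make “strong configuration” concrete, here’s a minimal Python sketch of a server-side TLS context with a TLS 1.2 floor. The certificate paths are hypothetical, and this is a sketch rather than a complete server.

```python
import ssl

# Sketch of a hardened server-side context; cert paths are hypothetical.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse TLS 1.0/1.1 handshakes
ctx.load_cert_chain(certfile="fullchain.pem", keyfile="privkey.pem")
# Python's default cipher selection already prefers ECDHE suites,
# which provide forward secrecy.
```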

For apps that handle credit cards, PCI has been a crucial driver for adopting strong HTTPS. Having a requirement to use transport encryption, backed by financial consequences for failure, has been more successful than asking nicely, raising awareness at security conferences, or shaming. As a consequence, I suspect the rate of HTTPS adoption has been far faster for in-scope PCI sites than others.

The SSL Labs project could also be a factor in HTTPS adoption. It distilled a comprehensive analysis of a site’s TLS configuration into a simple letter score. The publicly visible results could be used as a shaming tactic, but that’s a weaker strategy for motivating positive change. The failure of shaming, especially as it relates to HTTPS, is partly demonstrated by the too-typical disregard of browser security warnings. (Which is itself a design challenge, not a user failure.)

Importantly, SSL Labs provides an easy way for organizations to consistently monitor and evaluate their sites. This is a step towards providing help for migration to HTTPS-only sites. App owners still bear the burden of fixing errors and misconfigurations, but this tool made it easier to measure and track their progress towards strong TLS.
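
As a rough illustration of that kind of monitoring, here’s a sketch that polls SSL Labs’ public assessment API (assuming the v3 analyze endpoint and its documented status and grade fields) and returns the grade per endpoint:

```python
import time

import requests

API = "https://api.ssllabs.com/api/v3/analyze"

def ssllabs_grades(host):
    """Kick off an SSL Labs assessment and poll until it finishes."""
    while True:
        data = requests.get(API, params={"host": host}).json()
        if data.get("status") in ("READY", "ERROR"):
            break
        time.sleep(30)  # assessments typically take a few minutes
    return [ep.get("grade", "?") for ep in data.get("endpoints", [])]

print(ssllabs_grades("example.com"))
```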

Effectively Encrypted

Where SSL Labs inspires behavioral change via metrics, the Let’s Encrypt project empowers behavioral change by addressing fundamental challenges faced by app owners.

Let’s Encrypt eases the resource burden of managing HTTPS endpoints. It removes the initial cost of certs (they’re free!) and reduces the ongoing maintenance cost of deploying, rotating, and handling certs by supporting automation with the ACME protocol. Even so, solving the TLS cert problem is orthogonal to solving the TLS configuration problem. A valid Let’s Encrypt cert might still be deployed to an HTTPS service that gets a bad grade from SSL Labs.

A cert signed with SHA-1, for example, will lower its SSL Labs grade. SHA-1 has been known to be weak for years and discouraged from use, specifically for digital signatures. Having certs that are both free and easy to rotate (i.e. easy to obtain and deploy new ones) makes it easier for sites to migrate off deprecated versions. The ability to react quickly to change, whether security-related or not, is a sign of a mature organization. Automation, as made possible by Let’s Encrypt, is a great way to improve that ability.
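
One way to keep an eye on your own endpoints for deprecated signatures and looming expiry is a quick audit script. A sketch, assuming the third-party cryptography package and a placeholder host:

```python
import socket
import ssl
from datetime import datetime

from cryptography import x509

def cert_summary(host, port=443):
    """Report the leaf cert's signature hash and days until expiry."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    cert = x509.load_der_x509_certificate(der)
    algo = cert.signature_hash_algorithm.name  # e.g. "sha256" or "sha1"
    days = (cert.not_valid_after - datetime.utcnow()).days
    return f"{host}: signed with {algo}, expires in {days} days"

print(cert_summary("example.com"))  # placeholder host
```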

The recent work that demonstrated a SHA-1 collision is commendable, but it shouldn’t be the sudden reason you decide to stop using it. If such proof of danger is your sole deciding factor, you shouldn’t be using (or supporting) Flash or most Android phones.

Facebook explained their trade-offs along the way to hardening their TLS configuration and deprecating SHA-1. It was an engineering-driven security decision that evaluated solutions and chose among conflicting optimizations — all informed by measures of risk. Engineering is the key word in this paragraph; it’s how systems get built. Writing down a simple requirement and prototyping something on a single system with a few dozen users is far removed from delivering a service to hundreds of millions of people. WhatsApp’s crypto design fell into a similar discussion of risk-based engineering. Another example of evaluating risk and threat models is this excellent article on messaging app security and privacy.

Exceptional Events

Companies like Cloudflare take a step beyond SSL Labs and Let’s Encrypt by offering a service to handle both certs and configuration for sites. They pioneered techniques like Keyless SSL in response to their distinctive threat model of handling private keys for multiple entities.

If you look at the Cloudbleed report and immediately think a service like that should be ditched, it’s important to question the reasoning behind such a risk assessment. Rather than make organizations suffer through the burden of building and maintaining HTTPS, they can have a service that establishes a strong default. Adoption of HTTPS is slow enough, and fraught with error, that services like this make sense for many site owners.

Compare this with Heartbleed, which also affected TLS sites, could be more targeted, and exposed private keys (among other sensitive data). The cleanup was long, laborious, and haphazard. Cloudbleed had significant potential exposure, although its discovery and remediation likely led to a lower realized risk than Heartbleed.

If you’re saying move away from services like that, what in practice are you saying to move towards? Self-hosted systems in a rack in an electrical closet? Systems that will likely degrade over time and, even more likely, never be upgraded to TLS 1.3? That seems ill-advised.

Does the recent Amazon S3 outage raise concern for cloud-based systems? Not to a great degree. Or, at least, not in a new way. If your site was negatively impacted by the downtime, a good use of that time might have been exploring ways to architect fail-over systems or revisiting failure modes and service degradation decisions. Sometimes it’s fine to explicitly accept certain failure modes. That’s what engineering and business do under constraints of resources and budget.
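
To make that concrete, here’s one sketch of an explicitly accepted failure mode: try a primary S3 region, degrade to a replica, and give up only when both are down. The bucket names are hypothetical, and cross-region replication is assumed to already mirror the objects.

```python
import boto3
from botocore.exceptions import BotoCoreError, ClientError

# Hypothetical buckets; replication from primary to secondary is assumed.
REGIONS = [
    {"bucket": "assets-us-east-1", "region": "us-east-1"},
    {"bucket": "assets-us-west-2", "region": "us-west-2"},
]

def fetch_object(key):
    """Try the primary region first; degrade to the replica on failure."""
    for target in REGIONS:
        try:
            s3 = boto3.client("s3", region_name=target["region"])
            resp = s3.get_object(Bucket=target["bucket"], Key=key)
            return resp["Body"].read()
        except (BotoCoreError, ClientError):
            continue  # fall through to the next region
    # The explicitly accepted failure mode: both regions are down.
    raise RuntimeError(f"all regions failed for {key}")
```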

Coherently Considered

So, let’s leave a few exercises for the reader, a few open-ended questions on threat modeling and engineering.

Flash has long been rebuked as both a performance hog and a security weakness. As with SHA-1, the infosec community has voiced this warning for years. There have even been one or two (maybe more) demonstrated exploits against it. It persists. It’s embedded in Chrome, which you can interpret as a paternalistic effort to sandbox it or (more cynically) an effort to ensure YouTube videos and ad revenue aren’t impacted by an exodus from the plugin — or perhaps somewhere in between.

Browsers have had serious vulns, many of which carry significant risk and impact as measured by the annual $50K+ rewards from Pwn2Own competitions. The minuscule number of browser vendors carries risk beyond just vulns, affecting influence on standards and protections for privacy. Yet having more browsers doesn’t necessarily equate to better security models within them.

On the topic of decentralization, how much is good, how much is bad? DNS recommendations go back and forth. We’ve seen huge DDoS attacks against providers, taking out swaths of sites. We’ll see more. But is shifting DNS providers the right solution, or does it miss the underlying threat or cause of such attacks? How much of IoT is new or different (scale?) compared to the swarms of SQL Slammer and Windows XP botnets of yesteryear’s smaller internet population?

Approaching these with ideas around resilience, isolation, authn/authz models, or feedback loops reflects (just a few) traits of a builder. Those same traits serve a breaker executing attack models against them.

Approaching these by explaining design flaws and identifying implementation errors reflects (just a few) traits of a breaker. Those same traits serve a builder designing controls and barriers to disrupt attacks against them.

Approaching these by dismissing complexity, designing systems no one would (or could) use, or highlighting irrelevant flaws is often just blather. Infosec has its share of vacuous or overly ambiguous phrases like military-grade encryption, perfectly secure, artificial intelligence (yeah, I know, blah blah blah Skynet), use-Tor-use-Signal. There’s a place for mockery and snark. This isn’t concern trolling, which is preoccupied with how things are said. This is about the understanding behind what is said — the risk calculation, the threat model, the constraints.

Constructive Closing

I believe in supporting people to self-identify along the spectrum of builder and breaker rather than pin them to narrow roles. (A principle applicable to many more important subjects as well.) This is about the intellectual reward of tackling challenges faced by builders and breakers alike, and leaving behind the blather of uninformed opinions and empty solutions.

I’ll close with this observation from Carl Sagan (from his book, The Demon-Haunted World): “It is far better to grasp the universe as it really is than to persist in delusion, however satisfying and reassuring.”

Our application universe consists of systems and data and users, each in different orbits. Security should contribute to the gravity that binds them together, not the black hole that tears them apart. Engineering sees the universe as it really is; shed the delusion that one appsec solution in a vacuum is always universal.

Cheap Essential Scenery

This October people who care about being aware of security in the cyberspace of their nation will celebrate the 10th anniversary of National Cyber Security Awareness Month. (Ignore the smug octal-heads claiming preeminence in their 12th anniversary.) Those with a better taste for acronyms will celebrate Security & Privacy Awareness Month.

For the rest of information security professionals it’s just another TUESDAY (That Usual Effort Someone Does All Year).

In any case, expect the month to ooze with lists. Lists of what to do. Lists of user behavior to be reprimanded for. What software to run, what to avoid, what’s secure, what’s insecure. Keep an eye out for inconsistent advice among it all.

Ten years of awareness isn’t the same as ten years of security. Many attacks described decades ago in places like Phrack and 2600 either still work today or are clear antecedents to modern security issues. (Many of the attitudes haven’t changed, either. But that’s for another article.)

Web vulns like HTML injection and SQL injection have remained fundamentally unchanged across the programming languages that have graced the web. They’ve been so static that the methodologies for exploiting them are sophisticated and mostly automated by now.

Awareness does help, though. Some vulns seem new because of awareness (e.g. CSRF and clickjacking) even though they’ve haunted browsers since the dawn of HTML. Some vulns just seem more vulnerable because there are now hundreds of millions of potential victims whose data slithers and replicates amongst the cyber heavens. We even have entire mobile operating systems designed to host malware. (Or is it the other way around?)

So maybe we should be looking a little more closely at how recommendations age with technology. It’s one thing to build good security practices over time; it’s another to litter our cyberspace with cheap essential scenery.

Here are two web security examples from which a critical eye leads us into a discussion about what’s cheap, what’s essential, and what actually improves security.

Cacheing Can’t Save the Queen

I’ve encountered recommendations that insist a web app should set headers to disable the browser cache when it serves a page with sensitive content. Especially when the page travels over HTTP (i.e. an unencrypted channel) as well as over HTTPS.

That kind of thinking is deeply flawed, and when offered to developers as a commandment of programming, it misleads them about the underlying problem.

If you consider some content sensitive enough to start worrying about its security, you shouldn’t be serving it over HTTP in the first place. Ostensibly, the danger of allowing the browser to cache the content is that someone with access to the browser’s system can pull the page from disk. It’s a lot easier to sniff the unencrypted traffic in the first place. Skipping network-based attacks like sniffing and intermediation to focus on client-side threats due to cacheing ignores important design problems — especially in a world of promiscuous Wi-Fi.

Then you have to figure out what’s sensitive. Sure, a credit card number and password are pretty obvious, but the answer there is to mask the value to avoid putting the raw value into the browser in the first place. For credit cards, show the last 4 digits only. For the password, show a series of eight asterisks in order to hide both its content and length. But what about email? Is a message sensitive? Should it be cached or not? And if you’re going to talk about sensitive content, then you should be thinking of privacy as well. Data security does not equal data privacy.
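
A sketch of the masking idea, with hypothetical helpers that keep the raw values out of the page entirely:

```python
def mask_card(pan):
    """Render only the last four digits; the raw PAN never reaches the page."""
    return "**** **** **** " + pan[-4:]

def mask_password(_password):
    """A fixed-width mask hides both the content and the length."""
    return "*" * 8

print(mask_card("4111111111111111"))  # **** **** **** 1111
```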

And if you answered those questions, do you know how to control the browser’s cacheing algorithm? Are you sure? What’s the recommendation? Cache controls are not as straightforward as they seem. There’s little worth in relying on cache controls to protect your data from attackers who’ve gained access to your system. (You’ve uninstalled Java and Flash, right?)
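
For reference, the directive that conforming caches honor for truly sensitive responses is no-store. A minimal Flask sketch, with a hypothetical route and page body:

```python
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/statement")
def statement():
    resp = make_response("<html>...account statement...</html>")
    # no-store tells conforming caches, including the browser's disk
    # cache, not to persist the response at all. It helps, but it's no
    # substitute for serving the page over HTTPS in the first place.
    resp.headers["Cache-Control"] = "no-store"
    return resp
```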

Browsers used to avoid cacheing any resource over HTTPS. We want sites to use HTTPS everywhere and HSTS whenever possible. Therefore it’s important to allow browsers to cache resources loaded via HTTPS in order to improve performance, page load times, and visitors’ subjective experiences. Handling sensitive content should be approached with more care than just relying on headers. What happens when a developer sets a no-cacheing header, but saves the sensitive content in the browser’s Local Storage API?

HttpOnly Is Pretty Vacant

Web apps litter our browsers with all sorts of cookies. This is how some companies get billions of dollars. Developers sprinkle all sorts of security measures on cookies to make them more palatable to privacy- and security-minded users. (And weaken efforts like Do Not Track, which is how some companies keep billions of dollars.)

The HttpOnly attribute was proposed in an era when security documentation about HTML injection attacks (a.k.a. cross-site scripting, XSS) incessantly repeated the formula of attackers inserting <img> tags whose src attributes leaked victims’ document.cookie values to servers under the attackers’ control. It’s not wrong to point out such an exploit method. However, as Stephen King repeated throughout the Dark Tower series, “The world has moved on.” Exploits don’t need to be cross-site, they don’t need <script> tags in the payload, and they surely don’t need a document.cookie to be effective.

If your discussion of cookie security starts and ends with HttpOnly and Secure attributes, then you’re missing the broader challenge of designing good authorization, authentication, and session handling mechanisms. If the discussion involves using the path attribute as a security constraint, then you shouldn’t be talking about cookies or security at all.

HttpOnly is a cheap attribute to throw on a cookie. It doesn’t prevent sniffing — use HTTPS everywhere for that (notice the repetition here?). It doesn’t really prevent attacks, just a single kind of exploit technique. Content Security Policy is a far more essential countermeasure. Let’s start raising awareness about that instead.
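
To ground the comparison, here’s a Flask sketch that sets the cheap cookie attributes alongside the more essential CSP header; the route, token, and policy are hypothetical:

```python
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/login", methods=["POST"])
def login():
    resp = make_response("ok")
    # Cheap cookie hygiene: HttpOnly blocks document.cookie reads and
    # Secure keeps the cookie off plaintext HTTP. Useful, but neither
    # fixes a weak session design.
    resp.set_cookie("session", "opaque-token", secure=True,
                    httponly=True, samesite="Lax")
    # The more essential control: a restrictive CSP that refuses
    # inline script and third-party sources outright.
    resp.headers["Content-Security-Policy"] = (
        "default-src 'self'; script-src 'self'; object-src 'none'"
    )
    return resp
```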

Problems

Easy security measures aren’t useless. Prepared statements are easy to use and pretty soundly defeat SQL injection; developers just choose to remain ignorant of them.
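
For the record, here’s how little effort they take; a self-contained sqlite3 sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")

def find_user(email):
    # The ? placeholder keeps the value out of the SQL grammar entirely,
    # so a classic payload is just an email string that matches nothing.
    return conn.execute(
        "SELECT name FROM users WHERE email = ?", (email,)
    ).fetchall()

print(find_user("' OR '1'='1"))  # [] -- the injection attempt is inert
```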

This month be extra wary of cheap security scenery and stale recommendations that haven’t kept up with the modern web. Ask questions. Look for tell-tale signs, such as recommendations that

  • fail to clearly articulate a problem with regard to a security or privacy control (e.g. ambiguity in what the weakness is or what an attack would look like)
  • fail to consider the capabilities of an attacker (e.g. filtering script and alert to prevent HTML injection)
  • do not provide clear resolutions or do not provide enough details to make an informed decision (e.g. can’t be implemented)
  • provide contradictory choices of resolution (e.g. counter a sniffing attack by applying input validation)

Oh well, we couldn’t avoid a list forever.

Never mind that. I’ll be back with more examples of good and bad. I can’t wait for this month to end, but that’s because Halloween is my favorite holiday. We should be thinking about security every month, every day. Just like the song says, Everyday is Halloween.