Builder, Breaker, Blather, Why.

I recently gave a brief talk that noted how Let’s Encrypt and cloud-based architectures encourage positive appsec behaviors. Check out the slides and this blog post for a sense of the main points. Shortly thereafter a slew of security and stability events related to HTTPS and cloud services (SHA-1, Cloudbleed, S3 outage) seemed to undercut this thesis. But perhaps only superficially so. Rather than glibly dismiss these points, let’s examine these events from the perspective of risk and engineering — in other words, how complex systems and software are designed and respond to feedback loops.

This post is a stroll through HTTPS and cloud services, following a path of questions and ideas that builders and breakers might use to evaluate security, and leaving the blather of empty pronouncements behind. It’s about the importance of critical thinking and seeking the reasons why a decision comes about.

Eventually Encrypted

For more than a decade at least two major hurdles have blocked pervasive HTTPS: Certs and configuration. The first was (and remains) convincing sites to deploy HTTPS at all, tightly coupled with making deployment HTTPS-only instead of mixed with unencrypted HTTP. The second is getting HTTPS deployments to use strong TLS configurations, e.g. TLS 1.2 with default ciphers that support forward secrecy.

For apps that handle credit cards, PCI has been a crucial driver for adopting strong HTTPS. Having a requirement to use transport encryption, backed by financial consequences for failure, has been more successful than either asking nicely, raising awareness at security conferences, or shaming. As a consequence, I suspect the rate of HTTPS adoption has been far faster for in-scope PCI sites than others.

The SSL Labs project could also be a factor in HTTPS adoption. It distilled a comprehensive analysis of a site’s TLS configuration into a simple letter score. The publicly-visible results could be used as a shaming tactic, but that’s a weaker strategy for motivating positive change. The failure of shaming, especially as it relates to HTTPS, is partly demonstrated by the too-typical disregard of browser security warnings. (Which is itself a design challenge, not a user failure.)

Importantly, SSL Labs provides an easy way for organizations to consistently monitor and evaluate their sites. This is a step towards providing help for migration to HTTPS-only sites. App owners still bear the burden of fixing errors and misconfigurations, but this tool made it easier to measure and track their progress towards strong TLS.
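To make the monitoring idea concrete, here’s a hypothetical sketch of such a policy check. The config shape and rules are invented for illustration and are not SSL Labs’ actual scoring methodology:

```python
# Illustrative sketch: flag weaknesses in a site's TLS configuration
# against a minimal policy. The config dict and rules are hypothetical.

WEAK_PROTOCOLS = {"SSLv2", "SSLv3", "TLSv1.0", "TLSv1.1"}

def evaluate_tls_config(config):
    """Return a list of findings for a site's TLS configuration."""
    findings = []
    for proto in config.get("protocols", []):
        if proto in WEAK_PROTOCOLS:
            findings.append(f"weak protocol enabled: {proto}")
    # Forward secrecy requires ephemeral key exchange (ECDHE/DHE suites).
    if not any(c.startswith(("ECDHE-", "DHE-"))
               for c in config.get("ciphers", [])):
        findings.append("no forward-secrecy cipher suites offered")
    if config.get("cert_signature") == "sha1WithRSAEncryption":
        findings.append("certificate signed with SHA-1")
    return findings

site = {
    "protocols": ["TLSv1.0", "TLSv1.2"],
    "ciphers": ["AES256-SHA"],
    "cert_signature": "sha1WithRSAEncryption",
}
for finding in evaluate_tls_config(site):
    print(finding)
```

Running a check like this on a schedule, and tracking the findings over time, is the kind of measurement that turns a one-off audit into consistent monitoring.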

Effectively Encrypted

Where SSL Labs inspires behavioral change via metrics, the Let’s Encrypt project empowers behavioral change by addressing fundamental challenges faced by app owners.

Let’s Encrypt eases the resource burden of managing HTTPS endpoints. It removes the initial cost of certs (they’re free!) and reduces the ongoing maintenance cost of deploying, rotating, and handling certs by supporting automation with the ACME protocol. Even so, solving the TLS cert problem is orthogonal to solving the TLS configuration problem. A valid Let’s Encrypt cert might still be deployed to an HTTPS service that gets a bad grade from SSL Labs.

A cert signed with SHA-1, for example, will lower a site’s SSL Labs grade. SHA-1 has been known weak for years and discouraged from use, specifically for digital signatures. Having certs that are both free and easy to rotate (i.e. easy to obtain and deploy new ones) makes it easier for sites to migrate off deprecated versions. The ability to react quickly to change, whether security-related or not, is a sign of a mature organization. Automation, as made possible via Let’s Encrypt, is a great way to improve that ability.
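As a hedged sketch of such automation (the threshold and dates here are illustrative), a renewal check can be built on the stdlib’s `ssl.cert_time_to_seconds`, which parses the date format `ssl.getpeercert()` returns:

```python
import ssl
import time

# Sketch of an automated renewal check, as a cron job might run it.
# The 30-day threshold and the example dates are illustrative.

def days_until_expiry(not_after):
    """not_after uses the format ssl.getpeercert() returns,
    e.g. 'Jun  1 12:00:00 2025 GMT'."""
    expires = ssl.cert_time_to_seconds(not_after)
    return (expires - time.time()) / 86400

def needs_renewal(not_after, threshold_days=30):
    return days_until_expiry(not_after) < threshold_days

print(needs_renewal("Jun  1 12:00:00 2020 GMT"))  # long expired -> True
```

In practice an ACME client like certbot handles the renewal itself; the point is that rotation becomes a scheduled, scriptable event rather than a calendar reminder someone forgets.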

The recent work that demonstrated a SHA-1 collision is commendable, but it shouldn’t be the sudden reason you decide to stop using SHA-1. If such proof of danger is your sole deciding factor, you shouldn’t be using (or supporting) Flash or most Android phones.

Facebook explained their trade-offs along the way to hardening their TLS configuration and deprecating SHA-1. It was an engineering-driven security decision that evaluated solutions and chose among conflicting optimizations — all informed by measures of risk. Engineering is the key word in this paragraph; it’s how systems get built. Writing down a simple requirement and prototyping something on a single system with a few dozen users is far removed from delivering a service to hundreds of millions of people. WhatsApp’s crypto design fell into a similar discussion of risk-based engineering. Another example of evaluating risk and threat models is this excellent article on messaging app security and privacy.

Exceptional Events

Companies like Cloudflare take a step beyond SSL Labs and Let’s Encrypt by offering a service to handle both certs and configuration for sites. They pioneered techniques like Keyless SSL in response to their distinctive threat model of handling private keys for multiple entities.

If you look at the Cloudbleed report and immediately think a service like that should be ditched, it’s important to question the reasoning behind such a risk assessment. Rather than make organizations suffer through the burden of building and maintaining HTTPS, they can have a service that establishes a strong default. Adoption of HTTPS is slow enough, and fraught with error, that services like this make sense for many site owners.

Compare this with Heartbleed, which also affected TLS sites, could be more targeted, and exposed private keys (among other sensitive data). The cleanup was long, laborious, and haphazard. Cloudbleed had significant potential exposure, although its discovery and remediation likely led to a lower realized risk than Heartbleed.

If you’re saying move away from services like that, what in practice are you saying to move towards? Self-hosted systems in a rack in an electrical closet? Systems that will likely degrade over time and, even more likely, never be upgraded to TLS 1.3? That seems ill-advised.

Does the recent Amazon S3 outage raise concern for cloud-based systems? Not to a great degree. Or, at least, not in a new way. If your site was negatively impacted by the downtime, a good use of that time might have been exploring ways to architect fail-over systems or revisit failure modes and service degradation decisions. Sometimes it’s fine to explicitly accept certain failure modes. That’s what engineering and business do against constraints of resource and budget.

Coherently Considered

So, let’s leave a few exercises for the reader, a few open-ended questions on threat modeling and engineering.

Flash has been long rebuked as both a performance hog and security weakness. Like SHA-1, the infosec community has voiced this warning for years. There have even been one or two (maybe more) demonstrated exploits against it. It persists. It’s embedded in Chrome, which you can interpret as a paternalistic effort to sandbox it or (more cynically) an effort to ensure YouTube videos and ad revenue aren’t impacted by an exodus from the plugin — or perhaps somewhere in between.

Browsers have had impactful vulns, many of which carry significant risk and impact as measured by the annual $50K+ rewards from Pwn2Own competitions. The minuscule number of browser vendors carries risk beyond just vulns, affecting influence on standards and protections for privacy. Yet more browsers doesn’t necessarily equate to better security models within browsers.

On the topic of decentralization, how much is good, how much is bad? DNS recommendations go back and forth. We’ve seen huge DDoS against providers, taking out swaths of sites. We’ll see more. But is shifting DNS the right solution, or a matter that misses the underlying threat or cause of such attacks? How much of IoT is new or different (scale?) compared to the swarms of SQL Slammer and Windows XP botnets of yesteryear’s smaller internet population?

Approaching these with ideas around resilience, isolation, authn/authz models, or feedback loops reflects (just a few) traits of a builder. As much as they might be traits of a breaker executing attack models against them.

Approaching these by explaining design flaws and identifying implementation errors reflects (just a few) traits of a breaker. As much as they might be traits of a builder designing controls and barriers to disrupt attacks against them.

Approaching these by dismissing complexity, designing systems no one would (or could) use, or highlighting irrelevant flaws is often just blather. Infosec has its share of vacuous or overly-ambiguous phrases like military-grade encryption, perfectly secure, artificial intelligence (yeah, I know, blah blah blah Skynet), use-Tor-use-Signal. There’s a place for mockery and snark. This isn’t concern trolling, which is preoccupied with how things are said. This is about the understanding behind what is said — the risk calculation, the threat model, the constraints.

Constructive Closing

I believe in supporting people to self-identify along the spectrum of builder and breaker rather than pin them to narrow roles. (A principle applicable to many more important subjects as well.) This is about the intellectual reward of tackling challenges faced by builders and breakers alike, and leaving behind the blather of uninformed opinions and empty solutions.

I’ll close with this observation from Carl Sagan (from his book, The Demon-Haunted World): “It is far better to grasp the universe as it really is than to persist in delusion, however satisfying and reassuring.”

Our application universe consists of systems and data and users, each in different orbits. Security should contribute to the gravity that binds them together, not the black hole that tears them apart. Engineering sees the universe as it really is; shed the delusion that one appsec solution in a vacuum is always universal.

Why You Should Always Use HTTPS

This first appeared on Mashable in May 2011. Five years later, the SSL Pulse notes only 76% of the top 200K web sites fully support TLS 1.2, with a quarter of them still supporting the egregiously insecure SSLv3. While Let’s Encrypt makes TLS certs more attainable, administrators must also maintain their sites’ TLS configuration to use the best protocols and ciphers available. Check out www.ssllabs.com to test your site.


The next time you visit a cafe to sip coffee and surf on some free Wi-Fi, try an experiment: Log in to some of your usual sites. Then, with a smile, hand the keyboard over to a stranger. Now walk away for 20 minutes. Remember to pick up your laptop before you leave.

While the scenario may seem silly, it essentially happens each time you visit a website that doesn’t bother to encrypt the traffic to your browser — in other words, sites using HTTP instead of HTTPS.

The encryption within HTTPS is intended to provide benefits like confidentiality, integrity and identity. Your information remains confidential from prying eyes because only your browser and the server can decrypt the traffic. Integrity protects the data from being modified without your knowledge. We’ll address identity in a bit.

There’s an important distinction between tweeting to the world or sharing thoughts on Facebook and having your browsing activity going over unencrypted HTTP. You intentionally share tweets, likes, pics and thoughts. The lack of encryption means you’re unintentionally exposing the controls necessary to share such things. It’s the difference between someone viewing your profile and taking control of your keyboard.

The Spy Who Sniffed Me

We most often hear about hackers attacking websites, but it’s just as easy and lucrative to attack your browser. One method is to deliver malware or lull someone into visiting a spoofed site (phishing). Those techniques don’t require targeting a specific victim. They can be launched scattershot from anywhere on the web, regardless of the attacker’s geographic or network relationship to the victim. Another kind of attack, sniffing, requires proximity to the victim but is no less potent or worrisome.

Sniffing attacks watch the traffic to and from the victim’s web browser. (In fact, all of the computer’s traffic is visible, but we’re only worried about websites for now.) The only catch is that the attacker needs to be able to see the communication channel. The easiest way for an attacker to do this is to sit next to one of the end points, either the web server or the web browser. Unencrypted wireless networks — think of cafes, libraries, and airports — make it easy to find the browser’s end point because the traffic is visible to anyone who can obtain that network’s signal.

Encryption defeats sniffing attacks by concealing the traffic’s meaning from all except those who know the secret to decrypting it. The traffic remains visible to the sniffer, but it appears as streams of random bytes rather than HTML, links, cookies and passwords. The trick is understanding where to apply encryption in order to protect your data. For example, wireless networks can be encrypted, but the history of wireless security is laden with egregious mistakes. And it’s not necessarily the correct solution.

The first wireless encryption scheme was called WEP. It was the security equivalent of pig latin. It seems secret at first. Then the novelty wears off once you realize everyone knows what ixnay on the ottenray means, even if they don’t know the movie reference. WEP required a password to join the network, but the protocol’s poor encryption exposed enough hints about the password that someone with a wireless sniffer could reverse engineer it. This was a fatal flaw, because the time required to crack the password was a fraction of that needed to blindly guess the password with a brute force attack: a matter of hours (or less) instead of weeks.

Security improvements were attempted for Wi-Fi, but many turned out to be failures since they just metaphorically replaced pig latin with an obfuscation more along the lines of Klingon (or Quenya, depending on your fandom leanings). The problem was finding an encryption scheme that protected the password well enough that attackers would be forced to fall back to the inefficient brute force attack. The security goal is a Tower of Babel, with languages that only your computer and the wireless access point could understand — and which don’t drop hints for attackers. Protocols like WPA2 accomplish this far better than WEP ever did.

Whereas you’ll find it easy to set up WPA2 on your home network, you’ll find it sadly missing on the ubiquitous public Wi-Fi services of cafes and airplanes. They usually avoid encryption altogether. Even still, encrypted networks that use a single password for access merely reduce the pool of attackers from everyone to everyone who knows the password (which may be a larger number than you expect).

We’ve been paying attention to public spaces, but the problem spans all kinds of networks. In fact, sniffing attacks are just as feasible in corporate environments. They only differ in terms of the type of threat, and who might be carrying out the sniffing attack. Fundamentally, HTTPS is required to protect your data.

S For Secure

Sites that don’t use HTTPS judiciously are crippling the privacy controls you thought were protecting your data. Websites’ adoption of opt-in sharing and straightforward privacy settings are laudable. Those measures restrict the amount of information about you that leaks from websites (at least they’re supposed to). Yet they have no bearing on sniffing attacks if the site doesn’t encrypt traffic. This is why sites like Facebook and Twitter recently made HTTPS always available to users who care to turn it on — it’s off by default.

If my linguistic metaphors have left you with no understanding of the technical steps to execute sniffing attacks, you can quite easily execute these attacks with readily-available tools. A recent one is a plugin you can add to your Firefox browser. The plugin, called Firesheep, enables mouse-click hacking for sites like Amazon, Facebook, Twitter and others. The creation of the plugin demonstrates that technical attacks can be put into the hands of anyone who wishes to be mischievous, unethical, or malicious.

To be clear, sniffing attacks don’t need to grab your password in order to impersonate you. Web apps that use HTTPS for authentication protect your password. If they use regular HTTP after you log in, they’re not protecting your privacy or your temporary identity.

We need to take an existential diversion here to distinguish between “you” as the person visiting a website and the “you” that the website knows. Websites speak to browsers. They don’t (yet?) reach beyond the screen to know that you are in fact who you say you are. The username and password you supply for the login page are supposed to prove your identity because you are ostensibly the only one who knows them. So that you only need authenticate once, the website assigns a cookie to your browser. From then on, that cookie is your identity: a handful of bits.

These identifying cookies need to be a shared secret — a value known to no one but your browser and the website. Otherwise, someone else could use your cookie value to impersonate you.

Mobile apps are ignoring the improvements that web browsers have made in protecting our privacy and security. Some of the fault lies with the HTML and HTTP that underlies the web. HTTP becomes creaky once you try to implement strong authentication mechanisms on top of it, mostly because of our friend the cookie. Some fault lies with app developers. For example, Twitter provides a setting to ensure you always access the web site with HTTPS. However, third-party apps that use Twitter’s APIs might not be so diligent. While your password might still be protected with HTTPS, the app might fall back to HTTP for all other traffic — including the cookie that identifies you.

Google suffered embarrassment recently when 99% of its Android-based phones were shown to be vulnerable to impersonation attacks. The problem is compounded by the sheer number of phones and the difficulty of patching them. Furthermore, the identifying cookies (authTokens) were used for syncing, which means they’d traverse the network automatically regardless of the user’s activity. This is exactly the problem that comes with lack of encryption, cookies, and users who want to be connected anywhere they go.

Notice that there’s been no mention of money or credit cards being sniffed. Who cares about those when you can compromise someone’s email account? Email is almost universally used as a password reset mechanism. If you can read someone’s email, then you can obtain the password for just about any website they use, from gaming to banking to corporate environments. Most of this information has value.

S For Sometimes

Sadly, it seems that money and corporate embarrassment motivates protective measures far more often than privacy concerns. Some websites have started to implement a more rigorous enforcement of HTTPS connections called HTTP Strict Transport Security (HSTS). PayPal, whose users have long been victims of money-draining phishing attacks, was one of the first sites to use HSTS to prevent malicious sites from fooling browsers into switching to HTTP or spoofing pages. Like any good security measure, HSTS is transparent to the user. All you need is a browser that supports it (most do) and a website to require it (most don’t).
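On the server side, requiring HSTS is one response header. A minimal sketch, here as a WSGI middleware; the one-year max-age and includeSubDomains are common choices, not a universal prescription:

```python
# Hedged sketch: a WSGI middleware that adds an HSTS header to every
# response. Values are common choices, purely illustrative.

HSTS = ("Strict-Transport-Security", "max-age=31536000; includeSubDomains")

def add_hsts(app):
    def wrapped(environ, start_response):
        def start_with_hsts(status, headers, exc_info=None):
            return start_response(status, headers + [HSTS], exc_info)
        return app(environ, start_with_hsts)
    return wrapped

def demo_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

app = add_hsts(demo_app)
```

Once a browser sees that header over a valid HTTPS connection, it refuses to downgrade to HTTP for the site until the max-age expires, which is exactly the spoofing and downgrade protection described above.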

Improvements like HSTS should be encouraged. HTTPS is inarguably an important protection. However, the protocol has its share of weaknesses and determined attackers. Plus, HTTPS only protects against certain types of attacks; it has no bearing on cross-site scripting, SQL injection, or a myriad of other vulnerabilities. The security community is neither ignorant of these problems nor lacking in solutions. Yet the rollout of better protocols like DNSSEC has been glacial. Nevertheless, HTTPS helps as much today as it will tomorrow. The lock icon on your browser that indicates a site uses HTTPS may be minuscule, but the protection it affords is significant.


Many of the attacks and privacy references noted in the article may seem dated, but they are no less relevant.

DNSSEC has indeed been glacial. It took Google until January 2013 to support it. Cloudflare reported in October 2014 that wide adoption remained “in its infancy.”

And perhaps a final irony? At the time this article first appeared, WordPress didn’t support HTTPS for custom domain names, e.g. deadliestwebattacks.com. To this day, Mashable still won’t serve the article over HTTPS without a hostname mismatch error.

I’ll ne’er look you i’ the plaintext again

 

Look at this playbill: air fresheners, web security, cats. Thanks to Let’s Encrypt, this site is now accessible via HTTPS by default. Even better, WordPress serves the Strict-Transport-Security header to ensure browsers adhere to HTTPS when visiting it. So, whether you’re being entertained by odors, HTML injection, or felines, your browser is encrypting traffic.


Let’s Encrypt makes this possible for two reasons. The project provides free certificates, which addresses the economic aspect of obtaining and managing them. Users who blog, create content, or set up their own web sites can do so with free tools. But HTTPS certificates were never free, and there was little incentive for those users to spend money on them. To further compound the issue, users creating content and web sites rarely needed to know the technical underpinnings of how those sites were set up (which is perfectly fine!). Yet the secure handling and deployment of certificates requires more technical knowledge.

Most importantly, Let’s Encrypt addressed this latter challenge by establishing a simple, secure ACME protocol for the acquisition, maintenance, and renewal of certificates. Even when (or perhaps especially when) certificates have lifetimes of one or two years, site administrators would forget to renew them. It’s this level of automation that makes the project successful.

Hence, WordPress can now afford — both in the economic and technical sense — to deploy certificates for all the custom domain names it hosts. That’s what brings us to the cert for this site, which is but one domain in a list of SAN entries from deadairfresheners to a Russian-language blog about, inevitably, cats.

Yet not everyone has taken advantage of the new ease of encrypting everything. Five years ago I wrote about Why You Should Always Use HTTPS. Sadly, the article itself is served only via HTTP. You can request it via HTTPS, but the server returns a hostname mismatch error for the certificate, which breaks the intent of using a certificate to establish a server’s identity.

Intermission.

As with things that are new, free, and automated, there will be abuse. For one, malware authors, phishers, and the like will continue to move towards HTTPS connections. The key point there being “continue to”. Such bad actors already have access to certs and to compromised financial accounts with which to buy them. There’s little in Let’s Encrypt that aggravates this.

Attackers may start looking for Let’s Encrypt clients in order to fraudulently obtain new certs, for example by provisioning a resource under a well-known URI for the domain (this, and provisioning DNS records, are two ways of proving control of a domain to the Let’s Encrypt CA).

Attackers may start accelerating domain enumeration via Let’s Encrypt SANs. Again, it’s trivial to walk through domains for any SAN certificate purchased today. This may only be a nuance for hosting sites or aggregators who are jumbling multiple domains into a single cert.
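For a sense of how trivial that walk is, here’s a sketch using the dictionary shape that Python’s `ssl.getpeercert()` returns for a connected peer; the cert data below is invented for illustration:

```python
# Sketch: ssl.getpeercert() returns subjectAltName as a tuple of
# (type, value) pairs. Walking the DNS entries is all the
# "enumeration" takes. The cert data here is made up.

def san_dns_names(cert):
    return [value for kind, value in cert.get("subjectAltName", ())
            if kind == "DNS"]

cert = {
    "subjectAltName": (
        ("DNS", "deadliestwebattacks.example"),
        ("DNS", "deadairfresheners.example"),
        ("DNS", "cats.example"),
    ),
}
print(san_dns_names(cert))
```

Anyone who completes a TLS handshake with a shared-cert host sees every co-tenant domain in one read, which is why jumbled SAN lists matter mostly to hosting providers and aggregators.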

Such attacks aren’t proposed as creaky boards on the Let’s Encrypt stage. They’re merely reminders that we should always be reconsidering how old threats and techniques apply to new technologies and processes. For many “astounding” hacks of today (namely the proliferation of Named-Ones-Who-I-Shall-Not-Name), there are likely close parallels to old Phrack articles or basic security principles awaiting clever reinterpretation for our modern times. This may only be a nuisance for hosting sites or aggregators who are jumbling multiple domains into a single cert.

Finally, I must leave you with some sort of pop culture reference, or else this post wouldn’t befit the site. This is the 400th anniversary of Shakespeare’s death. So I shall leave you with yet another quote. May it take us far less time to finally bury HTTP and praise HTTPS in ubiquity.

Nay, an I tell you that, I’ll ne’er look you i’ the
face again: but those that understood him smiled at
one another and shook their heads; but, for mine own
part, it was Greek to me. I could tell you more
news too: Marullus and Flavius, for pulling scarfs
off Caesar’s images, are put to silence. Fare you
well. There was more foolery yet, if I could
remember it. (Julius Caesar. I.ii.278-284)

 

Cheap Essential Scenery

This October people who care about being aware of security in the cyberspace of their nation will celebrate the 10th anniversary of National Cyber Security Awareness Month. (Ignore the smug octal-heads claiming preeminence in their 12th anniversary.) Those with a better taste for acronyms will celebrate Security & Privacy Awareness Month.

For the rest of information security professionals it’s just another TUESDAY (That Usual Effort Someone Does All Year).

In any case, expect the month to ooze with lists. Lists of what to do. Lists of user behavior to be reprimanded for. What software to run, what to avoid, what’s secure, what’s insecure. Keep an eye out for inconsistent advice among it all.

Ten years of awareness isn’t the same as 10 years of security. Many attacks described decades ago in places like Phrack and 2600 either still work today or are clear antecedents to modern security issues. (Many of the attitudes haven’t changed, either. But that’s for another article.)

Web vulns like HTML injection and SQL injection have remained fundamentally unchanged across the programming languages that have graced the web. They’ve been so static that the methodologies for exploiting them are sophisticated and mostly automated by now.

Awareness does help, though. Some vulns seem new because of awareness (e.g. CSRF and clickjacking) even though they’ve haunted browsers since the dawn of HTML. Some vulns just seem more vulnerable because there are now hundreds of millions of potential victims whose data slithers and replicates amongst the cyber heavens. We even have entire mobile operating systems designed to host malware. (Or is it the other way around?)

So maybe we should be looking a little more closely at how recommendations age with technology. It’s one thing to build good security practices over time; it’s another to litter our cyberspace with cheap essential scenery.

Here are two web security examples from which a critical eye leads us into a discussion about what’s cheap, what’s essential, and what actually improves security.

Cacheing Can’t Save the Queen

I’ve encountered recommendations that insist a web app should set headers to disable the browser cache when it serves a page with sensitive content. Especially when the page transits HTTP (i.e. an unencrypted channel) as well as over HTTPS.

That kind of thinking is deeply flawed and when offered to developers as a commandment of programming it misleads them about the underlying problem.

If you consider some content sensitive enough to start worrying about its security, you shouldn’t be serving it over HTTP in the first place. Ostensibly, the danger of allowing the browser to cache the content is that someone with access to the browser’s system can pull the page from disk. It’s a lot easier to sniff the unencrypted traffic in the first place. Skipping network-based attacks like sniffing and intermediation to focus on client-side threats due to cacheing ignores important design problems — especially in a world of promiscuous Wi-Fi.

Then you have to figure out what’s sensitive. Sure, a credit card number and password are pretty obvious, but the answer there is to mask the value to avoid putting the raw value into the browser in the first place. For credit cards, show the last 4 digits only. For the password, show a series of eight asterisks in order to hide both its content and length. But what about email? Is a message sensitive? Should it be cached or not? And if you’re going to talk about sensitive content, then you should be thinking of privacy as well. Data security does not equal data privacy.

And if you answered those questions, do you know how to control the browser’s cacheing algorithm? Are you sure? What’s the recommendation? Cache controls are not as straightforward as they seem. There’s little worth in relying on cache controls to protect your data from attackers who’ve gained access to your system. (You’ve uninstalled Java and Flash, right?)

Browsers used to avoid cacheing any resource over HTTPS. We want sites to use HTTPS everywhere and HSTS whenever possible. Therefore it’s important to allow browsers to cache resources loaded via HTTPS in order to improve performance, page load times, and visitors’ subjective experiences. Handling sensitive content should be approached with more care than just relying on headers. What happens when a developer sets a no-cacheing header, but saves the sensitive content in the browser’s Local Storage API?
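One way to make those decisions explicit is a per-response cache policy rather than a blanket header. A hedged sketch, with common (not prescriptive) header values:

```python
# Illustrative sketch: explicit cache-control decisions per response
# type, rather than a blanket "disable the cache" rule.

def cache_headers(sensitive):
    if sensitive:
        # no-store: ask the browser not to write this response to disk.
        return {"Cache-Control": "no-store"}
    # Static assets over HTTPS can and should be cached for performance.
    return {"Cache-Control": "public, max-age=86400"}

print(cache_headers(sensitive=True))
print(cache_headers(sensitive=False))
```

The hard part isn’t emitting the header; it’s the classification step, deciding which responses count as sensitive, which is exactly the design question the header-only recommendation skips.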

HttpOnly Is Pretty Vacant

Web apps litter our browsers with all sorts of cookies. This is how some companies get billions of dollars. Developers sprinkle all sorts of security measures on cookies to make them more palatable to privacy- and security-minded users. (And weaken efforts like Do Not Track, which is how some companies keep billions of dollars.)

The HttpOnly attribute was proposed in an era when security documentation about HTML injection attacks (a.k.a. cross-site scripting, XSS) incessantly repeated the formula of attackers inserting <img> tags whose src attributes leaked victims’ document.cookie values to servers under the attackers’ control. It’s not wrong to point out such an exploit method. However, as Stephen King repeated throughout the Dark Tower series, “The world has moved on.” Exploits don’t need to be cross-site, they don’t need <script> tags in the payload, and they surely don’t need a document.cookie to be effective.

If your discussion of cookie security starts and ends with HttpOnly and Secure attributes, then you’re missing the broader challenge of designing good authorization, authentication, and session handling mechanisms. If the discussion involves using the path attribute as a security constraint, then you shouldn’t be talking about cookies or security at all.

HttpOnly is a cheap attribute to throw on a cookie. It doesn’t prevent sniffing — use HTTPS everywhere for that (notice the repetition here?). It doesn’t really prevent attacks, just a single kind of exploit technique. Content Security Policy is a far more essential countermeasure. Let’s start raising awareness about that instead.
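As a sketch of that layered approach (the cookie name and the CSP string are illustrative, not a drop-in policy), the headers might look like:

```python
# Sketch of layering countermeasures: cookie attributes plus a
# Content Security Policy header. The values are minimal illustrations.

def session_cookie(value):
    # Secure: only send over HTTPS. HttpOnly: hide from document.cookie.
    # Note: Path is deliberately NOT treated as a security boundary.
    return ("Set-Cookie", f"session={value}; Secure; HttpOnly; Path=/")

def csp_header():
    # Restricts where scripts and plugins may load from.
    return ("Content-Security-Policy",
            "default-src 'self'; script-src 'self'; object-src 'none'")

print(session_cookie("abc123")[1])
print(csp_header()[1])
```

The cookie attributes cost one line each; the CSP requires actually understanding where the app loads resources from, which is why it’s the more essential (and more neglected) of the two.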

Problems

Easy security measures aren’t useless. Prepared statements are easy to use and pretty soundly defeat SQL injection; developers just choose to remain ignorant of them.
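In Python’s sqlite3 module, for instance, the parameterized form is barely more work than string concatenation. A small runnable sketch:

```python
import sqlite3

# Parameterized statement sketch: the driver binds user input as data,
# so a hostile value can't rewrite the query.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", ("alice", "admin"))

hostile = "alice' OR '1'='1"
# The placeholder binds the whole string as a single literal value.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (hostile,)
).fetchall()
print(rows)  # [] -- the injection attempt matches nothing
```

Had the query been built by string concatenation, the same input would have returned every row; the placeholder makes the attack string just an oddly-named user that doesn’t exist.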

This month be extra wary of cheap security scenery and stale recommendations that haven’t kept up with the modern web. Ask questions. Look for tell-tale signs, such as recommendations that

  • fail to clearly articulate a problem with regard to a security or privacy control (e.g. ambiguity in what the weakness is or what an attack would look like)
  • fail to consider the capabilities of an attack (e.g. filtering script and alert to prevent HTML injection)
  • do not provide clear resolutions or do not provide enough details to make an informed decision (e.g. can’t be implemented)
  • provide contradictory choices of resolution (e.g. counter a sniffing attack by applying input validation)

Oh well, we couldn’t avoid a list forever.

Never mind that. I’ll be back with more examples of good and bad. I can’t wait for this month to end, but that’s because Halloween is my favorite holiday. We should be thinking about security every month, every day. Just like the song says, Everyday is Halloween.

HTTPS Nowhere

Add an “S” to the end of HTTP and you get an encrypted channel using HTTPS. Add an “S” to the front of SSL and you get a Sometimes Secure Sockets Layer. Mix in a mobile app. Watch how HTTPS everywhere becomes HTTPS nowhere.

One important piece of the “secure” in the Secure Sockets Layer protocol is the trust it establishes in identifying an end point. If a server’s certificate has an error — it’s improperly signed, it’s expired, the domain names are mismatched — then the browser cries out in alarm that something is amiss. Actually, this cry tended to be an annoying peep for the longest time, but browsers have since turned it into a raging klaxon.

Now consider mobile apps that make their own HTTPS connections, or web apps that need to call out to other servers over HTTPS. The code to do this has to come from somewhere other than a browser. And far too many developers fail the Ice Cube test:

Check yo self before you wreck yo self

That’s right, they don’t bother to verify the remote certificate, throwing away the entire purpose of the identity check for an HTTPS connection. What’s the point of requiring an encrypted connection if you don’t care who might be securely intercepting it? Those developers have ignored the benefits of Ice Cube’s uncomplicated wisdom. Make them listen to The Predator, track 13. And toss in a movie night with Friday or, my preference, Ghosts of Mars (after all, I’m partial to John Carpenter movies) — which also features the ever-badass Pam Grier.
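Here’s a rough sketch of the difference using Python’s standard ssl module: the secure defaults versus the “make the errors go away” pattern that shows up in far too much client code.

```python
import ssl

# The right thing, and also the default: verify the certificate chain and
# check that the cert matches the hostname we asked for.
good = ssl.create_default_context()
assert good.verify_mode == ssl.CERT_REQUIRED
assert good.check_hostname is True

# The all-too-common "fix" for certificate errors: the handshake now succeeds
# against anyone who answers, including an interceptor. Still encrypted,
# no longer authenticated.
bad = ssl.create_default_context()
bad.check_hostname = False          # skip the identity check entirely
bad.verify_mode = ssl.CERT_NONE     # accept any certificate at all
```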

I’ve alluded to this non-verification problem in the past, both in the book and in presentations. One collection of examples came from a now long-removed list of OAuth example code on Twitter’s developer resources. (The page pointed to third-party libraries with usual disclaimers of non-endorsement and non-guarantees. Twitter’s only to blame for poor vetting and a lack of quality reference implementations.) My observation was that this was a problem endemic to programming, not a specific programming language. And that the growth of mobile apps and “browser-like” clients that aren’t, in fact, browsers would compound the problem.

Recently, researchers have more explicitly quantified the problem in a well-written paper, The Most Dangerous Code in the World. (Hyperbole, perhaps, but attention-grabbing.) The abstract lays out the serious danger they enumerated from various source code:

We demonstrate that SSL certificate validation is completely broken in many security-critical applications and libraries. Vulnerable software includes Amazon’s EC2 Java library and all cloud clients based on it; Amazon’s and PayPal’s merchant SDKs responsible for transmitting payment details from e-commerce sites to payment gateways; integrated shopping carts such as osCommerce, ZenCart, Ubercart, and PrestaShop; AdMob code used by mobile websites; Chase mobile banking and several other Android apps and libraries; Java Web-services middleware—including Apache Axis, Axis 2, Codehaus XFire, and Pusher library for Android—and all applications employing this middleware. Any SSL connection from any of these programs is insecure against a man-in-the-middle attack.

In addition to the paper, the link includes brief background information on the problem and some tools to help identify this kind of error.

I’ll close with a short exercise for the reader. Consider the problem posed by the libcurl API as mentioned in the paper. Two functions share a similar naming scheme, but take very different argument types. One accepts a Boolean, the other accepts an integer. Easy question: How would you modify the API? Hard question: How would you modify the API without breaking (admittedly broken) software that relies on a pervasive and popular library?

I’ve mentioned libcurl favorably in the past, especially the helpfulness of its mailing list. It has a smart developer community. It’s one of the longer-running open source efforts that’s actively maintained and growing. They went through a long discussion on the best way to resolve the problem given their constraints of preserving compatibility.

To prompt your own thoughts on the subject, look at the problem as a confusion within design and language (as in written, not programming, language). A characteristic of a good API is preciseness in naming. In this case, the “verify” in “verifyhost” implies a binary action, especially when another “verify” function (verifypeer) is explicitly Boolean.

If the API intended the verification method to offer a choice among several options, a better name might have been “verifyhoststrategy”. That’s a pretty awkward name and a pedantic arguer might claim “verify” still implies a true/false choice.

A minor tweak could be “verificationstrategy”, although it turns the verb into a noun. If your API demands every method begin with a verb, then “useverificationstrategy” may suffice. In any case, you’re relying on developers to make a minimal effort to understand the method and its arguments.
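One way to sketch that design (all names here are hypothetical illustrations, not libcurl’s actual API) is an enum that makes the strategy explicit, so a caller can no longer “enable” verification by passing a misleading True or 1:

```python
from enum import Enum

class HostVerification(Enum):
    """Hypothetical verification strategies, named so none reads as a Boolean."""
    NONE = 0            # no host check at all
    CERT_PRESENT = 1    # only confirm a certificate exists
    MATCH_HOSTNAME = 2  # confirm the cert matches the requested host

def use_verification_strategy(strategy):
    # Reject Booleans and bare integers outright; the type system (or at
    # least the runtime) now carries the intent the name alone couldn't.
    if not isinstance(strategy, HostVerification):
        raise TypeError("pass a HostVerification member, not a bool or int")
    return strategy

chosen = use_verification_strategy(HostVerification.MATCH_HOSTNAME)
print(chosen.name)  # MATCH_HOSTNAME
```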

It’s hard to beat browsers at the security game. HTTP Strict Transport Security (HSTS) instructs the browser to enforce HTTPS for a site and to reject invalid certs outright. It’s a great way to make your site more secure against sniffing and interception attacks without burdening the user with decisions. HTML5’s WebSocket API refuses to establish wss: (i.e. SSL/TLS) connections if it encounters cert errors; it doesn’t even give the user a chance to declare an exception.
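For reference, the HSTS policy is just a response header sent over HTTPS; the max-age and includeSubDomains values below are illustrative choices, not a recommendation for every site:

```python
# Build the Strict-Transport-Security header a site would send over HTTPS.
hsts = "Strict-Transport-Security: max-age={}; includeSubDomains".format(
    60 * 60 * 24 * 365  # remember the policy for one year
)
print(hsts)
```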

If an extra “S” in SSL means Sometimes, then it’s probably because you’ve invoked an “L” that means library. And think about what Ice Cube might say to developers using a third-party library: RTFM.

Malware Is Software

My article on trends in malware has finally appeared on the Safari Books Online blog.

Malware is a nasty threat to everyone, whether you’re trying to enrich Uranium with fancy centrifuges in Iran or enrich your bank account with fancy craft projects on Etsy. The really menacing examples are named like characters lifted from fan fiction based on William Gibson books or The Matrix: Flame, Stuxnet, Duqu, Gauss.

I’ll use this post to fill out some background for the original article. For example, there’s good reason to believe that anti-virus is less useful against malware authors who spend a little effort to evade detection (or attack the AV itself). We can’t even get web sites to deploy HTTPS everywhere, but malware authors are smart enough to use encrypted channels to successfully evade analysis. Malware is rife in mobile applications. But even “safe” applications are poorly written — I made the observation in my book (and here, slides 37 & 38) how few apps bother to actually verify the certificate used for an HTTPS connection.

The point is that good software design should reduce the kinds of vulnerabilities that malware exploits, but there’s nothing preventing malware authors from adopting those same design principles — leading to better malware that’s more difficult to analyze. And poor software design (e.g. not verifying certs) makes an app insecure in the first place.

I’ll plug my book again, mostly because you should be looking at Hacking Web Apps (HWA) instead of Seven Deadliest Web Application Attacks, which is mentioned at the end of the article. HWA is the updated, expanded version. There’s no point in purchasing the other unless you like collecting whole sets. The other titles listed below focus on malware and will give you better insight into that world of software.

Hacking Web Apps: Detecting and Preventing Web Application Security Problems
Malware Forensics: Investigating and Analyzing Malicious Code
Malware Forensics Field Guide for Windows Systems: Digital Forensics Field Guides
Mobile Malware Attacks and Defense