The Harry Callahan Postulate

What kind of weight do you put in different browser defenses?

  • Process separation?
  • Plugin isolation and sandboxes?
  • Tab isolation?
  • X-Frame-Options, X-XSS-Protection? Built-in reflected XSS protection? NoScript?
  • HSTS, HPKP?
  • Automatic updates?
  • Anti-virus? Safe browsing lists?

Instead of creating a matrix to compare browsers, versions, and operating systems, try adopting the Harry Callahan Postulate:

Launch your browser. Open one tab for your web-based email, another for your online bank. Log in to both. Then click on one of the shortened links below. Being as this is the world wide web, the most dangerous web in the world, and would blow your data clean apart, you’ve got to ask yourself one question: Do I feel lucky?

Well, do ya punk?

http://bit.ly/SAFEST

. . .

Clicking on links is how the web works. Users are expected to click links by default, and it’s a disproportionate security burden to expect them to scrutinize the characters, TLS hygiene, or provenance of every link.

If the presence or absence of a single lock icon conveys ambiguous meaning about “security”, then expecting users to discern the many characters of a full URL is even harder. That lock icon is more about identity, i.e. “this is the app your browser is talking to”, than security in the sense of “it is safe to give information to this app.”

In an ideal world, we should be able to click on any link without risk of that action impacting the security context or relationship with an app unassociated with the origin in that link.

Think of this like an Other Origin Policy for the persona associated with each app you use. One origin shouldn’t have an unintended effect on the security context of another. When it does have an intended effect, it should be an interactive one that requires explicit approval from the user. It shouldn’t be a silent killer. (Well-informed approval is yet another challenge.)

Even so, CSRF countermeasures can’t protect against social engineering attacks and many effective XSS exploits happily work within the Same Origin Policy.
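One way a server can approximate this Other Origin Policy today is with Fetch Metadata: modern browsers attach a Sec-Fetch-Site request header that labels which origin initiated a request. The policy below is a minimal sketch of my own, not something from the text; the function name and the fail-open choice for older browsers are assumptions.

```python
# Hypothetical sketch: refuse state-changing requests that another
# origin initiated silently, using the Sec-Fetch-Site header that
# modern browsers send with each request.
from typing import Optional

SAFE_METHODS = {"GET", "HEAD", "OPTIONS"}


def allow_request(method: str, sec_fetch_site: Optional[str]) -> bool:
    """Return True if the request should be processed."""
    if method in SAFE_METHODS:
        return True  # reads shouldn't change the security context
    if sec_fetch_site is None:
        return True  # older browsers omit the header; fail open here
    # Same-origin requests and direct navigations ("none") pass;
    # anything initiated by another origin is rejected outright.
    return sec_fetch_site in {"same-origin", "none"}
```

With a check like this, a cross-site form post to the bank tab fails before any anti-CSRF token logic even runs, which is the spirit of making cross-origin effects deliberate rather than silent.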

In this real world, users must keep their browsers up to date and remove historically vulnerable, ever-contaminated plugins like Flash and Java. But they must also rely on browser vendors to build software with strong isolation. They must rely on app developers to implement resilient designs like enforcing HTTPS connections, implementing pervasive anti-CSRF tokens, and offering multi-factor authentication.
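The anti-CSRF tokens mentioned here are commonly implemented as synchronizer tokens: the server stores a random value in the user’s session and embeds the same value in each form, then requires a state-changing request to echo it back. A minimal sketch, with function names of my own invention:

```python
import hmac
import secrets

# Hypothetical synchronizer-token sketch: the session dict stands in
# for whatever server-side session storage the app actually uses.


def issue_token(session: dict) -> str:
    """Generate a per-session token to embed in a hidden form field."""
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token


def verify_token(session: dict, submitted: str) -> bool:
    """Check the token echoed back by a state-changing request."""
    stored = session.get("csrf_token", "")
    # Constant-time comparison avoids leaking the token byte by byte.
    return bool(stored) and hmac.compare_digest(stored, submitted)
```

A forged cross-site request can trigger the victim’s browser to submit the form, but it can’t read the token out of the page, so the check fails.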

With such distributed responsibility, it’s not hard to see why errors happen. The Other Origin Policy is an aspirational goal. With effective appsec, clicking on a malicious link should lead to nothing worse than an “Oops!”

Eventually, you’ll feel comfortable enough to click on any link. Until then, we’ll have to continue educating users, creating safe default behaviors and safe default decisions within browsers, and improving the security architecture of apps.

Factor of Ultimate Doom

Vulnerability disclosure presents a complex challenge to the information security community. A reductionist explanation of disclosure arguments need only present two claims. One end of the spectrum goes, “Only the vendor need know, so no one else knows the problem exists, which means no one can exploit it.” The information-wants-to-be-free camp at the diametrically opposed end simply states, “Tell everyone as soon as the vulnerability is discovered.”

The Factor of Ultimate Doom (FUD) is a step towards reconciling this spectrum into a laser-focused compromise of agreement. It establishes a metric for evaluating the absolute danger inherent to a vulnerability, thus providing the discoverer with guidance on how to reveal the vulnerability.

The Factor is calculated by simple addition across three axes: Resources Expected, Protocol Affected, and Overall Impact. Vulnerabilities that do not meet any of the Factor’s criteria may be classified under the Statistically Irrelevant Concern metric, which will be explored at a later date.

Resources Expected
(3) Exploit doesn’t require shellcode; merely a JavaScript alert() call
(2) Exploit shellcode requires fewer than 12 bytes. In other words, it must be more efficient than the export PS1=# hack (to which many operating systems, including OS X, remain vulnerable)
(1) Exploit shellcode requires a GROSS sled. (A GROSS sled uses opcode 144 on Intel x86 processors, whereas the more well-known NOP sled uses opcode 0x90.)

Protocol Affected
(3) The Common Porn Interchange Protocol (TCP/IP)
(2) Multiple online rhetorical opinion networks
(1) Social networks

Overall Impact
(3) Control every computer on the planet
(2) Destroy every computer on the planet
(1) Destroy another planet (obviously, the Earth’s internet would not be affected — making this a minor concern)

The resulting value is measured against an Audience Rating to determine how the vulnerability should be disclosed. This provides a methodology for verifying that a vulnerability was responsibly disclosed.

Audience Rating (by Factor of Ultimate Doom*)
(> 6) Can only be revealed at a security conference
(< 6) Cannot be revealed at a security conference
(< 0) Doesn’t have to be revealed; it’s just that dangerous

(*Due to undisclosed software patent litigation, values equal to 6 are ignored.)
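In the same spirit as the metric itself, the scoring above reduces to a few lines of arithmetic. The function names below are my own; the thresholds and the litigation-mandated silence on six come straight from the tables:

```python
# Tongue-in-cheek sketch of the Factor of Ultimate Doom: sum one
# score from each axis, then map the total to an Audience Rating.


def factor_of_ultimate_doom(resources: int, protocol: int, impact: int) -> int:
    """Simple addition across Resources Expected, Protocol Affected,
    and Overall Impact."""
    return resources + protocol + impact


def audience_rating(factor: int) -> str:
    """Map a Factor to how the vulnerability may be disclosed."""
    if factor == 6:
        # Due to undisclosed software patent litigation, 6 is ignored.
        return "No comment"
    if factor > 6:
        return "Can only be revealed at a security conference"
    if factor < 0:
        return "Doesn't have to be revealed; it's just that dangerous"
    return "Cannot be revealed at a security conference"
```

Note that with the published axes the minimum Factor is 3, so the negative rating is unreachable, which is presumably the point.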