Design vs. Implementation
An exposition on the first of the Twelve Web (In)Security Truths
#1 – Software execution is less secure than software design, but running code has more users.
A running site is infinitely more usable than one that only exists in design. Talk all you want, but eventually someone wants you to deliver that design.
Sure, you could describe Twitter as a glorified event loop around an echo server. You might even replicate it in a weekend with a few dozen lines of Python and an EC2 instance. Now scale that napkin design to a few hundred million users while preserving security and privacy controls. That’s a testament to implementing a complex design – or scaling a simple design, if you boil it down to sending and viewing tweets.
It’s possible to have impressive security through careful design. A prominent example in cryptography is the “perfect secrecy”1 of the One-Time Pad (OTP). The first OTP appeared in 1882, designed in an era without the codified information theory or cryptanalysis of Claude Shannon and Alan Turing.2 Nevertheless, its design understood the threats to confidential communications when telegraphs and Morse code carried secrets instead of fiber optics and TCP/IP. Sadly, good designs are sometimes forgotten or their importance goes unrecognized. The OTP didn’t gain popular usage until its re-invention in 1917, along with a more rigorous proof of its security.
But security also suffers when designs have poor implementations or have complex requirements. The OTP fails miserably if a pad is reused or insufficiently random. The pad must be as long as the input to be ciphered. So, if you’re able to securely distribute a pad (remember, the pad must remain unknown to the attacker), then why not just distribute the original message? Once someone introduces a shortcut in the name of efficiency or cleverness, the security breaks. (Remember the Debian OpenSSL debacle?) This is why it’s important to understand the reasons for a design rather than treat it as a logic table to be condensed like the singularity of a black hole. Otherwise, you might as well use two rounds of ROT13.
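The whole scheme is small enough to sketch in a few lines of Python, which is exactly why the implementation requirements, rather than the math, are the hard part:

```python
import secrets

def otp_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    # The pad must be truly random, as long as the message, and used once.
    pad = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, pad))
    return ciphertext, pad

def otp_decrypt(ciphertext: bytes, pad: bytes) -> bytes:
    # XOR is its own inverse, so decryption is the same operation.
    return bytes(c ^ k for c, k in zip(ciphertext, pad))

ciphertext, pad = otp_encrypt(b"attack at dawn")
assert otp_decrypt(ciphertext, pad) == b"attack at dawn"
```

Reuse the pad for two messages and XORing the two ciphertexts cancels the key entirely, leaving the XOR of the plaintexts for an analyst to pick apart.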
Web security has its design successes. Prepared statements are a prime example of a programming pattern that should have relegated SQL injection to ancient CVEs. Only devotees of Advanced Persistent Ignorance continue to glue SQL statements together with string concatenation. SQL injection is so well-known (at least in appsec) and studied that a venerable tool like sqlmap has been refining exploitation for over six(teen) years.
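The fix is almost embarrassingly small, which is what makes the bug’s persistence so galling. A sketch using Python’s built-in sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

name = "alice' OR '1'='1"  # attacker-controlled input

# Vulnerable: string concatenation lets the input rewrite the query.
rows = conn.execute("SELECT role FROM users WHERE name = '" + name + "'").fetchall()
print(rows)  # returns alice's row despite the bogus name

# Safe: a prepared statement keeps the input as data, never as SQL syntax.
rows = conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()
print(rows)  # returns nothing
```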
Yet the Internet loves to re-invent vulns. Whether or not SQL injection is in its death throes, NoSQL injection promises to reanimate its bloated corpse. Herbert West would be proud.
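The mechanics are the same concatenation sin, just with dicts instead of strings. A hypothetical sketch of a MongoDB-style login filter built straight from a request body:

```python
import json

# Hypothetical login handler: the filter is built straight from the
# request body, the way many apps hand parsed JSON to a find() call.
body = json.loads('{"user": "alice", "password": {"$ne": ""}}')
query = {"user": body["user"], "password": body["password"]}
print(query)  # {'user': 'alice', 'password': {'$ne': ''}}
# The injected $ne operator matches any non-empty password:
# an authentication bypass without a single quote character.

# Safer: force scalar types before values reach the query.
if not isinstance(body["password"], str):
    raise TypeError("password must be a string")
```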
Sometimes software repeats the mistakes, intentionally or not, of other projects without understanding the underlying reasons for those mistakes. The Ruby on Rails Mass Assignment feature is reminiscent of PHP’s register_globals issues. Both are open source projects with large communities. It’s unfair to label the entire group as ignorant of security. But the question of priorities has to be considered. Do you have a default stance of high or low security? Do you have language features whose behavior changes based on configuration settings outside the developer’s control, or that always have predictable behavior?
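The shared shape is external input mapped onto internal state by default. A Python sketch of the mass assignment pattern, with a hypothetical User model standing in for the Rails version:

```python
class User:
    """Hypothetical model standing in for an ORM-backed record."""
    def __init__(self):
        self.name = ""
        self.is_admin = False

params = {"name": "mallory", "is_admin": True}  # attacker-supplied form fields

# Mass assignment: every parameter maps straight onto the model,
# so the attacker quietly grants themselves admin.
user = User()
for key, value in params.items():
    setattr(user, key, value)

# Safer default: an explicit allowlist of writable attributes.
ALLOWED = {"name"}
safe_user = User()
for key, value in params.items():
    if key in ALLOWED:
        setattr(safe_user, key, value)
```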
Secure design is never easy. Apache’s reverse proxy/mod_rewrite bug went through a few iterations and several months of discussion before Apache developers arrived at an effective solution. You might argue that the problem lies with users who created poor rewrite rules that omitted a path component. But I prefer to see it as a design flaw because users had difficulty understanding its nuances, and their mistakes led to security issues. Either way, the vuln proved how difficult it is to choose between trade-offs in security decisions.
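The flaw is easier to see in a simplified sketch of the rule shape (hypothetical hostnames, mod_rewrite syntax rather than this article’s Python, and a reduction of the actual advisory):

```apache
# Flawed shape: the pattern has no leading slash, so a request line like
# "GET @attacker.example/x HTTP/1.0" rewrites to
# http://backend.internal@attacker.example/x, where "backend.internal"
# parses as userinfo and the proxy connects to the attacker's host.
RewriteRule ^(.*)$ http://backend.internal$1 [P]

# Fixed: anchor the pattern to the leading slash of a well-formed URI.
RewriteRule ^/(.*)$ http://backend.internal/$1 [P]
```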
HTML injection is another bugbear of web security. (Which makes SQL injection the owlbear?) For the longest time there was no equivalent to prepared statements for building HTML on the fly. Developers had to create bespoke solutions for their programming language and web architecture. Then came frameworks like React, which did away with the string concatenation that led to HTML injection and cross-site scripting. The framework knew how to correctly place arbitrary content so that it remained a text node rather than becoming a JavaScript execution context.
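The underlying principle is the one prepared statements apply to SQL: keep data out of the syntax. The idea can be sketched with nothing more than Python’s standard library, though React handles far more contexts than this naive version:

```python
import html

comment = "<script>alert(document.cookie)</script>"  # attacker-controlled

# String concatenation drops the input into a script execution context.
page = "<div>" + comment + "</div>"

# Escaping keeps it a text node, which is what React-style frameworks
# do for you on every interpolation.
page = "<div>" + html.escape(comment) + "</div>"
print(page)  # <div>&lt;script&gt;alert(document.cookie)&lt;/script&gt;</div>
```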
React even preserved the ability to write insecure HTML, but it did so in a way that was obvious in its design as dangerouslySetInnerHTML. This also made it easier to run linters for identifying areas of risk in the code. Alas, React didn’t even appear until a year after the original version of this article.
Content Security Policy (CSP) has been trying to bring secure design to bear against HTML injection with mechanisms that restrict how a page may load resources. CSP doesn’t prevent HTML injection; it mitigates its exploitability. So, developers must still invest in frameworks like React or other ways of preventing XSS in the first place. CSP feels like a well-intentioned design, but it suffers from being placed at the point of exploitation (the browser) as opposed to the point where flaws are introduced (the app). Despite more than ten years of design and an iteration to CSP Level 3, it’s the type of security design that places a heavy implementation burden on developers without compelling benefits to justify that burden.
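For illustration, a minimal policy might be set from a response hook. This sketch assumes a Flask app, but any framework’s equivalent works the same way:

```python
from flask import Flask

app = Flask(__name__)

@app.after_request
def set_csp(response):
    # A restrictive baseline: same-origin resources only, no inline script.
    # This mitigates exploitation of an HTML injection flaw; it does not
    # remove the flaw itself.
    response.headers["Content-Security-Policy"] = (
        "default-src 'self'; script-src 'self'; object-src 'none'"
    )
    return response
```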
Secure design should be how we send whole groups of vulns to the graveyard. Good security models understand the threats a design counters as well as those it does not. Spend too much time on design and the site will never be implemented. Spend too much time on piecemeal security and you risk blocking obscure exploits rather than fundamental threats.
As the ancient Fremen saying goes, “Truth suffers from too much analysis.”3 Design also suffers when its scrutiny is based on nonspecific or unreasonable threats. It’s important to question the reasons behind a design and the security claims it makes. Yes, HSTS relies on the frail security of DNS – it’s a trust on first use (TOFU) model where the browser assumes the header comes from an authentic source.
HSTS improved the reliability of maintaining HTTPS connections and minimized the impact of malicious CAs, but it also introduced risk. A misconfigured HSTS header could create a self-induced DoS by preventing browsers from connecting. And that misconfiguration might come from a developer mistake or from an attacker with the ability to set headers on a compromised server.
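A cautious deployment reflects that risk. This sketch (again assuming Flask) starts with a short max-age and omits includeSubDomains until every subdomain actually serves HTTPS:

```python
from flask import Flask

app = Flask(__name__)

@app.after_request
def set_hsts(response):
    # Browsers pin this on first use (TOFU) and refuse plain HTTP for
    # max-age seconds. A long-lived header with includeSubDomains set
    # before every subdomain serves HTTPS is a self-inflicted DoS.
    response.headers["Strict-Transport-Security"] = "max-age=86400"
    return response
```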
Design your way to a secure concept, code your way to a secure site. When vulns appear, determine whether they’re due to flaws in the design or mistakes in the programming. A design that anticipates vulns, like parameterized queries, should be easy to implement and serve the developer’s needs. Vulns that surprise developers should lead to design changes that provide more flexibility for resolving the problem.
Inflexibility, whether in design or implementation, is dangerous to security. As the Bene Gesserit say, “Any road followed precisely to its end leads precisely nowhere.”4
1. In the sense of Claude Shannon’s “Communication Theory of Secrecy Systems”. ↩
2. As Steven Bellovin notes in his paper, an 1882 codebook contains an amusingly familiar phrase regarding identity questions: “Identity can be established if the party will answer that his or her mother’s maiden name is…” It seems identity proofs haven’t changed in 140 years! ↩
3. Frank Herbert. Dune Messiah. p. 81. ↩
4. Frank Herbert. Dune. ↩