• Today is the fourth anniversary of the fourth edition of Anti-Hacker Tool Kit. Technology changes quickly, but many of the underlying principles of security remain the same. The following is an excerpt from the introduction.

    AHT4

    Welcome to the fourth edition of the Anti-Hacker Tool Kit. This is a book about the tools that hackers use to attack and defend systems. Knowing how to conduct advanced configuration for an operating system is a step toward being a hacker. Knowing how to infiltrate a system is a step along the same path. Knowing how to monitor an attacker’s activity and defend a system are more points on the path to hacking. In other words, hacking is more about knowledge and creativity than it is about having a collection of tools.

    Computer technology solves some problems; it creates others. When it solves a problem, technology may seem wonderful. Yet it doesn’t have to be wondrous in the sense that you have no idea how it works. In fact, this book aims to reveal how easy it is to run the kinds of tools that hackers, security professionals, and hobbyists alike use.

    A good magic trick amazes an audience. As the audience, we might guess at whether the magician is performing some sleight of hand or relying on a carefully crafted prop. The magician evokes delight through a combination of skill that appears effortless and misdirection that remains overlooked. A trick works not because the audience lacks knowledge of some secret, but because the magician has presented a sort of story, however brief, with a surprise at the end. Even when an audience knows the mechanics of a trick, a skilled magician may still delight them.

    The tools in this book aren’t magical, and simply having them on your laptop won’t make you a hacker. But this book will demystify many aspects of information security. You’ll build a collection of tools as you work through each chapter. More importantly, you’ll build the knowledge of how and why these tools work. And that’s the knowledge that lays the foundation for being creative with scripting, for combining attacks in clever ways, and for thinking of yourself as a hacker.

    I chose magic as a metaphor for hacking because it resonates with creative thinking and combining mundane elements to achieve extraordinary effects. Hacking (in the sense of information security) involves knowing how protocols and programs are constructed, plus the tools to analyze and attack them. I don’t have a precise definition of a hacker because one isn’t necessary. Consider it a title to be claimed or conferred – your choice.

    Another reason the definition is nebulous is that information security spans many topics. You might be an expert in one, or a dabbler in all. In this book you’ll find background information and tools for most of those topics. You can skip around to chapters that interest you.

    The Anti- prefix of the title originated from the first edition’s bias towards forensics that tended to equate Hacker with Attacker. It didn’t make sense to change the title for a book that’s made its way into a fourth edition. Plus, I wanted to keep the skull-themed cover.

    Consider the prefix as an antidote to the ego-driven, self-proclaimed hacker who thinks knowing how to run canned exploits out of Metasploit makes them an expert. They only know how to repeat a simple trick. Hacking is better thought of as understanding how a trick is constructed or being able to create new ones of your own.

    Each chapter sets you up with some of that knowledge. And even if you don’t recognize an allusion to Tenar or Gaius Helen Mohiam, there should be plenty of technical content to keep you entertained along the way. I hope you enjoy the book.

    • • •
  • The summer conference constellation rises over Las Vegas for about one week every year. The trio of Black Hat, BSidesLV, and DEF CON historically generates loud, often muddled, concerns about personal device security. Sometimes the concern is expressed through hyperbole in order to point out flawed threat models. Sometimes it’s based on ignorance tainted with misapplied knowledge. Either way, perform the rituals and incantations that make you feel better. Enjoy the conferences, have fun, share knowledge, learn new skills.

    [Image: Hubble Captures View of Mystic Mountain]

    Whatever precautions you take, ask why they’re necessary for one special week of the year. If the current state of security for devices and web sites can’t handle that week, I consider that a failure of infosec and an indictment of appsec’s effectiveness after three decades.

    It’s another way of asking why a device’s current “default secure” is insufficient, or asking whether we need multi-page hardening guides vs. a default hardened configuration.

    Keep in mind there are people with security concerns all 52 weeks of the year. People who are women. People in minority groups. People in abusive relationships. People without political representation, let alone power. Most often these are people who can’t buy a “burner phone” for one week to support their daily needs. Their typical day isn’t the ambiguous threat of a hostile network. It’s the direct threat from hostile actors – those with physical access to their devices, or knowledge of their passwords, or possibly just knowledge of their existence. In each case they may be dealing with a threat who desires access to their identity, data, and accounts.

    There are a few steps anyone can take to improve their baseline security profile. However, these are just a starting point. They can change slightly depending on different security scenarios.

    (1) Turn on automatic updates.

    (2) Review authentication and authorization for all accounts.

    • Use a password manager to assign a unique password to every account.
    • Enable multi-factor authentication (MFA), aka two-factor authentication (2FA) or two-step verification (2SV), for all accounts that support it.
    • Prioritize enabling MFA for all accounts used for OAuth or social logins (e.g. Apple, Facebook, Google, LinkedIn).
    • Prefer WebAuthn authentication flows. WebAuthn cryptographically binds credentials between the user device and server, which prevents replay attacks if the traffic is intercepted and reuse attacks if the server’s credential store is compromised (see the sketch after these steps).
    • Review third-party app access (usually OAuth grants) and remove any that feel unnecessary or that have more permissions than desired.

    (3) Review MFA support (or activation factors, as NIST 800-63B calls them).

    • Prefer factors that rely on FIDO2 hardware tokens, biometrics, or authenticator apps.
    • Only use factors based on SMS or email if no better option is available.
    • For authenticator apps, enable backups or multi-device support in order to preserve access in case of a lost device.
    • Record and save the recovery codes generated when you enable MFA. Choose a storage mechanism sufficient for your needs, such as a printed copy kept somewhere safe or an entry in a password-protected keychain.
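
    To make the WebAuthn point above concrete, here’s a minimal conceptual sketch of the challenge-response idea behind it, written in Python with the cryptography package. This is not the WebAuthn API itself (the real flow runs through navigator.credentials in the browser and a relying-party library on the server); the key pair, origin, and challenge below are illustrative only. It shows why a signature over a fresh server challenge bound to an origin can’t be replayed, and why a breached server-side credential store yields only public keys.

        # Conceptual sketch of WebAuthn-style challenge binding (not the real API).
        # Assumes the 'cryptography' package; ORIGIN and the key names are illustrative.
        import os
        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

        ORIGIN = "https://example.com"  # hypothetical relying party

        # Registration: the private key never leaves the user's device;
        # the server stores only the public key.
        device_key = Ed25519PrivateKey.generate()
        server_stored_public_key = device_key.public_key()

        # Authentication: the server issues a fresh, single-use challenge.
        challenge = os.urandom(32)

        # The device signs the challenge bound to the origin.
        assertion = device_key.sign(challenge + ORIGIN.encode())

        # The server verifies with the stored public key; this raises
        # InvalidSignature if the assertion was tampered with or forged.
        server_stored_public_key.verify(assertion, challenge + ORIGIN.encode())
        print("assertion verified")

        # Replaying the assertion later fails because the server won't reissue
        # the same challenge, and a breached credential store has no private keys.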

    Talk to someone who isn’t in infosec. Find out what their concerns are. Help them translate those concerns into ways of protecting their accounts and data.

    Apple recently released Lockdown Mode in iOS 16, iPadOS 16, and macOS Ventura. It provides users with increased protection for their system by ensuring a secure default as well as disabling features that typically have security issues. It’s effectively a one-click hardening guide and attack surface reduction. By disabling features prone to abuse, it carries a usability cost. But ultimately it’s an easy way for any user to have more security when they need it.

    Not everyone has an iPhone and not everyone has threats limited to account takeover.

    One resource with technical recommendations in non-technical language is Speak Up & Stay Safe(r).

    The EFF has a wide collection of practices and tools in its Surveillance Self-Defense guide. Notably, it lists different security scenarios you might find yourself in and how to adapt practices to each of them.

    The expectation for modern devices and modern web sites should be that they’re safe to use, even on the hostile network of an infosec conference. If an industry can’t create a safe environment for itself, why should it be relied on to create a safe environment for anyone else?

    • • •
  • Builder

    At the beginning of February 2017 I gave a brief talk that noted how Let’s Encrypt and cloud-based architectures encourage positive appsec behaviors. Over a span of barely three weeks, several security events seemed to undercut that thesis – Cloudbleed, SHAttered, and the S3 outage.

    Coincidentally, those events also covered the triad of confidentiality, integrity, and availability.

    So, let’s revisit that thesis and how we should view those events through a perspective of risk and engineering.

    Eventually Encrypted

    For well over a decade, at least two major hurdles blocked pervasive HTTPS. The first was convincing sites to deploy HTTPS in the first place and take on the cost of purchasing certificates. The second was getting HTTPS deployments to use strong configurations that enforced TLS for all connections and only used recommended ciphers.

    Setting aside the distinctions between security and compliance, PCI was a crucial driver for adopting strong HTTPS. Having a requirement for transport encryption, backed by financial consequences for failure, has been more successful than asking nicely, raising awareness at security conferences, or shaming. I suspect the rate of HTTPS adoption has been far faster for in-scope PCI sites than others.

    The SSL Labs project might also be a factor, but it straddles that line of encouragement through observability and shaming. It distilled a comprehensive analysis of a site’s TLS configuration into a simple letter grade. The publicly visible results could be used as a shaming tactic, but that’s a weaker strategy for motivating positive change. Plus, doing so doesn’t address any of the HTTPS hurdles, whether convincing sites to shoulder the cost of obtaining certs or dealing with the overhead of managing them.

    Still, SSL Labs provides an easy way for organizations to consistently monitor and evaluate their sites. That visibility is a step towards helping sites migrate to HTTPS-only deployments. App owners still bear the burden of fixing errors and misconfigurations, but the tool makes it easier to measure and track their progress towards strong TLS.
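
    As a rough illustration of the kind of single data point SSL Labs rolls into its grade, the sketch below connects to a host and reports the negotiated TLS version and cipher using Python’s standard ssl module. The hostname is a placeholder, and a real assessment covers far more (certificate chains, key exchange parameters, known weaknesses), so treat it as a minimal sketch, not a grader.

        # Minimal sketch: report the negotiated TLS version and cipher for a host.
        # A real assessment (like SSL Labs) checks far more than this.
        import socket
        import ssl

        def tls_summary(hostname: str, port: int = 443):
            context = ssl.create_default_context()  # modern defaults, verifies the cert
            with socket.create_connection((hostname, port), timeout=10) as sock:
                with context.wrap_socket(sock, server_hostname=hostname) as tls:
                    return tls.version(), tls.cipher()

        if __name__ == "__main__":
            version, cipher = tls_summary("example.com")  # placeholder hostname
            print(f"negotiated {version} with cipher suite {cipher[0]}")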

    Effectively Encrypted

    Where SSL Labs inspires behavioral change via metrics, the Let’s Encrypt project empowers behavioral change by addressing fundamental challenges faced by app owners.

    Let’s Encrypt eases the resource burden of managing HTTPS endpoints. It removes the initial cost of certs (they’re free!) and reduces the ongoing maintenance cost of deploying, rotating, and handling certs by supporting automation with the ACME protocol. Even so, solving the TLS cert problem is orthogonal to solving the TLS configuration problem. A valid Let’s Encrypt cert might still be deployed to an HTTPS service that gets a bad grade from SSL Labs.

    A cert signed with SHA-1, for example, will lower a site’s SSL Labs grade. SHA-1 has been known to be weak for years and its use discouraged, specifically for digital signatures. Having certs that are both free and easy to rotate (i.e. easy to obtain and deploy new ones) makes it easier for sites to migrate off deprecated algorithms. The ability to react quickly to change, whether security-related or not, is a sign of a mature organization. Automation of the kind Let’s Encrypt makes possible is a great way to improve that ability.
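
    One small, automatable check in that spirit is inspecting a deployed certificate’s signature hash and expiry so a deprecated algorithm like SHA-1 gets flagged before a scanner or a browser does. The sketch below assumes the Python cryptography package and a placeholder PEM file path.

        # Sketch: flag certs signed with a deprecated hash such as SHA-1.
        # Assumes the 'cryptography' package; the file path is a placeholder.
        from cryptography import x509

        def check_cert(pem_path: str) -> None:
            with open(pem_path, "rb") as f:
                cert = x509.load_pem_x509_certificate(f.read())
            algo = cert.signature_hash_algorithm  # None for some key types (e.g. Ed25519)
            name = algo.name if algo else "none"
            if name == "sha1":
                print(f"{pem_path}: signed with SHA-1, rotate this cert")
            else:
                print(f"{pem_path}: signature hash {name}")
            print(f"{pem_path}: expires {cert.not_valid_after}")

        check_cert("certs/site.pem")  # placeholder path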

    Breaker

    Facebook explained their trade-offs along the way to hardening their TLS configuration and deprecating SHA-1. It was an engineering-driven security decision that evaluated solutions and chose among conflicting optimizations – all informed by measures of risk. Engineering is the key word in this paragraph; it’s how systems get built.

    Writing down a simple requirement and prototyping something on a single system with a few dozen users is far removed from delivering a service to hundreds of millions of people. WhatsApp’s crypto design fell into a similar discussion of risk-based engineering [1]. This excellent article on messaging app security and privacy is another example of evaluating risk through threat models.

    Exceptional Events

    Companies like Cloudflare take a step beyond SSL Labs and Let’s Encrypt by offering a service to handle both certs and configuration for sites. They pioneered techniques like Keyless SSL in response to their distinctive threat model of handling private keys for multiple entities.

    If you look at the Cloudbleed report and immediately think a service like that should be ditched, it’s important to question the reasoning behind such a risk assessment. Rather than make organizations suffer through the burden of building and maintaining HTTPS, they can have a service that establishes a strong default. Adoption of HTTPS is slow enough, and fraught with error, that services like this make sense for many site owners.

    Compare this with Heartbleed. It also affected TLS sites, could be more targeted, and exposed private keys (among other sensitive data). The cleanup was long, laborious, and haphazard. Cloudbleed had significant potential exposure, although its discovery and remediation likely led to a lower realized risk than Heartbleed.

    If you’re saying move away from services like that, what in practice are you saying to move towards? Self-hosted systems in a rack in an electrical closet? Systems that will degrade over time and, most likely, never be upgraded to TLS 1.3? That seems ill-advised.

    Blather

    Does the S3 outage raise concern for cloud-based systems? Not to a great degree. Or, at least, not in a new way. If your site was negatively impacted by the downtime, a good use of that time might have been exploring ways to architect fail-over systems or revisiting failure modes and service degradation decisions. Sometimes it’s fine to explicitly accept certain failure modes. That’s what engineering and business do against constraints of resources and budget.
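
    Even a toy sketch of a fallback path makes that trade-off explicit: try the primary endpoint, fall back to a replica, and otherwise degrade deliberately. The endpoints below are hypothetical, and a production design would add health checks, caching, and clear ownership of each failure mode.

        # Toy sketch of an explicitly accepted failure mode; URLs are hypothetical.
        from urllib.request import urlopen

        ENDPOINTS = [
            "https://assets-primary.example.com/app.json",  # primary (hypothetical)
            "https://assets-replica.example.com/app.json",  # cross-region replica (hypothetical)
        ]

        def fetch_with_failover():
            for url in ENDPOINTS:
                try:
                    with urlopen(url, timeout=5) as resp:
                        return resp.read()
                except OSError:
                    continue  # covers URLError, timeouts, connection resets
            return None  # accepted failure mode: degrade instead of erroring out

        data = fetch_with_failover()
        if data is None:
            print("serving a degraded experience without remote assets")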

    Coherently Considered

    So, let’s leave a few exercises for the reader, a few open-ended questions on threat modeling and engineering.

    Flash has long been rebuked for its security weaknesses. As with SHA-1, the infosec community voiced this warning for years. There have even been one or two (ok, lots more than two) demonstrated exploits against it. It persists. It’s embedded in Chrome [2], which you can interpret as a paternalistic effort to sandbox it or, more cynically, an effort to ensure YouTube videos and ad revenue aren’t impacted by an exodus from the plugin.

    Browsers have had serious vulns, many of which carry significant risk and impact as measured by the annual $50K+ rewards from Pwn2Own competitions. The minuscule number of browser vendors carries risk beyond just vulns, affecting influence on standards and protections for privacy. Yet having more browsers doesn’t necessarily equate to better security models within them.

    Approaching these kinds of flaws with ideas around resilience, isolation, authn/authz models, or feedback loops reflects just a few traits of a builder. They can be traits for a breaker as well, in creating attacks against those designs.

    Approaching these by explaining design flaws and identifying implementation errors reflects just a few traits of a breaker. They can be traits for a builder as well, in designing controls and barriers to disrupt attacks.

    Approaching these by dismissing complexity, designing systems no one would (or could) use, or highlighting irrelevant flaws is often just blather. Infosec has its share of vacuous and overly-ambiguous phrases like military-grade encryption, perfectly secure, artificial intelligence (yeah, I know, blah blah blah Skynet), use-Tor-use-Signal, and more.

    There’s a place for mockery and snark. This isn’t concern trolling, which is preoccupied with how things are said. This is about understanding the underlying foundation of what is being said about designs – the risk calculations, the threat models, the constraints.

    Constructive Closing

    Pour

    I believe in supporting people to self-identify along the spectrum of builder and breaker rather than pinning them to narrow roles – a principle applicable to many more important subjects as well. This is about the intellectual reward of tackling challenges faced by builders and breakers alike, and discarding the blather of uninformed opinions and empty solutions.

    I’ll close with this observation from Carl Sagan in The Demon-Haunted World:

    It is far better to grasp the universe as it really is than to persist in delusion, however satisfying and reassuring.

    Our appsec universe consists of systems and data and users, each in different orbits.

    Security should contribute to the gravity that binds them together, not the black hole that tears them apart. Engineering works within the universe as it really is. Shed the delusion that one appsec solution in a vacuum is always universal.


    [1] WhatsApp provides great documentation on their designs for end-to-end encryption.

    [2] In 2017 Chrome announced they’d remove Flash by the end of 2020.

    • • •