OU[tf-]812

Music has a universal appeal uninhibited by language. A metalhead in Istanbul, Tokyo, or Oslo instinctively knows the deep power chords of Black Sabbath — it takes maybe two beats to recognize a classic like “N.I.B.” or “Paranoid.” The same guitars that screamed the tapping mastery of Van Halen or led to the spandex-and-hair excess of ’80s metal also served The Beatles, Pink Floyd, and Eric Clapton. And before them was Chuck Berry, laying the groundwork with the power chords of “Roll Over Beethoven”.

And all this with six strings and five notes: E – A – D – G – B – E. Awesome.

And then there’s the writing on the web. Thousands of symbols, 8 bits, 16 bits, 32 bits. With ASCII, or US-ASCII as RFC 2616 puts it. Or rather ISO-8859-1. But UTF-8 is easier because it’s like an extended ASCII. On the other hand if you’re dealing with GB2312 then UTF-8 isn’t necessarily for you. Of course, in that case you should really be using GBK instead of GB2312. Or was it supposed to be GB18030? I can’t remember.
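If that paragraph reads like confusion, it is. Here’s a quick sketch (Python is my choice here, not anything the web requires) of the same two bytes read under two different assumptions:

```python
# One byte sequence, two interpretations.
data = b"\xc3\xbc"
print(data.decode("utf-8"))       # prints 'ü', one character
print(data.decode("iso-8859-1"))  # prints 'Ã¼', two characters of mojibake
```

The bytes never change; only the assumption about them does.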

What a wonderful world of character encodings can be found on the web. And confusion. Our metalhead friends like their own genre of müzik / 音楽 / musikk. One word, three languages, and, in this example, one encoding: UTF-8. Programmers need to know programming languages, but they don’t need to know different spoken languages in order to work them into their web sites correctly and securely. (And based on email lists and flame wars I’ve seen, rudimentary knowledge of even one spoken language isn’t a prerequisite for some coders.)

You don’t need to speak the language in order to work with its characters, words, and sentences. You just need Unicode. As some random dude (not really) put it, “The W3C was founded to develop common protocols to lead the evolution of the World Wide Web. The path W3C follows to making text on the Web truly global is Unicode. Unicode is fundamental to the work of the W3C; it is a component of W3C Specifications, from the early days of HTML, to the growing XML Family of specifications and beyond.”

Unicode has its learning curve. With Normalization Forms. Characters. Code Units. Glyphs. Collation. And so on. The gist of Unicode is that it’s a universal coding scheme meant to represent every character used in written language, past, present, and future; hopefully never to be eclipsed.
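For a taste of that learning curve, here’s a small sketch using Python’s standard unicodedata module. Two strings can render identically yet compare unequal until they’re normalized to the same form:

```python
import unicodedata

single = "\u00fc"     # ü as one precomposed code point
combined = "u\u0308"  # u followed by a combining diaeresis
print(single == combined)                                # False, despite identical rendering
print(unicodedata.normalize("NFC", combined) == single)  # True after normalization
```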

The security problems of Unicode stem from converting one character set to another. When home-town fans of 少年ナイフ want to praise their heroes in a site’s comment section, they’ll do so in Japanese. Yet behind the scenes, the browser, web site, and operating systems involved might each be handling the characters in UTF-8, Shift-JIS, or EUC-JP.
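As a sketch of what “behind the scenes” means, here’s the band’s name pushed through those three encodings (codec names as Python spells them):

```python
band = "少年ナイフ"  # Shonen Knife
for enc in ("utf-8", "shift_jis", "euc-jp"):
    data = band.encode(enc)
    print(enc, len(data), data.hex())  # same five characters, different byte counts
```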

The conversion of character sets introduces the chance for mistakes and broken assumptions. The number of bytes might change, leading to a buffer overflow or underflow. The string may no longer be the C-friendly NULL-terminated array. Unsupported characters trigger errors, possibly causing an XSS filter to skip over a script tag. A lot of these concerns have been documented elsewhere. Some have even been demonstrated as exploitable vulns in the real world (as opposed to conceptual problems that run rampant through security conferences, but never see a decent hack).
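A minimal sketch of two of those broken assumptions. The zero-width-space payload is mine, purely illustrative; it stands in for any character a legacy charset can’t represent:

```python
# Assumption 1: byte length and NUL placement survive re-encoding. They don't.
title = "N.I.B."
print(title.encode("utf-8"))      # 6 bytes, no NULs
print(title.encode("utf-16-le"))  # 12 bytes, a NUL after every letter

# Assumption 2: a filtered string stays filtered after conversion. It might not.
payload = "<sc\u200bript>"   # a zero-width space splits the keyword
print("<script" in payload)  # False, so a naive pre-conversion filter waves it through
lossy = payload.encode("latin-1", errors="ignore").decode("latin-1")
print(lossy)                 # <script>, the unsupported character vanished
```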

Unicode got more popular scrutiny when it was proposed for Internationalized Domain Names (IDN). Researchers warned of “homoglyph” attacks, situations where phishers or malware authors would craft URLs that used alternate characters to spoof popular sites. The first attacks didn’t need IDNs, using trivial tricks like dead1iestattacks.com (replacing the letter L with a one, 1). However, IDNs provided more sophistication by allowing domains with harder-to-detect changes like deạdliestattacks.com.
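A sketch of that second domain in Python. The built-in idna codec implements the older IDNA 2003 rules (registries have since moved on to IDNA 2008), but the principle is the same:

```python
real = "deadliestattacks.com"
fake = "de\u1ea1dliestattacks.com"  # U+1EA1, the letter a with a dot below
print(fake)                 # nearly indistinguishable in many fonts
print(real == fake)         # False
print(fake.encode("idna"))  # the xn-- form a resolver actually sees
```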

What hasn’t been well documented (or at least not anywhere I could find) is the range of support for character set encodings in security tools. The primary language of web security seems to be English (at least based on the popular conferences and books). But useful tools come from all over. Wivet originated from Türkiye (here’s some more UTF-8: Web Güvenlik Topluluğu, the Web Security Community), but it goes easy on scanners in terms of character set support. Sqlmap and w3af support Unicode. So maybe this is a non-issue for modern tools.

In any case, it never hurts to have more “how to hack” tools in non-English languages or test suites to verify that the latest XSS finder, SQL injector, or web tool can deal with sites that aren’t friendly enough to serve content as UTF-8. Or you could help out with documentation projects like the OWASP Development Guide. Don’t be afraid to care. It would be disastrous if an anti-virus, malware detector, WAF, or scanner were tripped up by encoding issues.

Sometimes translation is really easy. The phrase for “heavy metal” in French is “heavy metal” — although you’d be correct to use “Métal Hurlant” if you were talking about the movie. Character conversion can be easy, too. As long as you stick with a single representation. Once you start to dabble in Unicode conversions among UTF-8, UTF-16, UTF-32, and beyond, you’ll be well served by keeping up to date on encoding concerns and having tools that spare you the brain damage of implementing everything from scratch.
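A parting sketch of the easy case: one word round-tripping through three Unicode encodings, intact but at three different sizes:

```python
word = "müzik"
for enc in ("utf-8", "utf-16-le", "utf-32-le"):
    data = word.encode(enc)
    assert data.decode(enc) == word  # round-trips without loss
    print(enc, len(data))            # 6, 10, and 20 bytes for the same word
```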

p.s. Sorry, Canada, looks like I’ve hit my word count and neglected to mention Rush. Maybe next year.

p.p.s. And eventually I’ll work in a reference to all 10 tracks of DSotM in a single post.
