XSS war: a Java HTML sanitizer
Yesterday I was testing one of our forthcoming online services, checking its robustness against XSS (Cross Site Scripting).
XSS and CSRF (Cross Site Request Forgery) are the two scary "black beasts" of online services, in good company with scalability; but while even nerd programmers think about scalability, almost nobody takes care of XSS and CSRF, at least until the first attack.
XSS becomes a serious problem when you allow your users to add content to your site/service; typical situations are blogs, forums, online chats etc. When you are about to "print" user contributions you must do it conscientiously, or you may find some "easter eggs".
Of course, if you limit user contributions to plain text you can solve the problem in minutes, by just encoding every HTML special character (the idea is to change every ">" into "&gt;", every "<" into "&lt;", and so on). But things quickly get harder when you try to accept some basic HTML tags.
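The plain-text case really is that simple; a minimal sketch (class and method names are mine, not from the Patapage code):

```java
public class HtmlEncoder {
    /** Encode the characters that have meaning in HTML, so user text
        is always rendered as text and never parsed as markup. */
    public static String encode(String s) {
        StringBuilder sb = new StringBuilder(s.length());
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            switch (c) {
                case '<':  sb.append("&lt;");   break;
                case '>':  sb.append("&gt;");   break;
                case '&':  sb.append("&amp;");  break;
                case '"':  sb.append("&quot;"); break;
                case '\'': sb.append("&#39;");  break;
                default:   sb.append(c);
            }
        }
        return sb.toString();
    }
}
```

With this, `encode("<script>alert('xss')</script>")` becomes inert text: `&lt;script&gt;alert(&#39;xss&#39;)&lt;/script&gt;`.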
In our case, Patapage is a service that lets you add comments, threads and more to existing web sites. It is tailored to appealing interfaces, so it is a MUST to allow users to insert a wide set of HTML tags. We found on stackOverflow.com a good balance between allowed tags and refused ones; as usual Jeff Atwood is prodigal with precious hints, but in our case, while his set is fine for "patacomments", it is too restrictive for the scope of "patacontents".
As you can imagine, I found a lot of holes, so, going upstream as usual and ignoring what is probably Jeff's best suggestion ("do not re-write your own implementation" – advice which he himself is not following), I started writing code, basing my implementation on these articles.
Another aspect of the problem is preserving the layout: users can break the page by inserting unclosed or misplaced tags. If you hope that these flaws do not impact security, you are on the wrong path. A well-planned CSS attack can, for instance, layer elements over your application for click stealing (clickjacking), or simply "switch" two buttons of your application, with funny(?) results.
Of course I looked around to find something matching my needs, but I found only two kinds of approaches.
The first approach is very basic, focused on removing "<script>" tags from your HTML; obviously this is a silly approach: you can inject JS code without a "script" tag, and you have a bunch of ways to do it using DOM events (onclick, onload etc.).
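To see why, here is a hypothetical naive filter of this kind, and a payload that walks straight through it (both the regex and the payload are illustrative, not taken from any real library):

```java
import java.util.regex.Pattern;

public class NaiveFilter {
    // A naive "sanitizer" that only strips <script>...</script> blocks.
    private static final Pattern SCRIPT =
        Pattern.compile("(?is)<script.*?>.*?</script>");

    public static String strip(String html) {
        return SCRIPT.matcher(html).replaceAll("");
    }

    public static void main(String[] args) {
        // No <script> tag anywhere, yet this still runs JavaScript
        // via the onerror DOM event as soon as the broken image "loads".
        String payload = "<img src=\"x\" onerror=\"alert(document.cookie)\">";
        System.out.println(strip(payload)); // passes through unchanged
    }
}
```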
The second approach is to parse the HTML to understand exactly what is happening in the code, in order to remove disallowed tags. This approach is logically the right one; you cannot expect to do better. Your code analyses the HTML, extracts the tags, and then you walk down the tree, dropping the unwanted ones. Unluckily this approach requires an HTML parser library that is really "strong" with respect to malicious code, and usually these libraries are built for a different purpose. Another aspect is that parser, lexer and walker are quite complicated pieces of code, so it is no joke to test them completely. I tested a couple of parsers, with unsatisfying results.
This is why we wrote our own sanitizer by hand. Our approach is to remove unwanted tags and attributes without deeply validating HTML correctness.
The first step is to tokenize the code. A token can be one of: a tag start (<p>), a comment (<!-- ... -->), tag content (blah blah), or a tag closing (</p>).
For instance <p style="color:red" align="center">test</p> generates three tokens:
- <p style="color:red" align="center">
- test
- </p>
The tokenize method looks for <…> pairs or comments, and that's fine for our scope of restricting accepted tags: if a <…> pair is badly closed, the tag will simply be html-encoded.
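A tokenizer of this kind can be sketched with a single regex that matches comments and <…> pairs, treating everything in between as content (this is my own simplification, not the actual Patapage code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class Tokenizer {
    // A comment, or anything between a '<' and the next '>'.
    private static final Pattern TOKEN =
        Pattern.compile("(?s)<!--.*?-->|<[^>]*>");

    /** Split html into tag/comment tokens and the text runs between them. */
    public static List<String> tokenize(String html) {
        List<String> tokens = new ArrayList<>();
        Matcher m = TOKEN.matcher(html);
        int last = 0;
        while (m.find()) {
            if (m.start() > last) tokens.add(html.substring(last, m.start()));
            tokens.add(m.group());
            last = m.end();
        }
        // Trailing text -- including a dangling, badly closed "<..."
        // that will simply be html-encoded later.
        if (last < html.length()) tokens.add(html.substring(last));
        return tokens;
    }
}
```

So `tokenize("<p style=\"color:red\">test</p>")` yields the three tokens `<p style="color:red">`, `test`, `</p>`.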
Having the token list, we test every single token to see whether it is acceptable; again, we do not perform tag matching at first, so for us <b>test</i> is fine – we are working on security, not on syntax. We also fix that afterward, as it is easy in any case to add a tag-pair counter and close still-open tags at the end, which is actually done in our code.
We loop over every single token and test it with regular expressions. The flow is:
- if the token is a comment, discard it
- if the token is a start tag (e.g. <p style="color:red" align="center">), extract the tag name (p) and the attributes (style="color:red" align="center")
  - if the tag is forbidden, remove it
  - if the tag is allowed, extract every attribute and check it:
    - check "href" and "src" only on the tags that admit them (a, img, embed) and verify URL validity (http or https only)
    - check the "style" attribute, looking for a "url(…)" parameter and discarding it if present
    - remove every "on…" attribute – e.g. onclick, onload, …
    - encode the values of unknown attributes
    - push the tag on the stack of open tags
  - else the tag is unknown and is removed
- if the token is an end tag (</p>), extract the tag name (p) and check whether the corresponding tag is open; close along the way any inner tags that are still open
- else it is not a tag, and we encode it
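Put together, the flow above looks roughly like this condensed sketch, with a tiny whitelist and only the "on…" and "href" checks (the real code handles many more tags, attributes, and the style/url(…) case):

```java
import java.util.*;
import java.util.regex.*;

public class MiniSanitizer {
    static final Set<String> ALLOWED = Set.of("b", "i", "p", "a");
    static final Pattern TOKEN = Pattern.compile("(?s)<!--.*?-->|<[^>]*>");
    static final Pattern TAG =
        Pattern.compile("(?s)</?([a-zA-Z][a-zA-Z0-9]*)(.*?)/?>");
    static final Pattern ATTR =
        Pattern.compile("([a-zA-Z-]+)\\s*=\\s*\"([^\"]*)\"");

    static String encode(String s) {
        return s.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;");
    }

    public static String sanitize(String html) {
        StringBuilder out = new StringBuilder();
        Deque<String> open = new ArrayDeque<>();   // stack of open tags
        Matcher m = TOKEN.matcher(html);
        int last = 0;
        while (m.find()) {
            if (m.start() > last) out.append(encode(html.substring(last, m.start())));
            String tok = m.group();
            last = m.end();
            if (tok.startsWith("<!--")) continue;            // discard comments
            Matcher t = TAG.matcher(tok);
            if (!t.matches()) { out.append(encode(tok)); continue; } // not a tag: encode
            String tag = t.group(1).toLowerCase();
            if (!ALLOWED.contains(tag)) continue;            // forbidden/unknown: remove
            if (tok.startsWith("</")) {                      // end tag
                if (open.contains(tag)) {
                    while (!open.isEmpty()) {                // close inner open tags too
                        String o = open.pop();
                        out.append("</").append(o).append('>');
                        if (o.equals(tag)) break;
                    }
                }
            } else {                                         // start tag
                out.append('<').append(tag);
                Matcher a = ATTR.matcher(t.group(2));
                while (a.find()) {
                    String name = a.group(1).toLowerCase(), value = a.group(2);
                    if (name.startsWith("on")) continue;     // drop DOM event handlers
                    if (name.equals("href") &&
                        !value.matches("(?i)https?://.*")) continue; // http(s) only
                    out.append(' ').append(name).append("=\"").append(value).append('"');
                }
                out.append('>');
                open.push(tag);
            }
        }
        if (last < html.length()) out.append(encode(html.substring(last)));
        while (!open.isEmpty())                              // close leftover tags
            out.append("</").append(open.pop()).append('>');
        return out.toString();
    }
}
```

For example `sanitize("<b onclick=\"x()\">hi")` gives `<b>hi</b>`: the event handler is dropped and the unclosed tag is closed at the end.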
Having finished my hard work coding the shelter, I gave the happy news to our design department that the sanitizer was done; after a first minute of excitement, Matteo (http://pupunzi.open-lab.com) told me that they have three different usages for user input: displaying an HTML page in the front office, displaying a textual abstract in lists in the back office, and of course storing the content in the database.
So the sanitizer needs three different outputs: html-encoded with tags, text-only without tags, and the "original" version for the database. This is why the latest version of the sanitizer returns ".html", ".text" and ".val". Why should you store ".val" instead of the original input or ".html"? Because the original input may be "dangerous", and may mislead the user into believing that all tags are allowed. The encoded value is not suitable in case of subsequent modification, because of double encoding (e.g. ">" → "&gt;" → "&amp;gt;" and so on). On the other side, ".val" removes only the forbidden tags, maintaining all other user oddities (strange tags, comments, etc.).
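In Java terms, the result can be modeled as a small value object whose field names mirror the ".html", ".text" and ".val" outputs described above (the shape is my assumption, not the published Patapage API):

```java
public class SanitizeResult {
    public final String html; // sanitized, tags kept: safe to print in the front office
    public final String text; // tags stripped: for abstracts in back-office lists
    public final String val;  // only forbidden tags removed: what goes in the database

    public SanitizeResult(String html, String text, String val) {
        this.html = html;
        this.text = text;
        this.val  = val;
    }
}
```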
We have set up a public playground for testing our sanitization code: http://patapage.com/applications/pataPage/site/test/testSanitize.jsp. This page allows you to input text; by pressing "test", your input will be printed (sanitized) on the page.
The source of our sanitizer is released under the MIT license (i.e. free as in free beer, just keep the attribution); see the complete code here.
The allowed tags will be accepted; the others will be encoded and printed. If you like challenges, try to inject some JS in your text and, for instance, get an alert. Tell us about your victories, if any.
BTW, I've tested my code using the XSS Me plugin for Firefox, and it passed all (about 150) tests.