On 22/11/05, Robert Wittams <[EMAIL PROTECTED]> wrote:
> No. The developer finds out that if he wants to create an XSS attack
> catalog, he needs to be explicit about it. If a developer fails to read
> the documentation, he is not going to get the most out of the framework.
We both want to protect against developer flaws - carelessness, laziness and forgetfulness in particular. However, if a developer exhibits these characteristics with respect to code that checks inputs, you can hardly assume they are going to be more reliable with respect to documentation. I think you could forgive a developer for thinking that request.GET was a simple dictionary of the request's HTTP 'GET' parameters without checking the documentation first! Making request.GET do some heavy filtering magic by default really breaks the principle of least surprise as far as I can see.

We also have to remember that code spends an awful lot of time in maintenance mode, and very often a developer will be given other people's code to maintain and fix. In those circumstances you do not (and cannot) read all the documentation for every framework and library being used -- very often people just dive in. I have personally done an awful lot of maintaining other people's code, and have had to do so without the luxury of time to read reams of documentation. For this kind of developer, who may well make up the majority on a given Django project, APIs really need to do the obvious thing.

> The ASP.net thing works on a per-page rather than per request variable
> setting as far as I can remember, this is what makes it useless.

Point taken.

> >>The point is that *when, not if, you fail* the system should let you
> >>fail *as safely as possible*, rather than as dangerously as possible.
> >
> > Data loss due to parts of input being stripped out by a safety mechanism
> > (when the developer had already built adequate checks into the code) is
> > also failure, and so is a user being unable to input the required data
> > due to some error message saying 'malicious data detected'.
>
> What part of "let you fail as safely as possible" did you not understand?

My point is that there is no such thing as failing safely. Filtering against XSS eliminates one problem. But why is XSS bad in the first place? Because it can cause data loss or data theft, which ultimately affects someone's bottom line, either in time or money, and either the web site user's or the web site creator's. But XSS is not the only thing that can damage that bottom line. Data loss that occurs from inputs being munged unexpectedly can affect the bottom line just as much, whether due to corrupted data, rejected data or the time taken to get the code fixed.

If Django were a tool for writing only blog applications and message boards, I would agree that this would be a good precaution to take -- effectively a refactoring. But I am sure Django will be used much more widely than that, and there are lots of cases where people will have innocent input data that merely looks dangerous when output on a web page. I don't know what you were trying to input when you came across the ASP.NET 'feature', but in my case it was purely innocent -- I wasn't trying to test any XSS protection, yet somehow managed to trigger it. Now Microsoft might have written their filter badly, but I strongly suspect any filter we come up with that is actually useful will also catch/munge data that accidentally looks like an XSS attack. The filter would have to be impossibly complex (not to mention clairvoyant) to catch only real XSS attempts.
What I'm saying here is that the filtering will not be infallible, and will therefore be a nuisance in some cases, and as such could be just as dangerous to someone's bottom line as letting some XSS vulnerabilities through. We don't really have a way of calculating the relative risks of these two, but on the other considerations (including the other points I mentioned, and Simon's as well), I think it is unfortunately a flawed idea.

Regards,

Luke