All,

I've opened a PR for this:

https://github.com/apache/tomcat/pull/681

Comments welcome.

-chris

On 12/15/23 10:20, Christopher Schultz wrote:
Mark,

On 12/15/23 04:53, Mark Thomas wrote:
On 13/12/2023 22:05, Christopher Schultz wrote:
All,

I've been playing with this Filter recently, and I have some concerns about its practical use. I'm considering adding some features to it in order to make it more practical to use, and I'm interested to see what others think about these "problems" and whether my potential additions have merit.

These are in no particular order (e.g. importance).

1. Killing caching

If used as prescribed, all URLs will have CSRF tokens added to them even when they are unnecessary. Specifically, I'm thinking about static resources such as images, scripts, and CSS. Because the token changes, the same static resource ends up being requested with a different query string each time, which defeats browser and proxy caching for those resources.

Then don't map the filter to those resources? Ah, I've now read your comment below.

It sounds like we do need more sophisticated URL-mapping functionality than we can get with just the filter mapping.

Yep. My initial code, here, was to add some configuration to the filter to simply direct the response-wrapper NOT to adjust certain URLs sent to encodeURL and encodeRedirectURL. I'm open to other suggestions.
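To make this concrete, here is roughly what I'm picturing in web.xml; the noTokenURLPatterns init-param and its pattern syntax are purely illustrative and do not exist in the filter today:

<filter>
  <filter-name>CsrfPreventionFilter</filter-name>
  <filter-class>org.apache.catalina.filters.CsrfPreventionFilter</filter-class>
  <init-param>
    <!-- Hypothetical setting: URLs matching these patterns would be passed
         through encodeURL()/encodeRedirectURL() unchanged -->
    <param-name>noTokenURLPatterns</param-name>
    <param-value>*.css,*.js,*.png,*.jpg,*.gif</param-value>
  </init-param>
</filter>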

2. <form>s with either GET or POST. Or, really... "Forms"

When using <form method="get" action="/foo?csrf=blahblah">, different browsers do different things with the query string. I'm mostly testing on Firefox and I don't have time to check this on every browser. It's enough that Firefox behaves in this way.

<form method="get" action="/foo?csrf=blahblah">
   <input type="submit" name="btn" value="Go!" />
</form>

Submitting this form causes a 403 response. The browser shows that the URL requested from the server is /foo?btn=Go! and the CSRF token has been removed from the action. (Surprise!) I don't see anything in WHATWG's description of how form-submission is supposed to work that indicates that GET versus POST should behave differently with respect to the 'action' attribute, but this is what the current version of Firefox actually does.

There are three options I can think of to get around this:

a. Switch to POST:

<form method="post" action="/foo?csrf=blahblah">
   <input type="submit" name="btn" value="Go!" />
</form>

You get HTTP POST /foo?csrf=blahblah with an application/x-www-form-urlencoded entity containing the form <input> element(s).

b. Manually add the CSRF token to the form:

<form method="get" action="/foo?csrf=blahblah">
[ if request.getAttribute("org.apache.catalina.filters.CSRF_NONCE_PARAM_NAME") ]
   <input type="hidden" name="csrf"
          value="[ request.getAttribute("org.apache.catalina.filters.CSRF_NONCE_PARAM_NAME") ]" />
[ end-if ]
   <input type="submit" name="btn" value="Go!" />
</form>

In this case, you get a GET request with the csrf parameter in there because, well, you asked for it.

c. Hack the CsrfPreventionFilter to allow all POST requests. This is one of my proposals: allow users to configure the CSRF prevention filter to always allow HTTP POSTs. This cannot be done in web.xml using e.g. <http-method>GET</http-method> for the <filter>, because Filters cannot be configured in this way. And if they could be, the Filter would not be invoked and all the URLs on your page would lack the required CSRF tokens to make /those/ links work.
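Just to illustrate option (c), the configuration might look something like this; the init-param name and value are hypothetical and not something the filter supports today:

  <init-param>
    <!-- Hypothetical setting: requests using these methods would be
         accepted without a valid CSRF token -->
    <param-name>unenforcedMethods</param-name>
    <param-value>POST</param-value>
  </init-param>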

Doesn't option c) defeat the purpose of the CSRF Filter? It looks like a) or b) are the better choices here.

I don't think (c) defeats the purpose of CSRF -- at least not completely. Here's my thought process.

First, the biggest threat from CSRF is not tricking a user on one site into submitting a form to the victim application (which would be a POST, which always requires user interaction from a browser). The biggest threat is triggering a non-idempotent GET request from one site to the victim site. This can be done in a variety of ways, but the simplest is something like this:

https://attacker.example.com/evilpage.html

...
<img src="https://victim.example.com/user/delete?id=1" />
...

Assuming that CSP, etc. hasn't been configured for these sites (or it has been configured intentionally for an attack), the browser will make that request to the victim application without any user interaction.

So, without protecting POST, we still get a lot of mileage out of CSRF protections only on GET (or at least non-POST).

Why not protect POST? Well, this gets back to the "practical" part.

It would be great to tell people "just enable CSRF protection and you are good". But the reality is that you have to enable it and then test every corner of your application. This process for me has been agonizing. We have lots of places in our nearly 20-year-old application (which has been continually maintained, improved, etc. and not just left to rot) where CSRF breaks everything.

By ignoring POST, we can almost just enable it and fix each instance of problems as they come up, because the number of times we have forgotten to use response.encodeURL() is infinitesimal. This gets us a huge bang for our buck even if it isn't 100% protection. I'll take 95% protection over 0% any day.

Had Tomcat's CSRF protection allowed this, it would probably be in production already instead of sitting in a "we'll see how this works out" pile of things to do.

This is very related to the "non-enforcement" suggestions below, because the idea is that you'd /eventually/ get all the POSTs working in your application, and then remove the "unsafe" non-enforcement of CSRF protection for POST.

3. Default token-cache size

The default token cache size is 5. If you use applications like I do, you often open multiple links into separate tabs for various reasons. Do a search of products in your application, and there are several of them you'd like to get details for. So you middle-click (or whatever does it in your environment) on each of those links to open separate tabs, leaving the search-results page visible in the current tab.

Then you want to click the "next" button to view the next page of the results. An HTTP 403 response is returned because, if you opened more than 4 extra tabs, the tokens used on the search-results page have already been pushed out of the nonce cache, and you can't use your page anymore.

I'm not sure what the best token cache size is. It might depend upon the behavior of your users. If I use your site, you might want a very large token cache size. But I think maybe 5 isn't quite big enough. At least it's trivially settable, so I don't really have a recommendation here other than "maybe let's set the default to something higher"?
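(For reference, raising it today is just a matter of the existing nonceCacheSize init-param, e.g.:

  <init-param>
    <!-- Keep more recent nonces per session; the default is 5 -->
    <param-name>nonceCacheSize</param-name>
    <param-value>20</param-value>
  </init-param>

...the question is only what the out-of-the-box value should be.)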

There is a trade-off here with memory usage but I wouldn't object to a higher default. Users with memory pressure concerns can always lower it.

I think this may be more of a documentation issue than anything else. The CSRF protection filter basically says "enable this to protect against CSRF" but doesn't explain what the practical implications are of the configuration settings.

There is enough here that putting it all into the configuration guide is probably a little too verbose. I'd be happy to write up a page all about this. In fact, it might make a good presentation. I did one a while back at an Apache RoadShow in the Washington, DC area but didn't put it up on the presentations page. It was mostly a survey of security-related things you can do with Tomcat, but configuration, etc. was mostly left as an exercise for the reader.

4. No "test mode"

It might be really helpful to have a non-enforcement mode that generates logging without any errors. This would allow developers to enable the CSRF prevention filter in a mode where the nonces are generated and checked, but failures are only logged: any time a token would have been rejected, the rejection is logged and the request is allowed anyway.
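Something along these lines is what I'm picturing; the "enforce" param name is hypothetical:

  <init-param>
    <!-- Hypothetical setting: generate and check nonces, but only log
         failures instead of returning a 403 -->
    <param-name>enforce</param-name>
    <param-value>false</param-value>
  </init-param>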

The logging is already there, but requests are rejected.

I am having a problem at $work where any time I enable this in a development environment, I get loads of complaints from users being locked out of things that I haven't had the time to test and/or mitigate (such as <form> submissions). If there were a non-enforcement mode, I could enable it, then look at the logs after a week and fix anything I find in there without disturbing anybody.

+1

Thanks for the feedback.

5. All or nothing

This may be 4(b) instead of "5", but...

When enabling the CSRF prevention filter, it's impractical to configure it for anything but <url-pattern>/*</url-pattern>. If you choose any other filter pattern, you will find that you need a number of "entry points", which are request URLs that are allowed to be requested /without/ providing a CSRF token.

This means that your *entire application* needs to be working perfectly in order to enable the CSRF prevention filter.

It would be nice to allow enforcement of, for example, subsets of the URL space. Maybe I only want to enforce the CSRF tokens in my /admin/ and /product/ URL spaces, allowing the /public/ and /whatever/ spaces to remain unprotected.

But if I visit /public/index and want to click on a link pointing to /admin/index, it's got to have that CSRF token in it, otherwise I get an error. So I can't just use <url-pattern>/product/*</url-pattern> and <url-pattern>/admin/*</url-pattern>, because then I will not only get non-enforcement for the other URL spaces, the links on those pages will also get no tokens.

Providing one or more non-enforcement URL spaces (prefixes?) would allow users to enable CSRF mitigations on parts of their web applications, test and fix anything that isn't working, then move on to other parts of the application, adding enforcement a chunk at a time.
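Concretely, I'm imagining something like this; again, the param name and wildcard support are hypothetical:

  <init-param>
    <!-- Hypothetical setting: requests matching these prefixes are never
         rejected, but their responses still get tokens added to URLs -->
    <param-name>noEnforcementURLPatterns</param-name>
    <param-value>/public/*,/whatever/*</param-value>
  </init-param>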

I'm not sure what others' experiences of using the CSRF prevention filter have been like, but my experience so far has been very rocky as you can see from the issues I'm raising above.

I'd be very happy to receive feedback on anything I've written above.

I'm not clear how this would work. Don't you "just" need to define entry points for the parts that are protected? You'd need to do that anyway wouldn't you?

This is another "practical implementation" kind of thing. If your whole application is ready for CSRF, then just go for it and all is well. But if your admin-facing interface is the one you need to protect most critically and you want to leave the non-admin stuff unprotected (just as a stop-gap), then as-is your only option is NOT to map the filter to the public-facing URL space.

There are times where you may want to have admin-oriented links on a public-facing page that are only shown to admins. They can't be "entry points" because you want to actually protect them, but the initial request needs to go to a non-protected page.

So the filter needs to be smart enough to apply CSRF tokens to the URLs in the page, but if you make a request to the page without a valid CSRF token, it's still okay.

One option would be to list every single URL in your whole public URL space as an "entry point". That would work, but you'd end up with a huge number of URLs in there (there's no globbing, etc. available, so you have to list every single one).
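With the existing entryPoints init-param, that ends up looking something like this (paths invented for illustration), and the list only grows:

  <init-param>
    <!-- Every public URL has to be listed individually -->
    <param-name>entryPoints</param-name>
    <param-value>/public/index,/public/about,/public/contact,/public/search,/public/news</param-value>
  </init-param>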

Additionally, the idea would be - over time - to remove those entry points which are only there because your dev team hasn't gotten around to locating and fixing all non-compatible pages. If they are thrown in with the real "entry points", it's no longer clear which ones shouldn't really be there in the first place. If we have a "non-enforcement URL space", then the goal would be to reduce that over time to zero items.

Does that make sense?

I think I'll prepare an issue+PR for this, and we can comment on the implementation. I'll keep things in separate commits so we can pick and choose what we like and don't like.

-chris
