On Fri, Jul 25, 2025 at 01:41:40PM +0100, Mark Thomas wrote:
> I do have some questions / comments:
> 
> 1. Why not use threadPriority on each of the connectors? I assume because
> that doesn't allow a single thread pool to be shared across multiple
> connectors.
Correct

> 
> 2. Why not use HTTP/2 + RFC 9218? I assume because that operates within a
> single connection and you don't want to put all the reverse proxy traffic in
> a single HTTP/2 connection.
Also correct. 

> 3. What other approaches did you consider, reject and why?

I considered using RFC 9218, specifically defining new priority
parameters (as described in section 4.3), because we needed far more
urgency levels than the eight defined for the generic "Priority"
header.

However, I gave up on that idea because, until now, this was only
meant to be an out-of-tree implementation. It was therefore in our
best interest to change as little of Tomcat's internals as possible
(ideally nothing), so as not to make it harder for us to upgrade to
new Tomcat versions.

Our current implementation can obtain the information it needs from
SocketProcessorBase<E>, which holds the socket details; because of
that, it required no changes to
org.apache.tomcat.util.threads.ThreadPoolExecutor or
org.apache.tomcat.util.threads.VirtualThreadExecutor.

> 
> 4. How do you define priority for each connector element?
> 

In our case, we used the port number as the criterion for
ordering/priority: the smaller the port, the higher the priority.

For instance:
        Connector port 8080
        Connector port 8081
        Connector port 8082

Assuming three connections arrive on those three ports and are waiting
to be served, a worker thread would pick up the socket/connection for
8080, then 8081, then 8082.
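To make the ordering concrete, here is a minimal, self-contained sketch (not the actual patch): a priority queue that hands out queued socket tasks lowest-port-first. The PrioritizedTask class is a hypothetical stand-in; in the real implementation the local port would come from the socket information held by SocketProcessorBase<E>.

```java
import java.util.Comparator;
import java.util.concurrent.PriorityBlockingQueue;

public class PortPriorityQueueSketch {

    // Hypothetical stand-in for a queued socket-processing task.
    static class PrioritizedTask implements Runnable {
        final int localPort;
        PrioritizedTask(int localPort) { this.localPort = localPort; }
        @Override public void run() { /* process the socket */ }
    }

    public static void main(String[] args) throws InterruptedException {
        // Lower port number == higher priority, matching the scheme above.
        PriorityBlockingQueue<PrioritizedTask> queue =
                new PriorityBlockingQueue<>(16,
                        Comparator.comparingInt(t -> t.localPort));

        // Tasks arrive in arbitrary order...
        queue.put(new PrioritizedTask(8082));
        queue.put(new PrioritizedTask(8080));
        queue.put(new PrioritizedTask(8081));

        // ...but are dequeued lowest port first: 8080, 8081, 8082.
        System.out.println(queue.take().localPort);
        System.out.println(queue.take().localPort);
        System.out.println(queue.take().localPort);
    }
}
```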


> 5. I'd be concerned that some folks may try to do too much with this. Simple
> is good (c.f HTTP/2 prioritization from RFC 7540 with the HTTP/2 PRIORITY
> frame). If implemented, the documentation would need to be clear that this
> was intended as a (very) coarse-grained solution.
> 
100%. Happy to invest some time writing docs to make that abundantly
clear.

> 6. It requires a new connection for each request from the reverse proxy
> (i.e. disabled keep-alive). That should be fine.
> 
Correct.
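For reference, keep-alive can be disabled per Connector with the standard maxKeepAliveRequests attribute (a value of 1 disables keep-alive, so each request from the reverse proxy arrives on a fresh connection). The port below just reuses the earlier example:

```xml
<!-- maxKeepAliveRequests="1" disables HTTP keep-alive on this connector. -->
<Connector port="8080" protocol="HTTP/1.1" maxKeepAliveRequests="1" />
```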

> 7. How much care needs to be taken in the reverse proxy to ensure pages
> aren't part high priority resources and part low priority. Instinctively
> that seems like it would be problematic. Is it?
>
That's a good question. In our case this wasn't much of a problem,
because I'm dealing with an API-only application, so one request is
exactly one API operation. The worst that can happen is that we
misclassify an API operation, and we would catch that through some of
our internal automation that looks at the HTTP access logs and request
latency.

I can see the problem you are mentioning, though: if a web page spans
a bunch of requests, we want to make sure that all of them have the
exact same priority.

I wonder if this could be addressed with a very clear disclaimer in
the docs covering the dos/don'ts and common pitfalls to avoid.

--
Paulo A.
