Context:
--------
At Atlassian we often catch ourselves discussing whether, _at the time
of early saturation_, we could prioritise requests according to the
company's SLOs instead of Tomcat's default order (first come, first
served).

Some time ago we developed our own ThreadExecutor that proved useful
for our use case, and I was wondering whether the Tomcat project would
be interested in receiving our implementation as a patch.

How does our implementation work?
---------------------------------
Our ThreadExecutor implementation (PriorityThreadExecutor) stores the
requests waiting to be served in a PriorityBlockingQueue, so when a
worker thread becomes free to take on another request, it picks the one
with the highest priority.

When the number of requests per second is lower than the number of free
worker threads, there is no measurable impact on time-to-serve. When the
request rate is higher (i.e. early saturation), the ordering of the
PriorityBlockingQueue ensures that worker threads pick up the most
important requests first, while we wait for autoscaling to kick in.
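
To make the mechanism concrete, below is a minimal, self-contained
sketch of the idea using JDK classes only. The class and method names
are hypothetical and the priorities are hard-coded; the actual patch
plugs the same idea behind Tomcat's org.apache.catalina.Executor
interface, which is what the <Executor className="..."> element further
down expects.

=== sketch of the queueing idea (not the actual patch) ===

import java.util.Comparator;
import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PriorityPoolSketch {

    /* Attaches a priority to a queued task (hypothetical wrapper; in the
       patch the priority is derived from the connection, see below). */
    static final class PrioritizedTask implements Runnable {
        final Runnable delegate;
        final int priority; // lower value = served first

        PrioritizedTask(Runnable delegate, int priority) {
            this.delegate = delegate;
            this.priority = priority;
        }

        @Override
        public void run() {
            delegate.run();
        }
    }

    static ThreadPoolExecutor newPriorityPool(int threads) {
        // PriorityBlockingQueue is unbounded, so core == max here; free
        // workers drain it in priority order instead of FIFO.
        Comparator<Runnable> byPriority =
                Comparator.comparingInt(r -> ((PrioritizedTask) r).priority);
        return new ThreadPoolExecutor(threads, threads, 60L, TimeUnit.SECONDS,
                new PriorityBlockingQueue<>(100, byPriority));
    }

    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = newPriorityPool(1);
        // Keep the single worker busy so the next two tasks have to queue.
        pool.execute(new PrioritizedTask(PriorityPoolSketch::busyWork, 1));
        pool.execute(new PrioritizedTask(() -> System.out.println("low"), 5));
        pool.execute(new PrioritizedTask(() -> System.out.println("high"), 0));
        // Prints "high" before "low" even though "low" was queued first.
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }

    static void busyWork() {
        try {
            Thread.sleep(200);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}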

What's the ordering criterion of the priority queue?
----------------------------------------------------
Tomcat's current implementation hands the executor a socket processor
(SocketProcessorBase<?> extends Runnable), which only exposes layer-4
information such as the IP address and port.

So in our implementation we rely on the local port number to indicate
the priority of the request, while 1) using a reverse proxy to decide
which connector a given HTTP path is routed to, and 2) sharing the
ThreadExecutor across multiple <Connector> elements, like this:

=== server.xml ===
<!-- Connectors can use a shared executor -->
<Executor name="tomcatThreadPool"
    namePrefix="tomcat-prioexec-"
    maxThreads="10" minSpareThreads="4"
    maxQueueSize="100"
    className="my.custom.package.name.PriorityThreadExecutor"
/>

<!-- A "Connector" using the shared thread pool-->
<!-- High priority -->
<Connector executor="tomcatThreadPool"
           port="8080" protocol="HTTP/1.1"
           connectionTimeout="1000"
           redirectPort="8443" />

<!-- Medium priority -->
<Connector executor="tomcatThreadPool"
           port="8081" protocol="HTTP/1.1"
           connectionTimeout="1000"
           redirectPort="8443" />

=== reverse proxy _pseudo_ configuration ===

server {
    listen 80;

    # /pathA requests have higher priority than /pathB
    location /pathA/ {
        proxy_pass http://localhost:8080/;
        proxy_set_header <..>
    }

    location /pathB/ {
        proxy_pass http://localhost:8081/;
        proxy_set_header <..>
    }

}
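
For illustration, the ordering criterion boils down to a comparator
that maps the connector's local port to a priority. The sketch below is
hypothetical: the class name and the port-to-priority table are made up
for this example, and the way the local port is read from the queued
Runnable (the socket processor) is elided behind a placeholder.

=== port-to-priority comparator (illustrative sketch only) ===

import java.util.Comparator;
import java.util.Map;

public final class PortPriorityComparator implements Comparator<Runnable> {

    /* Hypothetical mapping: lower value = higher priority.
       Port 8080 (high-priority connector) beats port 8081 (medium). */
    private static final Map<Integer, Integer> PORT_PRIORITY =
            Map.of(8080, 0, 8081, 1);

    @Override
    public int compare(Runnable a, Runnable b) {
        return Integer.compare(priorityOf(a), priorityOf(b));
    }

    private int priorityOf(Runnable task) {
        // Unknown ports fall back to the lowest priority.
        return PORT_PRIORITY.getOrDefault(localPortOf(task),
                Integer.MAX_VALUE);
    }

    private int localPortOf(Runnable task) {
        // Placeholder: in the actual patch this inspects the socket
        // behind the queued socket processor to obtain the local port.
        return -1;
    }
}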


Question for the Tomcat maintainers
-----------------------------------

Is this something you would be interested in getting merged into
Tomcat's source code? I'm happy to change the approach, of course... I
just want to know whether there is any appetite for it.

thanks!
- Paulo A.


