Eric,
On 6/25/24 20:10, Eric Robinson wrote:
>> No - Tomcat passes the acceptCount value to the TCP/IP stack of the
>> OS as part of listener socket initialization.
>
> I thought of that after I sent my previous message.
>
>> the OS won't log this, since it's considered to be an application
>> error.
>
> Assuming the problem is the acceptCount value, then it's technically
> the app's fault for improperly initializing the socket listener.
> Nevertheless, it would make sense for the OS to log a neighborly
> message (perhaps optionally) along the lines of, "I rejected a
> connection attempt because of the acceptCount setting you told me to
> use." It would sure help with troubleshooting. As you pointed out,
> the app can't log errors it does not know about, so nothing is logged
> anywhere.
>
> What is the impact on memory utilization if we increase the acceptCount
> value? There are 100 Tomcat instances on the server. And would
> maxThreads have to be increased to accommodate the extra
> connections?
I suspect that your problem lies in the neighborhood of the acceptCount
value, so I think this is worth pursuing.
You said that your clients have very different usage patterns. Perhaps
you can identify those that get the biggest bursts of traffic and
reconfigure only those to have higher acceptCount values? There is no
particular reason that you have to raise them all to 5000 or whatever.
The memory impact is likely negligible, especially with that insane
amount of RAM. I'm no Linux kernel expert, but I think there are
essentially no limits (other than physical RAM and avoiding swapping) on
the amount of memory that the kernel can use for this kind of thing. A
TCP backlog entry can't be THAT big. A few dozen bytes or maybe even a
whole kilobyte per entry, and you have scads of RAM to spare.
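
As an aside (a general Linux detail, not something taken from your logs):
the backlog value Tomcat passes to listen() is silently capped by the
kernel's net.core.somaxconn setting, so a big acceptCount only takes
effect if that limit is at least as large. Something like this shows and
raises it; the 5000 is just an illustrative value:

    # current kernel cap on listen() backlogs
    sysctl net.core.somaxconn

    # raise it (persist via /etc/sysctl.d/ if you keep the change)
    sysctl -w net.core.somaxconn=5000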
maxThreads does not need to be increased. Think of acceptCount and
maxThreads as sympathetic settings. If you never want any client to be
turned away, then set acceptCount/backlog to something "high". If you
never want any client to wait a long time, set maxThreads to something
"high". But of course your CPU(s) can only do so much work at once, so
at some point setting maxThreads too high will just make things worse.
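
For what it's worth, here is a minimal sketch of the kind of per-instance
tuning I mean. The port, protocol, and numbers are placeholders, not
recommendations for your environment:

    <!-- hypothetical connector for one of the burstier instances -->
    <Connector port="8080"
               protocol="org.apache.coyote.http11.Http11NioProtocol"
               maxThreads="200"
               acceptCount="1000"
               connectionTimeout="20000" />

Raising acceptCount for only the handful of busiest instances, as
suggested above, keeps the change contained.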
Do you have any reverse-proxy or similar out in front of any/all of
these services? If so, you could turn "connection refused" into an HTTP
503 response to clients. Of course, then you have the problem of tuning
the reverse-proxy so that it doesn't turn anyone away...
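
If that front-end happened to be nginx (an assumption on my part), the
remapping might look roughly like this sketch; the address and error
page are made up:

    location / {
        proxy_pass http://127.0.0.1:8080;
        # nginx answers 502 when the backend refuses the connection;
        # remap that to 503 so clients see "busy" rather than "bad gateway"
        error_page 502 504 =503 /busy.html;
    }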
"Connection refused" is one of TCP's ways of telling a client that the
service isn't available. That can be because it's down completely or
because it's being heavily used. IMHO, "under heavy use" is an
acceptable reason to return "connection refused" but your (paying)
clients may disagree. How often are you getting reports of "connection
refusals"? You should be able to monitor your processes to see the
number of concurrent connections being processed, and see if they are
reaching the various limits you have.
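
One low-level way to see whether a backlog is actually filling up
(assuming a reasonably modern Linux with iproute2) is to watch the
listening socket directly; for a listening socket, ss reports the
current accept-queue depth in Recv-Q and the configured backlog in
Send-Q. The port is a placeholder for whichever instance you're checking:

    # accept-queue depth (Recv-Q) vs. configured backlog (Send-Q)
    ss -ltn 'sport = :8080'

    # cumulative count of connections dropped because a listen queue overflowed
    netstat -s | grep -i listen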
Any time your server's concurrent-request-count and max-thread-count are
the same, it means your acceptCount/backlog is being used. Look for
instances of those situations and re-tune those services to have a
higher thread count.
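
If the manager application happens to be deployed on these instances
(an assumption on my part), its status servlet is a quick way to compare
currentThreadsBusy against maxThreads per connector; the same figures
are also exposed over JMX on the ThreadPool MBeans. For example
(credentials and port are placeholders):

    curl -s -u admin:s3cret 'http://localhost:8080/manager/status?XML=true'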
Can you post an example <Connector> configuration? Are most of your
servers configured with similar <Connector> configurations?
There are a lot of variables that go into these calculations, including
connector type (BIO, NIO/NIO2, APR, etc.), usage pattern (short requests
versus long-running ones), use of async/websocket, etc. So more
information would certainly help us help you.
-chris