https://bz.apache.org/bugzilla/show_bug.cgi?id=58565

Remy Maucherat <r...@apache.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
         Resolution|---                         |WORKSFORME
             Status|NEW                         |RESOLVED

--- Comment #6 from Remy Maucherat <r...@apache.org> ---
Well, it's a network stack thing, so it is transparent to Tomcat. I suppose the
buffer size is dynamic depending on memory conditions. The network stack has an
incentive to make the buffers huge, since that vastly improves performance when
everything is non-blocking.

Luckily, the send buffer size is exposed in NIOx (NIO2 has fewer socket options
than NIO1, but it does have that one), and socket.txBufSize allows configuring it
in Tomcat; there is probably a knob at the OS level as well. Setting a low value
for socket.txBufSize makes this scenario work, I just verified it. The default on
my system is 1313280 bytes [AKA memory is cheap :) ].
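For reference, the option in question is SO_SNDBUF, which NIO2 does expose on
AsynchronousSocketChannel. A minimal sketch (not taken from this bug, just
illustrating the JDK API; the 32 KB value is arbitrary) to check the platform
default and request a smaller buffer:

import java.io.IOException;
import java.net.StandardSocketOptions;
import java.nio.channels.AsynchronousSocketChannel;

public class SndBufCheck {
    public static void main(String[] args) throws IOException {
        try (AsynchronousSocketChannel ch = AsynchronousSocketChannel.open()) {
            // NIO2 supports fewer socket options than NIO1, but SO_SNDBUF is available.
            System.out.println("Supported options: " + ch.supportedOptions());
            // Default send buffer size chosen by the network stack (platform dependent).
            System.out.println("Default SO_SNDBUF: "
                    + ch.getOption(StandardSocketOptions.SO_SNDBUF));
            // Request a smaller buffer; the OS may round or cap the value it applies.
            ch.setOption(StandardSocketOptions.SO_SNDBUF, 32 * 1024);
            System.out.println("SO_SNDBUF after set: "
                    + ch.getOption(StandardSocketOptions.SO_SNDBUF));
        }
    }
}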

Anyway, thanks for the hint that got me to look at the socket options; it is now
established that this is only a configuration issue. Given the low speeds needed
for this to happen and the possible performance impact, I would be -1 on trying
to override the network stack default without explicit user configuration.
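On the Tomcat side, socket.txBufSize is a socket property set on the connector in
server.xml; a rough example of the kind of configuration meant here (the port and
buffer value are illustrative, not taken from this report):

<Connector port="8080" protocol="org.apache.coyote.http11.Http11Nio2Protocol"
           socket.txBufSize="32768" />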
