https://issues.apache.org/bugzilla/show_bug.cgi?id=51232

             Bug #: 51232
           Summary: Heavy CPU usage caused by broken browser connection
           Product: Tomcat Connectors
           Version: 1.2.31
          Platform: PC
        OS/Version: FreeBSD
            Status: NEW
          Severity: major
          Priority: P2
         Component: mod_jk
        AssignedTo: dev@tomcat.apache.org
        ReportedBy: ari.suut...@syncrontech.com
    Classification: Unclassified


After upgrading mod_jk from 1.2.30 to 1.2.31 we started seeing random spikes of
very heavy CPU usage by Apache (2.2.18) processes. The Apache server is normally
very lightly loaded, but the load average now stayed almost constantly above 10.

It seems the problem occurs when Tomcat is serving a page that takes some time
to complete (a long report, for example). If the user isn't patient enough to
wait for the page to finish and presses stop in the browser instead, the Apache
server process handling the request starts spinning for 30 seconds.
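
For what it's worth, the effect of the stop button can be simulated without a
browser. The rough sketch below just requests a slow page through the Apache
front end and closes the connection before the response is complete; the host,
port and URL are placeholders for our setup, not anything taken from mod_jk:

#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/types.h>
#include <sys/socket.h>

int main(void)
{
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;

    /* placeholder address: the Apache server in front of mod_jk */
    if (getaddrinfo("192.168.2.99", "80", &hints, &res) != 0)
        return 1;

    int sd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (sd < 0 || connect(sd, res->ai_addr, res->ai_addrlen) < 0)
        return 1;

    /* placeholder URL: a page that takes a long time to generate in Tomcat */
    const char *req = "GET /demo/slow-report HTTP/1.1\r\n"
                      "Host: 192.168.2.99\r\n"
                      "Connection: close\r\n\r\n";
    (void)write(sd, req, strlen(req));

    /* read a bit of the response, then give up, like pressing stop */
    char buf[1024];
    (void)read(sd, buf, sizeof(buf));
    close(sd);        /* abort while Tomcat is still writing the report */

    freeaddrinfo(res);
    return 0;
}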

Somebody else has also noticed this (in a different environment, however):
http://osdir.com/ml/users-tomcat.apache.org/2011-05/msg00350.html

Here are some relevant lines from jk.log:

[Thu May 19 13:02:31.440 2011] [1164:34382815408] [debug]
ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 03e0    61 0A 0A 61 61
61 61 61 61 61 61 61 61 61 61 61  - a..aaaaaaaaaaaaa
[Thu May 19 13:02:31.440 2011] [1164:34382815408] [debug]
ajp_connection_tcp_get_message::jk_ajp_common.c (1329): 03f0    61 61 61 61 61
61 61 61 61 61 61 61 61 61 61 61  - aaaaaaaaaaaaaaaa
[Thu May 19 13:02:31.440 2011] [1164:34382815408] [debug] ws_write::mod_jk.c
(508): written -1 out of 8184
[Thu May 19 13:02:31.440 2011] [1164:34382815408] [info]
ajp_process_callback::jk_ajp_common.c (1885): Writing to client aborted or
client network problems
[Thu May 19 13:02:31.440 2011] [1164:34382815408] [info]
ajp_service::jk_ajp_common.c (2543): (varnish_tc-demo) sending request to
tomcat failed (unrecoverable), because of client write error (attempt=1)
[Thu May 19 13:02:31.440 2011] [1164:34382815408] [debug]
ajp_reset_endpoint::jk_ajp_common.c (757): (varnish_tc-demo) resetting endpoint
with socket 13 (socket shutdown)
[Thu May 19 13:02:31.440 2011] [1164:34382815408] [debug]
jk_shutdown_socket::jk_connect.c (726): About to shutdown socket 13
[192.168.2.99:64615 -> 192.168.9.24:8123]
[Thu May 19 13:03:01.000 2011] [1164:34382815408] [debug]
jk_shutdown_socket::jk_connect.c (808): Shutdown socket 13 [192.168.2.99:64615
-> 192.168.9.24:8123] and read -374766112 lingering bytes in 30 sec.
[Thu May 19 13:03:01.001 2011] [1164:34382815408] [debug]
ajp_done::jk_ajp_common.c (3013): recycling connection pool slot=0 for worker
varnish_tc-demo
[Thu May 19 13:03:01.001 2011] [1164:34382815408] [info]
service::jk_lb_worker.c (1388): service failed, worker varnish_tc-demo is in
local error state
[Thu May 19 13:03:01.001 2011] [1164:34382815408] [info]
service::jk_lb_worker.c (1407): unrecoverable error 200, request failed. Client
failed in the middle of request, we can't recover to another instance.
[Thu May 19 13:03:01.001 2011] [1164:34382815408] [info] jk_handler::mod_jk.c
(2620): Aborting connection for worker=lb_tc-demo

I think the heavy CPU usage starts when the log says "About to shutdown socket
13" and ends when the next line in the log says "Shutdown socket ... read
<a negative number> lingering bytes in 30 sec.".
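
My guess (without having read the new jk_connect.c in detail) is that the
lingering read done after the socket shutdown no longer waits for data on the
non-blocking socket, so it just spins until the 30 second timeout expires.
Something like the first loop below would also explain the huge negative
"lingering bytes" number, if the -1 error returns are added to the byte counter
on every pass; the second loop shows the behaviour I would expect instead. This
is only an illustration of what I suspect, not the actual mod_jk code:

#include <errno.h>
#include <poll.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <time.h>

#define LINGER_TIMEOUT 30   /* seconds, matches the 30 second gap in jk.log */

/* Busy-spinning drain: on a non-blocking socket recv() keeps returning
 * -1/EAGAIN, so the loop never sleeps and one CPU is pegged until the
 * timeout expires.  Adding the -1 returns to the counter on every pass
 * would also produce a large negative "lingering bytes" total. */
long drain_busy(int sd)
{
    char buf[512];
    long total = 0;
    time_t start = time(NULL);

    while (time(NULL) - start < LINGER_TIMEOUT) {
        ssize_t rc = recv(sd, buf, sizeof(buf), 0);
        if (rc == 0)
            break;            /* peer closed cleanly, nothing left to drain */
        total += rc;          /* suspected bug: error returns accumulated too */
    }
    return total;
}

/* Drain that waits for readability first, so an aborted connection with
 * no more data costs (almost) no CPU while the linger timeout runs out. */
long drain_polite(int sd)
{
    char buf[512];
    long total = 0;
    time_t start = time(NULL);

    while (time(NULL) - start < LINGER_TIMEOUT) {
        struct pollfd pfd = { .fd = sd, .events = POLLIN };
        if (poll(&pfd, 1, 1000) <= 0)   /* block up to 1s waiting for data */
            continue;                   /* timeout or signal: re-check clock */
        ssize_t rc = recv(sd, buf, sizeof(buf), 0);
        if (rc == 0)
            break;                      /* peer closed cleanly */
        if (rc > 0)
            total += rc;
        else if (errno != EAGAIN && errno != EWOULDBLOCK)
            break;                      /* real error: stop draining */
    }
    return total;
}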

We have now reverted to mod_jk 1.2.30, which works fine. 1.2.31 made our system
almost unusable.

This problem is easy to reproduce with Apache JMeter (I can provide the JMeter
test configuration and the test page used on the Tomcat side if necessary).
