https://issues.apache.org/bugzilla/show_bug.cgi?id=56733
--- Comment #9 from Andy Wang <do...@moonteeth.com> ---
(In reply to Christopher Schultz from comment #2) 
> The call mod_jk makes to ap_rflush looks fairly innocuous. That call is also
> made in ws_write if the request is for headers only (e.g. a HEAD request).
> Can you try this with and without +FlushPackets and try a huge number of
> HEAD requests to see if you can also trigger a memory leak?
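
Roughly, the test that suggestion implies looks like this; the JkMount path
and URL are placeholders rather than my actual setup (for the "without" run,
just remove the JkOptions line):

JkOptions +FlushPackets
JkMount /app/* tomcat1

ab -c 60 -n 6000 -i http://localhost/app/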

I ran ab -c 60 -n 6000 -i about a half dozen times and memory slowly grew to
around 38 MB, but over the next dozen runs it didn't change.
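
For anyone repeating this, a rough way to watch the growth between runs
(assuming Linux, and taking total RSS across the httpd processes as a fair
proxy) is a loop like:

while true; do
    ps -C httpd -o rss= | awk '{sum += $1} END {printf "%.1f MB\n", sum/1024}'
    sleep 5
done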

This is with the following workers.properties config:
worker.tomcat1.type=ajp13
worker.tomcat1.host=localhost
worker.tomcat1.port=8010
worker.tomcat1.connection_pool_size=120
worker.tomcat1.connection_pool_minsize=8
worker.tomcat1.connection_pool_timeout=900

So I pulled out the packet size option, since I agree it's unlikely to be the
cause. I actually only intended to paste in the connection_pool_* lines, but my
mouse slipped and I cut and pasted 4 lines :)

This is with +FlushPackets, so I'm assuming that since the growth stopped, a
test without FlushPackets probably wouldn't be meaningful, right? And just to
confirm, I still saw memory growth with a regular
ab -c 10 -n 6000 (without -i). I had to drop the concurrency to 10 because my
poor little VM's disk I/O didn't like sending out 60 concurrent 600 MB streams.

I'm going to play with the pool_timeout and pool_minsize settings to see if
there's something there.

Is it possible that timing connections out from the pool could "lose" the
handle to the allocated memory, preventing it from being freed?
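
The kind of variation I have in mind is along these lines (deliberately
aggressive values to force frequent pool timeouts, not production settings);
if I'm reading the docs right, the global worker.maintain interval is what
actually reaps the timed-out connections:

worker.maintain=30
worker.tomcat1.connection_pool_timeout=60
worker.tomcat1.connection_pool_minsize=0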
