Tino Schwarze wrote:
Hi Costin,

On Sun, Dec 18, 2005 at 10:01:08AM -0800, Costin Manolache wrote:


One of the problems is that Jasper is a tricky piece of code, and
usually reducing memory this way can have unexpected impact. Maybe a
more 'moderate' approach would be more acceptable, like checking the
size of the buffers and only resetting them if they are indeed 'huge'.
This would take care of the worst case, yet still allow keeping the
buffers around for normal use.

Don't know what testing you did (to evaluate the performance impact of
the change), but try with a very large number of concurrent requests.
That's where avoiding buffer allocations is most visible; Remy spent a
lot of time making sure no buffer is allocated in the critical path
under load.

Well, performance is probably affected by my proposed patch. My setup
was only meant for hunting down the memory "leaks".

Maybe we could introduce a threshold, like 4 times the default buffer
size. If a buffer grows above this, it would be reinitialized when the
pageContext is released. It might also be worthwhile not to throw these
buffers away but to pool them (as long as they don't get too large), so
only the arrays containing the buffers need to be (re)allocated. In the
current implementation you can end up with huge buffers hanging around
that are unlikely to be used again (e.g. they may have been allocated
during a high-load period and are now referenced by tags buried deep in
the tag pools).
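
A rough sketch of that threshold-plus-pooling idea (all names, the way
the 4x limit is wired in, and the pool cap are assumptions for
illustration, not the actual Jasper implementation):

    import java.util.LinkedList;

    // Illustrative buffer pool: buffers up to 4 times the default size
    // are kept for reuse, larger ones are dropped. Not Jasper code.
    public class CharBufferPool {
        private static final int DEFAULT_SIZE = 8 * 1024;          // assumed default
        private static final int MAX_POOLABLE = 4 * DEFAULT_SIZE;  // "4 times the default"
        private static final int MAX_POOLED = 64;                  // cap on pooled buffers

        private final LinkedList<char[]> pool = new LinkedList<char[]>();

        public synchronized char[] acquire() {
            return pool.isEmpty() ? new char[DEFAULT_SIZE] : pool.removeFirst();
        }

        public synchronized void release(char[] buf) {
            // Keep reasonably sized buffers; let oversized ones become garbage.
            if (buf.length <= MAX_POOLABLE && pool.size() < MAX_POOLED) {
                pool.addFirst(buf);
            }
        }
    }

A pool like this would avoid reallocating the char arrays on every
request while still bounding how much memory can linger after a
high-load period.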

I don't think tags which use large bodies are going to perform well at
all; the API isn't actually designed for that. Discarding the char
array will cause every single tag invocation to create a large number
of garbage arrays and a lot of copying of data. If it works for you,
then good, but I think configuring the VM's memory settings
appropriately is the better option.
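
For reference, "configuring the VM's memory settings" means the usual
JVM heap options passed to Tomcat; the values below are placeholders
for illustration, not numbers suggested anywhere in this thread:

    # e.g. in CATALINA_OPTS (or JAVA_OPTS); values are placeholders
    CATALINA_OPTS="-Xms256m -Xmx512m"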

Rémy
