... and I definitely want to choose the chunk size not too small,
e.g. one chunk will be able to grow to 64KB, so that usually you will only need one of them. Only for jumbo pages will we start gluing them together...
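A minimal sketch of that idea, assuming a growable first chunk plus extra chunks glued on only for jumbo pages. The class and constant names (ChunkedCharBuffer, CHUNK_SIZE) are illustrative, not actual Tomcat/Jasper API:

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkedCharBuffer {
    static final int CHUNK_SIZE = 64 * 1024; // 64KB: big enough for the usual case

    private final List<char[]> chunks = new ArrayList<char[]>();
    private int length; // total chars written

    public void write(char[] src, int off, int len) {
        while (len > 0) {
            int chunkIndex = length / CHUNK_SIZE;
            int chunkOffset = length % CHUNK_SIZE;
            if (chunkIndex == chunks.size()) {
                // a jumbo page spilled over: glue another chunk on
                chunks.add(new char[CHUNK_SIZE]);
            }
            int n = Math.min(len, CHUNK_SIZE - chunkOffset);
            System.arraycopy(src, off, chunks.get(chunkIndex), chunkOffset, n);
            length += n;
            off += n;
            len -= n;
        }
    }

    public int length() { return length; }

    /** Keep only the first chunk after the request; the rest would go back to a pool. */
    public void recycle() {
        if (chunks.size() > 1) {
            chunks.subList(1, chunks.size()).clear();
        }
        length = 0;
    }
}
```

Small responses then never allocate beyond the first chunk, and recycling caps the retained memory at 64KB per buffer.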

So I'll dig my head into BodyContentImpl ...

Filip Hanik - Dev Lists wrote:
Remy Maucherat wrote:

Rainer Jung wrote:

I'm wondering if we should split the (possibly huge) char arrays in BodyContentImpl into smaller chunks of char arrays. Each chunk will be able to grow big enough to handle the usual cases efficiently (e.g. 64KB). Whenever a bigger size is needed we allocate more of these chunks from a pool. After using the BodyContentImpl we give back all chunks except for the first to the chunk pool.

This way performance should not really suffer, but the char arrays can be shrunk efficiently for apps that only generate large responses every now and then.


This could be a good idea, but performance would suffer, I think: if one request needs 100 buffers, you'll have 100 synchronized operations to retrieve them from the pool (only one to put them back, hopefully ;)). It could then be cheaper to always allocate new objects.

Yes, but in TC6 we can use java.util.concurrent and get a little more juice out of it; there is a pretty big difference between lock-free thread-safe operations and the locking algorithms we use today (synchronized).
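A sketch of the lock-free pool being alluded to, assuming java.util.concurrent (Java 5+, i.e. TC6): ConcurrentLinkedQueue hands chunks out and takes them back via CAS rather than a monitor, so 100 borrows don't serialize on a synchronized block. The ChunkPool class is a hypothetical illustration, not existing Tomcat code:

```java
import java.util.concurrent.ConcurrentLinkedQueue;

public class ChunkPool {
    // Non-blocking queue: poll/offer use compare-and-swap, no synchronized needed
    private final ConcurrentLinkedQueue<char[]> pool =
            new ConcurrentLinkedQueue<char[]>();
    private final int chunkSize;

    public ChunkPool(int chunkSize) {
        this.chunkSize = chunkSize;
    }

    /** Lock-free borrow; falls back to allocation when the pool is empty. */
    public char[] borrow() {
        char[] chunk = pool.poll();
        return (chunk != null) ? chunk : new char[chunkSize];
    }

    /** Return a chunk after the request so later requests can reuse it. */
    public void giveBack(char[] chunk) {
        pool.offer(chunk);
    }
}
```

Under contention the CAS loop inside poll()/offer() typically beats a contended monitor, which is the "more juice" argument above.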


---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
