On 01/02/2016 23:46, Rémy Maucherat wrote:
> 2016-02-01 20:47 GMT+01:00 <ma...@apache.org>:
> 
>> Author: markt
>> Date: Mon Feb  1 19:47:13 2016
>> New Revision: 1727992
>>
>> URL: http://svn.apache.org/viewvc?rev=1727992&view=rev
>> Log:
>> Fix a consistent unit test failure on OSX (no idea why it started to
>> appear now)
>> Handle the case where the required TLS buffer increases after the
>> connection has been initiated.
>>
> 
> Well, the design is so wrong.

I'm not a fan of the solution either, but I couldn't see a better way to
resize the buffer; there's a sketch of the approach after the list below.

I thought about:
- a custom exception - rejected since exceptions are slow and flow
  control via exceptions is bad practice
- a custom return value (-2, Integer.MIN_VALUE or similar) - rejected
  because it is non-standard and would require changes up the call stack
  to handle it
- increasing the default buffer size - rejected as the smaller buffer
  is enough in nearly all cases
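
For the record, the approach that went in looks roughly like this (a
sketch with made-up names, not the committed code): watch for
BUFFER_OVERFLOW from unwrap(), re-allocate based on the session's new
getApplicationBufferSize(), and hand the replacement buffer back:

import java.nio.ByteBuffer;
import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLEngineResult;
import javax.net.ssl.SSLException;

public class TlsBufferResize {

    /*
     * Unwrap network data, growing the application buffer if the
     * session's required buffer size has increased since the connection
     * was initiated. Returns the (possibly re-allocated) buffer, since
     * the caller's reference would otherwise go stale.
     */
    public static ByteBuffer unwrapWithResize(SSLEngine engine,
            ByteBuffer netIn, ByteBuffer appIn) throws SSLException {
        while (true) {
            SSLEngineResult result = engine.unwrap(netIn, appIn);
            if (result.getStatus() != SSLEngineResult.Status.BUFFER_OVERFLOW) {
                return appIn;
            }
            // The session now reports a larger required application
            // buffer (e.g. JSSE moving from ~16k to ~32k). Allocate a
            // bigger one, preserving anything already unwrapped.
            int required = engine.getSession().getApplicationBufferSize();
            ByteBuffer bigger = ByteBuffer.allocate(required + appIn.position());
            appIn.flip();
            bigger.put(appIn);
            appIn = bigger;
        }
    }
}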

Anything else I thought of required invasive API changes. A related
issue is that read(ByteBuffer) provides no way to expose the expanded
ByteBuffer to the caller, but that method is part of the ByteChannel API
so its signature is fixed.
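
To make the constraint concrete (illustration only, hypothetical
wrapper, not Tomcat code): a TLS channel has to fit this shape, and the
int return can only carry a byte count, never a replacement buffer.

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ByteChannel;

public class SecureChannelSketch implements ByteChannel {

    private final ByteChannel delegate;

    public SecureChannelSketch(ByteChannel delegate) {
        this.delegate = delegate;
    }

    @Override
    public int read(ByteBuffer dst) throws IOException {
        // If decryption needs a buffer larger than dst, the options are
        // an exception, a magic return value, or resizing internally;
        // there is no way to hand a bigger ByteBuffer back from here.
        return delegate.read(dst);
    }

    @Override
    public int write(ByteBuffer src) throws IOException {
        return delegate.write(src);
    }

    @Override
    public boolean isOpen() {
        return delegate.isOpen();
    }

    @Override
    public void close() throws IOException {
        delegate.close();
    }
}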

Suggestions welcome.

> BTW, what is the
> getSession().getApplicationBufferSize() value here ? And that's with
> OpenSSL or JSSE ?

Roughly 16k or 32k for JSSE with the current Oracle Java 8, as far as I
can tell. Something (I didn't figure out what) was triggering a switch
to the larger buffer size after we had initialised the buffers.

The OpenSSL implementation only ever uses 16k, so it shouldn't hit this
code.
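
If anyone wants to check the values on their own JVM, something like
this prints them for the default session (they can change once a real
session is negotiated):

import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLSession;

public class BufferSizeCheck {
    public static void main(String[] args) throws Exception {
        SSLEngine engine = SSLContext.getDefault().createSSLEngine();
        SSLSession session = engine.getSession();
        System.out.println("application buffer: "
                + session.getApplicationBufferSize());
        System.out.println("packet buffer:      "
                + session.getPacketBufferSize());
    }
}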

Mark

