Violeta,

On 2/14/17 4:43 PM, Violeta Georgieva wrote:
> Hi,
> 
> 2017-02-10 11:07 GMT+02:00 Mark Thomas <ma...@apache.org>:
>>
>> On 09/02/17 22:08, Violeta Georgieva wrote:
>>>
>>> 2017-02-08 11:25 GMT+02:00 Violeta Georgieva <violet...@apache.org>:
>>>>
>>>> 2017-02-08 1:51 GMT+02:00 Mark Thomas <ma...@apache.org>:
>>
>>
>> <snip/>
>>
>>>>> I guess that makes me reluctantly in favour of it in principle but I'd
>>>>> very much prefer to review a patch proposal minus the reformatting.
>>>>
>>>> There is a new patch:
>>>> - no formatting noise
>>>> - Martin's comments included
>>>
>>>
>>> There is a new patch:
>>> - With a fix for Martin's comment (StringManager)
>>> - I switched from AtomicBoolean to AtomicIntegerFieldUpdater in order to
>>> minimize the memory usage
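
The memory trade-off behind that switch can be sketched as follows (class and field names here are hypothetical illustrations, not the patch's): a static `AtomicIntegerFieldUpdater` is shared by all instances, so each connection only pays for a plain `volatile int` field instead of a separate `AtomicBoolean` object (object header plus payload plus a reference).

```java
import java.util.concurrent.atomic.AtomicIntegerFieldUpdater;

public class SuspendFlagDemo {
    // One static updater shared by all instances; per-instance cost is
    // just the volatile int below, not an extra AtomicBoolean object.
    private static final AtomicIntegerFieldUpdater<SuspendFlagDemo> SUSPENDED =
            AtomicIntegerFieldUpdater.newUpdater(SuspendFlagDemo.class, "suspended");

    private volatile int suspended; // 0 = running, 1 = suspended

    boolean suspend() {
        // CAS gives the same atomicity guarantees as AtomicBoolean.compareAndSet
        return SUSPENDED.compareAndSet(this, 0, 1);
    }

    boolean resume() {
        return SUSPENDED.compareAndSet(this, 1, 0);
    }

    public static void main(String[] args) {
        SuspendFlagDemo d = new SuspendFlagDemo();
        System.out.println(d.suspend()); // true  - first suspend wins
        System.out.println(d.suspend()); // false - already suspended
        System.out.println(d.resume());  // true
    }
}
```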
>>
>>
>> Thanks. Much easier to read.
>>
>> Having reviewed the patch, I'm concerned about thread-safety on resume.
>> I'll use NIO terminology but I believe the same issues apply to all three
>> connectors.
>>
>> Consider the case where the client is sending data as fast as it can.
>>
>> On suspension, the socket will be added to the poller. More data will
>> arrive, the socket will be processed, no data will be read (because
>> processing is suspended) and the socket will be added to the poller again.
>> I'm fairly sure (but haven't confirmed with a test) that when more data
>> arrives the poller will trigger socket processing again. This loop will
>> continue until the network buffers are full. (Even if I am wrong about the
>> poller firing again immediately, there is still a problem.)
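
The loop described above can be shown with a toy model (this is not Tomcat code; names and the "chunk" buffer are illustrative only): while the connection is suspended, every poller read event reads zero bytes and re-registers the socket, so events keep firing and the backlog grows until the buffer is full.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Toy model of the suspend loop: each data arrival wakes the poller,
// the suspended handler reads nothing and re-registers, and unread
// data accumulates up to the buffer capacity.
public class SuspendLoopModel {
    static final int BUFFER_CAPACITY = 4; // pretend kernel buffer, in "chunks"

    int buffered = 0;          // unread data queued in the "kernel buffer"
    boolean suspended = true;
    final AtomicInteger wakeups = new AtomicInteger();

    // Client sends one chunk; the poller fires a read event.
    void clientSends() {
        if (buffered < BUFFER_CAPACITY) {
            buffered++;
        }
        onPollerEvent();
    }

    void onPollerEvent() {
        wakeups.incrementAndGet();
        if (suspended) {
            // Processing is suspended: read nothing, add the socket
            // back to the poller. The next arrival fires again.
            return;
        }
        buffered = 0; // resumed: drain the backlog
    }

    public static void main(String[] args) {
        SuspendLoopModel m = new SuspendLoopModel();
        for (int i = 0; i < 10; i++) {
            m.clientSends();
        }
        // Ten wakeups did no useful work; the backlog is capped at capacity.
        System.out.println(m.wakeups.get() + " wakeups, "
                + m.buffered + " chunks buffered");
    }
}
```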
>>
>> On resume, the backlog of data needs to be processed. As currently
>> implemented, this backlog will be processed on the thread that calls
>> resume(). That may be undesirable for several reasons:
>> - it might not be a container thread;
>> - processing the backlog may take time, impacting other work the
>>   thread expects to do;
>> - when the poller triggers socket processing again, there could be
>>   two threads processing the same socket (very bad).
>>
>> Therefore, I think resume needs to call
>> socketWrapper.processSocket(SocketEvent.OPEN_READ, true).
>>
>> That will solve the problem of concurrent threads processing the same
>> socket, but it could cause another problem. When that container thread
>> completes, it will add the socket to the poller again. The problem is
>> that the socket will already have been added to the poller. Adding a
>> socket to the poller more than once has caused problems in the past.
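
The dispatch-plus-guard pattern being discussed can be sketched like this (a hypothetical illustration, not the patch; the real call is `socketWrapper.processSocket(SocketEvent.OPEN_READ, true)`, where the `true` means "dispatch to a container thread"): resume() hands the backlog to an executor rather than processing it inline, and a CAS gate ensures at most one thread processes the socket at a time.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch: resume() must not drain the backlog on the
// caller's thread (it might not be a container thread), so it hands
// the work off; the CAS guard prevents two threads from processing
// the same socket concurrently.
public class ResumeDispatchSketch {
    static final ExecutorService containerExecutor = Executors.newFixedThreadPool(2);

    final AtomicBoolean processing = new AtomicBoolean(false);
    volatile int backlogProcessed = 0;

    void resume() {
        containerExecutor.execute(() -> {
            if (!processing.compareAndSet(false, true)) {
                return; // another thread already owns this socket
            }
            try {
                backlogProcessed++; // stand-in for draining buffered input
            } finally {
                processing.set(false);
            }
        });
    }

    public static void main(String[] args) throws InterruptedException {
        ResumeDispatchSketch s = new ResumeDispatchSketch();
        s.resume();
        containerExecutor.shutdown();
        containerExecutor.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("backlog processed: " + s.backlogProcessed);
    }
}
```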
>>
>> That brings me to the conclusion that a different approach is needed. I
>> think we need a new SocketState value, SUSPEND. Currently, returning
>> UPGRADED from upgradeDispatch() registers the socket for read. SUSPEND
>> would essentially be a NO-OP. When resume() is called, it would trigger
>> a call to socketWrapper.processSocket(SocketEvent.OPEN_READ, true);
>> when that upgradeDispatch() completes it would return UPGRADED, adding
>> the socket to the poller and allowing processing to continue.
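
The proposed state handling can be modeled as a small state machine (names approximate Tomcat's; the model itself is illustrative, not the actual implementation): while suspended, upgradeDispatch() returns SUSPEND and the processor does nothing, so the socket is never double-registered; resume() re-dispatches, upgradeDispatch() returns UPGRADED, and the socket goes back into the poller.

```java
// Illustrative model of the SUSPEND proposal: SUSPEND is the new
// state (a NO-OP for the processor), UPGRADED is the existing one
// that registers the socket for read.
public class SuspendStateSketch {
    enum SocketState { UPGRADED, SUSPEND, CLOSED }

    boolean registeredWithPoller = false;
    boolean suspended = true;

    SocketState upgradeDispatch() {
        return suspended ? SocketState.SUSPEND : SocketState.UPGRADED;
    }

    // What the processor does with the returned state.
    void handle(SocketState state) {
        switch (state) {
            case UPGRADED:
                registeredWithPoller = true; // register socket for read
                break;
            case SUSPEND:
                // NO-OP: do not add the socket to the poller
                break;
            case CLOSED:
                registeredWithPoller = false;
                break;
        }
    }

    void resume() {
        suspended = false;
        // models socketWrapper.processSocket(SocketEvent.OPEN_READ, true)
        handle(upgradeDispatch());
    }

    public static void main(String[] args) {
        SuspendStateSketch s = new SuspendStateSketch();
        s.handle(s.upgradeDispatch());               // suspended: NO-OP
        System.out.println(s.registeredWithPoller);  // false
        s.resume();                                  // back in the poller
        System.out.println(s.registeredWithPoller);  // true
    }
}
```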
>>
> 
> A new patch is available based on the provided comments.
> Could you please review it?
> 
>> This approach would mean some internal API changes, but that is fine for
>> 9.0.x and I don't see a problem with 8.5.x either. Whether this is
>> back-ported to 8.0.x and 7.0.x is TBD. It also opens up the possibility
>> of being able to suspend/resume other protocols, but I haven't thought a
>> great deal about how that might work.
> 
> I need this functionality only for 9.0.x and 8.5.x.

I think we need to come to an agreement about what it's okay to change
in a non-backward-compatible way in terms of internal APIs. It's become
clear lately that API breakages really affect users, and we shouldn't
take that lightly.

8.5.x is already "stable" and users are actively being encouraged to
move from 8.0 to it. I think that qualifies it as being something we
should be very careful about changing.

I'm not trying to say that APIs can never change; I just want to do it
in a way that is minimally-breaking to downstream users.

-chris
