Thomas,

I see a Connection: close header when the connection is closed after sending
a 400 or 500.

However, nothing is sent if the connection is aborted while giving up on
reading unconsumed content; that can happen before, during, or after a
response, so we keep that path simple.
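To illustrate that "give up rather than block" idea, here is a rough sketch (illustrative only, not Jetty's actual code): drain a request body only while bytes are already buffered, and stop rather than block for data that will be discarded anyway.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class NonBlockingDrain {
    // Sketch (not Jetty's actual implementation): consume a request body only
    // while bytes are already buffered, so we never block waiting for data
    // that will be thrown away anyway.
    static long drainAvailable(InputStream in) throws IOException {
        byte[] buf = new byte[4096];
        long drained = 0;
        while (in.available() > 0) {
            int n = in.read(buf, 0, Math.min(buf.length, in.available()));
            if (n <= 0)
                break;
            drained += n;
        }
        return drained;
    }

    public static void main(String[] args) throws IOException {
        // A ByteArrayInputStream reports all remaining bytes as available,
        // so this drains the whole 5000-byte body without ever blocking.
        System.out.println(drainAvailable(new ByteArrayInputStream(new byte[5000])));
    }
}
```

On a real socket stream with nothing buffered, available() returns 0 and the loop exits immediately, which is the point: close rather than wait.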

So are you sure you are seeing a 400/500 response without Connection: close?
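For reference, the bounded read-and-discard I suggest further down the thread could look roughly like this (method names are illustrative, not a Jetty API); the byte and time limits address the DoS concern:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class BoundedDrain {
    // Sketch of the filter's read-and-discard step (names hypothetical):
    // consume the remaining request body, but give up once a byte budget or
    // a deadline is exceeded, so a slow client cannot pin the thread.
    // Returns true if end-of-stream was reached within the limits.
    static boolean drain(InputStream in, long maxBytes, long maxMillis) throws IOException {
        long deadline = System.currentTimeMillis() + maxMillis;
        byte[] buf = new byte[4096];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            total += n;
            if (total > maxBytes || System.currentTimeMillis() > deadline)
                return false; // limits exceeded: let the connection be closed instead
        }
        return true;
    }

    public static void main(String[] args) throws IOException {
        // Small body: fully consumed within the budget.
        System.out.println(drain(new ByteArrayInputStream(new byte[1024]), 8192, 1000));
        // Large body: the 8 KiB byte budget is exceeded, so we give up.
        System.out.println(drain(new ByteArrayInputStream(new byte[100_000]), 8192, 1000));
    }
}
```

In a real filter you would call something like this on the request's input stream after catching the Throwable, and only then decide whether to send a 400 or let the connection close.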




On Wed, 26 Sep 2018 at 14:33, Greg Wilkins <[email protected]> wrote:

>
> It is more about how the response was generated and less about the
> response code itself.
> If the application throws an exception to Jetty during request handling,
> we now always make the connection non persistent before trying to send a
> response. If the request input is terminated early or is not fully consumed
> and would block, then we also abort the connection.
>
> Interesting that you say we don't set the Connection: close header.  There
> is actually no requirement to do so as the server can close a connection at
> any time, but I thought we would do so as a courtesy.... checking....
>
> cheers
>
>
>
> On Wed, 26 Sep 2018 at 10:25, Tommy Becker <[email protected]> wrote:
>
>> Thanks Greg. Just so I’m clear, what does Jetty key on to know whether to
>> close the connection? Just the 4xx/5xx response code? I’m trying to
>> understand the difference between this case and the “normal unconsumed
>> input” case you describe. Also, I did notice that Jetty does not set the
>> Connection: close header when it does this, is that intentional?
>>
>>
>> On Sep 25, 2018, at 6:37 PM, Greg Wilkins <[email protected]> wrote:
>>
>>
>> Thomas,
>>
>> There is no configuration to avoid this behaviour.  If Jetty sees an
>> exception in the application, it will send the 400 and close the connection.
>>
>> However, as Simone says, your application can be set up to avoid this
>> situation by catching the exception and consuming any remaining input.  You
>> can do this in a filter that catches Throwable: it can then check the
>> request input stream (and/or reader) for unconsumed input and read and
>> discard to end of file.  If the response is not yet committed, it can then
>> send a 400 or any other response that you like.
>>
>> Just remember that this may make your application somewhat vulnerable to
>> DoS attacks, as it would be easy for a client to hold a thread in that
>> filter by sending data slowly.  I would suggest imposing a total-time and
>> total-data limit on the input consumption.
>>
>> Note that for normal unconsumed input, Jetty 9.4 does make some attempt
>> to consume it, but if reading that data would block, it gives up and
>> closes the connection, as there is no point blocking for data that will
>> be discarded.
>>
>> regards
>>
>>
>> On Wed, 26 Sep 2018 at 07:35, Thomas Becker <[email protected]> wrote:
>>
>>> Thanks so much again for your response; this is great information. What
>>> you say makes sense, but I now see I failed to mention the most critical
>>> part of this problem: the client never actually sees the 400 response we
>>> are sending from Jetty. When Varnish sees the RST, it considers the
>>> backend request failed and returns 503 Service Unavailable to the client,
>>> effectively swallowing our application's response. We can pursue a
>>> solution to this on the Varnish side, but in the interim I'm guessing
>>> there is no way to configure this behavior in Jetty?
>>>
>>>
>>>
>>> On Sep 25, 2018, at 4:28 PM, Simone Bordet <[email protected]> wrote:
>>>
>>> Hi,
>>>
>>> On Tue, Sep 25, 2018 at 8:34 PM Tommy Becker <[email protected]> wrote:
>>>
>>>
>>> Update: we set up an environment with the old Jetty 9.2 code and this
>>> does not occur. 9.2 does not send the FIN in #5 above, and seems happy to
>>> receive the rest of the content despite having already sent a response.
>>>
>>> On Tue, Sep 25, 2018 at 10:01 AM Tommy Becker <[email protected]>
>>> wrote:
>>>
>>>
>>> Thanks for your response. I managed to capture a tcpdump of what's going
>>> on in this scenario. From what I can see, the sequence of events is the
>>> following. Recall that our Jetty server is fronted by a Varnish cache.
>>>
>>> 1) Varnish sends the headers and initial part of the content for a large
>>> POST.
>>> 2) On the Jetty server, we use a streaming parser and begin validating
>>> the content.
>>> 3) We detect a problem with the content and throw an exception that
>>> results in a 400 Bad Request to the client (via a JAX-RS exception mapper).
>>> 4) An ACK is sent for the segment containing the 400 error.
>>> 5) The Jetty server sends a FIN.
>>> 6) An ACK is sent for the FIN
>>> 7) Varnish sends another segment that continues the content from #1.
>>> 8) The Jetty server sends a RST.
>>>
>>> In the server logs, we see an Early EOF from our JAX-RS resource that is
>>> parsing the content. This all seems pretty OK from the Jetty side, and it
>>> certainly seems like Varnish is misbehaving here (I'm thinking it may be
>>> this bug: https://github.com/varnishcache/varnish-cache/issues/2332).
>>> But I'm still unclear as to why this started after our upgrade from Jetty
>>> 9.2 to 9.4. Any thoughts?
>>>
>>>
>>> This is normal.
>>> In Jetty 9.4 we are more aggressive in closing the connection, because
>>> we don't want to be at the mercy of a possibly nasty client sending us
>>> GiB of data when we know the application does not want to handle it.
>>> Varnish's behavior is correct too: it sees the FIN from Jetty, but does
>>> not know that Jetty does not want to read until it tries to send more
>>> content and gets a RST.
>>> At that point, it should relay the RST (or FIN) back to the client.
>>>
>>> So you have two choices: catch the exception during your validation and
>>> finish reading (and discarding) the content in the application, or
>>> ignore the early EOFs in the logs.
>>> I don't think those early EOFs are logged above DEBUG level; is
>>> that correct?
>>>
>>> --
>>> Simone Bordet
>>> ----
>>> http://cometd.org
>>> http://webtide.com
>>> Developer advice, training, services and support
>>> from the Jetty & CometD experts.
>>> _______________________________________________
>>> jetty-users mailing list
>>> [email protected]
>>> To change your delivery options, retrieve your password, or unsubscribe
>>> from this list, visit
>>> https://dev.eclipse.org/mailman/listinfo/jetty-users
>>>
>>>
>>
>>
>>
>> --
>> Greg Wilkins <[email protected]> CTO http://webtide.com
>>
>>
>
>
>
> --
> Greg Wilkins <[email protected]> CTO http://webtide.com
>


-- 
Greg Wilkins <[email protected]> CTO http://webtide.com
