Rémy and Mark,

On 6/3/16 10:11 AM, Mark Thomas wrote:
> On 03/06/2016 14:36, Rémy Maucherat wrote:
>> Hi,
>>
>> With direct connect having been hacked in (err, I mean, "implemented"), it
>> is (a lot) easier to do meaningful performance tests. h2load is a drop-in
>> replacement for ab that uses HTTP/2, and it allowed some easy
>> profiling.
>>
>> The good news is that the code seems to be well optimized already with few
>> visible problems. The only issue is a very heavy sync contention on the
>> socket wrapper object in Http2UpgradeHandler.writeHeaders and
>> Http2UpgradeHandler.writeBody.
> 
> I suspect that is inevitable given the nature of the test. There is only
> one connection and if you have 100 streams all trying to write to the
> one connection at the same time you have to synchronise on something.

Can we use separate monitors for read versus write operations?
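For the sake of discussion, something like this (class and field names are
invented, not the actual Tomcat code) would let readers stop contending with
writers on the wrapper:

```java
// Sketch: split one coarse monitor into separate read/write locks.
// SplitLockSocketWrapper and its fields are illustrative, not Tomcat's API.
public class SplitLockSocketWrapper {
    private final Object readLock = new Object();
    private final Object writeLock = new Object();
    private int bytesWritten = 0;

    public int read(byte[] buf) {
        synchronized (readLock) {   // readers no longer block writers
            // stand-in for the real socket read
            return buf.length;
        }
    }

    public void write(byte[] buf) {
        synchronized (writeLock) {  // writers contend only with other writers
            // stand-in for the real socket write
            bytesWritten += buf.length;
        }
    }

    public synchronized int getBytesWritten() {
        return bytesWritten;
    }
}
```

It wouldn't help the h2load case (100 streams all *writing*), but it would at
least keep inbound frame processing from stalling behind a slow write.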

>> The reason for that is when you do:
>> h2load -c 1 -n 100 http://127.0.0.1:8080/tomcat.gif
>> It ends up being translated in Tomcat into: process one hundred concurrent
>> streams over one connection. Although h2load is not real world use, that's
>> something that would need to be solved, as a client can use up a lot of
>> threads.
> 
> Hmm. We might be able to do something if we buffer writes on the server
> side (I'm thinking a buffer for streams to write into with a dedicated
> thread to do the writing) but I suspect that the bottleneck will quickly
> switch to the network in that case.

You can't really do any better than filling the network, right?
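To sketch the buffered-write idea: stream threads drop frames into a queue and
a single dedicated thread drains it onto the connection. All the names here
are invented; this is just the shape of it, not a proposal for the actual code:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: stream threads enqueue frames; one writer thread owns the socket,
// so per-frame lock contention disappears. Names are illustrative only.
public class FrameWriter {
    private final BlockingQueue<byte[]> queue = new LinkedBlockingQueue<>();
    private final AtomicInteger flushed = new AtomicInteger();
    private final Thread writer;

    public FrameWriter() {
        writer = new Thread(() -> {
            try {
                while (true) {
                    byte[] frame = queue.take();    // block until a frame arrives
                    if (frame.length == 0) break;   // empty frame = shutdown signal
                    flushed.addAndGet(frame.length); // stand-in for the socket write
                }
            } catch (InterruptedException ignored) {
            }
        });
        writer.start();
    }

    // Called by stream threads; never blocks on the socket itself.
    public void enqueue(byte[] frame) {
        queue.add(frame);
    }

    // Signal the writer to finish and report how many bytes it flushed.
    public int shutdown() {
        queue.add(new byte[0]);
        try {
            writer.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return flushed.get();
    }
}
```

The queue just moves the contention point, of course; as you say, the network
probably becomes the bottleneck before the queue does.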

>> There are two main issues in HTTP/2 that could be improved:
>> 1) Ideally, there should be a way to limit stream concurrency to some
>> extent and queue. But then there's a risk to stall a useful stream (that's
>> where stream priority comes in of course). Not easy.
> 
> That should already be supported. Currently the default for concurrent
> streams is unlimited but we can make it whatever we think is reasonable.
> The HTTP/2 spec suggests it should be no lower than 100.

I was thinking about this the other day, too... a single H2 connection
can theoretically use every request-processing thread. That's not much
different from a single client making maxConnections HTTP/1.1
connections to the server, except that once the H2 connection is open,
there's no way to prevent it from monopolizing all the available
threads. With H1, it's theoretically possible to throttle the
connection-rate (or count) using mod_security, mod_qos, etc.
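FWIW, capping it would just be a SETTINGS_MAX_CONCURRENT_STREAMS value we
advertise. If it were exposed in server.xml, I'd expect it to look something
like this (the attribute name is my guess at how it could be surfaced, not a
reference to existing configuration):

```xml
<!-- Sketch: advertise a concurrent-stream cap to the client.
     Attribute name is hypothetical; verify against the actual implementation. -->
<Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol">
    <UpgradeProtocol className="org.apache.coyote.http2.Http2Protocol"
                     maxConcurrentStreams="100" />
</Connector>
```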

>> 2) All reads/writes are blocking mid-frame. It's not too bad in practice,
>> but it's a needless risk; that's where async IO could provide an "easy"
>> solution using a dedicated NIO2 implementation.
> 
> They are blocking mid-frame but given the flow control provided by
> HTTP/2 the risk should be zero unless the client advertises a larger
> window than it can handle which would be the client's problem in my view.

+1
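To make the flow-control point concrete, the window accounting is basically
this (simplified sketch; the real code tracks the connection window and
per-stream windows separately):

```java
// Sketch of HTTP/2 flow-control accounting: the sender may only write up to
// the window the peer has advertised, so a well-behaved peer never forces a
// long mid-frame block. Simplified to a single window.
public class FlowControlWindow {
    private long window;

    public FlowControlWindow(long initialWindow) {
        this.window = initialWindow;
    }

    // How many bytes we may send right now (never more than the peer allows).
    public synchronized long reserve(long desired) {
        long granted = Math.min(desired, window);
        window -= granted;
        return granted;
    }

    // Peer sent WINDOW_UPDATE: grow the window again.
    public synchronized void update(long increment) {
        window += increment;
    }

    public synchronized long available() {
        return window;
    }
}
```

If the client advertises a window bigger than it can actually absorb, the
blocking lands on them, which is exactly your point.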

Also proxies and network gear can interfere with all protocol-level
attempts to manage packet sizes, so everything becomes very chaotic very
quickly. I'm not sure there are many opportunities to really tune things
reliably, here.

-chris
