2016-06-03 17:36 GMT+02:00 Mark Thomas <ma...@apache.org>:

> On 03/06/2016 15:59, Rémy Maucherat wrote:
> > I am not talking about a limit on concurrent streams where things are
> > being refused (and this is exposed through the settings), rather on
> > streams which are effectively being processed concurrently (= for
> > example, in headersEnd, we put the StreamProcessor in a queue rather
> > than executing it immediately, unless it's a high priority stream,
> > right?). h2load allows comparing with other servers, and JF told me
> > httpd has a lower HTTP/2 performance impact compared to Tomcat. Given
> > the profiling, the problem is the heavy lock contention (no surprise,
> > this is something that is very expensive) and we could get better
> > performance by controlling the contention. JF's original "HTTP/2
> > torture test" HTML page with 1000 images probably also runs into
> > this. IMO we will eventually need a better execution strategy than
> > what is in place at the moment, since all dumb benchmarks will run
> > into that edge case. But I agree that it's only partially legitimate:
> > the client has the opportunity to control it.
>
> Ah. Got it.
>
I added some rudimentary concurrency control for testing, and the h2load
results are immediately up to 30% better at low concurrency levels (8
streams executed concurrently per connection). When allowing a larger but
still reasonable amount of concurrency, like 32 [that would correspond to
32 connections from a single client over HTTP/1.1, so that's a lot],
performance is still up 20% over the default. As sync contention gets
higher, performance degrades fast, and I verified using VM monitoring that
most of the threads in the pool spend most of their time blocked while the
test runs. This obviously depends on the thread pool size and the client
behavior, so the impact can be arbitrarily large.
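To make the idea concrete, here is a minimal sketch of the kind of
per-connection limiter described above: at most N stream tasks run at
once, and the rest wait in a FIFO queue until a slot frees up. All names
(StreamLimiter, submit, etc.) are illustrative, not Tomcat's actual API;
a real implementation would hook into headersEnd and the connector's
executor.

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical per-connection stream limiter: at most maxConcurrent
// stream tasks execute at once; the overflow waits in a FIFO queue.
public class StreamLimiter {
    private final ExecutorService pool;
    private final int maxConcurrent;
    private int active = 0;                        // guarded by "this"
    private final Queue<Runnable> pending = new ArrayDeque<>();

    public StreamLimiter(ExecutorService pool, int maxConcurrent) {
        this.pool = pool;
        this.maxConcurrent = maxConcurrent;
    }

    // Called where the stream processor would otherwise be dispatched
    // directly (e.g. once a request's headers are complete).
    public synchronized void submit(Runnable streamTask) {
        if (active < maxConcurrent) {
            active++;
            pool.execute(wrap(streamTask));
        } else {
            pending.add(streamTask);               // defer execution
        }
    }

    private Runnable wrap(Runnable task) {
        return () -> {
            try {
                task.run();
            } finally {
                onComplete();                      // free the slot
            }
        };
    }

    private synchronized void onComplete() {
        Runnable next = pending.poll();
        if (next != null) {
            pool.execute(wrap(next));              // reuse the slot
        } else {
            active--;
        }
    }

    public static void main(String[] args) throws Exception {
        // Demo: 100 tasks on a 16-thread pool, limited to 8 concurrent.
        ExecutorService pool = Executors.newFixedThreadPool(16);
        StreamLimiter limiter = new StreamLimiter(pool, 8);
        AtomicInteger inFlight = new AtomicInteger();
        AtomicInteger maxSeen = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(100);
        for (int i = 0; i < 100; i++) {
            limiter.submit(() -> {
                int now = inFlight.incrementAndGet();
                maxSeen.accumulateAndGet(now, Math::max);
                try { Thread.sleep(2); } catch (InterruptedException ignored) {}
                inFlight.decrementAndGet();
                done.countDown();
            });
        }
        done.await(10, TimeUnit.SECONDS);
        pool.shutdown();
        System.out.println("max concurrent streams observed: " + maxSeen.get());
    }
}
```

The queue keeps the extra streams parked instead of having their threads
fight over the connection locks, which is where the measured win comes
from; a refinement could bypass the queue for high priority streams.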

My conclusion is that some sort of optional mechanism should be added.

Rémy
