Thanks, I eagerly look forward to 9.4.8 and hopefully it'll improve the 
situation :)
Setting up a load generator harness is still on my TODO list, but I'll make 
sure to test this out as well.
Thanks for the guidance.

> On Oct 31, 2017, at 2:28 PM, Greg Wilkins <[email protected]> wrote:
> 
> 
> Steven,
> 
> 
> yes, we have not done a great job of documenting or auto-tuning the reserved
> thread pool. The reserved thread pool was something forced onto us by the
> more difficult scheduling demands of HTTP/2, and we didn't fully understand
> all the implications when it was introduced, thus we could not document it
> well.
> 
> However, with the next release (9.4.8) we are reaching a happier point:
> lazy allocation of reserved threads means a natural level is reached, plus
> we now have a ThreadPoolBudget mechanism to warn if a pool is
> over-allocated. So we do still need to spend more time writing up these
> solutions.
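> 
> For example, a minimal sketch of the shape that should now be caught at
> startup (the ports and pool size are illustrative only; assumes the 9.4
> APIs):
> 
>     import org.eclipse.jetty.server.Server;
>     import org.eclipse.jetty.server.ServerConnector;
>     import org.eclipse.jetty.util.thread.QueuedThreadPool;
> 
>     public class BudgetDemo
>     {
>         public static void main(String[] args) throws Exception
>         {
>             QueuedThreadPool pool = new QueuedThreadPool(20); // deliberately small
>             Server server = new Server(pool);
>             for (int port = 8080; port < 8086; port++)        // 6 connectors
>             {
>                 ServerConnector connector = new ServerConnector(server);
>                 connector.setPort(port);
>                 server.addConnector(connector);
>             }
>             // With 9.4.8 the ThreadPoolBudget should warn here that the
>             // selectors and reserved threads leased by six connectors
>             // over-allocate a 20-thread pool.
>             server.start();
>             server.join();
>         }
>     }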
> 
> With regard to sharing thread pools and schedulers: yes, there can be
> benefits to sharing these between multiple clients. There can even be
> benefits to using the server's pools/schedulers. But there can also be
> contention issues if you have large machines with many CPUs, so there is
> no silver-bullet solution that will work for all applications. I strongly
> advise trying some alternate configurations and benchmarking them, for
> example along the lines of the sketch below.
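> 
> As a starting point for such a comparison, something like this shares one
> pool and scheduler across clients (the names and sizes here are
> illustrative, not a recommendation):
> 
>     import org.eclipse.jetty.client.HttpClient;
>     import org.eclipse.jetty.util.thread.QueuedThreadPool;
>     import org.eclipse.jetty.util.thread.ScheduledExecutorScheduler;
> 
>     public class SharedClientPools
>     {
>         public static void main(String[] args) throws Exception
>         {
>             QueuedThreadPool sharedPool = new QueuedThreadPool(200);
>             sharedPool.setName("shared-client-pool");
>             ScheduledExecutorScheduler sharedScheduler =
>                 new ScheduledExecutorScheduler("shared-client-scheduler", false);
> 
>             // Both clients lease threads and timers from the same resources.
>             HttpClient backendA = new HttpClient();
>             backendA.setExecutor(sharedPool);
>             backendA.setScheduler(sharedScheduler);
>             backendA.start();
> 
>             HttpClient backendB = new HttpClient();
>             backendB.setExecutor(sharedPool);
>             backendB.setScheduler(sharedScheduler);
>             backendB.start();
>         }
>     }
> 
> Benchmark that against one pool per client, and against reusing the
> server's pool, and keep whichever wins on your hardware.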
> 
> cheers
> 
> On 1 November 2017 at 04:34, Steven Schlansker <[email protected]> 
> wrote:
> I'm nearing delivery of my new awesome Jetty Based Proxy Thing 2.0 and
> have deployed it to one of our test environments.
> 
> Due to braindead vendor load balancers and some internal testing needs,
> the Jetty instance deployed to this environment needs to have 6 (!)
> connectors.
> 
> At first, it ran fine -- but after serving (light) load for a few minutes
> the server would start to hang more and more connections indefinitely,
> until they eventually hit the idle timeout. Eventually the entire thing
> wedges and only the most trivial of requests finish.
> 
> After quite a bit of debugging, I realized that each connector seems to be
> starting up a ManagedSelector and ReservedThreadExecutor. These each
> "reserve" threads from the pool, in that they block waiting for work that
> needs to be handled immediately.
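> 
> Back-of-envelope (assuming an 8-CPU box and my reading of the defaults,
> which may well be off): 6 connectors * (1 selector + max(1, 8/8) reserved)
> = 12 of my 20 threads pinned before any application work runs. One
> mitigation I'm experimenting with is capping the selectors per connector
> explicitly:
> 
>     // 1 acceptor and 1 selector instead of the CPU-derived defaults
>     ServerConnector connector =
>         new ServerConnector(server, 1 /* acceptors */, 1 /* selectors */);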
> 
> I'd started with 20 threads, figuring that would be enough for a test 
> environment with
> only a couple of requests per second.
> 
> Thus, very few (or no) pool threads are ever available to do the actual
> normal-priority work. However, the EWYK (EatWhatYouKill) scheduler still
> seems to make partial progress (as long as tasks do not hit the queue,
> they keep going) -- but anything that gets queued hangs forever. The
> server ends up in a very confusing, partially working state.
> 
> Reading through the docs, all I could find is this one-liner:
> "Configure with goal of limiting memory usage maximum available. Typically
> this is >50 and <500"
> 
> No mention of this pretty big pitfall. If the tunable were strictly about
> performance, that might be just fine -- but the fact that there are
> liveness concerns makes me think that we could do better.
> 
> At least a documentation tip -- "each connector reserves 1 ManagedSelector
> + 1/8 * #CPU reserved threads by default" -- would be welcome, but even
> better would be if the QueuedThreadPool could somehow assert that the
> configuration is not writing reserved-thread checks that it can't cash.
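> 
> An entirely hypothetical sketch of the kind of startup assertion I mean
> (the formula just mirrors the doc tip above; connectorCount, pool, and the
> headroom of 4 are all made up):
> 
>     // connectorCount and pool (a QueuedThreadPool) assumed in scope
>     int cpus = Runtime.getRuntime().availableProcessors();
>     int pinnedPerConnector = 1 /* selector */ + Math.max(1, cpus / 8);
>     int pinned = connectorCount * pinnedPerConnector;
>     if (pool.getMaxThreads() - pinned < 4) // arbitrary headroom for app work
>         throw new IllegalStateException("Connectors pin " + pinned + " of "
>             + pool.getMaxThreads() + " threads; requests cannot progress");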
> 
> WDYT?
> 
> 
> Relatedly, my proxy reaches out to many backends. Each backend may have
> its own configuration for some high-level tunables like timeouts and
> maximum number of connections. So, each one gets its own HttpClient.
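> 
> For illustration, the sort of per-backend knobs I mean (all values made
> up):
> 
>     import org.eclipse.jetty.client.HttpClient;
> 
>     HttpClient backend = new HttpClient();
>     backend.setConnectTimeout(5_000);            // ms, varies per backend
>     backend.setIdleTimeout(30_000);              // ms, varies per backend
>     backend.setMaxConnectionsPerDestination(64); // varies per backend
>     backend.start();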
> 
> This ends up creating a fair number of resources that would normally be
> global -- threads, mostly.
> 
> Is it recommended practice to share these? I.e., create one Executor and
> Scheduler to share among HttpClient instances, or go even further and
> just take them from the Server itself?
> 
> It seems like it might have some benefits -- why incur a handoff from the
> server thread to the client thread when you're just ferrying data back
> and forth? -- but given my adventures with sizing just the server thread
> pool, I worry I might get myself into trouble. But if it's just a matter
> of sizing it appropriately...
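> 
> I.e. something like this, if sharing the lifecycle is actually safe
> (which is part of my question):
> 
>     // server is the running org.eclipse.jetty.server.Server;
>     // its ThreadPool is an Executor, so in principle:
>     HttpClient proxyClient = new HttpClient();
>     proxyClient.setExecutor(server.getThreadPool());
>     proxyClient.start();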
> 
> 
> Thanks for any guidance!
> Steven
> 
> --
> Greg Wilkins <[email protected]> CTO http://webtide.com


_______________________________________________
jetty-users mailing list
[email protected]
To change your delivery options, retrieve your password, or unsubscribe from 
this list, visit
https://dev.eclipse.org/mailman/listinfo/jetty-users
