Wow, it seems --thunder-lock solved my problem.

With processes=128 my load average is now lower than it was with processes=32
without --thunder-lock.
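
For reference, the setup described above would look roughly like this in ini form (a sketch only; the socket and application options are placeholders, not from this thread):

```ini
[uwsgi]
; serialize accept() so only one worker wakes up per incoming
; connection, avoiding the thundering herd with many processes
thunder-lock = true
processes = 128
```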


Thanks for your help.
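For the record, the reason threads help in the scenario below is that CPython releases the GIL while a thread blocks on I/O, so several blocking per-request HTTP calls can overlap inside one worker. A minimal sketch, simulating the outgoing HTTP request with time.sleep as a hypothetical stand-in:

```python
import threading
import time

def handle_request():
    # Stand-in for the per-request outgoing HTTP call: the thread
    # blocks, and Python releases the GIL during the wait.
    time.sleep(0.2)

start = time.monotonic()
threads = [threading.Thread(target=handle_request) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - start

# Four blocking "requests" overlap, so wall time stays near 0.2s
# rather than 0.8s -- which is why a few threads per worker multiply
# capacity for I/O-bound work.
print(f"elapsed: {elapsed:.2f}s")
```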



On 21.08.2013, at 19:06, "Roberto De Ioris" <[email protected]> wrote:

> 
>> 
>> On 21.08.2013, at 18:42, "Roberto De Ioris" <[email protected]> wrote:
>> 
>>> 
>>> The best candidate could be your server swapping, as 128 uWSGI
>>> processes can consume a huge amount of RAM.
>> 
>> 
>> Of course not!  I have 24GB of RAM on that system.
>> 
>> 
>>> 
>>> Less common could be the thundering herd problem
>>> (http://uwsgi-docs.readthedocs.org/en/latest/articles/SerializingAccept.html),
>>> but I have received some reports of it in the past on virtualized
>>> kernels with SysV IPC locking.
>>> 
>> 
>> 
>> I do not use any virtualization.
>> 
>> 
>> Why I am trying to bump up the worker count: I need to add some Python
>> code that uwsgi runs on every request, which makes another HTTP request.
>> It will not consume CPU (it sleeps most of the time), but it will block
>> a worker for a (relatively) long time.
>> 
>> So I need more workers to handle that.
>> 
>> Can you please advise a way to run 100+ (ideally 300+) workers?
>> 
>> Will switching to threads avoid that issue?
>> 
> 
> If you can use threads, use them; your scenario is one of those for which
> Python threads are not "bad". With 4 threads per worker you will
> accomplish almost the same result.
> 
> I still find it quite strange that the load is caused by the thundering
> herd. Have you checked? (--thunder-lock should be enough to check whether
> the problem is there.)
> 
> -- 
> Roberto De Ioris
> http://unbit.it
> _______________________________________________
> uWSGI mailing list
> [email protected]
> http://lists.unbit.it/cgi-bin/mailman/listinfo/uwsgi
