My current thinking is: one way I could simulate this is to divide the
> workers into two uwsgi services - one only serving processing intensive
> tasks, and the other the rest (this is easy, since we use nginx in front of
> uwsgi). The problem is that whether a request is going to be processing
> intensive does not [only] depend on the request URL, but also the state of
> the DB, so it's only a partial solution.
>

What I would probably do, without knowing the specifics of the task, is
move the CPU intensive tasks away from the web server entirely. I'd use
Celery or a similar task queue system with a worker pool to run the tasks:

- uWSGI gets a request to start intensive task

- A task is posted to Celery queue

- uWSGI returns

- A Celery worker process reads the queue, runs the task, and writes the
result to the database. The task can even be run on a different physical
machine if needed.

- If the client must know when the task is finished, Celery can use pub/sub
(PostgreSQL / Redis) to notify it

- Client makes another request to uWSGI and gets results
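The flow above can be sketched with the standard library alone. This is a
stand-in for Celery plus a result store, just to illustrate the pattern; the
function names and the summing "task" are made up, not Celery's API:

```python
import queue
import threading

# Stand-ins for the Celery broker and the result database.
task_queue = queue.Queue()
results = {}

def worker():
    """Celery-worker stand-in: read the queue, run the task, store the result."""
    while True:
        task_id, payload = task_queue.get()
        results[task_id] = sum(payload)  # the "CPU intensive" work
        task_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

def start_task(task_id, payload):
    """What the uWSGI view does: enqueue the task and return immediately."""
    task_queue.put((task_id, payload))
    return {"status": "accepted", "task_id": task_id}

def get_result(task_id):
    """The client's follow-up request: poll for the finished result."""
    if task_id in results:
        return {"status": "done", "result": results[task_id]}
    return {"status": "pending"}

start_task("t1", [1, 2, 3])
task_queue.join()          # here we wait; a real client would poll instead
print(get_result("t1"))    # {'status': 'done', 'result': 6}
```

The key point is that `start_task` returns before the work is done, so the
HTTP request is never held open for the duration of the computation.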

This way CPU intensive tasks don't count against HTTP timeouts or
similar HTTP-level settings.

Celery has more tools to fine-tune intensive tasks, distribute them across
processes, monitor them, queue them, etc. It has a fairly heavy initial
switching cost, but it certainly makes sense for some workloads. Right tool
for the right job (though without knowing the details I would not claim
Celery is the right tool here).
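Much of that fine-tuning lives in the Celery configuration module. A sketch
of what it might look like for this case; the queue name, broker URL, and
limit values are assumptions for illustration, not from the original thread:

```python
# celeryconfig.py -- illustrative sketch; broker, queue names and limits
# are made-up values, adjust to the actual deployment.
broker_url = "redis://localhost:6379/0"
result_backend = "redis://localhost:6379/1"

# Route the heavy tasks to their own queue so a dedicated worker
# (possibly on another physical machine) can consume them.
task_routes = {
    "myapp.tasks.heavy_*": {"queue": "cpu_intensive"},
}

# Kill runaway tasks instead of letting them block a worker forever.
task_soft_time_limit = 540  # seconds; raises SoftTimeLimitExceeded in the task
task_time_limit = 600       # hard limit: the worker process is replaced

# Prefetching one task at a time suits long CPU-bound tasks better
# than the default batching.
worker_prefetch_multiplier = 1
```

A worker dedicated to the heavy queue would then be started with something
like `celery -A myapp worker -Q cpu_intensive`, separate from the workers
serving the rest.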

Cheers,
-Mikko



>
> Thanks for a great server and hopefully some insights ;)
> -Clemens
>
> _______________________________________________
> uWSGI mailing list
> [email protected]
> http://lists.unbit.it/cgi-bin/mailman/listinfo/uwsgi
>
>


-- 
Mikko Ohtamaa
http://opensourcehacker.com
http://twitter.com/moo9000