I've just tested whether offload threads are really async as advertised,
and it seems they are, great ;)

What I did (zero is a 1 GB file filled with zeros):

$ uwsgi --http :8080 --static-map="/zero=zero" --stats :4444
--offload-threads 2
$ ab -c 10 -n 100 http://localhost:8080/zero

With only 1 worker and 2 offload threads I had 10 concurrent connections
(running, not queued).
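For anyone repeating the test: the --stats socket on :4444 emits a JSON
document on every connection, so you can watch worker/core state while ab
runs. A rough sketch, assuming the stats JSON exposes workers/cores/in_request
fields (exact field names may vary between uWSGI versions):

```python
import json
import socket


def read_stats(host="127.0.0.1", port=4444):
    """Connect to the uWSGI stats server and read the JSON it dumps."""
    with socket.create_connection((host, port)) as s:
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return json.loads(b"".join(chunks))


def busy_cores(stats):
    """Count cores currently inside a request (field names assumed)."""
    return sum(
        1
        for worker in stats.get("workers", [])
        for core in worker.get("cores", [])
        if core.get("in_request")
    )
```

Calling read_stats() in a loop while ab is running shows how many cores are
actually tied up; with offloading working, the busy count stays low even
with 10 concurrent downloads.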


2013/6/3 Roberto De Ioris <[email protected]>

>
> > I'll give it a try once 1.9.12 is out.
> >
> > AFAIK uWSGI is blocking, and this is the reason for offload threads. This
> > is fine for dynamic requests that need to run app code, but it also means
> > that uWSGI will probably do worse than lighttpd or nginx in a real-world
> > context, serving static files under a lot of load with a few thousand
> > client connections. AFAIK both lighttpd and nginx are asynchronous. It
> > isn't a big issue since we can put uWSGI behind nginx and use it only for
> > non-static requests, but since the HTTP frontend is getting more features
> > I wonder what the goal is here: is uWSGI intended to be as fast as the
> > others (or maybe it already is), or will nginx always be required when
> > maximum possible performance is needed?
>
>
> Since 1.9 it is no longer blocking: each write must complete within
> --socket-timeout, and if you enable a coroutine engine (like ugreen or
> coroae or gevent or ...) it will automatically start managing another
> request.
>
> Offloading is a way to free "cores" (a core could be a worker, a thread, a
> coroutine...) by delegating common tasks to a pool of threads that can
> manage them without using too many resources (even coroutines are a finite
> resource, while offloading is limited only by file descriptors and memory,
> and each offload task consumes only 128 bytes).
>
> So, speaking at a higher level, offload threads can be seen as a little
> nginx/lighttpd embedded in uWSGI that can do simple tasks using all of
> your CPU cores.
>
> I like to compare offload threads to hardware DMA: they can do only simple
> tasks (transferring memory), freeing the CPU from them.
>
> Having said that, speed in serving static files is better (even if we are
> probably talking about microseconds of difference) in pure web servers, as
> uWSGI needs to be customizable for very specific uses (and this has a
> cost).
>
> I suppose that once you start adding caching of path resolutions and
> similar micro-optimizations you can build a uWSGI file server faster than
> the others, but it requires very deep knowledge of your specific case, so
> for "general-purpose" serving a pure web server is a better bet.
>
>
> Obviously this is the current status; I do not know what will happen next
> :)
>
> --
> Roberto De Ioris
> http://unbit.it
> _______________________________________________
> uWSGI mailing list
> [email protected]
> http://lists.unbit.it/cgi-bin/mailman/listinfo/uwsgi
>



-- 
Łukasz Mierzwa