> Yes, but it is useless to buffer a long polling connection in a file.

Buffering some data on the web server is fine as long as the client receives whatever the server has sent, or the client sees a closed connection. If sending is no longer possible once the buffers are full, dropping the client connection and aborting the request …
> it is useless to buffer a long polling connection in a file.

For Nginx there is no difference between a long-polling request and any other request; it wouldn't even know. All it should care about is how much to buffer and how long to keep those buffers before dropping them and aborting the request. I do not see any …
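A minimal sketch in C of the buffer-then-abort policy described above; the names and buffer layout are my own illustration, not nginx internals:

    #include <stddef.h>
    #include <string.h>

    /* Hypothetical per-request output buffer with a hard cap. */
    typedef struct {
        char   *data;
        size_t  len;      /* bytes currently buffered */
        size_t  cap;      /* configured upper limit   */
    } out_buf;

    /* Called when data arrives from the backend but the client is not
     * writable.  Returns 0 if buffered, -1 if the cap is exceeded and
     * the request should be aborted and the client connection dropped. */
    int buffer_or_abort(out_buf *b, const char *p, size_t n)
    {
        if (b->len + n > b->cap) {
            return -1;            /* full: drop client, abort request */
        }
        memcpy(b->data + b->len, p, n);
        b->len += n;
        return 0;
    }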
"abort backend" meant "abort request"
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237158,241063#msg-241063
What do you mean by "stop reading"? Oh, you just stop checking whether anything is ready for reading. I see. Well, that is crude flow control, I'd say. The proxied server could unexpectedly drop the connection, because it would think Nginx is dead.
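For what it's worth, "stop reading" in an event loop looks roughly like the C sketch below (illustrative, not nginx's actual event module): the upstream socket's read interest is switched off, the kernel receive buffer fills, the TCP window closes, and the backend's writes block with no protocol-level explanation, which is why it may conclude the peer is dead.

    #include <sys/epoll.h>

    /* Apply backpressure to a backend socket by toggling read interest.
     * While EPOLLIN is off, the kernel keeps accepting bytes until the
     * socket's receive buffer fills; then the TCP window closes and the
     * backend's writes block.  No protocol-level signal is ever sent. */
    static int set_read_interest(int epfd, int fd, int want_read)
    {
        struct epoll_event ev = { 0 };

        ev.data.fd = fd;
        ev.events  = want_read ? (EPOLLIN | EPOLLRDHUP) : 0;

        return epoll_ctl(epfd, EPOLL_CTL_MOD, fd, &ev);
    }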
There is a nice feature, I don't remember exactly what it's called. If it's time to close a backend connection in a non-multiplexed configuration, just send FCGI_ABORT_REQUEST for that particular request and start dropping records for that request received from the backend.

Please shoot me any other questions about problems with implementing that feature.
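For reference, an FCGI_ABORT_REQUEST record is just a header with no body; the layout below follows the FastCGI specification, while the blocking send helper is only a sketch:

    #include <stdint.h>
    #include <unistd.h>

    /* Record header as defined by the FastCGI specification. */
    typedef struct {
        uint8_t version;          /* FCGI_VERSION_1 = 1             */
        uint8_t type;             /* FCGI_ABORT_REQUEST = 2         */
        uint8_t requestIdB1;      /* request id, high byte          */
        uint8_t requestIdB0;      /* request id, low byte           */
        uint8_t contentLengthB1;
        uint8_t contentLengthB0;  /* abort records carry no content */
        uint8_t paddingLength;
        uint8_t reserved;
    } fcgi_header;

    #define FCGI_VERSION_1      1
    #define FCGI_ABORT_REQUEST  2

    /* Sketch: tell the backend to abandon one multiplexed request.
     * A real implementation would go through the event loop rather
     * than a blocking write(). */
    ssize_t send_abort(int fd, uint16_t request_id)
    {
        fcgi_header h = {
            .version     = FCGI_VERSION_1,
            .type        = FCGI_ABORT_REQUEST,
            .requestIdB1 = (uint8_t) (request_id >> 8),
            .requestIdB0 = (uint8_t) (request_id & 0xff),
        };

        return write(fd, &h, sizeof(h));
    }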
What does the proxy module do in that case? You said earlier that HTTP lacks flow control too. So what is the difference?
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237158,241059#msg-241059
Well, there is supposed to be one FCGI_END_REQUEST record (protocolStatus FCGI_REQUEST_COMPLETE) sent in reply to FCGI_ABORT_REQUEST, but it can be ignored in this particular case. I can see that Nginx drops connections before receiving the final FCGI_REQUEST_COMPLETE at the end of normal request processing in some cases. And that's something about …
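The ignored reply has a fixed 8-byte body defined by the FastCGI specification; a sketch of checking it (the helper name is mine):

    #include <stdint.h>

    /* FCGI_END_REQUEST body, per the FastCGI specification. */
    typedef struct {
        uint8_t appStatusB3;      /* application exit status, big-endian */
        uint8_t appStatusB2;
        uint8_t appStatusB1;
        uint8_t appStatusB0;
        uint8_t protocolStatus;   /* FCGI_REQUEST_COMPLETE = 0 */
        uint8_t reserved[3];
    } fcgi_end_request_body;

    #define FCGI_REQUEST_COMPLETE 0

    /* Returns nonzero if the backend reports the request finished cleanly. */
    int request_completed(const fcgi_end_request_body *b)
    {
        return b->protocolStatus == FCGI_REQUEST_COMPLETE;
    }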
And, possibly 3) if there are no other requests on that connection, just close it as if it never existed.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237158,241055#msg-241055
It's my next task to implement the connection multiplexing feature in Nginx's FastCGI module. I haven't looked at recent sources yet and I am not familiar with the Nginx architecture, so if you could give me some pointers on where I should start, it would be great. Sure thing, anything I produce would be available …
Actually, 2) is natural, since there is supposed to be a de-multiplexer on the Nginx side, and it should know where to dispatch each record received from the backend.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237158,241053#msg-241053
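A sketch of that de-multiplexing step in C, assuming a per-connection table indexed by the 16-bit request id; the structures and helpers here are hypothetical, not actual nginx code:

    #include <stdint.h>
    #include <stddef.h>

    #define MAX_REQUESTS 65536           /* FastCGI request ids are 16-bit */

    typedef struct request request_t;    /* opaque per-request state */

    /* Hypothetical: hand the record body to the client side of the request. */
    void deliver_to_client(request_t *r, const uint8_t *body, size_t len);

    /* Hypothetical per-backend-connection dispatch table; a real module
     * would likely use a hash rather than a 64K-entry array. */
    typedef struct {
        request_t *requests[MAX_REQUESTS];   /* NULL = unknown or aborted id */
    } fcgi_conn;

    /* Dispatch one decoded record to its request, or drop it.  Records for
     * ids that were already aborted fall through and are discarded, which
     * is exactly step 2) described elsewhere in this thread. */
    void dispatch_record(fcgi_conn *c, uint16_t request_id,
                         const uint8_t *body, size_t len)
    {
        request_t *r = c->requests[request_id];

        if (r == NULL) {
            return;                      /* aborted or never existed: drop */
        }

        deliver_to_client(r, body, len);
    }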
OK, it probably closes the connection to the backend server. Well, in the case of multiplexed FastCGI, Nginx should do two things:

1) send FCGI_ABORT_REQUEST to the backend for the given request;
2) start dropping records for the given request, if it still receives any from the backend for that request.
> The main issue with FastCGI connection multiplexing is lack of flow
> control. Suppose a client stalls but a FastCGI backend continues to
> send data to it. At some point nginx should tell the backend to stop
> sending to the client, but the only way to do it is to close all the
> multiplexed connections …

You clearly... err.
> 32K simultaneous active connections to the same service on a single
> machine? I suspect the bottleneck is somewhere else...

I don't know what exactly "service" means in the context of our conversation, but if it means "server", then I did not say that everything should be handled by …
The funny thing is that resistance to implementing that feature is so dense that it feels like it's about breaking compatibility. It is all about a more complete implementation of the protocol specification, without any penalties besides making some internal changes.
Many projects would kill for a 100% performance or scalability gain.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237158,241044#msg-241044
Another scenario: consider an application that takes a few seconds to process a single request. In non-multiplexing mode we are still limited to roughly 32K simultaneous requests, even though we could install enough backend servers to handle 64K such requests per second. Now, imagine we can use FastCGI connection multiplexing …
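To make the arithmetic explicit (my numbers, not the poster's): by Little's law, 32K concurrent upstream connections at 2 seconds per request sustain only about 16K requests per second to one backend address, however much backend capacity is installed. A single multiplexed connection, by contrast, can carry up to 65535 in-flight requests, since the FastCGI request id is a 16-bit field (id 0 is reserved for management records).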
Consider a Comet application (a.k.a. long-polling Ajax requests). There is no CPU load, since most of the time the application just waits for some event to happen and nothing is being transmitted. Something like a chat or a stock-monitoring Web application used by thousands of users simultaneously. Every request …
You clearly do not understand what the biggest advantage of FastCGI connection multiplexing is. It makes it possible to use far fewer TCP connections (read: fewer ports). Each TCP connection requires a separate port, and a "local" TCP connection requires two ports. Add the ports used by browser-to-Web-server connections …