On Jul 20, 2013, at 9:23 , momyc wrote:
> What do you mean by "stop reading"? Oh, you just stop checking if anything
> is ready for reading. I see. Well, that is crude flow control, I'd say.
> The proxied server could unexpectedly drop the connection because it would
> think Nginx is dead.
TCP will say to
"abort backend" meant "abort request"
What do you mean by "stop reading"? Oh, you just stop checking if anything
is ready for reading. I see. Well, that is crude flow control, I'd say.
The proxied server could unexpectedly drop the connection because it would
think Nginx is dead.
There is a nice feature for this, though I don't remember exactly what it's
called. At the point where a non-multiplexed configuration would close the
backend connection, just send FCGI_ABORT_REQUEST for that particular request
instead, and start dropping records for that request received from the backend.
Please shoot me any other questions about problems with implementing that
feature.
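For reference, aborting one request on a multiplexed connection is cheap at
the wire level: FCGI_ABORT_REQUEST is just the standard 8-byte record header
with an empty body. A minimal sketch in Python (the constants come from the
FastCGI spec; the socket usage at the end is illustrative):

    import struct

    FCGI_VERSION_1 = 1
    FCGI_ABORT_REQUEST = 2  # record type, per the FastCGI spec

    def abort_request_record(request_id: int) -> bytes:
        """Build an FCGI_ABORT_REQUEST record: 8-byte header, empty body."""
        # Header fields: version, type, requestId (16-bit big-endian),
        # contentLength (16-bit big-endian), paddingLength, reserved.
        return struct.pack("!BBHHBB", FCGI_VERSION_1, FCGI_ABORT_REQUEST,
                           request_id, 0, 0, 0)

    # e.g.: backend_sock.sendall(abort_request_record(42))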
On Jul 20, 2013, at 9:05 , momyc wrote:
> What does the proxy module do in that case? You said earlier that HTTP lacks
> flow control too. So what is the difference?
The proxy module stops reading from the backend, but it does not close the
backend connection.
It reads from the backend again once some of the buffered data has been sent
to the client.
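A minimal sketch of that behaviour, assuming a selectors-based event loop;
the watermark numbers are illustrative, not nginx's actual buffer settings:

    import selectors

    sel = selectors.DefaultSelector()
    HIGH_WATER = 256 * 1024  # stop reading the backend above this much buffered
    LOW_WATER = 64 * 1024    # resume once the client has drained below this

    def on_backend_readable(backend, out_buf):
        out_buf += backend.recv(16384)
        if len(out_buf) >= HIGH_WATER:
            # Stop polling the backend for readability; the connection stays
            # open.  The backend's TCP send buffer eventually fills, so TCP
            # itself applies back-pressure instead of nginx dropping anything.
            sel.unregister(backend)
        return out_buf

    def on_client_writable(client, backend, out_buf):
        out_buf = out_buf[client.send(out_buf):]
        if len(out_buf) <= LOW_WATER:
            # Some buffers have been sent: start reading the backend again
            # (double-registration guards omitted in this sketch).
            sel.register(backend, selectors.EVENT_READ)
        return out_buf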
What does the proxy module do in that case? You said earlier that HTTP lacks
flow control too. So what is the difference?
Well, there is supposed to be one FCGI_REQUEST_COMPLETE sent in reply to
FCGI_ABORT_REQUEST, but it can be ignored in this particular case.
I can see Nginx drops connections before receiving the final
FCGI_REQUEST_COMPLETE at the end of normal request processing in some cases.
And that's something about
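For completeness: FCGI_END_REQUEST carries an 8-byte body with appStatus and
a protocolStatus of FCGI_REQUEST_COMPLETE. A decoding sketch (constants per
the FastCGI spec):

    import struct

    FCGI_END_REQUEST = 3       # record type
    FCGI_REQUEST_COMPLETE = 0  # protocolStatus value

    def parse_end_request_body(body: bytes):
        """Decode the 8-byte FCGI_END_REQUEST body: (appStatus, protocolStatus)."""
        app_status, protocol_status = struct.unpack("!IB3x", body)
        return app_status, protocol_status

    # After sending FCGI_ABORT_REQUEST, a conforming backend still answers
    # with FCGI_END_REQUEST / FCGI_REQUEST_COMPLETE; as noted above, that
    # reply can simply be discarded.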
On Sat, 2013-07-20 at 00:50 -0400, momyc wrote:
> It's my next task to implement the connection multiplexing feature in Nginx's
> FastCGI module. I haven't looked at recent sources yet and I am not familiar
> with Nginx architecture, so if you could give me some pointers on where I
> could start, it would be great.
On Jul 20, 2013, at 8:41 , momyc wrote:
> OK, it probably closes the connection to the backend server. Well, in the
> case of multiplexed FastCGI, Nginx should do two things:
> 1) send FCGI_ABORT_REQUEST to the backend for the given request
> 2) start dropping records for the given request if it still receives records
> from the backend for that request
And, possibly 3) if there are no other requests on that connection, just
close it like it never existed.
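Putting the three points together, a hedged sketch of the per-connection
bookkeeping (the data structures are assumptions, not nginx internals;
abort_request_record() is the helper sketched earlier in this thread):

    class MultiplexedBackend:
        """Tracks which request ids are in flight on one multiplexed connection."""

        def __init__(self, sock):
            self.sock = sock
            self.active = set()    # request ids that still have a live client
            self.aborted = set()   # ids we aborted; their records get dropped

        def abort(self, request_id):
            # 1) tell the backend to stop working on this request
            self.sock.sendall(abort_request_record(request_id))
            # 2) remember to drop any records still in flight for it
            self.active.discard(request_id)
            self.aborted.add(request_id)
            # 3) no other requests left on this connection: just close it
            if not self.active:
                self.sock.close()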
It's my next task to implement the connection multiplexing feature in Nginx's
FastCGI module. I haven't looked at recent sources yet and I am not familiar
with Nginx architecture, so if you could give me some pointers on where I
could start, it would be great. Sure thing, anything I produce would be
available.
Actually, 2) is natural, since there is supposed to be a de-multiplexer on the
Nginx side anyway, and it should know where to dispatch each record received
from the backend.
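A sketch of such a de-multiplexer loop (record framing per the FastCGI spec;
the requests table and its handler are assumptions, and short reads are
glossed over):

    import struct

    def demux_records(sock, requests, aborted):
        """Dispatch records arriving on one shared connection by request id."""
        while True:
            header = sock.recv(8)
            if len(header) < 8:
                break  # connection closed
            version, rtype, request_id, length, padding, _ = \
                struct.unpack("!BBHHBB", header)
            body = sock.recv(length + padding)[:length]
            if request_id in aborted:
                continue  # dropping records for an aborted request is free here
            requests[request_id].handle(rtype, body)  # hand to the right client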
OK, it probably closes the connection to the backend server. Well, in the case
of multiplexed FastCGI, Nginx should do two things:
1) send FCGI_ABORT_REQUEST to the backend for the given request
2) start dropping records for the given request if it still receives records
from the backend for that request
On Jul 20, 2013, at 8:36 , momyc wrote:
>> The main issue with FastCGI connection multiplexing is lack of flow
>> control.
> Suppose a client stalls but a FastCGI backend continues to send data to it.
> At some point nginx should tell the backend to stop sending to the client,
> but the only way to do it is just to close all multiplexed connections.
> The main issue with FastCGI connection multiplexing is lack of flow
> control.
Suppose a client stalls but a FastCGI backend continues to send data to it.
At some point nginx should tell the backend to stop sending to the client,
but the only way to do it is just to close all multiplexed connections.
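Schematically, the head-of-line problem (a fragment reusing the event-loop
names from the proxy sketch above, not runnable on its own):

    # One multiplexed connection carries records for many request ids, so
    # the only lever is per-connection.  Pausing reads because request 7's
    # client stalled...
    sel.unregister(shared_backend_socket)
    # ...also freezes requests 3, 12 and 99 riding the same connection:
    # FastCGI defines no per-request window that nginx could close instead.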
On Jul 20, 2013, at 5:02 , momyc wrote:
> You clearly do not understand what the biggest FastCGI connection
> multiplexing advantage is. It makes it possible to use far fewer TCP
> connections (read: fewer ports). Each TCP connection requires a separate
> port, and a "local" TCP connection requires two ports.
On Fri, Jul 19, 2013 at 11:55 PM, momyc wrote:
> You clearly... err.
>
Hmmm?
>
> > 32K simultaneous active connections to the same service on a single
> > machine? I suspect the bottleneck is somewhere else...
>
> I don't know what exactly "service" means in the context of our
> conversation, but
You clearly... err.
> 32K simultaneous active connections to the same service on a single
> machine? I suspect the bottleneck is somewhere else...
I don't know what exactly "service" means in the context of our conversation,
but if that means "server" then I did not say that everything should be
handled b
Scenario 1:
With long-polling requests, each client uses only one port, since the same
connection is used continuously. HTTP being stateless, the loss of the
connection would mean potential loss of data.
32K simultaneous active connections to the same service on a single
machine? I suspect the bottleneck is somewhere else...
Funny thing is that the resistance to implementing that feature is so strong
that it feels like it's about breaking compatibility. In fact it is just about
implementing the protocol specification more completely, without any penalties
besides making some internal changes.
Many projects would kill for a 100% performance or scalability gain.
Another scenario: consider an application that takes a few seconds to process
a single request. In non-multiplexing mode we are still limited to roughly 32K
simultaneous requests, even though we could install enough backend servers to
handle 64K such requests per second.
Now, imagine we can use FastCGI connection multiplexing.
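The arithmetic behind that limit, as a back-of-the-envelope sketch (the
2-second latency is an assumed reading of "a few seconds"):

    # Little's law: concurrent requests = arrival rate * time per request.
    rate = 64_000          # requests/second the backend farm could handle
    latency = 2            # seconds of processing per request (assumed)
    print(rate * latency)  # 128000 requests in flight at any moment...
    # ...yet one port per request caps us near 32K, idling most of the farm.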
Consider a Comet application (aka long-polling Ajax requests). There is no
CPU load, since most of the time the application just waits for some event to
happen and nothing is being transmitted. Something like a chat or
stock-monitoring Web application used by thousands of users simultaneously.
Every reques
You clearly do not understand what the biggest FastCGI connection
multiplexing advantage is. It makes it possible to use far fewer TCP
connections (read: fewer ports). Each TCP connection requires a separate port,
and a "local" TCP connection requires two ports. Add the ports used by
browser-to-Web-server connections
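A rough sketch of the port arithmetic (the ephemeral range is the common
Linux default; the multiplexing factor is hypothetical):

    ephemeral_ports = 61000 - 32768   # ~28K usable outgoing ports per local IP
    requests_per_conn = 100           # hypothetical multiplexing factor
    print(ephemeral_ports)                      # 28232 without multiplexing
    print(ephemeral_ports * requests_per_conn)  # 2823200 with multiplexing
    # A loopback backend connection is worse still: both endpoints live on
    # the same machine, so it eats into the same pool at both ends.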
It has yet to be proven that C10K-related problems are caused by socket/port
exhaustion...
The common struggling points on a machine are in multiple other places: your
hard disks, RAM and processing capabilities will be overwhelmed long before
you run out of sockets and/or ports...
If you are tempted of u
On 19.07.2013, at 16:45, Jan Algermissen wrote:
> Hi,
>
> I am writing a handler that checks a request signature during the access
> phase.
>
> When there is URI rewriting, the URI the client used when signing does not
> match the URI the handler sees when checking the signature.
>
> Question: How can I access the original request URI during the access phase?
Hi,
I am looking for guidance on how best to configure the Nginx proxy cache in a
multi-disk-drive environment. Our typical server setup is such that each drive
is its own partition; for example, if we have a 10-drive server we may set up
drives 4-10 for storage such as:
/dev/sdd1 /nginx/cached
/de
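One common pattern for this, as an untested sketch: one proxy_cache_path per
disk, with split_clients sharding requests across the zones. Note that
proxy_cache only accepts a variable in nginx 1.7.9 and later; the paths and
zone names here are hypothetical.

    proxy_cache_path /nginx/cached/d4 keys_zone=disk4:100m;
    proxy_cache_path /nginx/cached/d5 keys_zone=disk5:100m;

    # Hash on the URI so a given resource always lands on the same disk.
    split_clients $request_uri $cache_disk {
        50%  disk4;
        *    disk5;
    }

    server {
        location / {
            proxy_pass  http://backend;
            proxy_cache $cache_disk;
        }
    }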
On Fri, Jul 19, 2013 at 10:55:57AM -0400, David | StyleFlare wrote:
Hi there,
> I know this may not be safe, but how can I set the hostname in the root
> directive
> i.e. root /www/$hostname/static;
By using a variable, just like you've done there.
Two things you need to decide:
what exact variable to use
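For what it's worth, the similarly-named variables differ: $hostname is the
name of the machine nginx runs on, while $host is derived from the request,
which is usually what a per-site root wants. A sketch (paths hypothetical):

    location /static {
        # $host comes from the request line / Host header; $hostname would
        # be the server machine's own name, the same for every request.
        root /www/$host/static;
    }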
Hi,
I am writing a handler that checks a request signature during the access phase.
When there is URI rewriting, the URI the client used when signing does not
match the URI the handler sees when checking the signature.
Question: How can I access the original request URI during the access phase?
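A hedged pointer on that question: nginx keeps the original request line
separate from the rewritten URI. In config terms $request_uri is untouched by
rewrites (unlike $uri), and a C handler can read the same data from
r->unparsed_uri. For example, to expose it while debugging (the header name
is made up):

    location / {
        # $uri reflects rewrites; $request_uri is what the client actually
        # signed, so it is the one a signature check should see.
        add_header X-Original-URI $request_uri;
    }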
I know this may not be safe, but how can I set the hostname in the root
directive, i.e.:

location /static {
    root /www/$hostname/static;
}
Hello!
On Wed, Jul 17, 2013 at 06:18:03PM +0200, Kate F wrote:
> On 15 July 2013 19:32, Maxim Dounin wrote:
> > Hello!
> >
> > On Sat, Jul 13, 2013 at 12:19:51PM +0200, Kate F wrote:
> >
> >> Hi,
> >>
> >> I'm trying to use EXSLT with nginx's xslt filter
> >> module. The effect I think I'm se
Hello!
On Thu, Jul 18, 2013 at 08:54:48PM -0400, feanorknd wrote:
> Thanks thanks so much!!! :D
>
> I even saw that ticket before posting, but I figured it was not the
> problem, just because I use XFS for my nginx caches on 4 servers without
> this problem, and also I did test changing th
MISS means the resource was not found in the cache.
By the way, do you see any requests getting cached / is your caching dir
filling up, or do you see 100% MISS?
Maybe: http://forum.nginx.org/read.php?11,163400,163695
Do you use Cache-Control headers?
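One quick way to see per-request cache status instead of guessing
($upstream_cache_status is a standard nginx variable; the header name is
arbitrary):

    add_header X-Cache-Status $upstream_cache_status;  # MISS / HIT / EXPIRED ...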