Wanted to follow up to say that you were spot on.
The file was ~800MB and nginx was indeed buffering the whole thing. We could
see the upstream serve the file to nginx, and then nginx continued serving it
to the client.
We were able to show that it was indeed doing least_conn by downloading a
file that was mult…
Oh. Ok, good to know about the default temp file and buffers.
Just checked, and I think the 'large' file we are downloading is 800MB.
We don't have proxy_cache or proxy_store set. We do have
proxy_temp_file_write_size 250m;
We ended up doing a test where 9 of those large files were all on server1, …
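For reference, a hedged sketch of the buffering directives in play here: proxy_temp_file_write_size only limits how much nginx writes to a temp file in one go; the total amount spooled to disk per request is capped by proxy_max_temp_file_size (default 1024m), which is why an ~800MB response can still be buffered in full. Values other than the 250m quoted above are illustrative defaults:

```nginx
# Illustrative sketch only; these directives live in http/server/location
# context. Only the 250m value comes from the thread.
proxy_buffering            on;     # default: responses are buffered
proxy_buffers              8 4k;   # small in-memory buffers (default is platform-dependent)
proxy_temp_file_write_size 250m;   # max written to the temp file per write
proxy_max_temp_file_size   1024m;  # default cap on total spooled to disk per request
```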
Hello!
On Wed, Dec 23, 2020 at 04:42:49PM -0500, Kenneth Brooks wrote:
> We did think that perhaps it was buffering.
> However, in our case, the "large" request is gigs in size, so there is no
> way that it is buffering that whole thing. I think our buffers are pretty
> small.
> Unless there is s…
Thanks for the detailed response and taking the time to show some test
examples.
We did think that perhaps it was buffering.
However, in our case, the "large" request is gigs in size, so there is no
way that it is buffering that whole thing. I think our buffers are pretty
small.
Unless there is so…
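For what it's worth, the behavior being debated above usually comes down to temp-file spooling rather than the in-memory buffers: with proxy_buffering on (the default), nginx will spool a large response to disk up to proxy_max_temp_file_size (default 1024m), reading from the upstream as fast as it can. That frees the upstream connection long before the client finishes downloading, so least_conn sees almost no active upstream connections. A sketch, with placeholder names, of two ways to keep the upstream connection open for the duration of the client download:

```nginx
# Sketch only; "backends" is a placeholder upstream name.
location / {
    proxy_pass http://backends;

    # Option 1: disable temp-file spooling; nginx then reads from the
    # upstream only as fast as the client drains the in-memory buffers.
    proxy_max_temp_file_size 0;

    # Option 2 (stronger): disable response buffering entirely.
    # proxy_buffering off;
}
```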
Hello!
On Wed, Dec 23, 2020 at 12:36:16PM -0500, kenneth.s.brooks wrote:
> Thanks for the response.
>
> I understand what you are saying about the worker processes. We have only a
> single worker process.
>
> I have 2 upstream servers.
>
> To validate:
> I am sending a request for a LARGE file
From a shell on your nginx host you can run something like
'netstat -ant | egrep ESTAB' to see all the open TCP connections. If you run
that command under 'watch', it will refresh every two seconds by default.
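The netstat suggestion above can be sketched as follows; the sample lines and addresses are fabricated stand-ins for the two upstream servers, and on a real host you would pipe actual `netstat -ant` output instead:

```shell
# Fabricated netstat-style lines; column 5 is the foreign (upstream)
# address in classic netstat output.
netstat_sample='tcp 0 0 10.0.0.1:51000 10.0.1.10:8080 ESTABLISHED
tcp 0 0 10.0.0.1:51002 10.0.1.11:8080 ESTABLISHED
tcp 0 0 10.0.0.1:51004 10.0.1.10:8080 ESTABLISHED'

# Tally ESTABLISHED connections per upstream address.
per_upstream=$(printf '%s\n' "$netstat_sample" \
  | awk '/ESTABLISHED/ { split($5, a, ":"); n[a[1]]++ } END { for (ip in n) print ip, n[ip] }' \
  | sort)
echo "$per_upstream"
```

On a live host the same idea runs continuously with: watch -n 2 'netstat -ant | egrep ESTAB'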
FWIW, a long time ago I did a bunch of experiments with different load
balancer str…
Perhaps another question that might help me debug it. Is there a way to see
active connection counts to upstream servers? I have the status endpoint
enabled, but that just shows me total active connections for the worker
process as a whole, correct?
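For reference: if the status endpoint mentioned here is nginx's stub_status module, then yes, it only exposes worker-wide totals (active connections, reading/writing/waiting), not per-upstream counts. A common workaround is logging the built-in $upstream_addr variable per request; the format name and log path below are made up:

```nginx
# Placeholder format name and path. $upstream_addr records which backend
# served each proxied request; tally the log to see per-server counts.
log_format upstream_dbg '$remote_addr "$request" -> $upstream_addr '
                        'urt=$upstream_response_time';
access_log /var/log/nginx/upstream_dbg.log upstream_dbg;
```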
Thanks for the response.
I understand what you are saying about the worker processes. We have only a
single worker process.
I have 2 upstream servers.
To validate:
I am sending a request for a LARGE file. I see it hit server1. Server1 is
now serving that request for the next couple of minutes.
I…
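If each request is logged with nginx's $upstream_addr variable, the distribution of requests between the two servers during a test like the one described above can be tallied offline. The log lines below are fabricated samples in a made-up format whose last field stands in for $upstream_addr:

```shell
# Fabricated log lines; the last field stands in for $upstream_addr.
log_sample='GET /large.bin -> 10.0.1.10:8080
GET /small-1.txt -> 10.0.1.11:8080
GET /small-2.txt -> 10.0.1.11:8080'

# Count requests per upstream address (last field of each line).
dist=$(printf '%s\n' "$log_sample" \
  | awk '{ n[$NF]++ } END { for (s in n) print s, n[s] }' \
  | sort)
echo "$dist"
```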
Hello!
On Wed, Dec 23, 2020 at 10:27:44AM -0500, Kenneth Brooks wrote:
> I have a fully working config doing loadbalancing over 2 upstream servers.
> I want to use "least_conn"
>
> When I put least_conn in, it still is doing round robin.
> I can confirm that other configs like "weight" and "ip_hash" are working as
> expected.
> Is there some other configuration/setting that also…
Small update: Moved to 1.18.0 and still seeing the same results.
Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,290285,290287#msg-290287
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
I have a fully working config doing loadbalancing over 2 upstream servers.
I want to use "least_conn"
When I put least_conn in, it still is doing round robin.
I can confirm that other configs like "weight" and "ip_hash" are working as
expected.
Is there some other configuration/setting that also…
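For context, the usual placement of the directive being asked about, with placeholder server names:

```nginx
# least_conn goes inside the upstream block, alongside the server lines.
# Names/addresses are placeholders.
upstream app_servers {
    least_conn;
    server srv1.example.com;
    server srv2.example.com;
}
```

Note that when the active connection counts are tied (e.g. all requests are short), nginx falls back to weighted round-robin among the least-loaded servers, so with two servers and quick requests least_conn is visually indistinguishable from round robin.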