Re: Why does nginx always send content encoding as gzip

2016-10-05 Thread c0nw0nk
You should check your application; it sounds like it is compressing its pages. A simple test: create an empty html file, serve it from a location, and check the headers. location = /test.html { root "path/to/html/file"; } If the headers on that have no gzip compression as set in your ngi
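The test described above can be sketched as a config fragment (the path is a placeholder; the idea is to serve a static file directly from nginx, bypassing the application):

```nginx
# If the response for /test.html carries no "Content-Encoding: gzip" header,
# the compression seen elsewhere is coming from the upstream application,
# not from nginx itself.
location = /test.html {
    root /path/to/html;   # directory containing an empty test.html
}
```

Then request it with `curl -I -H 'Accept-Encoding: gzip' http://host/test.html` and compare the headers against a page generated by the application.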

RE: Uneven High Load on the Nginx Server

2016-10-05 Thread Reinis Rozitis
> Actually, its not the case that More number of Clients are trying to get the > content from One of Server as Server Throughput shows equal load on all > interfaces of Server which is around 4 Gbps. This contradicts a bit (or perhaps I understood it differently) what you wrote previously. But what I g

Re: proxying to upstream port based on scheme

2016-10-05 Thread Edho Arief
Hi, On Thu, Oct 6, 2016, at 00:32, Valentin V. Bartenev wrote: > On Wednesday 05 October 2016 18:34:06 Anoop Alias wrote: > > I have an httpd upstream server that listen on both http and https at > > different port and want to send all http=>http_upstream and https => > > https_upstream > > > >

Re: proxying to upstream port based on scheme

2016-10-05 Thread Valentin V. Bartenev
On Wednesday 05 October 2016 18:34:06 Anoop Alias wrote: > I have an httpd upstream server that listen on both http and https at > different port and want to send all http=>http_upstream and https => > https_upstream > > The following does the trick > > # > if ( $scheme = http

Re: 400 bad request for http m-post method

2016-10-05 Thread Maxim Dounin
Hello! On Wed, Oct 05, 2016 at 10:40:05AM -0400, nixcoder wrote: > Hi, > I'm getting the below error in nginx reverse proxy server. It seems the > proxy server does not recognize the http method: "M-POST" ? Is there a way i > can allow these incoming requests ? > > nginx.1| .xxx.xxx 10.x

400 bad request for http m-post method

2016-10-05 Thread nixcoder
Hi, I'm getting the below error in the nginx reverse proxy server. It seems the proxy server does not recognize the HTTP method "M-POST". Is there a way I can allow these incoming requests? nginx.1| .xxx.xxx 10.x.xx.x - - [05/Oct/2016:10:31:57 +] "M-POST /cimom HTTP/1.1" 400 166 "-" "-"

Re: Uneven High Load on the Nginx Server

2016-10-05 Thread anish10dec
Actually, it's not the case that more clients are trying to get the content from one of the servers, as server throughput shows equal load on all interfaces of each server, which is around 4 Gbps. So should I expect Writing to increase with a higher number of Active connections? Is it so that Nginx is no

Re: nginx worker process exited on signal 7

2016-10-05 Thread smaig
Thanks Maxim, we'll try this. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270043,270084#msg-270084 ___ nginx mailing list nginx@nginx.org http://mailman.nginx.org/mailman/listinfo/nginx

Re: Nginx won't cache woff

2016-10-05 Thread Francis Daly
On Wed, Oct 05, 2016 at 10:39:36AM +0200, Brent Clark wrote: Hi there, > Im struggling to get nginx to cache woff and woff2 file. > > It would appear its the particular wordpress theme is set to not cache. > But I would like to override that. > bclark@bclark:~$ curl -I > http://$REMOVEDDOMAIN/

Re: Uneven High Load on the Nginx Server

2016-10-05 Thread Reinis Rozitis
Load is distributed on the basis of URI, with the parameter set in the haproxy config as "balance uri". This has been done to achieve the maximum cache hit rate from the server. While the cache might be more efficient this way, this can lead to one server always serving some "hot" content while others stay idle.

proxying to upstream port based on scheme

2016-10-05 Thread Anoop Alias
I have an httpd upstream server that listens on both http and https at different ports, and I want to send all http => http_upstream and https => https_upstream. The following does the trick: # if ( $scheme = https ) { set $port 4430; } if ( $scheme = http ) { set $port ; }
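The reply in this thread is truncated, but a common alternative to per-request `if` blocks is a `map` in the `http` context. A minimal sketch, assuming illustrative port numbers (the original message elides the http port):

```nginx
# map is evaluated lazily per request and avoids the pitfalls of "if";
# 8080 and 4430 are placeholder upstream ports.
map $scheme $upstream_port {
    http   8080;
    https  4430;
}

server {
    listen 80;
    listen 443 ssl;

    location / {
        proxy_pass http://127.0.0.1:$upstream_port;
    }
}
```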

Re: Uneven High Load on the Nginx Server

2016-10-05 Thread anish10dec
We are using Haproxy to distribute the load on the servers. Load is distributed on the basis of URI, with the parameter set in the haproxy config as "balance uri". This has been done to achieve the maximum cache hit rate from the server. Is the high number of Writing connections leading to an increase in response time for deli
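The "balance uri" setup described above can be sketched as a haproxy backend fragment (server names and addresses are placeholders, not from the thread):

```
backend nginx_cache
    balance uri            # hash the request URI so the same URI always
                           # lands on the same cache server
    hash-type consistent   # reduces cache remapping when a server goes down
    server cache1 10.0.0.1:80 check
    server cache2 10.0.0.2:80 check
    server cache3 10.0.0.3:80 check
```

Note that URI hashing trades even load distribution for cache locality: a few very popular URIs will concentrate traffic on whichever server they hash to, which matches the uneven "Writing" counts reported in this thread.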

Re: Uneven High Load on the Nginx Server

2016-10-05 Thread Reinis Rozitis
On some of the servers, Waiting is increasing in an uneven way: with a set of 3 servers, Active connections on all of them are around 6K, and Writing on two of the servers is around 500-600 while on the third it's 3000; on stopping Nginx, the same behaviour shifts to the other two. Wha

Re: Uneven High Load on the Nginx Server

2016-10-05 Thread anish10dec
On some of the servers, Waiting is increasing in an uneven way: with a set of 3 servers, Active connections on all of them are around 6K, and Writing on two of the servers is around 500-600 while on the third it's 3000. On this server the response time for delivering content is increasing. This happens

Re: Use individual upstream server name as host header

2016-10-05 Thread Ruslan Ermilov
On Wed, Oct 05, 2016 at 11:07:04AM +, Cox, Eric S wrote: > Is anyone aware of a way to pass the upstream server name as the host header > per individual server instead of setting it at the location level for all the > upstream members? Without using a lua script that is. This is currently impo

Use individual upstream server name as host header

2016-10-05 Thread Cox, Eric S
Is anyone aware of a way to pass the upstream server name as the Host header per individual server, instead of setting it at the location level for all the upstream members? Without using a lua script, that is. Thanks

Re: Nginx old worker process not exiting on reload

2016-10-05 Thread Philip Walenta
The only thing I ever experienced that would hold an old worker process open after a restart (in my case a config reload) was websocket connections. Sent from my iPhone > On Oct 5, 2016, at 5:59 AM, Sharan J wrote: > > Hi, > > While reloading nginx, sometimes old worker process are not exiting

Nginx old worker process not exiting on reload

2016-10-05 Thread Sharan J
Hi, While reloading nginx, sometimes old worker processes do not exit, thereby entering an "uninterruptible sleep" state. Is there a way to kill such abandoned worker processes? How can this be avoided? We are using nginx-1.10.1 Thanks, Santhakumari.V _
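Worth noting for later readers: nginx 1.11.11 (released after this thread, which uses 1.10.1) added a directive for exactly this situation. A minimal fragment for the main configuration context:

```nginx
# After a reload, force-close any connections still held by old worker
# processes once the grace period expires, so they can exit.
# Requires nginx >= 1.11.11.
worker_shutdown_timeout 30s;
```

On 1.10.x, the usual workaround is to identify and close the long-lived connections (often websockets, as the reply above suggests) or to send the old workers a TERM signal manually.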

Why does nginx always send content encoding as gzip

2016-10-05 Thread sobuz
I have set gzip to off: user nginx; worker_processes 1; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remot
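The question is truncated, but per the reply in this thread the compression usually comes from the proxied application rather than from nginx. A hedged sketch of one way to verify and stop that, assuming nginx proxies to a local application (address is illustrative):

```nginx
# With gzip off in nginx, a "Content-Encoding: gzip" response means the
# upstream compressed it. Clearing Accept-Encoding on the proxied request
# asks the upstream not to compress at all.
location / {
    proxy_set_header Accept-Encoding "";
    proxy_pass http://127.0.0.1:8080;
}
```

Clearing Accept-Encoding increases upstream bandwidth, so this is mainly a diagnostic step or a prerequisite for letting nginx handle compression itself.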

Nginx won't cache woff

2016-10-05 Thread Brent Clark
Good day guys, I'm struggling to get nginx to cache woff and woff2 files. It would appear the particular WordPress theme is set to not cache, but I would like to override that. Nothing I seem to do works. If someone could please review my work it would be appreciated. bclark@bclark:~$ curl
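The message is truncated, but for the described case (an upstream theme sending no-cache headers that nginx's proxy cache honours) one common override is a sketch like the following. The cache zone name, upstream name, and validity times are placeholders, not taken from the thread:

```nginx
# Ignore the upstream's anti-caching headers for font files only, and
# force a cache validity. Requires a proxy_cache_path zone named "my_zone"
# and an upstream named "backend" defined elsewhere (both hypothetical).
location ~* \.(woff|woff2)$ {
    proxy_ignore_headers Cache-Control Expires Set-Cookie;
    proxy_cache       my_zone;
    proxy_cache_valid 200 30d;
    expires           30d;          # also send a long client-side expiry
    proxy_pass        http://backend;
}
```

Note that `proxy_ignore_headers` must appear before the cache decision is made for these requests; putting it in the matching location is the usual placement.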