You should check your application; it sounds like that is what is compressing
its pages.
A simple test is this: create an empty HTML file, serve it from a location,
and check the headers.
location = /test.html {
    root /path/to/html/dir;   # root takes the directory containing test.html, not the file itself
}
If the headers on that response show no gzip compression, as set in your
nginx config, then nginx is not the one doing the compressing.
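A minimal sketch of such a test setup, with nginx compression explicitly
ruled out (the port and paths here are assumptions, not from the original
message):

server {
    listen 8081;                      # hypothetical test port
    location = /test.html {
        root /usr/share/nginx/html;   # hypothetical directory holding the empty test.html
        gzip off;                     # nginx itself cannot compress this response now
    }
}

If the response for /test.html carries no "Content-Encoding: gzip" header
while your application's pages do, the application is doing the compressing.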
> Actually, it's not the case that more clients are trying to get the
> content from one of the servers, as server throughput shows equal load on
> all interfaces of the server, which is around 4 Gbps.
This somewhat contradicts what you wrote previously (or I understood it
differently).
But what I g…
Hi,
On Thu, Oct 6, 2016, at 00:32, Valentin V. Bartenev wrote:
> On Wednesday 05 October 2016 18:34:06 Anoop Alias wrote:
> > I have an httpd upstream server that listens on both HTTP and HTTPS at
> > different ports, and I want to send all http => http_upstream and
> > https => https_upstream
On Wednesday 05 October 2016 18:34:06 Anoop Alias wrote:
> I have an httpd upstream server that listens on both HTTP and HTTPS at
> different ports, and I want to send all http => http_upstream and
> https => https_upstream
>
> The following does the trick
>
> #
> if ( $scheme = http …
Hello!
On Wed, Oct 05, 2016 at 10:40:05AM -0400, nixcoder wrote:
> Hi,
> I'm getting the below error in the nginx reverse proxy server. It seems
> the proxy server does not recognize the HTTP method "M-POST". Is there a
> way I can allow these incoming requests?
>
> nginx.1| .xxx.xxx 10.x…
Hi,
I'm getting the below error in the nginx reverse proxy server. It seems the
proxy server does not recognize the HTTP method "M-POST". Is there a way I
can allow these incoming requests?
nginx.1| .xxx.xxx 10.x.xx.x - - [05/Oct/2016:10:31:57 +] "M-POST
/cimom HTTP/1.1" 400 166 "-" "-"
Actually, it's not the case that more clients are trying to get the content
from one of the servers, as server throughput shows equal load on all
interfaces of the server, which is around 4 Gbps.
So should I expect Writing to increase with a higher number of active
connections?
Is it so that nginx is no…
Thanks Maxim, we'll try this.
On Wed, Oct 05, 2016 at 10:39:36AM +0200, Brent Clark wrote:
Hi there,
> I'm struggling to get nginx to cache woff and woff2 files.
>
> It would appear the particular WordPress theme is set to not cache,
> but I would like to override that.
> bclark@bclark:~$ curl -I
> http://$REMOVEDDOMAIN/
Load is distributed on the basis of URI, with the parameter set in the
haproxy config as "balance uri".
This has been done to achieve the maximum cache hit rate from the servers.
While the cache might be more efficient this way, it can lead to one server
always serving some "hot" content while the others stay idle.
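For comparison, a sketch of what URI-based balancing looks like on the nginx
side (the upstream name and addresses are hypothetical):

upstream cache_pool {
    hash $request_uri consistent;   # the same URI always maps to the same server
    server 10.0.0.1;
    server 10.0.0.2;
    server 10.0.0.3;
}

With "consistent", removing one server only remaps the URIs that hashed to
it, which limits the cache-miss storm when a server leaves the pool.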
I have an httpd upstream server that listens on both HTTP and HTTPS at
different ports, and I want to send all http => http_upstream and
https => https_upstream
The following does the trick
#
if ( $scheme = https ) {
    set $port 4430;
}
if ( $scheme = http ) {
    set $port 8080;   # the value was lost in the original message; 8080 is a hypothetical stand-in
}
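The same routing can be sketched without "if", using a map (the backend
address and ports below are assumptions, not from the original post):

map $scheme $upstream_port {
    https   4430;
    default 8080;        # assumed plain-HTTP backend port
}

server {
    listen 80;
    # plus "listen 443 ssl;" and certificate directives for the HTTPS side

    location / {
        # with a variable in proxy_pass, an IP literal avoids needing a resolver
        proxy_pass http://127.0.0.1:$upstream_port;
    }
}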
We are using haproxy to distribute the load on the servers.
Load is distributed on the basis of URI, with the parameter set in the
haproxy config as "balance uri".
This has been done to achieve the maximum cache hit rate from the servers.
Does the high number of Writing connections lead to an increase in response
time for delivering the content?
On some of the servers, Waiting is increasing in an uneven way: if we have a
set of 3 servers, Active connections is around 6K on all of them, while
Writing is around 500-600 on two of the servers and around 3000 on the third.
On stopping nginx there, the same behaviour shifts to the other two.
Wha…
On some of the servers, Waiting is increasing in an uneven way: if we have a
set of 3 servers, Active connections is around 6K on all of them, while
Writing is around 500-600 on two of the servers and around 3000 on the
third. On this server the response time for delivering the content is
increasing.
This happens…
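The Active/Writing/Waiting counters being compared here are the ones
reported by nginx's stub_status module; a minimal sketch of exposing them
(port and location are assumptions):

server {
    listen 8080;            # hypothetical monitoring port
    location = /status {
        stub_status;        # reports Active connections, accepts/handled/requests, Reading/Writing/Waiting
        allow 127.0.0.1;    # keep the counters private
        deny all;
    }
}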
On Wed, Oct 05, 2016 at 11:07:04AM +, Cox, Eric S wrote:
> Is anyone aware of a way to pass the upstream server name as the Host
> header per individual server, instead of setting it at the location level
> for all the upstream members? Without using a Lua script, that is.
This is currently impossible…
Is anyone aware of a way to pass the upstream server name as the Host header
per individual server, instead of setting it at the location level for all
the upstream members? Without using a Lua script, that is.
Thanks
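For reference, a sketch of the location-level approach the poster wants to
avoid (all names hypothetical): the Host header is fixed once for every
member of the upstream.

upstream pool {
    server app1.example.com;
    server app2.example.com;
}

server {
    listen 80;
    location / {
        proxy_pass http://pool;
        proxy_set_header Host app1.example.com;   # one fixed Host for all members
    }
}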
The only thing I ever experienced that would hold an old worker process open
after a restart (in my case a config reload) was websocket connections.
Sent from my iPhone
> On Oct 5, 2016, at 5:59 AM, Sharan J wrote:
>
> Hi,
>
> While reloading nginx, sometimes old worker processes are not exiting
Hi,
While reloading nginx, sometimes old worker processes are not exiting and
thereby enter an "uninterruptible sleep" state. Is there a way to kill such
abandoned worker processes? How can this be avoided?
We are using nginx-1.10.1.
Thanks,
Santhakumari.V
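For reference, later nginx releases (1.11.11 and newer, so not the 1.10.1
mentioned above) added a directive aimed at exactly this situation; a sketch
for the main context of nginx.conf:

worker_shutdown_timeout 30s;   # after a reload, old workers get 30s to finish, then lingering connections are closed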
I have set gzip to off
user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    …
Good day, guys.
I'm struggling to get nginx to cache woff and woff2 files.
It would appear the particular WordPress theme is set to not cache,
but I would like to override that.
Nothing I seem to do works.
If someone could please review my work, it would be appreciated.
bclark@bclark:~$ curl …
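A hedged sketch of overriding the theme's anti-caching headers at the nginx
level (the upstream name and cache zone are assumptions, and a matching
proxy_cache_path must already be defined):

location ~* \.(woff|woff2)$ {
    proxy_pass http://backend;                   # assumed upstream
    proxy_cache fonts_cache;                     # assumed zone from proxy_cache_path
    proxy_ignore_headers Cache-Control Expires;  # ignore the app's no-cache headers when caching
    proxy_hide_header Cache-Control;             # and don't pass them on to clients
    proxy_cache_valid 200 30d;
    expires 30d;                                 # let browsers cache the fonts too
}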