I'm somewhat unclear about how the keepalive functionality works in the
upstream module. My nginx installation currently handles several hundred
domains, all of which point to different origin servers. I imagine I could
improve performance by enabling keepalive; however, the documentation says
"Th
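For reference, enabling upstream keepalive takes more than the keepalive
directive alone; a minimal sketch, with hypothetical upstream and host names,
might look like this:

```nginx
# Sketch only; "backend" and the origin/server names are placeholders.
upstream backend {
    server origin1.example.com:80;
    keepalive 16;    # idle keepalive connections cached per worker process
}

server {
    listen 80;
    server_name site1.example.com;

    location / {
        proxy_pass http://backend;
        # HTTP/1.1 and an empty Connection header are required for
        # connections to the upstream to be kept alive.
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
```

Note that the keepalive directive applies per named upstream{} block, so with
several hundred distinct origins each origin would need its own upstream{}
block to benefit from connection caching.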
On 07/04/2014 16:45, Steve Wilson wrote:
> A quick read at
> http://dev.mysql.com/doc/refman/4.1/en/innodb-parameters.html#sysvar_innodb_flush_log_at_trx_commit
> suggests there's a possibility of losing one second's worth of data. I'm not sure
> if we'd still have a problem with this now that we've moved page ca
Thanks!
I resolved my misconfiguration.
All it took was moving my include to the end of the file.
2014-04-07 12:13 GMT-05:00 Maxim Dounin :
> Hello!
>
> On Mon, Apr 07, 2014 at 11:17:40AM -0500, Raul Hugo wrote:
>
> > Hey Maxim, thanks for your answer.
> >
> > On my /etc/nginx/nginx.conf I put thi
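For readers hitting the same problem, the fix described above can be sketched
as follows; the paths come from the thread, and the point is that the vhost
files referencing the zone are included after the zone is declared:

```nginx
http {
    # Declare the shared memory zone first, once, at http{} level ...
    limit_conn_zone $binary_remote_addr zone=one:63m;

    # ... and only then include the per-project vhost files that
    # reference the zone with "limit_conn one 10;".
    include /etc/nginx/vhost.d/*.conf;
}
```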
Hello!
On Mon, Apr 07, 2014 at 11:17:40AM -0500, Raul Hugo wrote:
> Hey Maxim, thanks for your answer.
>
> In my /etc/nginx/nginx.conf I put this:
>
> limit_conn_zone $binary_remote_addr zone=one:63m;
>
> And in my project's .conf, located at /etc/nginx/vhost.d/myproject.conf,
>
> I put this:
So, does anyone know how to edit the SYSTEM account's privileges? If not, I
have a way around it anyway.
Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,249008,249082#msg-249082
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailma
Hey Maxim, thanks for your answer.
In my /etc/nginx/nginx.conf I put this:
limit_conn_zone $binary_remote_addr zone=one:63m;
And in my project's .conf, located at /etc/nginx/vhost.d/myproject.conf,
I put this in the server configuration:
location / {
limit_conn one 10;
}
Nginx r
Hello!
On Mon, Apr 07, 2014 at 09:51:31AM -0500, Raul Hugo wrote:
> What am I doing wrong here?
>
> http {
> limit_conn_zone $binary_remote_addr zone=one:63m;
>
> server {
> location /downloads/ {
> limit_conn one 10;
> }
>
> [root@b
A quick read at
http://dev.mysql.com/doc/refman/4.1/en/innodb-parameters.html#sysvar_innodb_flush_log_at_trx_commit
[2]
suggests there's a possibility of losing one second's worth of data. I'm not sure
if we'd still have a problem with this now that we've moved page caching to
memcache as that was causing a lo
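For context on the trade-off being discussed, the setting takes three values;
a hedged summary of the MySQL documentation, as a my.cnf fragment:

```ini
[mysqld]
# 1 (default): write and flush the InnoDB log to disk at every commit;
#              fully durable, as required for ACID compliance.
# 2: write the log at every commit, but flush to disk only about once
#    per second; an OS crash or power loss can lose ~1s of transactions.
# 0: write and flush only about once per second; even a mysqld crash
#    can lose ~1s of transactions.
innodb_flush_log_at_trx_commit = 1
```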
Thanks, Steve, for your update. We are using a separate MySQL server, and on
it innodb_flush_log_at_trx_commit = 1. This site runs money-transaction
applications, so is it safe to change this option?
Also, to this MySQL server, other servers with the default nginx and PHP5-fpm
configuration are con
I've just built a Drupal 7 site under nginx + php-fpm on Debian.
One thing I noticed was that the PHP processes weren't closing fast enough;
this was tracked down to an issue with MySQL. Connections were sitting
idle for a long time, which basically exhausted the fpm workers on both
of the web servers.
What am I doing wrong here?
http {
limit_conn_zone $binary_remote_addr zone=one:63m;
server {
location /downloads/ {
limit_conn one 10;
}
[root@batman1 ~]# service nginx configtest
nginx: [emerg] the size 66060288 of shared memory zo
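The [emerg] message above is cut off, but one common cause of "the size ...
of shared memory zone" errors is declaring the same zone more than once with
different sizes (for instance once in nginx.conf and again in an included
vhost file). A minimal working layout, as a sketch:

```nginx
http {
    # Declare the zone exactly once, at http{} level.
    limit_conn_zone $binary_remote_addr zone=one:63m;

    server {
        location /downloads/ {
            # Allow at most 10 concurrent connections per client IP.
            limit_conn one 10;
        }
    }
}
```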
Nginx is proxying requests to my custom TCP server. I have my own proxy
handler to create the right request format, process headers, etc.
The trouble started when I began using the keepalive handler. I have to add
custom protocol header bytes for every new keepalive connection and skip the
he
We are facing a strange issue on our servers. We have servers with 1 GB of
RAM, and some Drupal sites are running on them.
Generally all the sites load fine, but sometimes we are unable to access any
of them. After waiting for ten minutes we get a 502 gateway timeout
error. In the middle, when we restart
Hello!
On Mon, Apr 07, 2014 at 07:38:00AM -0400, zajca wrote:
> I'm trying to get nginx 1.4.7 working with Node.js websockets,
> but I'm getting a 502 Bad Gateway.
>
> NGINX Error:
> [error] 2394#0: *1 upstream prematurely closed connection while reading
> response header from upstream, client: 127.0.0
I'm trying to get nginx 1.4.7 working with Node.js websockets,
but I'm getting a 502 Bad Gateway.
NGINX Error:
[error] 2394#0: *1 upstream prematurely closed connection while reading
response header from upstream, client: 127.0.0.1, server: xxx.cz, request:
"GET / HTTP/1.1", upstream: "http://127.0.0.1:8
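A 502 with "upstream prematurely closed connection" on a WebSocket endpoint
often means the Upgrade handshake never reached the backend. The configuration
the nginx documentation gives for WebSocket proxying, sketched here with the
backend port as an assumption (the thread's upstream address is truncated):

```nginx
location / {
    proxy_pass http://127.0.0.1:8080;   # assumed Node.js backend port

    # WebSocket handshake: speak HTTP/1.1 to the upstream and forward
    # the hop-by-hop Upgrade/Connection headers.
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";

    # Optional: keep long-lived connections from being closed after the
    # default 60s of read inactivity.
    proxy_read_timeout 3600s;
}
```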
c0nw0nk Wrote:
> It's an interesting issue; maybe the SYSTEM user group in Windows does
> not have access to the mapped hard drives?
This is the default behavior, e.g.:
http://stackoverflow.com/questions/13178892/access-file-from-shared-folder-from-windows-service
http://stackoverflow.com/questions/659013