Hi,
I am using a single worker process and master_process off in nginx.conf.
As per my understanding, the flow of operation would be something like this:
the nginx master process will be created, which will spawn a single
worker process using fork, and then the master process will be killed.
Is that correct?
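For reference, a minimal nginx.conf sketch of the setup described above (the listener value is illustrative). As far as the nginx documentation goes, master_process off means worker processes are not started at all and the one remaining process serves requests itself; the directive is intended for debugging:

    # minimal sketch; values are illustrative
    master_process   off;   # no worker is forked; the single process handles requests itself
    worker_processes 1;

    events {
        worker_connections 1024;
    }

    http {
        server {
            listen 8080;   # hypothetical listener
        }
    }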
Sorry, my server is a Windows server:
Windows + nginx 1.7.10 + Tomcat.
The OpenSSL 1.0.2 update has been completed.
Hello!
What Linux distribution do you use?
On el6 I use openssl-1.0.1e-30.el6
On el7 I use openssl-1.0.1e-42.el7.4.x86_64
https://kb.iweb.com/entries/90860777-Security-vulnerabilities-in-OpenSSL-FREAK-CVE-2015-0204-and-more
Red Hat using their o
Same error.
The site is vulnerable to the SSL FREAK attack.
Is the OpenSSL version the problem?
The OpenSSL version is 1.0.2.
What's the problem?
It is true that the best place to rate limit is the sender. But in the event
where the sender is a myriad of different browsers, that isn't an option.
There is no control at the POST level to throttle an upload, and there isn't
really any good firewall tool for traffic shaping incoming data per TCP
stream e
Hello NGINX Community,
- It is my understanding that 499 is a client-side response code indicating the
remote user closed the connection prematurely, without finishing the
transaction.
- I have a production environment which reports that 0.3%-0.5% of total
traffic ends with this 499 response code
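Not part of the original post, but as a hedged sketch for investigating this: the map module plus the conditional access_log parameter (available since nginx 1.7.0) can log only the 499 requests to a separate file for closer inspection. The log path and variable name here are made up:

    http {
        # flag requests whose final status was 499 (client closed the connection)
        map $status $is_499 {
            499      1;
            default  0;
        }

        server {
            listen 80;
            # write only the 499 requests to a dedicated log
            access_log /var/log/nginx/client-closed.log combined if=$is_499;
        }
    }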
On Sunday 12 April 2015 14:41:37 numroo wrote:
> Hello
>
> I'm running Nginx installed from the nginx.org repos on an Ubuntu Server
> 14.04.
> There are about a dozen different sites running on this server, mostly using
> PHP-FPM backend.
>
> Since the update to 1.7.12 I have had frequent core dumps (e
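The message is cut off above, but as a general (hedged) aid for debugging worker crashes, nginx can be told to write core files via two main-context directives; the directory below is an assumption and must be writable by the worker user:

    # main (top-level) context of nginx.conf
    worker_rlimit_core  500m;            # allow core files up to ~500 MB
    working_directory   /var/tmp/cores;  # hypothetical directory where cores are dumped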
On Mon, Apr 13, 2015 at 09:13:22AM +0200, Abdelouahed Haitoute wrote:
Hi there,
> Currently we’ve got the following situation in our production environment:
>
> Clients —HTTP—> Apache —HTTPS TWO-WAY SSL VIA PROXY —> HTTPS SERVERS
> We’re trying to replace the apache service by using nginx.
ngi
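The reply above is cut off. For what it's worth, a minimal sketch of the client-certificate ("two-way SSL") part on the nginx side might look like the block below; the certificate paths and upstream address are assumptions, the Squid hop in the original flow is not covered here, and proxy_ssl_certificate requires nginx 1.7.8 or later:

    server {
        listen 80;

        location / {
            proxy_pass https://backend.example.com;            # hypothetical HTTPS upstream
            # client certificate nginx presents to the upstream (two-way/mutual SSL)
            proxy_ssl_certificate         /etc/nginx/ssl/client.crt;
            proxy_ssl_certificate_key     /etc/nginx/ssl/client.key;
            # verify the upstream's certificate against a trusted CA bundle
            proxy_ssl_trusted_certificate /etc/nginx/ssl/ca.crt;
            proxy_ssl_verify              on;
        }
    }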
I do not get (aha) where you saw limit_rate only applies to the GET
method...
But yeah, limit_rate applies to responses.
Rate limiting only properly applies to the sender, in your case the client,
which is the sole entity able to properly craft its requests to contain a
specified amount of data/time p
http://nginx.org/en/docs/http/ngx_http_limit_conn_module.html
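As an aside, a minimal illustration of what limit_rate does (it throttles the response nginx sends back; the location and values here are arbitrary):

    location /downloads/ {
        limit_rate_after 1m;    # send the first 1 MB at full speed
        limit_rate       100k;  # then cap the response at roughly 100 KB/s per connection
    }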
---
*B. R.*
On Mon, Apr 13, 2015 at 11:53 AM, rolf1316 wrote:
> is it possible to edit the limit of active connections in nginx? Like
> change
> it to 5 active connections at a time ?
Hi Maxim,
Thanks for your answer. I'd rather do as you said than change the
method from POST to GET.
As for your recommended example, I never managed to make it work (the
proxy_pass approach): I ran into a resolver issue, and then into an infinite
loop on internal requests. So I gave up.
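For context only (a guess at the resolver issue mentioned, not Maxim's actual example): when proxy_pass contains variables, nginx resolves the host name at run time and therefore needs a resolver directive, roughly like this (the resolver address and host name are placeholders):

    resolver 127.0.0.1;   # placeholder: a DNS server nginx can query at run time

    location / {
        set $backend "app.internal.example";      # placeholder upstream host
        proxy_pass http://$backend$request_uri;   # the variable forces run-time resolution
    }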
Hello!
On Mon, Apr 13, 2015 at 04:16:56AM -0400, HUMing wrote:
> For a simple Nginx configuration like this:
>
> upstream myservers {
>     server 127.0.0.1:3000;
>     server 127.0.0.1:3001;
> }
> server {
>     listen 80;
>     location / {
>
Hello!
On Sat, Apr 11, 2015 at 09:21:30AM -0400, Arno0x0x wrote:
> Hi,
>
> I'm using the auth_request module to enable custom (2FA) authentication to
> protect my whole website, regardless of the various applications I host on
> this website. So the auth_request directive is set at the "server" leve
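The quote is truncated, but for reference the usual auth_request pattern at the server level looks roughly like this; the /auth location name and the authentication backend address are assumptions, and certificate directives are omitted:

    server {
        listen 443 ssl;
        # every request is first checked by the subrequest below
        auth_request /auth;

        location = /auth {
            internal;
            proxy_pass              http://127.0.0.1:9000;   # hypothetical 2FA auth backend
            proxy_pass_request_body off;                     # the auth backend only needs headers
            proxy_set_header        Content-Length "";
            proxy_set_header        X-Original-URI $request_uri;
        }

        location / {
            # protected applications live here
        }
    }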
Hello!
On Sun, Apr 12, 2015 at 12:21:19PM -0400, numroo wrote:
> >> Yes, I ran the s_client command multiple times to account for the nginx
> >> responder delay. I was testing OCSP stapling on just one of my domains.
> >> Then I read that the 'default_server' SSL server also has to have OCSP
> >>
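The quote is cut off, but for reference a minimal OCSP stapling block, which (as discussed above) may also need to be present on the default_server, looks roughly like this; certificate paths, server name, and the resolver address are assumptions:

    server {
        listen 443 ssl default_server;
        server_name example.com;                              # hypothetical

        ssl_certificate         /etc/nginx/ssl/example.crt;   # hypothetical paths
        ssl_certificate_key     /etc/nginx/ssl/example.key;

        ssl_stapling            on;
        ssl_stapling_verify     on;
        ssl_trusted_certificate /etc/nginx/ssl/ca-chain.crt;  # CA chain used to verify OCSP responses
        resolver                127.0.0.1;                    # needed to reach the OCSP responder
    }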
jinwon42 Wrote:
---
> my site is vulnerable to the SSL FREAK attacks.
>
> ssl_protocols SSLv3 TLSv1;
> ssl_ciphers AES256-SHA:HIGH:!EXPORT:!eNULL:!ADH:RC4+RSA;
Try these:
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers
ECDH+A
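The cipher string above is cut off. As a generic example only (not the poster's exact string), a server block along these lines drops SSLv3 and export-grade ciphers, which is the usual FREAK mitigation; the certificate paths are placeholders:

    server {
        listen 443 ssl;
        ssl_certificate     /etc/nginx/ssl/site.crt;   # hypothetical paths
        ssl_certificate_key /etc/nginx/ssl/site.key;

        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;           # no SSLv3
        ssl_ciphers   HIGH:!aNULL:!MD5:!EXPORT;        # generic strong-cipher example
        ssl_prefer_server_ciphers on;
    }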
Is it possible to edit the limit of active connections in nginx? Like change
it to 5 active connections at a time?
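For reference, a minimal sketch using the limit_conn module (the zone name is arbitrary) that limits each client address to 5 concurrent connections:

    http {
        # track concurrent connections per client IP address
        limit_conn_zone $binary_remote_addr zone=peraddr:10m;

        server {
            listen 80;
            limit_conn peraddr 5;   # at most 5 simultaneous connections per client
        }
    }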
Hello, a quick question (I'm a newbie in this forum and in Nginx):
is there a way for nginx to limit connections per workstation? Let's say I
allow only 5 workstations at a time to connect out of 20 workstations; is
that possible?
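A rough sketch, and only an approximation rather than a true per-workstation control: keying a limit_conn zone on $server_name caps the total number of simultaneous connections to the virtual server, though a single workstation opening several connections would count more than once. The server name is a placeholder:

    http {
        limit_conn_zone $server_name zone=perserver:10m;

        server {
            listen 80;
            server_name intranet.example.com;   # hypothetical
            limit_conn perserver 5;             # at most 5 concurrent connections to this server in total
        }
    }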
For a simple Nginx configuration like this:
upstream myservers {
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
}
server {
    listen 80;
    location / {
        proxy_pass http://myservers;
    }
I have two questions related to zero downtime
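The two questions are cut off above, but since the topic is zero downtime with two upstream servers, a hedged sketch of the usual knobs (failover parameters, taking a backend out of rotation for maintenance, and a hot spare; the third server is hypothetical) is:

    upstream myservers {
        server 127.0.0.1:3000 max_fails=3 fail_timeout=10s;  # taken out of rotation after repeated failures
        server 127.0.0.1:3001 down;                          # temporarily removed, e.g. while being upgraded
        server 127.0.0.1:3002 backup;                        # hypothetical hot spare, used only if the others fail
    }
    # Configuration changes are applied without dropping connections via:  nginx -s reload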
My site is vulnerable to the SSL FREAK attack.
I have a configuration problem.
My setup is:
I want all requests to go "http" --> "https",
but some locations should go "https" --> "http".
ALL locations: https
/companyBrand.do: http only
What's the problem?
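Not from the thread, but a rough sketch of one common way to do this (server names, backend address, and certificate paths are placeholders): redirect everything on port 80 to HTTPS except /companyBrand.do, and on port 443 push that one location back to plain HTTP:

    server {
        listen 80;
        server_name www.example.com;                 # hypothetical

        location = /companyBrand.do {
            proxy_pass http://127.0.0.1:8080;        # hypothetical backend, stays on plain HTTP
        }
        location / {
            return 301 https://$host$request_uri;    # everything else goes to HTTPS
        }
    }

    server {
        listen 443 ssl;
        server_name www.example.com;
        ssl_certificate     /etc/nginx/ssl/site.crt; # hypothetical paths
        ssl_certificate_key /etc/nginx/ssl/site.key;

        location = /companyBrand.do {
            return 301 http://$host$request_uri;     # this one location is HTTP-only
        }
        location / {
            proxy_pass http://127.0.0.1:8080;
        }
    }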
Hello,
Currently we’ve got the following situation in our production environment:
Clients —HTTP—> Apache —HTTPS TWO-WAY SSL VIA PROXY —> HTTPS SERVERS
Just to be clear, the following services are used during this flow:
http client (firefox, chrome, curl, wget, etc.) —> Apache —> Squid —> HTTPS