You could consider adding a CSP header to cause clients to automatically
fetch those resources over HTTPS:
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/upgrade-insecure-requests
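For reference, a minimal nginx snippet for this (placement inside your server block is up to you):

```nginx
# Ask supporting browsers to upgrade http:// subresource requests to https://.
add_header Content-Security-Policy "upgrade-insecure-requests";
```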
On Wed, 16 Oct 2024 at 00:06, Nikolaos Milas via nginx
wrote:
> On 16/10/2024 12
This is like reading a book, not understanding some words and then
complaining to the author to fix their spelling. Please don't rely on SAST
analysis without understanding the code. I would expect the vast majority
of these are false positives - provide evidence that these are real bugs if
you wan
Thanks for the patch! I've been running it for about an hour and
haven't seen the preallocated memory alert since, so it's looking good
here.
On Fri, 24 Mar 2023 at 03:07, Maxim Dounin wrote:
>
> Hello!
>
> On Thu, Mar 23, 2023 at 09:33:19PM +0100, Richard Stanway
ly relates to that error, I
> thought I’d ask in case you have still been seeing that error with newer
> Nginx versions that have come out since that patch was implemented.
>
>
> --
> Lance
>
> On Mar 22, 2023 at 5:28 PM -0500, Richard Stanway via nginx
> , wrote:
>
I regularly build with zlib-ng, unfortunately it requires patching the
zlib-ng files to enable zlib compatibility mode as nginx doesn't seem to
have a way to pass options to configure.
Edit "configure" in the zlib-ng directory and change the line compat=0 to
compat=1. Then specify --with-zlib=/pat
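A sketch of that edit on a scratch copy of the file (the real configure script lives in your zlib-ng source tree; the path below is illustrative):

```shell
# Stand-in for the zlib-ng source directory.
mkdir -p /tmp/zlib-ng-demo
printf 'compat=0\n' > /tmp/zlib-ng-demo/configure
# Flip the compatibility flag so nginx can link zlib-ng as a zlib drop-in.
sed -i 's/^compat=0$/compat=1/' /tmp/zlib-ng-demo/configure
grep '^compat=' /tmp/zlib-ng-demo/configure
```

After that, build nginx with ./configure --with-zlib=/path/to/zlib-ng.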
In the context of a mobile app, pinning usually means checking the public
key of the server in your app matches what is expected. There is nothing to
configure server-side. If you change the private key used by your SSL
certificate, then your app will break. Renewing an SSL certificate doesn't
usua
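For illustration, one common flavor of pinning compares the SPKI SHA-256 hash of the server's public key. The sketch below generates a throwaway self-signed certificate and derives its pin; all paths and the subject name are made up:

```shell
# Create a disposable key/cert pair to demonstrate on.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
    -keyout /tmp/pin-demo.key -out /tmp/pin-demo.crt -days 1 2>/dev/null
# Extract the public key, DER-encode it, hash it, base64 it:
# this is the value a pinned mobile client would compare against.
openssl x509 -in /tmp/pin-demo.crt -pubkey -noout \
  | openssl pkey -pubin -outform DER \
  | openssl dgst -sha256 -binary | base64
```

Renewing a certificate with the same key keeps this pin stable; rotating the key changes it, which is what breaks pinned apps.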
Hello,
I'm running into an issue where a proxied location with a regular
expression match does not correctly update the cache when using
proxy_cache_background_update. The update request to the backend seems
to be missing the captured parameters from the regex. I've created a
small test case that d
I recently implemented something similar, and one issue I ran into was
that $sent_http_content_type doesn't always map to a mime type. For
example, "Content-Type: text/html" would match mime type text/html, but
"Content-Type: text/html; charset=utf-8" would match only the default. You
need to all
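A map-based sketch of the workaround (the variable name is made up): matching by regex prefix catches values that carry parameters such as a charset.

```nginx
# "text/html" and "text/html; charset=utf-8" both match the regex entry;
# an exact-string map entry would only catch the bare value.
map $sent_http_content_type $is_html {
    default      0;
    ~^text/html  1;
}
```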
This seems intentional, per the documentation at
http://nginx.org/en/docs/http/ngx_http_gzip_static_module.html:
"The following directives are also taken into account: gzip_http_version,
gzip_proxied, gzip_disable, and gzip_vary."
No mention of gzip_types. I recommend only enabling gzip_static in
nginx only resolves the hostname once, at startup. See this workaround:
https://github.com/DmitryFillo/nginx-proxy-pitfalls
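The usual shape of that workaround, sketched below (resolver address and backend hostname are placeholders): using a variable in proxy_pass forces nginx to re-resolve the name at request time via the configured resolver.

```nginx
resolver 127.0.0.53 valid=30s;               # placeholder resolver; tune "valid" to your TTL

location ^~ /example/ {
    set $backend_host backend.example.com;   # placeholder hostname
    proxy_pass http://$backend_host;         # variable => runtime DNS resolution
}
```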
On Thu, Jul 26, 2018 at 8:47 PM basti wrote:
> Hello,
>
> inside a location I have a proxy_pass to a hostname with a dynamic IP
> for example
>
> location ^~ /example/ {
>
I'd also like to voice support for having this patch upstream. I've been
using a similar patch ever since requiring TLS 1.2 as the error log is
filled with "critical" version errors otherwise.
On Wed, Jul 11, 2018 at 9:03 PM shiz wrote:
> > Since you are using newer openssl, you may want to appl
I don't think this is possible. By the time you know the client wishes to
request the /winac location, the SSL session has already been established,
at which point the server can no longer send a ClientCertificateRequest.
Using the stream module to proxy the whole connection may work, but
obviousl
One way to do this may be to block the port with a firewall rule during
reload. This way nginx will have to wait for the connect timeout (and
hopefully retry) rather than failing immediately after receiving a RST from
the closed port.
On Thu, Jun 28, 2018 at 2:15 PM duda wrote:
> *That is
>
> Po
That IP resolves to rate-limited-proxy-72-14-199-18.google.com - this is
not the Google search crawler, which is why it ignores your robots.txt. No one
seems to know for sure what the rate-limited-proxy IPs are used for. They
could represent random Chrome users using the Google data saving feature,
he
This is almost certainly not Google, as they obey robots.txt. The & to &amp;
conversion is another sign of a poor-quality crawler. Check the RDNS and
you will find it's probably some IP faking the Google UA; I suggest blocking
it at the network level.
On Fri, Jun 8, 2018 at 1:57 AM shiz wrote:
> Hi,
>
> Recentl
You should check your upstream logs to see why it is closing connections or
crashing.
On Tue, May 15, 2018 at 6:22 PM Ricky Gutierrez
wrote:
> Any help?
>
> El lun., 14 may. 2018 20:02, Ricky Gutierrez
> escribió:
>
>> hello list, I have a reverse proxy with nginx front end and I have the
>> ba
Even though it shouldn't be reaching your limits, limit_req does delay in
1-second increments, which sounds like it could be responsible for this. You
should see error log entries if this happens (severity warning). Have you
tried without the limit_req option? You can also use the nodelay option to
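A minimal sketch of that option (zone name, size, and rate are placeholders):

```nginx
limit_req_zone $binary_remote_addr zone=perip:10m rate=5r/s;

server {
    location / {
        # nodelay serves burst requests immediately instead of pacing them.
        limit_req zone=perip burst=10 nodelay;
    }
}
```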
PHP-FPM is only for PHP. You'll want something like fcgiwrap for regular
CGI files.
See https://www.nginx.com/resources/wiki/start/topics/examples/fcgiwrap/
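A typical fcgiwrap hookup looks roughly like this (socket and script paths vary by distro; these are Debian-ish defaults):

```nginx
location /cgi-bin/ {
    gzip off;                                # commonly disabled for CGI output
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME /usr/lib/cgi-bin$fastcgi_script_name;
    fastcgi_pass unix:/var/run/fcgiwrap.socket;
}
```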
On Fri, Apr 6, 2018 at 6:02 PM, Ralph Seichter
wrote:
> Hello list,
>
> I am fairly new to nginx and now have stumbled across an issue I c
> [alert] 11371#0: worker process 24870 exited on signal 9
This is almost certainly the cause of your problems - you need to figure
out why the nginx processes are crashing and resolve that. Most likely a
3rd party module is responsible.
On Fri, Feb 16, 2018 at 10:39 AM, Andrzej Walas wrote:
>
Only the server should be generating the tokens; if the client knows the
secret, it can do whatever it wants.
On Wed, Jan 10, 2018 at 10:32 AM, anish10dec
wrote:
> Let me explain the complete implementation methodology and problem
> statement
>
> URL to be protected
> http://site.media.com/mediaf
Your ISP is blocking port 80, so you cannot get redirected to HTTPS.
http://www.dslreports.com/faq/11852
On Tue, Nov 28, 2017 at 6:17 PM, Jeff Dyke wrote:
> I think it is unfortunate that certbot does it this way, with an if
> statement, which i believe is evaluated in every request. I use some
You can use ct-submit, once built the binary can be copied and run on any
system without any dependencies.
https://github.com/grahamedgecombe/ct-submit
On Mon, Nov 27, 2017 at 10:21 PM, Ángel wrote:
> On 2017-11-26 at 14:17 +0100, A. Schulze wrote:
> > Hello,
> >
> > experiments with nginx-ct ¹
Look at
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_ignore_client_abort
or
http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_ignore_client_abort
etc depending on what you're doing with the request.
On Thu, Oct 26, 2017 at 1:07 PM, Torppa Jarkko
wrote:
> I have
On Wed, Oct 11, 2017 at 4:14 PM, Valentin V. Bartenev
wrote:
>
> Websockets cannot work over HTTP/2.
>
>
So it appears, I guess I should have checked that! Upon closer examination,
all the 101 responses I was seeing in the access log were from HTTP/1.1
clients, the HTTP 2 requests never even got
Hello,
I have a location that proxies to a websocket server. Clients connect over
HTTPS (HTTP2, wss://). Sometimes clients generate the following alerts in
the error log when hitting the websocket location:
2017/10/11 21:03:23 [alert] 34381#34381: *1020125 epoll_ctl(1, 603) failed
(17: File exists
This is something you should fix on whatever application is setting the
cookie. It probably isn't nginx.
On Tue, Oct 10, 2017 at 10:04 AM, Johann Spies wrote:
> A security scan on our server showed :
>
> Vulnerability Detection Method
> Details: SSL/TLS:
> Missing `secure` Cookie Attribute
> OID
You're connecting to localhost (127.0.0.1) and your set_real_ip_from only
accepts X-Forwarded-For from 172.0.0.0/8.
On Mon, Aug 28, 2017 at 8:25 PM, CJ Ess wrote:
> I've been struggling all day with this, I'm missing something, hoping
> someone can point out what I'm doing wrong w/ the realip mo
UDP packets are proxied individually - one socket per packet. This
implementation is not suitable for bulk traffic.
On Mon, Aug 28, 2017 at 6:40 PM, 231done
wrote:
> Hi,
> I'm running nginx-1.13.4 and I observe that UDP transparent proxying
> for
> bulk traffic is very slow when compared to
Your backend is returning an HTTP 500, which likely indicates a PHP Fatal
Error. You should check the error log for the site.
On Thu, Jul 27, 2017 at 10:28 AM, emios wrote:
> I bought a new VPS and I try to transfer my working ZendApp (work in
> previous hosting with DirectAdmin). Now i install n
The issue is not with your page size or gzip (or anything nginx
related actually). Your Rails backend is generating the content far
too slowly. You should investigate why your backend is so slow.
time_namelookup: 0.004209
time_connect: 0.241082
time_appconnec
You shouldn't be changing worker_connections, this is the total number of
connections (of any type) permitted per worker.
Take a look at the documentation at http://nginx.org/en/docs/
Of interest to you are
http://nginx.org/en/docs/http/ngx_http_limit_conn_module.html and
http://nginx.org/en/docs
2017/06/30 01:33:54 [debug] 19140#19140: *21 recv: fd:11 1744 of 1744
> 2017/06/30 01:33:54 [debug] 19140#19140: *21 http client request body recv
> 1744
>
> @Payam I try worker_process to 1 but same result.
>
> The php-fpm log is free of errors...
>
>
>
>
> *ANDREA
rewrite and location matching do not include query strings. As a quick
workaround, I believe you could do something like this:
if ($request_uri = "/abc/xyz/def.php?Id=13") { return 301 "http://www.example.com/fhu/foo"; }
Be aware that this matches the request exactly - query string parameters
m
If you want to stream the upload directly to your backend, you should
consider fastcgi_request_buffering[1].
The problem is most likely with your PHP backend though, you should examine
why it takes so long to process the request.
[1]
http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fast
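A sketch of the unbuffered-upload setup (the socket path is a placeholder):

```nginx
location /upload {
    fastcgi_request_buffering off;           # stream the request body to the backend
    include fastcgi_params;
    fastcgi_pass unix:/run/php-fpm.sock;     # placeholder socket path
}
```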
That user agent doesn't belong to a Google crawler - they are end-user
requests from the Google App (mobile application). I'm not sure what the
motivation is for blocking them but I wouldn't consider it malicious /
unwanted traffic.
On Thu, Jun 22, 2017 at 4:47 PM, Jeff Dyke wrote:
> I'm glad yo
> openssl x509 -req -in client.csr -out client.crt -signkey client.key -CA
> server.crt -CAkey server.key -CAcreateserial -days 365
I think you should be using the CA certificate here, not the server
certificate.
___
nginx mailing list
nginx@nginx.org
ht
rver, so we are not really bothered about
> GET-requests getting logged on the server, so we should be good.
>
> Do I make sense?
>
> Kindly let know your thoughts.
>
>
> Thanks and Regards,
> Ajay
>
> On Thu, Apr 13, 2017 at 11:07 PM, Richard Stanway <
> r1ch+ng...
You're missing the "Authorization" header in
your Access-Control-Allow-Headers directive.
You can alternatively pass the basic auth in your URI, eg
xhr.open("GET", "https://username:password@1.2.3.4/") rather than crafting it manually.
On Thu, Apr 13, 2017 at 4:50 PM, Ajay Garg wrote:
> Stran
You are using auth_basic, so the 401 response code is not in the range
that add_header works with ("Adds the specified field to a response header
provided that the response code equals 200, 201, 204, 206, 301, 302, 303,
304, or 307."). You need to use "always" if you want to include the header
in
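A minimal sketch of the fix (paths and the origin value are placeholders):

```nginx
location /private/ {
    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/htpasswd;    # placeholder path
    # "always" makes the header appear even on the 401 challenge response.
    add_header Access-Control-Allow-Origin "https://app.example.com" always;
}
```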
With the controls sites have over the referrer header, it's not very
effective as an access control mechanism. You can use something like
http://nginx.org/en/docs/http/ngx_http_secure_link_module.html
instead.
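A secure_link sketch close to the module documentation (the "mysecret" string and the location are placeholders):

```nginx
location /downloads/ {
    secure_link $arg_md5,$arg_expires;
    secure_link_md5 "$secure_link_expires$uri mysecret";
    if ($secure_link = "") { return 403; }   # bad hash
    if ($secure_link = "0") { return 410; }  # expired link
}
```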
On Tue, Apr 4, 2017 at 1:39 PM, shahzaib mushtaq wrote:
> Hi,
>
> Thanks for quick resp
Thanks Maxim, everything is looking great after the patch.
On Fri, Mar 24, 2017 at 4:26 PM, Maxim Dounin wrote:
> Hello!
>
> On Fri, Mar 24, 2017 at 02:11:12PM +0100, Richard Stanway wrote:
>
>> Hi Maxim,
>> Thanks for the quick patch! I've applied it to our s
> On Fri, Mar 24, 2017 at 01:31:35PM +0100, Richard Stanway wrote:
>
>> Hello,
>> I recently moved our site to a new server running Linux 4.9, Debian
>> 8.7 64 bit with nginx 1.11.11 from the nginx repository. Our config is
>> straightforward - epoll, a few proxy backen
fb) at
src/http/ngx_http_request.c:2987
2987        ngx_free_chain(c->pool, ln);
(gdb)
2983        for (cl = hc->free; cl; /* void */) {
(gdb)
2987        ngx_free_chain(c->pool, ln);
(and so on...)
On Fri, Mar 24, 2017 at 1:31 PM, Richard Stanway
wrote:
> He
Hello,
I recently moved our site to a new server running Linux 4.9, Debian
8.7 64 bit with nginx 1.11.11 from the nginx repository. Our config is
straightforward - epoll, a few proxy backends and a few fastcgi
backends, a handful of vhosts, some with HTTP2, geoip module loaded.
No AIO, no threads,
Your configs look fine, what you are seeing is the certificate that is sent
if a client does not support SNI. You can control which certificate is
chosen using the default_server parameter on your listen directive.
On Sun, Mar 12, 2017 at 4:54 PM, Fabian A. Santiago <
fsanti...@garbage-juice.com>
You'll want to proxy_pass to a named upstream with keepalive enabled.
http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive
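Roughly, per the linked docs (upstream name and address are placeholders):

```nginx
upstream backend {
    server 127.0.0.1:8080;
    keepalive 16;                       # idle connections kept per worker
}

server {
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;         # keepalive needs HTTP/1.1...
        proxy_set_header Connection ""; # ...and no "Connection: close"
    }
}
```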
On Mon, Feb 13, 2017 at 11:33 PM, brookscunningham <
nginx-fo...@forum.nginx.org> wrote:
> Hello All,
>
> I am seeing an increase in the number of new TLS c
unable to
reproduce the issue.
Thanks for the quick reply!
Regards,
Richard.
On Tue, Jan 31, 2017 at 4:32 PM, Maxim Dounin wrote:
> Hello!
>
> On Tue, Jan 31, 2017 at 04:19:59PM +0100, Richard Stanway wrote:
>
> > Hi all,
> > I'm experiencing odd behavior with some larger
Hi all,
I'm experiencing odd behavior with some larger HTTP file downloads from my
site. The files will download for a seemingly random amount of bytes then
the connection freezes until "send_timeout" expires, at which point the
error log shows "client timed out (110: Connection timed out) while se
The FIN ACK suggests that the other side is responsible for closing the
connection. If nginx was terminating the connection, there would be no ACK
bit set. Check your upstream server supports keepalive.
On Tue, Jan 10, 2017 at 10:55 PM, Jonathan Geyser
wrote:
> Hi guys,
>
> I'm attempting to hav
See http://nginx.org/en/docs/http/ngx_http_headers_module.html#add_header
"Adds the specified field to a response header provided that the response
code equals 200, 201, 204, 206, 301, 302, 303, 304, or 307."
If the always parameter is specified (1.7.5), the header field will be
added regardless
There's no "nice" way to handle this in nginx as far as I'm aware. I think
the best setup is a default vhost with a generic (server hostname?)
certificate, and for any bots or clients that ignore the common name
mismatch you can return the 421 Misdirected Request code.
https://httpstatuses.com/421
Nginx will need a valid header in order to know what to do with the
request. Maybe you should look into the stream module instead.
https://nginx.org/en/docs/stream/ngx_stream_core_module.html
On Mon, Oct 24, 2016 at 2:56 PM, Nattakorn S wrote:
> Dear all
>
>
> I have electronic device and I con
Why not use the location directive? This is what it is designed for.
http://nginx.org/en/docs/http/ngx_http_core_module.html#location
On Mon, Oct 3, 2016 at 12:28 PM, Tseveendorj Ochirlantuu <
tseveend...@gmail.com> wrote:
> Hello,
>
> I need to configure some locations go to index.php rest go
Keep in mind a terminated connection (444) is not a valid HTTP response.
Abruptly terminated connections may also be caused by broken middleware
boxes or other things interrupting the connection. Modern browsers have
retry mechanisms built in to safeguard against transient connection issues,
for ex
limit_req works with multiple connections, it is usually configured per IP
using $binary_remote_addr. See
http://nginx.org/en/docs/http/ngx_http_limit_req_module.html#limit_req_zone
- you can use variables to set the key to whatever you like.
limit_req generally helps protect eg your backend again
You can put limit_req in a location, for example do not limit static files
and only limit expensive backend hits, or use two different thresholds.
On Fri, Sep 9, 2016 at 3:39 AM, wrote:
> Since this limit is per IP, is the scenario you stated really a problem?
> Only that IP is effected. Or as
File uploads are passed in the request body, not the headers so you cannot
disable or otherwise affect them by setting HTTP_X variables. This is a job
for your backend as nginx does not really interact with post body contents.
On Tue, Aug 30, 2016 at 1:54 AM, c0nw0nk
wrote:
> Christos Chatzaras
There is no standard for request compression. HTTP 2 has header compression
built in, but if you want to compress request bodies, you have to devise
your own solution.
On Mon, Aug 29, 2016 at 5:22 AM, serendipity30
wrote:
> Anyone has used this? Is gzip_static used for request compression?
>
> T
/22/16 8:15 PM, Richard Stanway wrote:
> > Could you at least fix the https download page, so it doesn't
> > directly link to a HTTP PGP key?
> >
> It works correctly: https://nginx.org/en/download.html
>
> > On Mon, Aug 22, 2016 at 6:49 PM, Maxim Konovalov >
r credibility.
> >
> Who did that? What's his name?
>
> > Now, as Richard pointed out, if you truly believe you need to
> > provide HTTP-only, you can. It would be better if it was in a very
> > visible fashion, though.
> > Where was despotism, again?
>
1. You could provide insecure.nginx.org mirror for such people, make
nginx.org secure by default.
2. Modern server CPUs are already extremely energy efficient, TLS adds
negligible load. See https://istlsfastyet.com/
On Mon, Aug 22, 2016 at 12:31 PM, Valentin V. Bartenev
wrote:
> On Sunday 21
Hello,
I noticed that the PGP key used for signing the Debian release packages
recently expired. I went to download the new one and noticed that nginx.org
wasn't using HTTPS by default. Manually entering a https URL works as
expected, although some pages have hard coded http links in them.
Is ther
Visiting http://www.craythorneweather.info/moonphase.php shows an HTTP 500,
so you should examine your backend (PHP) error logs for more information.
On Sun, Aug 14, 2016 at 5:48 PM, Joe Curtis
wrote:
> I have a weather station website running successfully under apache2 on a
> fedora based server.
> generated 0 bytes in 640738 msecs
I would look into what is causing your backend to take over 10 minutes to
respond to that request.
On Tue, Aug 9, 2016 at 11:09 PM, Larry Martell
wrote:
> I just set up a django site with nginx and uWSGI. Some pages I go to
> work fine, but other fail with a
This is not nginx redirecting, as there is no response body. Most likely it
is your WordPress configuration that needs attention.
On Thu, Aug 4, 2016 at 5:03 PM, lukemroz
wrote:
> Hello,
>
> I followed the instructions at Digital Ocean for setting up a WordPress
> installation, including enablin
Are you sure you don't want to use try_files for this?
http://nginx.org/en/docs/http/ngx_http_core_module.html#try_files
On Mon, Aug 1, 2016 at 1:15 AM, Maxim Dounin wrote:
> Hello!
>
> On Sun, Jul 31, 2016 at 06:55:54PM -0400, Denis Papathanasiou wrote:
>
> > I have the following configuration
Not unless your / location passes the request to a vulnerable cgi-script
using a vulnerable version of bash.
See https://en.wikipedia.org/wiki/Shellshock_(software_bug)
On Sat, Jul 30, 2016 at 7:57 PM, li...@lazygranch.com
wrote:
> I see a return code of 200. Does that mean this script was exec
Check out try_files.
http://nginx.org/en/docs/http/ngx_http_core_module.html#try_files
On Wed, Jul 6, 2016 at 3:44 PM, Lantos István wrote:
> Sorry, the parent folder, /images/art was uncommented in .gitignore,
> that's why didn't uploaded into my repo. Problem solved.
>
> Still, is there any m
Why do you have that in a separate server block?
On Fri, Jun 24, 2016 at 9:31 AM, Zeal Vora wrote:
> Thanks. The above lined helped. However one more doubt. I want NGINX to
> return 200 whenever some one goes to /nature , so I wrote above
> configuration, however when some one goes to /nature ,
You need to provide more information such as Firefox error messages, nginx
config, server hostname, etc. You may find
https://developer.mozilla.org/en-US/docs/Tools/Network_Monitor useful.
I did notice your cipher suites include blacklisted ciphers, but this
shouldn't be an issue for outright fail
You generally want as long a keepalive timeout as you're able to tolerate.
Closing the connection after 5 seconds for example means a full new TCP and
TLS handshake has to occur if the user clicks a link after 5 seconds,
resulting in a minimum of two RTTs before content, causing a slow loading
expe
You probably need to specify the IP on the listen directive if you want
different configurations of listening ports on different IPs.
On Mon, Mar 14, 2016 at 11:43 PM, Roswebnet
wrote:
> Hi everyone,
>
> I have strange issue with nginx 1.9.12. I have 3 IP addresses as a server
> name that are al
The way I do this is to use multiple server {} blocks, and put all the
non-canonical hostnames / port 80 requests in a server block with a return
301 to the canonical (and HTTPS) host which only listens on 443 and the
canonical hostname.
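Sketched out (hostnames are placeholders; ssl_certificate lines omitted):

```nginx
server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://example.com$request_uri;
}

server {
    listen 443 ssl;
    server_name www.example.com;                 # non-canonical HTTPS name
    return 301 https://example.com$request_uri;
}

server {
    listen 443 ssl;
    server_name example.com;                     # canonical host only
    # regular site configuration goes here
}
```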
On Fri, Mar 11, 2016 at 9:12 PM, mevans336
wrote:
> We cur
At a guess I would say your key zone is full. Try increasing the size of it.
On Thu, Mar 10, 2016 at 8:07 PM, CJ Ess wrote:
> This is nginx/1.9.0 BTW
>
>
> On Thu, Mar 10, 2016 at 2:06 PM, CJ Ess wrote:
>
>> Same condition on two more of the servers in the same pool. Reload
>> doesn't resolve t
You want proxy_buffer_size.
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffer_size
On Wed, Feb 3, 2016 at 2:24 PM, Rafał Radecki
wrote:
> Hi All.
>
> I am currently trying to find the source of "upstream sent too big header
> while reading response header from upstream" in m
Have you checked the php-fpm logs? It seems like your backend is overloaded
and not accepting connections fast enough.
This doesn't seem to be an nginx issue. The presence of the "X-Powered-By:
HHVM/3.11.0" in your response means your backend is the one issuing the 302
redirect, so you should investigate that instead of nginx.
On Sat, Dec 19, 2015 at 5:11 PM, Xavier Cardil Coll
wrote:
> On this setup, there is a
e,
>> X-Requested-With, Cache-Control, If-None-Match';
>
>
> And now it gives me:
>
> No 'Access-Control-Allow-Origin' header is present on the requested
>> resource. Origin 'https://test.project.com' is therefore not allowed
>> access. The resp
Your config doesn't appear to add any Access-Control-Allow-Origin header,
so unless your backend is adding this, you will need to add an
appropriate Access-Control-Allow-Origin header.
On Fri, Dec 11, 2015 at 1:14 PM, Vlad Fulgeanu wrote:
> Hi everyone!
>
> I am having some trouble setting up ng
Passive ports are dynamically allocated, so FTP with the stream module is
unlikely to work at all.
On Thu, Dec 3, 2015 at 7:10 AM, Felix HT1 Zhang wrote:
> Dears,
>
> Could Nginx stream support FTP PASSIVE?
>
>
>
> #er nobody;
>
> worker_processes 4;
>
>
>
> #error_log logs/error.log;
>
> #er
TCP has no concept of server names, so this is not possible. It only works
in HTTP because the client sends the hostname it is trying to access as
part of the request, allowing nginx to match it to a specific server block.
On Wed, Dec 2, 2015 at 12:31 PM, Charles Nnamdi Akalugwu <
cprenzb...@gmail
Running nginx directly works fine because nginx can see and use your
terminal. (Re)starting nginx through systemd does not, because systemd
doesn't provide a terminal (nor would your input reach it).
See https://trac.nginx.org/nginx/ticket/433
On Tue, Nov 17, 2015 at 9:13 PM, lakarjail wrote:
>
All your requests for non-static content are being routed through the
joomla index.php, so this is an option you'll have to look for in your
joomla configuration.
On Fri, Nov 6, 2015 at 2:00 AM, vmbeliz wrote:
> Hi guys,
> Please i need help.
> i have a server with Centos 6.5 + Nginx + PHP-FPM.
You've set port 80 to listen with http2, but you're not passing --http2 to
curl so you're getting back an unexpected binary http2 response. Due to
lack of ALPN I suggest you don't use http2 on port 80.
On Tue, Nov 3, 2015 at 8:05 PM, steve wrote:
> Hi folks,
>
> I'm having a problem with the con
This is probably a broken full page ad from an app that you have installed.
Remove any suspicious apps until the problem goes away. A factory reset
will probably work as well.
On Wed, Oct 28, 2015 at 4:26 PM, Shane Duffield
wrote:
> Hello Maxim
>
> I am using Android. The problem is on my phone
How are you testing? 301 is permanent so it may be cached if you added the
auth after the redirect. Try testing with curl from the command line to
verify your results.
On Mon, Oct 5, 2015 at 7:05 PM, Grant wrote:
> I have a server block that contains the following:
>
> auth_basic "Please log in.
This is expected behavior if you are using PHP sessions.
See http://php.net/manual/en/function.session-write-close.php
On Fri, Jul 10, 2015 at 11:06 PM, c0nw0nk wrote:
> Thanks for the information :) everything is default though so i am not sure
> what i should even be changing anything to.
>
>
On Tue, Apr 21, 2015 at 11:59 PM, Thiago Farina wrote:
> Hi all,
>
> I'm just trying to configure nginx to use use php, but it seemed too
> complicated.
>
> Why is it so complicated to tell nginx to use php-cgi interpreter [1]?
> When compared to mongoose it is just a matter of setting the
> cgi_
>
> ...
> 2015/01/13 12:22:59 [crit] 11871#0: *140260577 SSL_do_handshake()
> failed (SSL: error:1408A0D7:SSL
> routines:SSL3_GET_CLIENT_HELLO:required cipher missing) while SSL
> handshaking, client: *.*.*.*, server: 0.0.0.0:443
>
>
According to the openssl code, this occurs when a client attempts
You probably have a DNS caching issue, this is not related to nginx. Check
your DNS TTL and wait that long before trying again.
On Tue, Dec 9, 2014 at 11:06 PM, krajeshrao wrote:
> Hi Guys ,
>
> we are hosting providers , my doubt is when i cname www in godaddy or in
> AWS
> its not getting poin
Rather than the source, you should check the docs :)
http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#parameters
On Tue, Dec 2, 2014 at 6:11 PM, Some Developer
wrote:
> I've looked through the nginx source code and couldn't find a specific
> list of HTTP headers that nginx passes thr
>
>
> Well, I used to write a patch to enable IPP zlib (8.0) support in
> NGINX (enabled by ./configure --with-ipp-zlib), just for your
> reference:
>
>
Thank you for the patch. This solves the issue with streamed responses,
however when the "if (r->headers_out.content_length_n > 0)" branch is
take
Hello,
I recently came across a modified version of zlib with code contributed by
Intel [1] that makes use of modern CPU instructions to increase
performance. In testing, the performance gains seemed substantial, however
when I tried to use this version with nginx, the following alert types
appeare
Your config is returning a 403 from any referrer containing "love", and you
have such URLs on your own site according to your log excerpt. I would not
recommend such referrer matching; it's unlikely to help in any case.
On Tue, May 6, 2014 at 12:30 PM, dfumagalli wrote:
> This is also what I tho
It is probably your application / backend that is generating the 403, it's
unlikely nginx is responsible for this. I guess rate / connection limiting
with a custom error code may cause this, but you should know if you
configured this. Please show us your config and describe your backend in
more det
Just a note, I think the preferred way to do this is with "return". It's
much simpler (no rewrite / PCRE overhead):
location / {
    if ($scheme = http) {
        return 301 https://$http_host$request_uri;
    }
}
On Mon, May 5, 2014 at 10:54 PM, Justin Dorfman wrote:
> Thanks Francis, worked perfectly.
>
>
> R
Did you check postpone_output?
http://nginx.org/en/docs/http/ngx_http_core_module.html#postpone_output
On Wed, Feb 12, 2014 at 10:41 AM, gaspy wrote:
> Hi,
>
> I know this has been asked before, but I could not find a definitive
> answer.
> I tried different solutions, nothing worked.
>
> I h
Hello,
I recently had a lot of trouble similar to this, and discovered that
the fastcgi_param directive is additive - eg a later declaration of
SCRIPT_FILENAME simply adds a second SCRIPT_FILENAME to the fastcgi
parameters. You most likely have SCRIPT_FILENAME set in your "include
fastcgi_params" w
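A sketch of the resulting layout (the socket path is a placeholder); the key point is that SCRIPT_FILENAME should be defined exactly once across the include file and the location:

```nginx
location ~ \.php$ {
    include fastcgi_params;    # verify this file does NOT also set SCRIPT_FILENAME
    # A second definition would be sent in addition to, not instead of, the first.
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/run/php-fpm.sock;     # placeholder socket path
}
```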
> fastcgi_pass unix:/tmp/php-fpm.sock;
> fastcgi_index index.php;
> fastcgi_param SCRIPT_FILENAME
> /usr/local/www/phpMyAdmin$fastcgi_script_name;
> include fastcgi_params;
>
What's in your fastcgi_params? Is it overriding your SCRIPT_FILENAME
> Here's a snippet of my config...
>
> location = /assets/Photo\ Gallery/Weather/current\.jpg {
> expires 10m;
> log_not_found off;
> }
>
The "location =" syntax does not use regular expressions. You may also
want to surround the string with quotes i
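That is, a literal, quoted match rather than backslash escapes:

```nginx
location = "/assets/Photo Gallery/Weather/current.jpg" {
    expires 10m;
    log_not_found off;
}
```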