> Now, I’m wondering if the frequent logouts (session expiration) might be
> related to this cookie issue, and if there are any suggestions for addressing
> it via Nginx.
Nginx can't do much about it if the application behind it deletes all the cookies.
So check which version of GitLab you are running.
> It only accepts a maximum of 128 KB of data, but client_max_body_size is
> 500M. Is there a way to locate the cause of the error?
Can you actually show what the "error" looks like?
The default value of client_max_body_size is 1M, so the 128 KB limit most
likely comes from the backend application.
> *) Feature: the ngx_stream_pass_module.
Hello,
What is the difference between pass from ngx_stream_pass_module and
proxy_pass from ngx_stream_proxy_module?
As in, what does "directly" entail in "allows passing the accepted connection
directly to any configured listening socket"?
wbr
rr
> nginx: [emerg] unknown directive "log_by_lua_block" in
> /etc/nginx/conf.d/microservice.conf:8
> nginx: configuration file /etc/nginx/nginx.conf test failed
You need the nginx Lua module (OpenResty), but I'm not sure if the CentOS
(community) repository has it.
You might need to build it yourself.
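If you do build it, the Lua module is usually compiled as a dynamic module together with ngx_devel_kit; loading it would then look roughly like this (module file names are the usual build outputs, paths are assumptions):

load_module modules/ndk_http_module.so;       # ngx_devel_kit, needed by the lua module
load_module modules/ngx_http_lua_module.so;   # lua-nginx-module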
> When I am trying to upload files by hitting
> https://software.example.com/upload_form.html I am encountering http status
> code 405 Not Allowed. Please find inline a screenshot for your reference.
Does your upload form point exactly to '/upload.php'? By default nginx
doesn't allow POST requests to static content and answers them with 405.
> sudden surge of requests, existing connections can get enough share of CPU
> to be served properly, while excessive connections are rejected
While you can't limit the connections (before the TLS handshake), there is a
module to limit the requests per client/IP:
https://nginx.org/en/docs/http/n
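A minimal sketch of rate limiting with that module (zone name and rates are arbitrary examples):

limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    location / {
        limit_req zone=perip burst=20 nodelay;   # allow short bursts, reject the excess with 503
    }
}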
> I added gzip off to nginx.conf file and then check the configuration with
> nginx -t, and then reloaded with systemctl reload nginx.
>
> When I visit the site, I still have
> Accept-Encoding: gzip, deflate, br
First of all - how are you testing?
'Accept-Encoding' is a header in the HTTP request: it is sent by the client,
not by nginx, so it shows up regardless of your gzip setting.
> postrotate
> /bin/kill -USR1 `cat /run/nginx.pid 2>/dev/null` 2>/dev/null || true
> endscript
> }
> and I wonder...
> if it is logrotate's _fault_ or perhaps I screwed up Nginx's configs somewhere?
> For after logs got rotated Nginx logs into: access.log.1 & error.log.1 and
> now as
> # nginx -t
> nginx: [emerg] duplicate listen options for http://65.109.175.140:443 in
> /etc/nginx/sites-enabled/b.com.conf:105
> nginx: configuration file /etc/nginx/nginx.conf test failed
>
> What am I doing wrong?
Probably it complains about the duplicate 'reuseport': leave that option on
only one listen directive for the given ip:port.
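Roughly like this (certificates omitted, names are placeholders):

server {
    listen 443 ssl reuseport;    # only one listen on this ip:port may carry socket options
    server_name a.example.com;
}

server {
    listen 443 ssl;              # same ip:port, options not repeated
    server_name b.example.com;
}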
> I am wondering why it's not more available in the Linux repositories. Any
> thoughts?
It's a third-party module. Why it is not available in your particular Linux
distribution you'll have to ask the distribution maintainers (Oracle); for
example, it's available in openSUSE:
https://software.o
> Is it possible to use Nginx as a proxy for another website, while also having
> the ability to replace absolute paths in CSS, JS, and HTML content?
One way for that is to use the proxy module
http://nginx.org/en/docs/http/ngx_http_proxy_module.html together with the sub module:
http://nginx.org/en/docs/http/ngx_http_sub
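A sketch of the combination (origin host and extra MIME types are assumptions; text/html is rewritten by default):

location / {
    proxy_pass https://origin.example.com;
    proxy_set_header Accept-Encoding "";        # sub_filter needs an uncompressed body
    sub_filter 'https://origin.example.com/' '/';
    sub_filter_types text/css application/javascript;
    sub_filter_once off;                        # replace every occurrence, not just the first
}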
> Observed questionable behaviour in nginx version 1.22.1 with two virtual
> hosts, one with H2 enabled, the second without http2 support.
> Both on the same IP and port, with different domain names/server names.
>
> Is it a bug, feature, my misconfiguration or just not needed by anyone?
The short a
> Question:
> if I don't have anything inside the
> location / {}
> how does nginx deliver an html file out of many possibly found in the root
> for the location?
There is generally no need to specify location / { .. } if you don't provide
any specific configuration or behavior for it.
Having
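Which file gets served for a directory request is decided by the index directive; a minimal sketch (docroot assumed):

server {
    root  /var/www/site;
    index index.html index.htm;   # tried in this order for directory requests
    location / { }
}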
> [error] 11#11: *49 access forbidden by rule, client: 10.48.11.9, server: _,
> request: "GET /auth/ HTTP/1.1", host: "http://my.domain.info", referrer:
> "https://my.domain.info"
It seems that the rule is working, but in the wrong place; I am not sure how
to organise or set the right sequence.
> There are other locations like /auth, /auth/, /auth/admin, /auth/admin/ and
> few more which have the same rules. I am trying to restrict access to /auth
> and /auth/admin which are sensitive for public access. Do you think removing
> "=" can help in this case?
'=' in a location definition means an exact match: it matches only /auth
itself, never /auth/ or /auth/admin.
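To cover the whole subtree, a prefix location is the usual tool; a sketch (the allowed range is an assumption):

location ^~ /auth {        # prefix match that also wins over regex locations
    allow 192.0.2.0/24;    # assumed trusted network
    deny  all;
}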
> I am trying to restrict some Location block in my Nginx configuration to
> specific IPs. Below are the changes I made -
>
> location = /auth {
> }
>
> Here, the deny rule is not working. Users are still able to access the
> page publicly. Am I missing something?
Are you s
> Just curious. I thought the WAF and ModSecurity were only available with
> NGINX Plus on a paid subscription basis?
You can compile the module yourself (either dynamically or inside nginx
binary) also on the community version.
https://github.com/SpiderLabs/ModSecurity-nginx/releases
> Is it poss
> in a network that is using Nginx as Proxy Reverse since some years, the
> nginx.conf file is like this include with Nginx-1.23.2 and that file appears
> like this..
>
> The second file is Nginx directly from nginx.org; my question is, why the
> difference between the two files?
nginx.conf i
> We are using the hey (https://github.com/rakyll/hey) tool to pump 50k
> requests per second and are seeing only 40k requests being received on the
> backend application side.
> Any other tcp configuration that needs to be tuned ?
I am not familiar with the tool but per documentation it should
You can't have a full URL in the error_page (just a URI).
Change:
error_page 403 404 =200
https://mybucket.s3.eu-west-2.amazonaws.com/sitedown.html;
to something like:
error_page 403 404 =200 /sitedown.html;
location /sitedown.html {
    proxy_pass https://mybucket.s3.eu-west-2.amazonaws.com;
}
> I fail to see why it will keep on answering to new requests if it can't
> fulfill any of them.
Because there might be other requests which can be answered without touching
the problematic cache directory?
While the error is critical (for the particular request) I don't think it
makes sense for nginx to stop serving everything else.
> Cache config:
> proxy_cache_path /nginx/cache levels=1:2 keys_zone=bla:20m max_size=10g
> inactive=20m use_temp_path=off;
>
> I had a problem while using tmpfs mount for nginx cache dir, my automation
> mounted it over and over again (so each time nginx created a new set of base
> dirs [a-z]), but afte
> I got it working! I am not sure of the performance hit, maybe someone knows
> how to make this more effective?
Without going deeper into what your actual requirements are and what the
expected result is, the suggestion I could give is, instead of the multiple
if statements, to look at the map directive:
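A sketch of the idea (the mapping itself is illustrative):

map $request_uri $redirect_to {
    default              "";
    ~^/old/(?<rest>.*)$  /new/$rest;
}

server {
    if ($redirect_to) {
        return 301 $redirect_to;
    }
}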
> I want to be able to do a redirect, but only one time; on the next hit it
> should not redirect.
>
> If a client visits a web store it will get redirected to the region
> specific store, but if they then manually select another store it
> should not redirect back again. I don't know if an nginx sessio
> We have a response header with technical information sent by the upstream
> server Tomcat.
> We want to log this information in nginx and delete the header so it is not
> visible in the response headers sent to the client.
>
> more_clear_headers 'Container-Id';
This seems to be a third-party module (headers-more).
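With stock nginx you could get the same effect: log the header via the $upstream_http_* variables and strip it with proxy_hide_header (the upstream name is an assumption, the header name is from your mail):

log_format with_container '$remote_addr [$time_local] "$request" '
                          '$upstream_http_container_id';

server {
    access_log /var/log/nginx/access.log with_container;
    location / {
        proxy_pass http://tomcat_backend;
        proxy_hide_header Container-Id;   # removed from the client response, still logged
    }
}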
> Otherwise why is my application running into such performance limits as
> mentioned in this question on stackoverflow
> https://stackoverflow.com/questions/70584121/why-doesnt-my-epoll-based-program-improve-performance-by-increasing-the-number
> ?
You are testing something (public third party
>> https://www.nginx.com/blog/testing-the-performance-of-nginx-and-nginx-plus-web-servers/
>
> I don't view the test described as valid because the test is between one
> client and one server. I'm interested in testing with one server and many
> clients.
wrk (used in the tests) [1] is a multithreaded benchmarking tool
> Anyone?
Since the questions are quite general (e.g. the upper limits are usually
hardware bound, so the performance numbers vary based on that), maybe reading
these blog posts can give some insight:
https://www.nginx.com/blog/testing-the-performance-of-nginx-and-nginx-plus-web-servers/
and othe
> Hi everyone,
> I've installed NGINX in Docker and I'm trying to add the certificate but
> I get an internal error every time!!
>
> Error: Command failed: certbot certonly --config "/etc/letsencrypt.ini"
You are showing only that the certbot command fails, but nothing about the
reason or about nginx
> As some of you probably know we added kTLS support in nginx-1.21.4.
Before testing myself I wanted to quickly clarify: does this work in
combination with older cipher suites (as in, falling back from kTLS to the
standard non-kernel path) to support older clients which still use TLS
1.1/1.2, or are you locked
> And I can't make a location block for a mimetype, or using another specifier
> than regexes to filter out requests to certain 'file types'. Is there any
> other 'good' solution except for, on my origin adding rewrites from
> /something/data1/ to /data1/?
Why not just separate the locations?
> Do generic request handlers/response filters work for request to upstream
> servers? Do you now any documentation/example on how to implement such an
> handler/filter?
As a variant the Sub filter module does
http://nginx.org/en/docs/http/ngx_http_sub_module.html (the only requirement
was tha
> The intranet website 10.8.10.220:2021 needs login with a specific username
> and password. After I open http://my.domain.xyz: and then do the login,
> the URL always changes to the following form:
> http://my.domain.xyz:2021
>
> This will cause subsequent operations to fail. Is there any way to
> con
> I have found proxy_bind transparent for this situation, and tried it.
> But the socket server never gets the connection request.
> From the proxy server, timed-out logs are coming.
> How can the socket server behind the proxy get the real client ip address?
There are more steps/things you have to do to make IP transparency work
> And most importantly, how its capabilities compare to Varnish. From my
> searching and articles I have read so far, the general consensus seems to be
> that Varnish is more flexible and offers more abilities for dynamic page
> caching and cache purging. Is this indeed the case today?
Depends o
> How? The presentation does not cover that.
You shouldn't stick to a single youtube video.
> And it can't be a simple HTTP
It is.
Even the name HLS stands for "HTTP Live Streaming".
> , because as you said yourself, with HLS I would just end up with a cached
> playlist and bunch of useless
> How would that work?
> From the presentation on RTMP it looks like that nginx is just serving cached
> VODs instead of the actual stream.
RTMP and HLS are two different streaming technologies.
HLS just consists of a playlist (.m3u8) file (usually the player reads it by
continuously making http requests) plus the video segments it references.
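So serving or proxying HLS is ordinary HTTP; a sketch for serving segments written by an encoder (paths assumed):

location /hls/ {
    root /var/www;   # encoder writes /var/www/hls/stream.m3u8 and *.ts here
    types {
        application/vnd.apple.mpegurl m3u8;
        video/mp2t                    ts;
    }
    add_header Cache-Control no-cache;   # playlists change every few seconds
}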
> OK, after doing more research myself I'm pretty sure that nginx can't proxy
> HLS streams.
There is nothing special about HLS it is just simple http requests which nginx
can serve/proxy just fine.
What were the issues you have encountered?
rr
> I wanna use nginx and ffmpeg to serve chunks to clients without using or
> sending .m3u files to clients. How can I do this please?
> * ffmpeg copy streams in local ( in /home/STREAMS/channel/stream%d.ts ==>
> /home/STREAMS/channel/stream1.ts , /home/STREAMS/channel/stream2.ts ,
> /home/STR
> One option is to keep a copy of the file on disk (outside of the nginx
> cache). Then use something like try_files to read it, and have that response
> be cached by nginx. But then I end up with 2 of the files on disk (one in my
> try_files directory, and one in the nginx cache). I also need to manuall
> Thanks for the report, it looks like this change broke things:
>
> changeset: 7738:554c6ae25ffc
>
> The only fix I can think of is to rewrite the lingering close so it will
> happen after the request is logged.
Thanks Maxim for finding the cause.
I suppose that this is considered a bug then? If
Hello.
I have a strange issue where, for a POST request with any form data, nginx
after version 1.9.4 doesn't log the $ssl_protocol (or any other $ssl_*)
variable.
I have configured a custom access log:
log_format main '... $ssl_protocol $ssl_cipher $server_port';
A simple script (for example from
> I want to set up two nginx servers: the first as a reverse proxy that will
> direct - for starters - to the second nginx server, which will hold two
> simple static pages as a web server.
It's fully possible to have such a setup.
> Will such a solution be practical? What do you think?
Without k
> We have a server (real name substituted by mapserver.example.com) running
> nginx 1.18.0 on CentOS 7 with php-fpm listening on port 9001.
Does the fpm also listen on the IPv6 interface?
Check: ss -ntlr | grep 9001
If you see [::]:9001
Since you have fastcgi_pass localhost:9001; I assume at some
> I have a question about nginx internals. How does nginx ensure high
> throughput? I understand that nginx uses many parallel connections by using
> epoll. But what about processors? Is connection handling spread amongst
> multiple processors to handle any processing bottleneck?
If necessary y
> Is there a way to enable redirect from port 80 to 443 for both
> /etc/nginx/conf.d/onetest.conf and /etc/nginx/nginx.conf files. Any help
> will be highly appreciated.
You can have only one default_server per listen port.
It will be used if a client makes a request not matching any hostname.
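For the redirect itself, a minimal catch-all sketch (works in either file, as long as it's defined only once per port):

server {
    listen 80 default_server;
    server_name _;
    return 301 https://$host$request_uri;
}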
> Keep alive works for other REST services, but not working for Hasura.
> (Keep-Alive requests:0 Vs Keep-Alive requests:200 for other
> services). Is Keep-Alive anything to do with the response headers of Hasura
> or its POST request?
It could be that the service/backend doesn't support keepalive connections.
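If it does support them, keepalive towards the upstream also needs to be enabled explicitly on the nginx side; roughly (upstream name and address are assumptions):

upstream hasura {
    server 127.0.0.1:8080;
    keepalive 16;                         # idle connections kept per worker
}

server {
    location / {
        proxy_pass http://hasura;
        proxy_http_version 1.1;
        proxy_set_header Connection "";   # don't forward "Connection: close"
    }
}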
> Recently noted that proxying Hasura for https support reduces the
> speed by 7-50x! More information including tcpdump is available in
> https://github.com/hasura/graphql-engine/discussions/6154
Looking at the github discussion - you are comparing http vs https.
Since you are not
> As part of the security audit, I have set server_tokens off; in
> /etc/nginx/nginx.conf. Is there a way to hide Server: nginx, X-Powered-By and
> X-Generator?
>
> To hide the below HTTP headers
>
> Server: nginx
> X-Powered-By: PHP/7.2.34
> X-Generator: Drupal 8 (https://www.drupal.org)
Afaik server_tokens off; only removes the version string; the Server header
itself stays.
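The backend-generated headers can be hidden with stock nginx, though; which directive applies depends on how PHP is wired up:

# PHP via FastCGI:
fastcgi_hide_header X-Powered-By;
fastcgi_hide_header X-Generator;

# PHP behind a proxied backend:
proxy_hide_header X-Powered-By;
proxy_hide_header X-Generator;

Removing the Server header completely needs a third-party module such as headers-more.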
> I am curious at what point the cache exceeds the comfort zone of the
> design.
In my opinion it depends more on how important your cache is / how quickly
you can replace and repopulate it (how fast or loaded are your backends) /
whether your service can work without it - as in if you ha
> Please bear with me...
> It seems that I'm getting different results than I described earlier...
>
> In fact it is now working for the most part...
> The errors are limited to certain files in Chrome on the Mac, but not in
> Safari
> or Firefox.
You should clean the cache (or set it to never cache) for those files
> I was wrong...
>
> >This seems to work:
> >>rewrite ^/e/(.*) /$1 permanent;
>
> It only works for the first level...
> 'threedaystubble.com/Gallery.html' works but other links from that page that
> got deeper into the file structure do not!
What do you mean by "got deeper"? Can you give a sample
It is a bit unclear if you want only a single rewrite or are there multiple
different directory mappings/redirects.
> I tried a couple of ideas, but they didn't work, I thought this location
> directive
> inside a server block was best, but it didn't work.
>
> location = /e {
> return 31
> I need a HTTP proxy that can handle requests for a single upstream server,
> but also log request headers, raw request body, response headers and raw
> response body for each request. Preferably this should be logged to a
> separate daily logfile (with date-stamped filename), with timestamps, but
> --
> #Proxy server (Server1)
>
> # threedaystubble.com server
> server {
> listen 80;
> server_name www.threedaystubble.com threedaystubble.com;
> location / {
> proxy_pass http://192.168.3.5:80;
> }
> }
In this co
> > Now instead you want the content of the url
> > http://externalserver.com/before_body.txt?
>
> Yes, that's right.
Can you actually open the file on the external server -
http://externalserver.com/src/before_body.txt and does it have the content you
expect (without redirects)?
Note that si
> I have the following server in NGINX and it works fine. But, I am wondering:
> is it possible to add text to a response from a remote URL which hosts my
> before_body.txt and after_body.txt? Is there any way to tackle this? Is it
> possible at all?
According to documentation
(http://nginx.org/en
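The add_before_body/add_after_body directives (ngx_http_addition_module, built with --with-http_addition_module) take a URI rather than a URL, so the remote files would be proxied through local locations; a sketch (backend name is an assumption, hosts are from the question):

location / {
    proxy_pass http://backend;
    add_before_body /before_body.txt;
    add_after_body  /after_body.txt;
}

location = /before_body.txt {
    proxy_pass http://externalserver.com/src/before_body.txt;
}

location = /after_body.txt {
    proxy_pass http://externalserver.com/src/after_body.txt;
}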
> I'm going over some Web Server STIGs (referenced here:
> https://www.stigviewer.com/stig/web_server_security_requirements_guide
> /) to make sure my NGINX web server is configured to comply with those
> security requirements. One of the requirements is that "The web server must
> initiate session
I'm not very into Java but you might get more details if you add
-Djavax.net.debug=SSL,handshake or -Djavax.net.debug=all
The current error is not very explanatory (at least to me) and from nginx side
the client just closes connection.
You could test the nginx side with cipherscan
https://git
> Can Unit be used as a reverse proxy server like what we do with Nginx?
It can.
> I want to update my Nginx reverse proxy server dynamically (&
> automatically) without any downtime, whenever the underlying services
> scale up & down automatically.
In general nginx reloads configuration gracefully.
> I am looking for APIs on Nginx Opensource. To monitor, get status and
> dynamic configuration of nginx.conf files.
>
> Does the opensource version have it? Please confirm.
For the OSS version there is the stub status module:
http://nginx.org/en/docs/http/ngx_http_stub_status_module.html
There are se
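Enabling it is a one-liner in a location (the access restriction is an example):

location = /basic_status {
    stub_status;
    allow 127.0.0.1;   # keep it private
    deny  all;
}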
> return 301 return 301 https://$server_name$request_uri;
Obviously a typo; just a single "return 301" is needed.
rr
> I am running nginx version: nginx/1.16.1 on CentOS Linux release 7.8.2003
> (Core). When I hit https://marketplace.mydomain.com it works perfectly fine
> whereas when I hit http://marketplace.mydomain.com
> (port 80) does not get redirected to https://marketplace.mydomain.com (port
> 443). I
> the links you've sent me: I have tried to log in with my usual email and
> password and it's not correct; I tried to click remind, then it doesn't
> work
You can just send an email to nginx-requ...@nginx.org with the subject
'unsubscribe' (without quotes).
It should remove you from the list (it
> But when A is not available, it should send request to B.
> When A comes back, it should send requests to A.
You can add 'backup' for the B server and it will be used only when all others
(A) are down:
https://nginx.org/en/docs/http/ngx_http_upstream_module.html#server
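In config terms, a sketch (names are placeholders):

upstream app {
    server a.example.com;          # primary
    server b.example.com backup;   # used only while A is down
}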
rr
> it's dependent on openssh version and installed one is 1.0.1t
On openssl.
> which seem to support TLS1.2, but "nmap --script ssl-enum-ciphers -p 443
> sitename" says only SSLv3 and TLS1.0 are supported. So is there anything I
> can do to make nginx 0.7.65 recognize TLS1.2 and use it?
>
> Yeah
> Subject: Can someone explain me why "curl: (7) Failed to connect to
> 127.0.0.1 port 2000: Connection refused" ?
>
> Hi!,
>
> I do not understand why it says "curl: (7) Failed to connect to 127.0.0.1 port
> 2000: Connection refused" :
> curl -X POST -F 'first_name=pinco' -F 'last_name=pallo' -F
> server {
>     location / {
>         root /home/marco/webMatters/vueMatters/ggc/src/components/auth/weights;
>     }
> }
Since it's under /home, most likely nginx has no access to the directory.
Check the user under which nginx is running (probably nobody) and try to
check if you can read the file as that user.
> Can I compile nginx on Ubuntu 16.04 and reuse it on other deployments? Or do
> I need to compile every time ? Please advise.
As far as the hosts have all the shared libraries like openssl/pcre etc. (you
can check with 'ldd /path/to/nginx') there is no need to compile every time;
you can just copy the binary.
> Is there any way to tie the 'inactive' time to the cache-control header
> expiration time so that pages that are cached in a certain time-window are
> always kept and not deleted until after the header expiration time?
You can just set the inactive time longer than your possible maximum expiry.
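E.g. if nothing is cacheable for longer than 7 days (an assumption), something like:

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=cache:10m
                 max_size=10g inactive=7d use_temp_path=off;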
> What I need is a cache of data that is aware that the validity of its data is
> dependent on who - foo or bar - is retrieving it. In practice, this means that
> requests from both foo and bar may be responded to with cache data from
> the other's previous request, but most likely the cache will b
> The Nginx built with OpenSSL 1.1.1d does not generate the error logs. I don't
> know how I can fix this problem.
> Below are my Nginx build configuration and nginx.conf.
I'm using 1.1.1e but with the reverted EOF patch (so far I haven't seen any
issues, and it seems they are going to revert it anyway
> After using 1.1.1e, see also the commit where an explicit entry has been
> added.
> nginx just reports back what openssl passes; if this was unexpected
> (non-critical) nginx needs to be patched, if not this openssl workaround
> (10880) needs to be changed.
Any comment on this from any nginx developers?
> The user MUST BE ABLE to download the file from the article pages when
> LOGGED.
> If the user is NOT LOGGED, he cannot download the file, therefore even
> recovering the url, he must receive an error or any other type of block.
It's rather difficult to achieve that only with a webserver (as typ
> Hi.
> Here the result from tcpdump:
> from inside my network
> 192.168.1.10.60221 > 192.168.1.3.fujitsu-dtcns: UDP, length 107
> 192.168.1.3.fujitsu-dtcns > 192.168.1.10.60221: UDP, length 85
>
> From all agents from outside my network:
> any.public.ip.address.56916 > 151.1.210.45.fujitsu-dtcns:
> I get that the NGINX listen statement works on an individual port basis, so
> the equivalent of what's below in NGINX would at the very least require 300
> listen statements.
You can listen on a port range (see below).
> FYI I've tried referencing my own declared variables from within the u
> but my agents are still unable to send logs over port 1514 UDP
Well at least the nginx setup seems in working order.
Now do you see any more detailed messages on the agents (like extended ip/port
info / connection error)?
Also you could inspect the network traffic to see if the centos box re
> The agents in my local network(192.x.x.x)) instead, are able to authenticate
> over port 1515 TCP, but not to send logs over 1514 UDP. The agents log said
> that they are unable to connect over that port.
>
> If I temporally change the port 1514 UDP to 1514 TCP in my HIDS nodes, and
> make the s
> Where is the Bionic repo?
>
> If you are referring to the default repository for all things Linux Mint,
> there
> was only Nginx 1.14.
I mean the nginx bionic repo (here you can see the available Ubuntu versions:
http://nginx.org/packages/mainline/ubuntu/dists/ )
But it seems you have already
> E: The repository 'http://nginx.org/packages/ubuntu tricia Release' does not
> have a Release file.
> N: Updating from such a repository can't be done securely, and is therefore
> disabled by default.
> -
> Are there any other instructions available to get Nginx 1.17 downloaded?
You should p
> I did follow your steps. My nginx.conf file is
> https://paste.centos.org/view/ae22889e when I run the curl call, I am still
> receiving HTTP 200 OK response instead of HTTP 444 (No Response) as per the
> below output
If you've just called config reload then most likely your nginx is still using
the old configuration.
> So either place it as first or add listen 443 default_server;
By first I mean the "catch all" server { server_name _; .. } block.
rr
> I have added the below server block https://paste.centos.org/view/0c6f3195
>
> It is still not working. I look forward to hearing from you and your help is
> highly appreciated. Thanks in Advance.
If you don't use default_server for the catch-all server{} block then you
should place it as the first server block.
> I have added the below server block in /etc/nginx/nginx.conf
> (https://paste.centos.org/view/raw/d5e90b98)
>
> server {
>     listen 80;
>     server_name _;
>     return 444;
> }
>
> When I try to run the below curl call, I am still receiving a 200 OK response.
> #curl --verbose --h
> Is there a way to prevent Arbitrary HTTP Host header in Nginx? Penetration
> test has reported accepting arbitrary host headers. Thanks in Advance and I
> look forward to hearing from you.
You can always define a "catch all" server block with:
server {
    listen 80 default_server;
    server_name _;
    return 444;
}
> From the hosts outside i've no connection problem, but from inside they are
> unable to connect to the port. No firewall is enabled on the Nginx LB (Centos
> 7 machine by the way) and Selinux is disabled.
By "from inside" you mean other hosts in LAN or the same centos machine?
If first then it's
> Now while accessing my VM ip http://x.y.z.a, I am getting "403 Forbidden"
> error in the browser. However gitlab is still working. How to get both sites
> working, listening on port 80 but with different location contexts?
First of all you should check the error log to see why the 403 is returned.
> Hi!,
> I do not understand what should I modify.
The problem is your backend application (I assume a node app) which listens
on port 8080. While nginx is doing everything right, the app responds and
constructs its urls using an internal ip and/or 'localhost'.
Depending on what the app uses for
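The usual nginx-side knobs for this look roughly like the following (the backend address is an assumption):

location / {
    proxy_pass http://127.0.0.1:8080;
    proxy_set_header Host $host;              # let the app see the public hostname
    proxy_redirect http://localhost:8080/ /;  # rewrite Location headers it still gets wrong
}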
> While the AJAX requests do not know anything about the NGINX proxy, they
> do not know anything about the “webui” path. So I need to find a solution
> to manipulate this javascript code:
If the javascript files are proxied the same way (from the origin server) as
the application, you can use the sub filter on them as well.
>
> if ($args ~ "^p=(\d+)") {
>     set $page $1;
>     set $args "";
>     rewrite ^.*$ /p/$page last;
>     break;
> }
>
> I knew there'd be a simpler way, and due to the time
> I struggled with using the alias directive because I (incorrectly) assumed
> that it was relative to root since all other parts of my nginx configs are.
> This is not mentioned in the documentation, it'd be nice to see it there.
Well it's not directly worded but you can (should) see from the
> Hello,
>
> is there a way to check if a requested resource is in the cache?
>
> For example, “if” has the option “-f”, which could be used to check if a
> static
> file is present.
>
> Is there something similar for a cached resource?
Depending on what you want to achieve you could check $upstream_cache_status.
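For quick visibility you could also expose it in a response header:

add_header X-Cache-Status $upstream_cache_status;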
> proxy_store seems to be a much simpler alternative to “cache" pseudo-static
> resources.
>
> Is there anything non-obvious that speaks against the use of proxy_store?
Depends on how you look at "much simpler".
proxy_store doesn't have a cache manager so space limitation/cleaning is up to
you.
> The problem is, whatever URL I put in the browser, it redirects to
> https://trisect.uk
> server_name trisect.uk *.trisect.uk;
> return 301 https://$server_name$request_uri; }
For that I don't think you can use $server_name because it will always be the
primary (first) server name, i.e. trisect.uk; use $host instead.
> I am trying to reduce the transfer size of my website. Although I apparently
> have enabled gzip compression, it does not show as enabled in GTmetrix
> testing.
For testing purposes just putting:
gzip on;
gzip_types text/html text/plain text/xml text/css application/javascript
application/json;
> Here is the situation posted in DO community.
> https://www.digitalocean.com/community/questions/enabling-gzip-compression-guidance-needed
> Thanks for any help.
Well you are testing in the wrong way. First of all:
curl -H "Accept-Encoding: gzip" -I http://localhost/test.jpg
HTTP/1.1 301 Moved Permanently
> I will search for this. Not sure how to add this info to my logs, or
> whether it logs failures too?
$ssl_client_verify - contains the verification status
You have to define a custom log_format
(http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format )
For example:
log_format cli
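A complete format might look roughly like this (format name and field choice are illustrative):

log_format client_ssl '$remote_addr [$time_local] "$request" $status '
                      '$ssl_client_verify "$ssl_client_s_dn"';
access_log /var/log/nginx/client_ssl.log client_ssl;

$ssl_client_verify is "SUCCESS", "FAILED:reason" or "NONE", so failed and absent certificates get logged too.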
> When this is all done, and I import the p12 client certificate on my Windows
> PCs (tested 2) Chrome and Firefox show me the "400 Bad Request\n No required
> SSL certificate was sent". The very strange thing is IE11 on one of the two
> PCs, actually prompts me to use my newly-installed cert t
> The problem is comming when I try to test both Django sites with ssllabs.com
>
> > Certificate #2: RSA 2048 bits (SHA256withRSA) No SNI
> The error that I see is "Alternative names wpexample.org
> www.wpexample.org
> MISMATCH"
It is normal for clients which don't support SNI (server name indication).