Try activating SSL without the plugin. Change the URL in the WordPress settings.
On Tue, 9 Feb 2021, 5:32 PM Rainer Duffner wrote:
> It’s set up this way, because haproxy can’t really do vhosts and sometimes
> you need to limit access per vhost.
>
> OTOH, haproxy can do restrictions on a per-url basis
It’s set up this way, because haproxy can’t really do vhosts and sometimes you
need to limit access per vhost.
OTOH, haproxy can do restrictions on a per-url basis much better (IMO) than
Nginx.
There are up to several hundred vhosts there, and sometimes you want to limit
stuff on any one of them.
Hi, normally when I get an infinite loop with SSL, it's usually because of
redirection of http to https. Sometimes the front proxy (Cloudflare or haproxy)
is expecting plain http traffic and it gets https traffic, or vice versa.
Also check your WordPress settings and its URL. Try changing it.
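When that is the cause, the usual fix is to forward the original scheme to the backend. A minimal sketch (assuming haproxy terminates SSL and talks plain HTTP to nginx; the port and socket path are placeholders):
[code]
server {
    listen 8080;  # plain-HTTP backend behind the SSL-terminating proxy

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/run/php/php-fpm.sock;
        # tell the application the original client connection was HTTPS,
        # so it stops redirecting http -> https in a loop
        fastcgi_param HTTPS on;
        fastcgi_param HTTP_X_FORWARDED_PROTO https;
    }
}
[/code]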
And why are
Hi,
I have an interesting problem.
I have apache behind Nginx behind haproxy.
SSL is terminated with haproxy (because haproxy can load all certificates from
a single directory, and because some rate-limiting stuff is easier with
haproxy).
This makes using Let’s Encrypt easier.
Sometimes, I wa
Hello!
On Tue, Apr 24, 2018 at 01:06:48PM -0400, c0nw0nk wrote:
> As it says on the Nginx docs for limit_req
>
> One megabyte zone can keep about 16 thousand 64-byte states or about 8
> thousand 128-byte states.
>
>
> What can a 100m zone for the fastcgi_cache store ?
As it says on the Nginx docs for limit_req:
One megabyte zone can keep about 16 thousand 64-byte states or about 8
thousand 128-byte states.
What can a 100m zone for the fastcgi_cache store?
Depending on the length of the fastcgi_cache_key and how many variables it
contains, I am sure that could affect it, but it would be nice to hav
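For the fastcgi_cache question, a rough worked example (this assumes the figure from the proxy_cache_path docs of about 8 thousand keys per megabyte of keys_zone; only the MD5 of the key plus metadata lives in the zone, so the key length itself barely matters; the path and zone name below are illustrative):
[code]
# keys_zone=FCGI:100m  ->  roughly 100 * 8,000 = ~800,000 cached entries;
# the size of the cached data itself is bounded by max_size, not the zone
fastcgi_cache_path /var/cache/nginx/fcgi levels=1:2
                   keys_zone=FCGI:100m max_size=10g inactive=60m;
[/code]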
quests
So, create a new pool file with the right user:group ... and send the specific
purge request.
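On the nginx side, that could look like this sketch (the location and socket path are made up for illustration): route only the purge script to a php-fpm pool whose user:group matches the owner of the cache files.
[code]
# hypothetical purge location, handled by a php-fpm pool running as the
# same user:group that owns the fastcgi_cache directory
location = /cache-purge.php {
    allow 127.0.0.1;
    deny all;
    include fastcgi_params;
    fastcgi_pass unix:/run/php/purge-pool.sock;
}
[/code]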
-----Original Message-----
From: nginx [mailto:nginx-boun...@nginx.org] On Behalf Of Reinis Rozitis
Sent: Thursday, March 9, 2017 8:24 PM
To: nginx@nginx.org
Subject: RE: RE: Fastcgi_cache permi
> thanks for the reply. The use case that I have is when php-fpm is running as a
> user different than the nginx one. In this case the permissions being set as
> 0700
> basically deny any manipulation of the cached files from php scripts.
> Everytime
> you try something like this you get permissi
Hi Maxim,
Thanks for the reply. The use case that I have is when php-fpm is running as
a user different from the nginx one. In this case, the permissions being set
as 0700 basically deny any manipulation of the cached files from PHP
scripts. Every time you try something like this you get permission
runs as www-data) to delete Nginx cache files).
-----Original Message-----
From: nginx [mailto:nginx-boun...@nginx.org] On Behalf Of maznislav
Sent: Thursday, March 9, 2017 5:06 PM
To: nginx@nginx.org
Subject: Fastcgi_cache permissions
Hello, I was searching for an answer for this question quite a
Hello, I was searching for an answer to this question for quite a bit, but
unfortunately I was not able to find one, so any help is much appreciated.
The issue is the following: I have enabled fastcgi_cache for my server and
I have noticed that the cache has very restrictive permissions, 700 to be
Just for the record: this topic contains 2 suggested solutions:
1) storing gzip compressed and uncompressed HTML separately and have Nginx
determine gzip support instead of the client
2) storing gzip permanently and use Nginx gunzip module to gunzip HTML for
browsers without gzip support
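Solution 1 can be sketched as follows (the variable name is made up; the point is that nginx normalizes Accept-Encoding itself, so at most two variants per URL are ever cached):
[code]
map $http_accept_encoding $gzip_key {
    default    "";
    "~*gzip"   "gzip";
}

# ignore the backend's Vary header so it cannot multiply cache variants
fastcgi_ignore_headers Vary;
fastcgi_cache_key "$scheme$request_method$host$request_uri$gzip_key";
[/code]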
Posted a
Hi Lucas,
Thanks a lot for the information! Hopefully it will help many others that
find the topic via Google as there was almost no information about it
available.
Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,270604,270665#msg-270665
Hello,
It's not strange behavior, it's expected.
What happens is that even though the key is the same - the actual
returned content *might* be different, e.g. as an example:
If your origin returns Vary: accept-encoding
Nginx will cache based on this - so if accept-encoding differs it means
t
Hi Lucas,
Thanks a lot for the suggestion. We were already using that solution but a
strange behavior occurred (see opening post). The first request uses an
expected MD5 hash of the KEY, and the client will keep using that hash (the
MISS/HIT header is accurate). However, requests from other client
Well - then put fastcgi_ignore_headers Vary, make your map determine if
the client supports gzip or not, then you'll have two entries of
everything, one gzipped and one not gzipped.
I'm not sure how much traffic we're talking about when it's about 'high
traffic' - you'd probably want to run your pr
Hi!
It sounds like a good solution to improve the performance, however, I just
read the following post by Jake Archibald (Google Chrome developer).
"Yeah, ~10% of BBC visitors don’t support gzip compression. It was higher
during the day (15-20%) but lower in the evenings and weekends (<10%).
Pret
What you could do (I basically asked the same question a week ago) is,
whenever you fastcgi_pass, enforce Accept-Encoding: gzip -
meaning you'll always request gzipped content from your backend - then
you can enable the gunzip directive by using "gunzip on;".
This means in case a clien
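Put together, that suggestion might look like this (a sketch; the socket and zone name are placeholders, and it assumes nginx was built with the gunzip module):
[code]
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass unix:/run/php/php-fpm.sock;
    # always ask the backend for gzip, so only the compressed
    # variant ends up in the cache
    fastcgi_param HTTP_ACCEPT_ENCODING gzip;
    fastcgi_cache FCGI;
    # decompress on the fly for clients without gzip support
    gunzip on;
}
[/code]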
Hi *B. R.*!
Thanks a lot for the reply and information! The KEY however, does not
contain different data from http_accept_encoding. When viewing the contents
of the cache file it contains the exact same KEY for both MD5 hashes. Also,
it does not matter what browser is used for the first request. F
On Oct 27, 2016 at 8:41 PM, seo010 wrote:
> Hi!
>
> I was wondering if anyone has an idea to serve pre-compressed (gzip) HTML
> using proxy_cache / fastcgi_cache.
>
> I tried a solution with a map of http_accept_encoding as part of the
> fastcgi_cache_key with gzip compressed o
Hi!
I was wondering if anyone has an idea to serve pre-compressed (gzip) HTML
using proxy_cache / fastcgi_cache.
I tried a solution with a map of http_accept_encoding as part of the
fastcgi_cache_key with gzip compressed output from the script, but it
resulted into strange behavior (the MD5 hash
install and installation directory and
separate database.
I have configured nginx with the fastcgi_cache module and it works - but
only for the very first website I set up on the server. Every subsequent
website gets nothing cached.
Running nginx/php7 on Ubuntu Server 16.04.
Here is my nginx/nginx.conf
with about 2Gbit/s of outgoing traffic
> responses) there is an issue with the fastcgi_cache files not being freed.
> lsof shows a huge amount of files as (deleted) but the space it not being
> freed. Eventually the entire partition fills up like this.
>
> The keys
Hello,
I'm using nginx 1.10.0 and the nginx_fastcgi_cache option. I've noticed that
with a high amount of requests per second (I'm not sure when it occurs
exactly, but we have ~2500RPS with about 2Gbit/s of outgoing traffic
responses) there is an issue with the fastcgi_cache files
Hi,
Actually, I use fastcgi_cache / proxy_cache but, sometimes, I have a problem
with how this cache is read... causing confusion for some sites when opening
the mobile or desktop version.
In sites/systems, there is a check for mobile detection, commonly like
http://detectmobilebrowsers.com
but, for unknow
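A common way to keep mobile and desktop variants apart (a sketch; the user-agent regex is illustrative, not a complete detector) is to classify the device with a map and include the class in the cache key:
[code]
map $http_user_agent $device_class {
    default                         "desktop";
    "~*android|iphone|ipod|mobile"  "mobile";
}

# separate cache entries for the mobile and desktop versions of each URL
fastcgi_cache_key "$scheme$request_method$host$request_uri$device_class";
[/code]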
Some curl examples:
https://wordpress.org/support/topic/if-modified-since-request-header-can-cause-a-cache-control-negative-max-age
It all depends on what you get against what you expected.
Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,257078,257109#msg-257109
> Have you tried this with curl -i to see if it's not a browser cache issue?
>
> Sounds like a cached file with an expire date which is still valid against
> your expire date from cache.
I tried yesterday, but I don't really see the point... What should I look
for?
I set a header to see if cache
Have you tried this with curl -i to see if it's not a browser cache issue?
Sounds like a cached file with an expire date which is still valid against
your expire date from cache.
Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,257078,257079#msg-257079
Hey guys,
I get a strange issue when I activate fastcgi_cache: some images don't
load on the first load of the page, but when hitting refresh they load.
Here's my configuration:
location ~ \.php$ {
# FastCGI optimizing
fastcgi_buffers 4 256k;
fastcgi_buffer
Hello!
On Mon, Mar 02, 2015 at 09:11:11AM -0500, mastercan wrote:
> I've had 2 cases with status code 500 now since setting error log to debug
> level:
>
> The error msg: "epoll_wait() reported that client prematurely closed
> connection while sending request to upstream"
It's expected to be 49
I've had 2 cases with status code 500 now since setting error log to debug
level:
The error msg: "epoll_wait() reported that client prematurely closed
connection while sending request to upstream"
It's interesting to note that:
If a "normal" file (no caching involved) is requested and the client
Maxim Dounin Wrote:
> This makes me think that it is just a cached 500 response from
> your backend then. If in doubt, you can obtain details using
> debug log, see http://wiki.nginx.org/Debugging.
>
I also considered that, but then I'd need to have at least hundreds of 500
status codes since
Hello!
On Mon, Mar 02, 2015 at 07:50:46AM -0500, mastercan wrote:
> Maxim Dounin Wrote:
> ---
> > Hello!
> >
> > Try looking into the error log. When nginx returns 500, it used to
> > complain to the error log explaining the reason.
> >
>
>
Maxim Dounin Wrote:
---
> Hello!
>
> Try looking into the error log. When nginx returns 500, it used to
> complain to the error log explaining the reason.
>
Unfortunately the error log for that vhost does not reveal anything at the
specific t
Hello!
On Mon, Mar 02, 2015 at 06:11:24AM -0500, mastercan wrote:
> Hello,
>
> Nginx (all versions since September 2014, but at least 1.7.9, 1.7.10)
> sometimes returns HTTP status code 500, when serving pages from
> fastcgi_cache.
>
> Each time this happens, following
Hello,
Nginx (all versions since September 2014, but at least 1.7.9, 1.7.10)
sometimes returns HTTP status code 500, when serving pages from
fastcgi_cache.
Each time this happens, following conditions hold true:
*) $upstream_cache_status = HIT (so we don't even hit php-fpm)
*) $body_bytes
Hello!
On Mon, Jun 23, 2014 at 01:08:33PM -0400, ariel_esp wrote:
> Hi, I already tried this... but... it does not work =/
> When on the page, I do "shift+F5", the page is re-read "EXPIRED"... OK,
> but on entering the page, or doing F5... page = HIT cache...
> In these specific pages, I always put PHP head
cache", so I always want to get a new page from the backend...
understand?
fastcgi_cache microcache;
fastcgi_cache_key
$scheme$request_method$host$request_uri$http_x_custom_header;
fastcgi_cache_valid any 1m;
proxy_cache_use_stale error timeout invalid_header updating
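One way to get the "always fetch a new page from the backend" behavior (a sketch, assuming the client marks those requests with a Cache-Control: no-cache request header) is to map that header onto the bypass directives:
[code]
map $http_cache_control $skip_cache {
    default        0;
    "~*no-cache"   1;
}

fastcgi_cache_bypass $skip_cache;  # go to the backend instead of the cache
fastcgi_no_cache     $skip_cache;  # and do not store the fresh response
[/code]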
Hello!
On Mon, Jun 23, 2014 at 10:56:17AM -0400, ariel_esp wrote:
> Hi,
> I am trying setup fastcgi_cache.
> Working fine BUT I need bypass some pages... when theses pages have
> header "no-cache" but I dont know how to do this...
> The rules for bypass using ur
Hi,
I am trying to set up fastcgi_cache.
It is working fine, BUT I need to bypass some pages... when these pages have
the header "no-cache", but I don't know how to do this...
The rules for bypassing by URL work fine, like this:
[code]
if ($request_uri ~*
"(/wp-admin/|/xmlrpc.php|/wp-
}
}
location /PURGE/ {
allow 127.0.0.1;
fastcgi_cache_purge MYAPP;
}
location ~ \.php$ {
fastcgi_cache MYAPP;
fastcgi_cache_methods GET HEAD;
fastcgi_cache_valid 200 5m;
Hi,
There is a sample Varnish config file for MediaWiki that purges the cache on
updates:
#
http://www.mediawiki.org/wiki/Manual:Varnish_caching#Configuring_Varnish_3.x
#
Can the same settings be used in LocalSettings.php and used with the
fastcgi_cache
with the aid of
Hi,
I use nginx + php-fpm (via FastCGI), and the needed responses from the PHP
server are put into the cache. I have one thought: it could be better to send
cached pages to clients with a 304 code instead of 200.
So we must know the time when the response was cached (something like a
variable) and send a 304 res
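For what it's worth, nginx's built-in not-modified handling can already answer conditional requests, provided the cached response carries a Last-Modified header. A sketch (socket and zone names are placeholders, and it assumes the backend actually emits Last-Modified):
[code]
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass unix:/run/php/php-fpm.sock;
    fastcgi_cache FCGI;
    # with a Last-Modified header stored alongside the cached response,
    # a client sending a matching If-Modified-Since gets a 304, not the body
    if_modified_since exact;
}
[/code]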
I just looked a little bit more into the topic, and I believe it is not
possible.
I would have to put something in front of nginx (another nginx) or Varnish -
but that is a shame, since nginx's fastcgi_cache works so well.
Best regards.
Best regards.
Posted at Nginx Forum:
http://forum.nginx.org/read.php
serving
content that HITs the fastcgi_cache.
It seems that ngx_pagespeed has to do its thing on the rendered output HTML
every time a request is made to the page. I thought fastcgi_cache cached
content was already the ngx_pagespeed-optimized version. It seems like
ngx_pagespeed runs in front of
As promised here are my stats on vmware 4 vcpus siege -c50 -b -t240s -i
'http://127.0.0.1/test.html'
gzip off, pagespeed off.
Transactions: 898633 hits
Availability: 100.00 %
Elapsed time: 239.55 secs
Data transferred: 39087.92 MB
Response
Hello!
On Fri, Oct 04, 2013 at 12:52:28PM -0400, ddutra wrote:
> Maxim,
> Thank you again.
>
> About my tests, FYI I had httpauth turned off for my tests.
>
> I think you nailed the problem.
>
> This is some new information for me.
>
> So for production I have a standard website which is php
Maxim,
Thank you again.
About my tests, FYI I had httpauth turned off for my tests.
I think you nailed the problem.
This is some new information for me.
So for production I have a standard website which is php being cached by
fastcgi cache. All static assets are served by nginx, so gzip_static
Hello!
On Fri, Oct 04, 2013 at 09:43:05AM -0400, ddutra wrote:
> Hello Maxim,
> Thanks again for your considerations and help.
>
> My first siege tests against the ec2 m1.small production server was done
> using a Dell T410 with 4CPUS x 2.4 (Xeon E5620). It was after your
> considerations about
Well, I just looked at the results again, and it seems my throughput (MB per
s) is not very far from yours.
My bad.
So the results are not that bad, right? What do you think?
Best regards.
Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,243412,243431#msg-243431
Hello Maxim,
Thanks again for your considerations and help.
My first siege tests against the ec2 m1.small production server was done
using a Dell T410 with 4CPUS x 2.4 (Xeon E5620). It was after your
considerations about 127.0.0.1 why I did the siege from the same server that
is running nginx (pro
> instead used to help.
>
> Alright, so you are saying my static html serving stats are bad, that means
> the gap between serving static html from disk and serving cached version
> (fastcgi_cache) from tmpfs is even bigger?
Yes. Numbers are _very_ low. In a virtual machine on my
Maxim,
Thanks for your help.
Alright, so you are saying my static html serving stats are bad, that means
the gap between serving static html from disk and serving cached version
(fastcgi_cache) from tmpfs is even bigger?
Anyways,
Hello!
On Thu, Oct 03, 2013 at 12:34:20PM -0400, ddutra wrote:
[...]
> Scenario three - The same page, saved as .html and server by nginx
>
> Transactions:1799 hits
> Availability: 100.00 %
> Elapsed time: 120.00 secs
> Data transferred:
via fastcgi_cache to TMPFS (MEMORY)
SIEGE -c 40 -b -t120s
'http://www.joaodedeus.com.br/quero-visitar/abadiania-go'
Transactions:1403 hits
Availability: 100.00 %
Elapsed time: 119.46 secs
Data transferred: 14.80 MB
As mentioned before, I'm tweaking pixabay's version of handling the new
Google Image search traffic killer by making their trap URLs more
cacheable.
Img tags in the HTML will have ?i appended to the source, and those "?i"
are removed for bots. I thought I could use nginx's http sub module to strip
th