authentication puzzle for
those who have seen Cloudflare's "I'm Under Attack" mode! You know what this
will do :) You no longer need third-party services like Cloudflare; you
can now protect your own Nginx servers with it.
https://github.com/C0nw0nk/Nginx-Lua-Anti-DDoS
I was inspired by Cloudflare
So with the following.
log_format qs '$remote_addr $args';
server {
server_name NAME;
access_log /path/to/log qs;
location / {
root /path/to/root;
}
}
If I go to the URL
/index.php?query1=param1&query2=param2
The access.log file shows
quer
So my issue is mostly directed towards Yichun Zhang (agentzh), if he is still
active here. I hope so.
My problem is that I am trying to increase my cache HIT ratio by removing
fake / unwanted arguments from the URL and ordering the arguments
alphabetically (same order every time) for a higher
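A sketch of how that could look, assuming the lua-nginx-module (as used elsewhere in this thread); the location name is illustrative. The idea is to rebuild $args in sorted order before the cache key is computed:

```nginx
location /cached/ {
    # Illustrative: sort query arguments so ?a=1&b=2 and ?b=2&a=1
    # produce the same cache key. Repeated arguments (which arrive
    # as Lua tables) would need extra handling.
    rewrite_by_lua_block {
        local args = ngx.req.get_uri_args()
        local names = {}
        for name, _ in pairs(args) do
            names[#names + 1] = name
        end
        table.sort(names)
        local parts = {}
        for _, name in ipairs(names) do
            parts[#parts + 1] = name .. "=" .. tostring(args[name])
        end
        ngx.req.set_uri_args(table.concat(parts, "&"))
    }
    fastcgi_cache_key "$scheme$host$uri$args";
    # ... fastcgi_pass and the rest of the cache config ...
}
```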
itpp2012 Wrote:
---
> Have a look here http://nginx-win.ecsds.eu/
Best Nginx for windows builds around :) love itpp2012's work.
He also fixed the concurrent connection limitations and continuously adds
modules like Lua for Nginx into his builds
ay something corresponding to "Thu Jan 1 00:00:00 UTC 1970".
Should it look like yours, or will Nginx read and understand it in the format
PHP is outputting it as?
Francis Daly Wrote:
---
> On Sat, May 12, 2018 at 12:05:51AM -0400, c0n
You know you can DoS sites into Cache MISS by switching up URL params and
arguments.
Examples :
HIT :
index.php?var1=one&var2=two
MISS :
index.php?var2=two&var1=one
MISS :
index.php?random=1
index.php?random=2
index.php?random=3
etc etc
Inserting random arguments into URLs will cause cache mi
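One common counter to this (a sketch; var1/var2 stand in for whatever arguments the application really uses) is to build the cache key only from known arguments, so random extras no longer change it:

```nginx
# Unknown arguments like ?random=123 never reach the key,
# so they can no longer force a cache MISS.
fastcgi_cache_key "$scheme$host$uri|var1=$arg_var1|var2=$arg_var2";
```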
So it says this on the docs :
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_valid
The “X-Accel-Expires” header field sets caching time of a response in
seconds. The zero value disables caching for a response. If the value starts
with the @ prefix, it sets an absolute time in
Sergey Kandaurov Wrote:
---
> > On 11 May 2018, at 04:30, c0nw0nk
> wrote:
> >
> > So in order for my web application to tell Nginx not to cache a page
> what
> > header response should I be sending ?
> >
So in order for my web application to tell Nginx not to cache a page, what
response header should I be sending?
X-Accel-Expires: 0
X-Accel-Expires: Off
I read here it should be "OFF"
https://www.nginx.com/resources/wiki/start/topics/examples/x-accel/#x-accel-expires
But it does not mention if nu
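For what it's worth, the numeric form is the one the proxy/fastcgi module docs describe. A minimal sketch of the nginx side (zone and upstream names invented):

```nginx
location /app/ {
    proxy_pass http://backend;     # placeholder upstream
    proxy_cache my_zone;           # placeholder keys_zone
    proxy_cache_valid 200 10m;
    # A backend response carrying "X-Accel-Expires: 0" overrides
    # proxy_cache_valid and is not cached at all.
}
```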
http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_cache_background_update
How can I switch between an On and an Off version of this function within an
Nginx server {
set $var 1;
if ($var) {
fastcgi_cache_background_update On;
}
Is there a way to do this even with Nginx + Lua i
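fastcgi_cache_background_update is not allowed inside an if block. One hedged workaround, without Lua, is two internal locations, one with the directive on and one with it off, and a rewrite choosing between them (location names are invented, and SCRIPT_FILENAME would need adjusting for the added prefix):

```nginx
location ~ \.php$ {
    # Route to one of two internal variants based on $var.
    if ($var) {
        rewrite ^ /bg-update$uri last;
    }
    rewrite ^ /no-bg-update$uri last;
}
location /bg-update/ {
    internal;
    fastcgi_cache_background_update on;
    fastcgi_pass 127.0.0.1:9000;
    include fastcgi_params;
}
location /no-bg-update/ {
    internal;
    fastcgi_cache_background_update off;
    fastcgi_pass 127.0.0.1:9000;
    include fastcgi_params;
}
```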
Maxim Dounin Wrote:
---
> Hello!
>
> On Tue, Apr 24, 2018 at 01:06:48PM -0400, c0nw0nk wrote:
>
> > As it says on the Nginx docs for limit_req
> >
> > One megabyte zone can keep about 16 thousand 64-byte states
As it says on the Nginx docs for limit_req
One megabyte zone can keep about 16 thousand 64-byte states or about 8
thousand 128-byte states.
What can a 100m zone for the fastcgi_cache store?
Depending on the length of the fastcgi_cache_key and how many variables it
contains, I am sure it could a
Igor Sysoev Wrote:
---
> > On 18 Apr 2018, at 01:35, c0nw0nk
> wrote:
> >
> > Thank you for the help :)
> >
> > A new dilemma has occurred from this.
> >
> > I add a location like so.
> &g
Thank you for the help :)
A new dilemma has occurred from this.
I add a location like so.
location ^~/media/files/ {
add_header X-Location-Order First;
}
location ~ \.mp4$ {
add_header X-Location-MP4 Served-from-MP4-location;
}
location ~*
\.(ico|png|jpg|jpeg|gif|flv|mp4|avi|m4v|mov|divx|webm|og
So I have a location setup like this.
location /media/files/ {
add_header X-Location-Order First;
}
location ~*
\.(ico|png|jpg|jpeg|gif|flv|mp4|avi|m4v|mov|divx|webm|ogg|mp3|mpeg|mpg|swf|css|js)$
{
add_header X-Location-Order Second;
}
When I access URL : domain_name_dot_com/media/files/image.jp
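That behaviour is expected: after the longest matching prefix location is remembered, nginx still scans the regex locations and lets a matching regex win, unless the prefix was declared with ^~. So the fix is:

```nginx
location ^~ /media/files/ {
    # ^~ : if this prefix matches, skip the regex location scan,
    # so /media/files/image.jpg gets X-Location-Order: First.
    add_header X-Location-Order First;
}
```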
So when dealing with mp4 etc. video streams, what is the best speed to send /
transfer files to people that does not cause latency delays / lagging on
the video, etc.?
My current :
location /video/ {
mp4;
limit_rate_after 1m;
limit_rate 1m;
}
On other sites when I download / watc
So on each server you can add to your listen directive.
listen 8181 default bind reuseport;
Cloudflare use it and posted it on their blog and GitHub here (benchmark
stats included).
GitHub :
https://github.com/cloudflare/cloudflare-blog/tree/master/2017-10-accept-balancing
Cloudflare Blog :
htt
garyc Wrote:
---
> Please ignore the last message, having learned a bit more about
> probing the file system we can now see that it is PHP that is caching
> the file to the system default location (hence rootfs) a small change
> to the PHP configu
blason Wrote:
---
> Hi Guys,
>
> We have multiple webservers behind Nginx Reverse Proxy and at one of
> the server we have discovered Content spoofing, the vulnerability is
> patched on Apache but also needs to be patched on the Nginx server.
>
> I
why don't you use
$uri $is_args $args
This will build the URL like:
index.php?argument=value&moreargs=morevalue
$request_uri will always output the full URL, not individual segments of
it.
If you want the first part of the URL only, just use $uri on its own.
http://nginx.org/en/docs/http/n
Like I said before
c0nw0nk Wrote:
---
> Update your web application (PHP, for example) first, then however many
> hours later, when all caches for your web application have cleared,
> restart your Nginx so it only accepts secure links.
So I was looking at an upstream that has been flooded from multiple locations,
and read that you can create what is called a blackhole within the upstream,
which helps with the DDoS scenario.
Here is my upstream config:
upstream web_rack {
server 127.0.0.1:9000 weight=1 fail_timeout=4;
server 127.0.
Update your web application (PHP, for example) first, then however many hours
later, when all caches for your web application have cleared, restart your
Nginx so it only accepts secure links.
Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,275668,275669#msg-275669
Yes, but characters in args like = & and ? are allowed, and it's when they
insert more than one occurrence of them that nginx accepts them and they
bypass any caches that you have.
&argument=value | Cache : HIT
&&&argument===value | Cache : MISS
And when they want to DoS you they will do something like
So I have been using Lua to iron out a few dilemmas and problems lately.
Does anyone know what characters Nginx accepts inside URLs?
I am achieving a higher cache HIT ratio by modifying the URL's with Lua but
it also helps in preventing unwanted forms of DoS.
Here is my code :
local function fi
Couldn't you use
max_ranges 0;
to disable byte-range support completely?
Also, won't setting max_ranges 1; break pseudo
streaming in HTML5 video apps etc.?
Here is my config :
http {
limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;
limit_conn_zone $binary_remote_addr zone=addr:10m;
server {
location /secured/ {
auth_basic "secured area";
auth_basic_user_file conf/htpasswd;
limit_req zone=one burst=5;
limit_conn addr 1;
}
}
My q
no
> apparent reason. Case in point, I had a referral from the al Aqsa
> Martyrs Brigade. Terrorists! And numerous porn sites, all
> irrelevant. So Naxsi alone isn't sufficient.
>
> Original Message
> From: c0nw0nk
> Sent: Saturday, May 20, 2017 3:36 AM
> To: nginx
I take it you don't use a WAF of any kind. I also think you should add it to
a map at least, instead of using if.
The WAF I use for these same rules is found here.
https://github.com/nbs-system/naxsi
The rules for wordpress and other content management systems are found
here.
http://spike.nginx-g
Use Nginx's built-in secure link module; the link you provided is being
generated and served by PHP. ".com/vfm-admin/vfm-downloader.php?q="
Nginx's secure link module will resume downloads and support pseudo
streaming etc., but you will find it is PHP that does not.
Change your setup and modify your
Dmitry S. Polyakov Wrote:
---
> On Thu, Apr 6, 2017, 10:50 shahzaib mushtaq
> wrote:
>
> > >>With the controls sites have over the referrer header, it's not
> very
> > effective as an access control mechanism. You can use something like
> > http
locked the following two user agents that those apps use.
Kodi
XBMC
(I would suggest making them case-insensitive matches too)
Where I posted in regards to this.
https://forum.nginx.org/read.php?2,270705,270739#msg-270739
https://github.com/C0nw0nk/Nginx-Lua-Secure-Link-Anti-Hotlinking
Poste
So this is my map
map $http_cookie $session_id_value {
default '';
"~^.*[0-9a-f]{32}=(?<session_value>[\w]+).*$" $session_value;
}
The cookie name is an MD5 sum; the full / complete value of the cookie seems
to cut off at a plus + symbol.
What would the correct regex be to ignore / remove + symbols from
"s
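\w does not include +, which is why the capture stops there. Widening the class to everything up to the next cookie separator keeps the + (a sketch; the capture name is invented):

```nginx
map $http_cookie $session_id_value {
    default '';
    # [^;]+ matches up to the next "; " cookie separator, so "+"
    # and other base64-style characters stay in the value.
    "~[0-9a-f]{32}=(?<session_value>[^;]+)" $session_value;
}
```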
e itpp2012
fixed for us anytime soon ?
Igal @ Lucee.org Wrote:
---
> Hi,
>
> On 3/21/2017 7:10 AM, c0nw0nk wrote:
> > I have used his builds you can download them for free...
> I didn't see a download link at http://nginx-w
Those are itpp2012's Windows builds. I believe he is an admin on the mailing
list.
https://forum.nginx.org/profile.php?11,7488
Under all his posts it says he is an admin.
I have used his builds; you can download them for free... just like nginx
mainline builds from nginx.org, but with specific custom feat
is missing it goes to the
next header for the real IP; if the next header is missing it goes to the
next, until no more potential real-IP headers exist, so we set their IP as
their connection $remote_addr.
It would be nice if the realip module did this, but luckily we don't need the
realip module; this shows and
easy.
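That header-by-header fallback can be written as nested maps without the realip module (header names follow the CDN examples in this thread; adapt to whatever proxies you trust):

```nginx
# Last resort: the raw connection address.
map $http_x_forwarded_for $realip_fallback {
    default $http_x_forwarded_for;
    ''      $remote_addr;
}
# Prefer CF-Connecting-IP when present, otherwise fall through.
map $http_cf_connecting_ip $client_real_ip {
    default $http_cf_connecting_ip;
    ''      $realip_fallback;
}
```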
Francis Daly Wrote:
---
> On Mon, Mar 06, 2017 at 02:12:40PM -0500, c0nw0nk wrote:
>
> Hi there,
>
> good that you've found some more answers.
>
> There's still some to be worked on, though, I suspect.
>
r using hyphens rather than real emptiness, I guess that helps
> validating there is no real value, differentiating this case from a
> bogus
> 'empty' which would be a sign of a bug.
> ---
> *B. R.*
>
> On Sun, Mar 5, 2017 at 10:50 PM, c0nw0nk
> wrote:
&
g a too quick
> look
> at the log line.
>
> Your 'empty' variables are actually showing the value '-' in this log
> line.
> It probably does not help debugging to have static '-' mixed in the
> format
> of your log lines where you put them.
>
Francis Daly Wrote:
---
> On Fri, Mar 03, 2017 at 10:47:26AM -0500, c0nw0nk wrote:
>
> Hi there,
>
> > map $http_cf_connecting_ip $client_ip_from_cf {
> > default $http_cf_connecting_ip;
> > }
> >
> >
Thanks Francis, much appreciated; it seems to be working well :)
Francis Daly Wrote:
---
> On Fri, Mar 03, 2017 at 10:47:26AM -0500, c0nw0nk wrote:
>
> Hi there,
>
> > map $http_cf_connecting_ip $client_i
So I have the following Map
map $http_cf_connecting_ip $client_ip_from_cf {
default $http_cf_connecting_ip;
}
How can I make it so that, if the client did not send that $http_ header, the
$client_ip_from_cf variable's value = $binary_remote_addr?
Not sure how to check in a map if that http header is
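An unset $http_ variable evaluates to an empty string, and a map can match that explicitly, so the fallback can be written as:

```nginx
map $http_cf_connecting_ip $client_ip_from_cf {
    default $http_cf_connecting_ip;
    ''      $binary_remote_addr;   # header missing or empty
}
```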
You should view
http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_catch_stderr
It might be what you seek for an empty blank page output, or specific text
that would be a fatal error etc.
CJ Ess Wrote:
---
> My employer uses Nginx in f
So in the documentation, and from what I see online, everyone is limiting
requests to prevent flooding on dynamic pages and video streams etc.
But when you visit an HTML page, the HTML page loads up a lot of various
different elements like .css .js .png .ico .jpg files.
To prevent those elements also
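A sketch of one way to do that: a second, looser zone for static assets, so one HTML page pulling in dozens of assets is not throttled like dynamic requests (zone names and rates are illustrative):

```nginx
limit_req_zone $binary_remote_addr zone=dynamic:10m rate=1r/s;
limit_req_zone $binary_remote_addr zone=static:10m  rate=30r/s;

server {
    location ~ \.php$ {
        limit_req zone=dynamic burst=5;
        # ... fastcgi_pass ...
    }
    location ~* \.(css|js|png|ico|jpg)$ {
        # nodelay serves the page's burst of assets without queueing.
        limit_req zone=static burst=50 nodelay;
    }
}
```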
I think, from my understanding, proxy_http_version 1.1; is ignored over
HTTPS, since everything works and that directive does what it states
(proxy HTTP version): for unsecured requests only it will be version 1.1, so I
don't think it has any negative impact on HTTP/2 / SSL.
So the Nginx documentation says this
http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive
For HTTP, the proxy_http_version directive should be set to “1.1” and the
“Connection” header field should be cleared:
upstream http_backend {
server 127.0.0.1:8080;
kee
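The documented example pairs the upstream keepalive with HTTP/1.1 and a cleared Connection header on the proxy side:

```nginx
upstream http_backend {
    server 127.0.0.1:8080;
    keepalive 16;    # idle keepalive connections cached per worker
}

server {
    location /http/ {
        proxy_pass http://http_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
```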
For a server {} that you want to make universally compatible with both
HTTP port 80 and HTTPS port 443 SSL requests:
This was my solution for my own sites.
#inside http block
upstream proxy_web_rack { #port 80 unsecured requests
server 172.16.0.1:80;
}
upstream proxy_web_rack_ssl { #port 443
mex Wrote:
---
> grey rules means they are deactivated
>
>
> i'm gonna write a blog on how we use spike + doxi-rules in our
> setup, but it will take some time.
That's cool, I look forward to it. Also, the rules on spike I think need
updating with t
mex Wrote:
---
> Hi c0nw0nk,
>
> mex here, inital creator of http://spike.nginx-goodies.com/rules/
> and maintainer of Doxi-Rules
> https://bitbucket.org/lazy_dogtown/doxi-rules/overview
> (this us where the rules live we
So I recently got hooked on Naxsi and I am loving it to bits <3 thanks to
itpp2012 :)
https://github.com/nbs-system/naxsi
I found the following Rule sets here.
http://spike.nginx-goodies.com/rules/
But I am curious: does anyone have Naxsi-written rules that would be the same
as/on Cloudflare's W
Please provide your full config.
Also, this error log: [emerg] "if" directive is not allowed here
means you put the code I provided in an invalid area; I would assume not
between location {} or server {} blocks, as I said.
xstation Wrote:
---
> entered this in the conf file under http
>
> SetEnvIfNoCase User-Agent "^Baiduspider" block_bot
> Order Allow,Deny
> Allow from All
> Deny from env=block_bot
>
>
> but on restart got a error message
>
> Job for nginx.serv
That is why you cache the request. DoS, or in your case DDoS since multiple
attackers are involved: caching backend responses and having Nginx serve a
cached response, even for the 1 second that cached response can be valid,
will save your day.
gariac Wrote:
---
> This is an interesting bit of code. However if you are being ddos-ed,
> this just eliminates nginx from replying. It isn't like nginx is
> isolated from the attack. I would still rather block the IP at the
> firewall and preven
proxy_cache / fastcgi_cache of the pages' output will help. Flood all you
want; Nginx handles flooding and lots of connections fine. Your back end is
your weakness / bottleneck that is allowing them to be successful in
affecting your service.
You could also use the secure_link module to help on your ind
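A minimal microcaching sketch along those lines (paths and zone name invented): even a 1-second validity collapses a flood into roughly one backend request per URL per second.

```nginx
fastcgi_cache_path /var/cache/nginx keys_zone=micro:10m inactive=10s;

server {
    location ~ \.php$ {
        fastcgi_cache micro;
        fastcgi_cache_key "$scheme$host$request_uri";
        fastcgi_cache_valid 200 1s;
        fastcgi_cache_use_stale updating error timeout;
        fastcgi_cache_lock on;   # one request populates, the rest wait
        fastcgi_pass 127.0.0.1:9000;
        include fastcgi_params;
    }
}
```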
I am curious what request URI they were hitting. Was it a dynamic page
or file, or a static one?
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mai
I think you could modify the conf/mime.types
video/mp4 mp4 gifv;
Well, I do use Nginx with Lua. I was planning on writing up a little Lua to
replace body content output and include some JavaScript to append src
links.
For example in HTML :
I would use Lua to obtain the link between the quotation marks and replace it
with "" (making it empty) and then use Lua to ins
Lukas Tribus Wrote:
---
> I have a question: secure_link is correctly blocking those requests so
> its not generating any traffic.
>
> Why does it bother you then, if it is already blocked?
>
> ___
> n
I wouldn't mind those using apps like Kodi if they did not just hotlink and
steal my links. If my adverts were still there and I were being reimbursed for
my work and content and the bandwidth they are consuming, then I wouldn't
mind, but I bet Kodi is not the only app with plugins doing this.
The only s
Yes, I see after looking at the various plugins on GitHub it seems they
replace the &amp; entity with a plain & ampersand when they pull contents
from the HTML. They also fake / spoof referrers and can change user-agents
etc., but they do it properly, not like the person who has ended up in my
logs. As you said th
gariac Wrote:
---
> Apparently there is a scheme to feed urls to kodi.
>
> https://m.reddit.com/r/kodi/comments/3lz84g/how_do_you_open_a_youtube
> _video_from_the_shell/
>
> Block/ban as you see fit. ;-) These people are edge users of Kodi.
gariac Wrote:
---
> Kodi is the renamed xbmc. I use it myself, but I never "aimed" it at a
> website. I just view my own videos or use the kodi plug-ins. You can
> install it yourself on a PC and see it is intended to be just a media
> player. It
So with Nginx, my access logs show a lot of Kodi user agents. From what I
look up online, Kodi is an app that runs on phones, TV sticks, Mac, PC etc.,
and it is used for watching live TV. I reckon it's a pretty abusive app or
service, since there is a lot going around about IPTV and how illegal it is.
The
You should check your application; it sounds like that is compressing its
pages.
A simple test is this: create an empty HTML file, serve that from a
location, and check the headers.
location = /test.html {
root "path/to/html/file";
}
If the headers on that have no gzip compression as set in your ngi
Thanks :)
I thought that the more servers I have within my upstream block, the more I
should also increase my keepalive to suit, for best performance etc.
FastCGI :
upstream fastcgi_backend {
server 127.0.0.1:9000;
keepalive 8;
}
server {
...
location /fastcgi/ {
fastcgi_pass fastcgi_backend;
fastcgi_keep_conn on;
...
}
}
Proxy :
upstream http_backend {
server 127.0.0.1:80;
keepalive 16;
}
se
So this is one of those issues that is most likely a bad configuration: my
robots.txt file is returning a 404 because of another location, because I am
disallowing people from accessing any text files, but I do want to allow
only robots.txt to be accessed.
location /robots.txt {
root 'location/to/ro
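An exact-match location is checked before prefix and regex locations, so it can carve robots.txt out of whatever block is denying .txt files (the root path is a placeholder):

```nginx
location = /robots.txt {
    root /path/to/docroot;   # placeholder path
}
location ~ \.txt$ {
    deny all;                # all other text files stay blocked
}
```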
What I would say to do is take the IPs from your toolkit, or whatever you are
using for reading your access.log, and for those that trigger and spam the 503
error within milliseconds, or whatever range it is, you can do an API call
and add those IPs to be blocked at router level.
With CloudFlare you ca
It is a response; by the time the 444 is served, it is too late. A true DDoS
is not about what the server outputs, it's about what it can receive. You
can't expect incoming traffic that amounts to 600Gbps to be prevented by a
1Gbps port; it does not work like that. Nginx is an application, preventing
any for
Francis Daly Wrote:
---
> On Mon, Sep 26, 2016 at 07:41:12PM -0400, c0nw0nk wrote:
>
> Hi there,
>
> > Whats a good setting that won't effect legitimate decent (I think I
> just
> > committed a crime callin
So, to prevent flooding / spam by bots, especially since some bots are just
brutal when they crawl, within milliseconds jumping to every single page
they can get:
I am going to apply limits to my PHP block
limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;
limit_conn_zone $binary_remote_
Anoop Alias Wrote:
---
> Ok .. reiterating my original question.
>
> Is the usage of if / map in nginx config more efficient than say
> naxsi (
> or libmodsecurity ) for something like blocking SQL injection ?
>
> For example,
> https://githu
So I want to find the optimal settings for serving large static files
(>= 2GB) with Nginx.
I read that "output_buffers" is the key.
I would also like to know if it should be defined per location {} that the
static file is served from, or across the entire server via http {}, and any
other settings
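A hedged starting point, defined in the specific location {} so it only affects the big files; the values are illustrative and worth benchmarking against your own disks:

```nginx
location /downloads/ {
    sendfile       on;
    tcp_nopush     on;
    # For multi-GB files, aio + directio bypasses the page cache for
    # large reads; output_buffers only applies on the non-sendfile path.
    aio            on;
    directio       4m;
    output_buffers 2 1m;
}
```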
If you read the OWASP page, it also mentions header stripping etc. and
proxies that will remove the X-Frame-Options header. There is no real way to
stop proxies framing your site, but the X-Frame-Options header combined with
that JavaScript is a good way to start; it will stop the majority.
Also break the
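On the nginx side, the header half of that defence is a pair of add_header lines (frame-ancestors being the modern successor to X-Frame-Options):

```nginx
# "always" ensures the headers are sent on error responses too.
add_header X-Frame-Options "SAMEORIGIN" always;
add_header Content-Security-Policy "frame-ancestors 'self'" always;
```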
https://www.owasp.org/index.php/Clickjacking_Defense_Cheat_Sheet
Inside your <head> tags:
<style id="antiClickjack">body{display:none !important;}</style>
if (self === top) {
    var antiClickjack = document.getElementById("antiClickjack");
    antiClickjack.parentNode.removeChild(antiClickjack);
} else {
    top.locat
Thanks for the information. So, based on what that resource says and from
what I understand, surely that field should only say "anonymous" or "username"
if, on those files / folders, in my Nginx config I use "auth_basic"?
http://nginx.org/en/docs/http/ngx_http_auth_basic_module.html
The fact they are
So in my access logs, in all my other entries the $remote_user is empty.
But for only this one single IP that keeps making requests, the $remote_user
has a value.
CF-Real-IP: 176.57.129.88 - CF-Server: 10.108.22.151 - anonymous
[21/Sep/2016:18:54:52 +0100] "GET
/media/files/29/96/2b/701f56b345ce53119264
nce it would require
manually updating a lot. The Cloudflare server IPs would need excluding
from the $binary_remote_addr output.
Currently I am using my first method and it works great.
c0nw0nk Wrote:
---
> limit_req_zone $http_cf_con
I'll test further with it, but it definitely did not work with the following
using nginx_basic.exe (it was blocking the Cloudflare server IPs from
connecting):
http {
#Inside http
real_ip_header CF-Connecting-IP;
limit_req_zone $binary_remote_addr zone=one:10m rate=30r/m;
limit_conn_zone $binary_re
itpp2012 Wrote:
---
> c0nw0nk Wrote:
> > Yes I can't test it at the moment unfortunately with the realip
> module
> > due to the fact i use "itpp2012" Nginx builds
> > http://nginx-win.ecsds.eu/ They
> memory
> and why using as little data per client is highly advised in
> limit_req_zone
> <http://nginx.org/en/docs/http/ngx_http_limit_req_module.html#limit_re
> q_zone>
> directive docs as you do not seem to know what you are doing...
> ---
> *B. R.*
>
> On T
Reinis Rozitis Wrote:
---
> > But that book says it is to reduce the memory footprint ?
>
> Correct, but that is for that specific varible.
>
> You can't take $http_cf_connecting_ip which is a HTTP header comming
> from
> Cloudflare and prepe
Reinis Rozitis Wrote:
---
> > I just found the following :
> >
> https://books.google.co.uk/books?id=ZO09CgAAQBAJ&pg=PA96&lpg=PA96&dq=$
> binary_
>
> > limit_req_zone $binary_http_cf_connecting_ip zone=one:10m
> rate=30r/m;
> > limit_conn_zone $b
I just found the following :
https://books.google.co.uk/books?id=ZO09CgAAQBAJ&pg=PA96&lpg=PA96&dq=$binary_
To conserve the space occupied by the key, we use $binary_remote_addr; it
evaluates to a binary value of the remote IP address.
So it seems I should be doing this instead to keep the key in m
gariac Wrote:
---
> I'm assuming at this point if cookies are too much, then logins or
> captcha aren't going to happen.
>
> How about just blocking the offending websites at the firewall? I'm
> assuming you see the proxy and not the eyeballs a
> gariac Wrote:
> ---
> > What about Roboo? It requires a cookie on the website before the
> > download takes place. (My usual warning this is my understanding of
> > how it works, but I have no first hand knowledge.) I presume the
> hot
> > link
e no first hand knowledge.) I presume the hot
> linkers won't have the cookie.
>
> https://github.com/yuri-gushin/Roboo
>
> Original Message
> From: c0nw0nk
> Sent: Tuesday, September 13, 2016 1:09 AM
> To: nginx@nginx.org
> Reply To: nginx@nginx.org
> Subject: Keeping
So I noticed some unusual stuff going on lately, mostly to do with people
using proxies to spoof / fake that files from my sites are hosted on their
sites.
Sitting behind CloudFlare, the only decent way I can come up with to prevent
these websites, which use proxy_pass and proxy_set_header to pretend t
gariac Wrote:
---
> This page has all the secret sauce, including how to limit the number
> of connections.
>
> https://www.nginx.com/blog/mitigating-ddos-attacks-with-nginx-and-ngin
> x-plus/
>
> I set up the firewall with a higher number as
Sep 10, 2016 at 2:46 PM, c0nw0nk
> wrote:
>
> > Just fixed my problem completely now :)
> >
> > For anyone who also uses Lua and wants to overcome this cross
> browser
> > compatibility issue with expires and max-age cookie vars.
> >
> > if ($ho
Just fixed my problem completely now :)
For anyone who also uses Lua and wants to overcome this cross-browser
compatibility issue with expires and max-age cookie vars:
if ($host ~* www(.*)) {
set $host_without_www $1;
}
set_by_lua $expires_time 'return ngx.cookie_time(ngx.time()+2592000)';
add_h
Can you provide an example? Also, I seem to have a new issue with my code
above: it is overwriting all my other Set-Cookie headers. How can I have it
set that cookie but not overwrite / remove the others? It seems to be an
unwanted / unexpected side effect.
Solved it now; I forgot that in Lua I declare vars from nginx differently.
header_filter_by_lua '
ngx.header["Set-Cookie"] = "value=1; path=/; domain=" ..
ngx.var.host_without_www .. "; Expires=" ..
ngx.cookie_time(ngx.time()+2592000) -- +1 month 30 days
';
if ($host ~* www(.*)) {
set $host_without_www $1;
}
header_filter_by_lua '
ngx.header["Set-Cookie"] = "value=1; path=/; domain=$host_without_www;
Expires=" .. ngx.cookie_time(ngx.time()+2592000) -- +1 month 30 days
';
So I added this to my config but it does not work for me :(
So I read that IE8 and older browsers do not support "Max-Age" inside of
Set-Cookie headers (but all modern browsers support "expires").
add_header Set-Cookie
"value=1;Domain=.networkflare.com;Path=/;Max-Age=2592000"; #+1 month 30
days
Apparently they support "expires" though, so I changed the ab
c0nw0nk Wrote:
---
> Francis Daly Wrote:
> ---
> > On Wed, Aug 31, 2016 at 01:30:30PM -0400, c0nw0nk wrote:
> >
> > Hi there,
> >
> > > Thanks works a tr
Francis Daly Wrote:
---
> On Wed, Aug 31, 2016 at 01:30:30PM -0400, c0nw0nk wrote:
>
> Hi there,
>
> > Thanks works a treat is it possible or allowed to do the following
> in a
> > nginx upstream map ? and if s