You can inspect the certificate at
https://www.ssllabs.com/ssltest/
Maybe you will get lucky and it will help you find out what is wrong.
Original Message
From: softwareinfo...@gmail.com
Sent: December 14, 2022 7:02 PM
To: nginx@nginx.org
Reply-to: nginx@nginx.org
Subject:
Isn't OpenSSL part of your OS?
Original Message
From: nginx-fo...@forum.nginx.org
Sent: March 26, 2022 11:07 PM
To: nginx@nginx.org
Reply-to: nginx@nginx.org
Subject: Nginx with OpenSSL 1.1.1n
The mainline version of nginx, i.e. 1.21.6, has OpenSSL version 1.1.1m and
it i
lware rejection module?
On Mon, Feb 14, 2022 at 6:17 PM lists <li...@lazygranch.com> wrote:
...I have plenty of transit capacity. I can serve 3TB a month and I do 30GB. What I don't have is CPU power. I have a one-CPU VPS. The CPU is a shared resource. I think the RAM used by the VPS is more
There are probably 50 common web crawlers. If they aren't Google, Apple, or Microsoft, I don't want them. The worst is one called "majestic 12". It seems to suck down the entire website every visit. There are some that try to determine what ads you serve, of which I serve none. Another reads
hp and use the 444 return code which means return nothing. There are lists of shady user agents that you can block. By examining my 404 returns I have made a map of typical hacker triggers to find in the URI. They get a 444 return. You can block wget and curl in the maps. Periodically I feed a
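A minimal sketch of that kind of user-agent map might look like this; the agent names here are illustrative, not an actual list:
# http context: classify the user agent once per request
map $http_user_agent $block_ua {
    default      0;
    ~*wget       1;
    ~*curl       1;
    ~*mj12bot    1;   # "majestic 12"
}
# server context: close the connection with no response
if ($block_ua) {
    return 444;
}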
Being that you are using openSUSE I think the answer is no, but do you have
SELinux enabled? Usually when fixing a file permission doesn't solve the problem
for me, it is some "policy" feature I don't know about.
I set file permissions to 777 during testing to debug something, but I can't
think of a case
This is the list of affected programs.
https://github.com/cisagov/log4j-affected-db/blob/develop/SOFTWARE-LIST.md
Original Message
From: ma...@nginx.com
Sent: December 29, 2021 11:21 PM
To: mauro.trid...@cmcc.it
Reply-to: nginx@nginx.org
Cc: nginx@nginx.org
Subject: Re: Help
licking links, or following guidance.
>
> Thank you very much for your reply. I really appreciated it.
> I’ll wait for the final gurus feedback too.
>
> Mauro
>
>> On 29 Dec 2021, at 18:03, lists wrote:
>>
>> That IP space is certified shady. I dete
That IP space is certified shady. I detect the occasional hack from them. See
https://krebsonsecurity.com/2019/08/the-rise-of-bulletproof-residential-networks/
and
https://wirelessdataspco.org/faq.php
These wireless companies will do anything for money including leasing their IP
space.
I do
OK, I’ve inserted that return into the conf file. BUT, there’s an error from
an entry I’ve had in there before for a master error.log. It says there is no
such file or directory at the path provided. That path is nowhere to be found
in nginx.conf or alpha.conf.
Something wrong with nginx? It isn’t re
I did notice that the nginx.conf structure started to recognize trailing semicolons
recently. I have updated to a new OS from an old box several versions ago.
Are comments still allowed on the same line in nginx.conf?
Cheers, Bee
> On Jun 28, 2021, at 7:08 PM, Sergey A. Osokin wrote:
>
> We
Ya that’s too many to report. I have a catch-all with *.conf. I can restrict
it down to that main nginx.conf and the extra VHost.
Same as I posted before. Same result. This has the main two in my nginx.conf,
and the included one. That last one has its own conf file:
include /opt/
Same result. Default returned. Nothing in access log nor error log.
Cheers, Bee
> On Jun 28, 2021, at 6:35 PM, Sergey A. Osokin wrote:
>
> Seems like curl didn't send a valid "Host: alpha.local" header for some reason,
> that's why NGINX replied with an answer for default_server.
>
> Could
/;
location = /img/favicon.ico { access_log off;}
}
_
Rich in Toronto @ VP
> On Jun 28, 2021, at 10:21 AM, Sergey A. Osokin wrote:
>
> Hi Bee,
>
> hope you're doing well.
>
> On Mon, Jun 28, 2021 at 09:52:17AM -0400, BeeRic
I have a VHost that isn’t serving up. I’ve changed nothing, and it just
started defaulting to the default_server.
The VHost is included in a catch-all for all the other local domains (my
workstation):
include /opt/homebrew/etc/nginx/servers/*.conf;
I’ve even hard coded the VHost in its own
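A minimal sketch of the intended layout, assuming nginx picks the server block by matching the Host header against server_name and falls back to default_server otherwise; the root paths are placeholders:
# catch-all that answers anything without a matching Host header
server {
    listen 80 default_server;
    server_name _;
    root /opt/homebrew/var/www/default;   # placeholder path
}
# the vhost that should answer Host: alpha.local
server {
    listen 80;
    server_name alpha.local;
    root /opt/homebrew/var/www/alpha;     # placeholder path
}
If a request still lands in the catch-all, the Host header it carried is the first thing to check.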
If you follow the suggested link in the previous post you can download an
O'Reilly Nginx book.
One suggestion I have to improve performance is to firewall off all the 'bots.
Firewalls are extremely efficient. Start with AWS:
https://docs.aws.amazon.com/general/latest/gr/aws-ip-ranges.html
Bots
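The poster's point is to do this in the firewall, but if you would rather drop those ranges inside nginx itself, the geo module can flag them; the CIDRs below are documentation placeholders, not actual AWS ranges:
# http context: flag requests coming from the listed ranges
geo $from_aws {
    default          0;
    192.0.2.0/24     1;   # placeholder CIDR; substitute ranges from the AWS ip-ranges file
    198.51.100.0/24  1;   # placeholder CIDR
}
# server context
if ($from_aws) {
    return 444;
}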
Following up, after implementation and rollout.
On Monday, September 21, 2020 1:52:32 AM PDT Francis Daly wrote:
> That's probably the right thing to do overall; except that you probably
> will not control what the typical browser shows for (e.g.) a 401 response.
I've not seen that a 401 or what
See reply below
On Sunday, September 20, 2020 8:29:32 AM PDT Francis Daly wrote:
> On Sat, Sep 19, 2020 at 09:26:57AM -0700, Lists wrote:
>
> Hi there,
>
> > How do I configure nginx to use subrequest authentication for a reverse
> > proxied application with
How do I configure nginx to use subrequest authentication for a reverse proxied
application with websocket upgrades? The documentation doesn't seem to contain
the information I need to do this.
https://docs.nginx.com/nginx/admin-guide/security-controls/configuring-subrequest-authentication/
Wh
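A minimal sketch of one way to combine the two, assuming a hypothetical auth endpoint on 127.0.0.1:9000 and the proxied app on 127.0.0.1:8080; auth_request needs the ngx_http_auth_request_module:
location = /auth {
    internal;
    proxy_pass              http://127.0.0.1:9000/check;   # hypothetical auth service
    proxy_pass_request_body off;
    proxy_set_header        Content-Length "";
    proxy_set_header        X-Original-URI $request_uri;
}
location / {
    auth_request        /auth;
    proxy_pass          http://127.0.0.1:8080;             # hypothetical upstream app
    proxy_http_version  1.1;                               # required for websockets
    proxy_set_header    Upgrade $http_upgrade;             # pass the upgrade handshake through
    proxy_set_header    Connection "upgrade";
    proxy_set_header    Host $host;
}
The upgrade request itself still goes through auth_request, so the subrequest should fire before the connection is switched over to a websocket.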
er-Agent header of web requests - both to understand who is trying to do what to your website,
and then to start blocking on the basis of user agent.
There may be some bots and spiders that are helpful or even necessary for your business.
Peter
> On Aug 24, 2020, at 2:54 PM, lists <li...@l
@nginx.org
Reply-to: nginx@nginx.org
Subject: Re: Is this an attack or a normal request?
On Mon, 24 Aug 2020 11:54:35 -0700, lists wrote:
<-snip->
> At a minimum I suggest blocking all Amazon AWS. No eyeballs there,
> just hackers. Also block all of OVH.
Great suggestions. Also
I can't find it, but someone wrote a script to decode that style of hacking.
For the hacks I was decoding, they were RDP hack attempts. The hackers just
"spray" their attacks. Often they are not meaningful to your server.
I have Nginx maps set up to match requests that are not relevant to my ser
In theory not a problem, but look at the text on this page about placing root
in location blocks.
https://www.nginx.com/resources/wiki/start/topics/tutorials/config_pitfalls/
I saw your first post and thought it was entertaining. Somebody needs to annoy
those hackers. Since I don't use php I tr
That clears it up. Most of what I see in the error log is stuff I have no idea
how to fix. I will Google some errors and see what is fixable. The deal is my
websites work for me and I get no complaints.
Reading questions on the interwebs, most people get error messages when they
use curl on th
I'm not sure I understand the question, but how does this sound? I use a map to
catch requests that I don't want. For instance I return a 444 if I receive a
"wget".
Original Message
From: c...@tunnel53.net
Sent: June 14, 2020 5:40 AM
To: nginx@nginx.org
Reply-to: nginx@ngi
Not to get too far off topic, but unless your server is important (government, financial, etc.), it is most likely the hacks it will receive are just "sprayed." They don't care what rev of OS you are running. The hacker tries a number of exploits on IP space known to host servers. Who you are is
Sent: April 27, 2020 10:54 PM
To: nginx@nginx.org
Reply-to: nginx@nginx.org
Subject: Re: How to hide kernel information
SINFP method is used to get the kernel information.
On Tue, Apr 28, 2020 at 11:10 AM lists <li...@lazygranch.com> wrote:
Well I know nmap can detect the OS. I don't reca
Well I know nmap can detect the OS. I don't recall it could detect the rev of the kernel.
https://nmap.org/book/man-os-detection.html
https://nmap.org/book/defenses.html
Wouldn't it be less work to set up subdomains and handle this with DNS?
I for one will never qualify for this T shirt.
https://store.xkcd.com/products/i-know-regular-expressions
Original Message
From: p...@stormy.ca
Sent: April 14, 2020 1:39 PM
To: nginx@nginx.org
Reply-to: nginx@nginx
Run openssl version. The problem is openssl is too old for TLS 1.3 on CentOS 7. You might want to read this:
https://forums.centos.org/viewtopic.php?t=71848
I have seen threads on building openssl so that you can support TLS 1.3 on CentOS 7. The trouble is once you build something it is your problem
You could make it harder to pass around the URL if it is dynamic. That is,
make the URL session-related.
You can do a search on "uncrawlable" and then do exactly the opposite of what they
suggest. That is, most people want to be crawled, so their advice is backwards.
One thing to watch out for is
If you are going to block one thing, eventually you will block two, then three,
etc.
I suggest learning how to use "map".
https://www.edmondscommerce.co.uk/handbook/Servers/Config/Nginx/Blocking-URLs-in-batch-using-nginx-map/
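In the spirit of that article, a minimal sketch of batch-matching request URIs; the patterns are just examples:
# http context
map $request_uri $block_uri {
    default           0;
    ~*\.php           1;   # example: no PHP served here
    ~*/wp-login       1;
    ~*/phpmyadmin     1;
}
# server context
if ($block_uri) {
    return 444;
}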
Original Message
From: nginx-fo...@forum.nginx.org
Sent
There are websites that check web server performance. I haven't bothered with them in years, but the suggestions on browser caching were useful. Google will find half a dozen.
I am not currently using any bandwidth limiting features so I can't comment on how it is done currently. However, in the past I used the one built into Nginx and tested it with a download manager. My recollection is you could open more streams but the net effect was the download stayed at the same
IMHO you did the right thing with fail2ban. I don't see how a firewall is
"expensive" other than that they are a little RAM heavy. Half the internet
traffic is bots. That doesn't even count the hot linkers. So the reality is you
will need a firewall to block what doesn't have eyeballs, namely da
You could test the cert using SSL labs.
https://www.ssllabs.com/
You might have to drop your firewall if it doesn't work at first.
It never hurts to do dumb stuff like boot the server again.
Original Message
From: wiz...@bnnorth.net
Sent: October 3, 2019 8:55 PM
To: nginx@ngi
What shows up in the log files?
Do you really need to use Cloudflare? Have you been DDoSed? I view Cloudflare
as a man in the middle.
I've been using Let's Encrypt for about a year with no drama.
Original Message
From: nginx-fo...@forum.nginx.org
Sent: September 27, 2019 2:
est for x?
Hi Mark,
On 30/08/19 22:23, lists wrote:
> I've been following this thread not really out of need but rather that it is
> really interesting. That said, I don't think for security you want to
> "escape" the web root. The risk is that might aid a travers
I've been following this thread not really out of need but rather that it is
really interesting. That said, I don't think for security you want to "escape"
the web root. The risk is that might aid a traversal attack.
Original Message
From: hobso...@gmail.com
Sent: August 30
Tracing or interprocess communication?
Original Message
From: nginx@nginx.org
Sent: June 14, 2019 2:17 PM
To: nginx@nginx.org; mdou...@mdounin.ru
Reply-to: nginx@nginx.org
Cc: vgrin...@akamai.com
Subject: Re: nginx use of UDP ports?
On 6/12/19 4:31 AM, Maxim Dounin wrote:
> Hell
https://gist.github.com/xameeramir/a5cb675fb6a6a64098365e89a239541d
This claims to be the original.
Original Message
From: wiz...@bnnorth.net
Sent: May 11, 2019 6:40 AM
To: nginx@nginx.org
Reply-to: nginx@nginx.org
Subject: nginx stopped working
Can someone give me a copy of
many different things - but that doesn’t mean it’s right to expect that it does everything.
Peter
Sent from my iPhone
On Apr 12, 2019, at 10:57 PM, lists <li...@lazygranch.com> wrote:
Perhaps a dumb question, but if all you are going to do is return a 403, why not just do this filtering in the fir
Perhaps a dumb question, but if all you are going to do is return a 403, why not just do this filtering in the firewall by blocking the offending IP space. Yeah I know a server should always have some response, but it isn't like you would be the first person to just block entire countries. (I don
> On Jun 27, 2018, at 2:02 AM, Maxim Dounin wrote:
>
> Hello!
Hello again!
> On Wed, Jun 27, 2018 at 12:56:09AM -0400, VP Lists wrote:
>
> [...]
>
>> OK, here’s where things get interesting:
>>
>> On MacOS El Capitan:
>> --http-c
> On Jun 26, 2018, at 10:51 PM, Maxim Dounin wrote:
>
> Hello!
Hello there. Thanks for the reply.
> On Tue, Jun 26, 2018 at 04:56:55PM -0400, VP Lists wrote:
>
>> I’m having a problem uploading any files of any significant size to a test
>> site on my workstati
I am guessing it’s the permissions issue on the incoming temp folder. I just
posted the same on the list, not published yet.
2018/06/26 16:50:20 [crit] 36196#0: *1099 open()
"/usr/local/var/run/nginx/client_body_temp/18" failed (13: Permission
denied), client: 127.0.0.1, server: pass1.
Hi folks.
I’m having a problem uploading any files of any significant size to a test site
on my workstation.
2018/06/26 16:50:20 [crit] 36196#0: *1099 open()
"/usr/local/var/run/nginx/client_body_temp/18" failed (13: Permission
denied), client: 127.0.0.1, server: pass1.local, request:
I’ve done the same.
Try listen port 8080, as anything < port 1024 needs to run as root. Then in
your url, enter hedge.local:8080. Shove hedge.local into your /etc/hosts file
and point to the proper IP. But you need to enter the port number in that url
to fetch it on the LAN.
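A minimal sketch of that setup; the root path is a placeholder and the hosts-file entry should point at the workstation's LAN address:
server {
    listen      8080;                 # unprivileged port, so no root needed
    server_name hedge.local;          # add "192.168.x.x hedge.local" to /etc/hosts on the client
    root        /usr/local/www/hedge; # placeholder path
}
Then fetch it as http://hedge.local:8080/ from the LAN.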
> On Jul 23,
> On Jul 15, 2017, at 6:24 AM, nanaya wrote:
>
>> If I deliberately start up using root, why would I need a directive that
>> indicates that? This directive seems like a reminder after the fact.
>>
>
> root is usually needed to bind port 80 and 443 so usually people want to
> start it using
> On Jul 15, 2017, at 5:04 AM, nanaya wrote:
>
>
> It works if you start it from user with root privilege. Otherwise you
> can't switch user and thus the directive is ignored.
If I deliberately start up using root, why would I need a directive that
indicates that? This directive seems like a
The latter. It makes little sense. If it’s ignored then there’s no sense in
having it.
Much like how the current `nginx -t` report makes little sense as well:
nginx: the configuration file /usr/local/etc/nginx/nginx.conf syntax is ok
nginx: [emerg] open() "/var/run/nginx.pid" failed (13: Per
I took the opposite approach. You put a funny character in the URL, you get a
444. I only allow underscore and hyphen.
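A rough sketch of that "whitelist the characters" idea; this allows letters, digits, slash, dot, underscore and hyphen and drops everything else (note it also rejects query strings, which may be fine for a static site):
# server context
if ($request_uri ~ "[^A-Za-z0-9/._-]") {
    return 444;
}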
For a while, I was getting fuzzed. Maybe a year ago it was a thing. Nothing
bad happened, which I would say is a tribute to Nginx. I just returned 404s,
but I figured I better
But in actual use, you would just run nginx as a service, so I don't get the
sudo initiation. In fact, unless you run a very simple website, nginx alone
isn't sufficient, so you would be starting a number of services.
I make enough work for myself, but if security is an issue, I'd suggest setti
OK, good to know. Thank you. This does suggest that security isn’t really
respected in this case.
Cheers
> On Jul 14, 2017, at 11:04 AM, Alberto Castillo wrote:
>
> I've just set up mine on a FreeBSD box and using sudo solves the
> problem, same issue with .pid.
_
Rich in T
all nginx.pid Permissions Errors
On 07/14, Jim Ohlstein wrote:
> Hello,
>
> On 07/14/2017 10:39 AM, Viaduct Lists wrote:
> >
> >> On Jul 13, 2017, at 9:31 PM, li...@lazygranch.com wrote:
> >>
> >> However the nginx process is owned by www:
> >> 823
Hi there.
> On Jul 14, 2017, at 9:29 AM, Francis Daly wrote:
>
> In unix land, usually, if a process starts running as root, then it is
> able to "switch" to run as another user. If a process starts running as
> non-root, it is not able to switch to run as another user.
>
> And (usually) only r
> On Jul 13, 2017, at 9:31 PM, li...@lazygranch.com wrote:
>
> However the nginx process is owned by www:
> 823 www  1  20  0 28552K  7060K kqread  0:01  0.00% nginx
Sure the process is owned, and is called upon by nginx as the www user. The
`nginx -t` report is being called by ri
Hi there. Thanks for the reply.
Persistent permissions issues are on other boxes, on OSX as well. But I had
some Passenger issues so I’ve moved on to another issue.
But sudo nginx -t gets rid of the error on nginx.pid
That whole user/group issue on the user directive in nginx.conf is confusing
Hi folks. Trying to get this FreeBSD nginx installation set up.
FreeBSD 11.1-RC1
nginx version: nginx/1.12.0
3 vhosts on this box. nginx.conf tests show the following:
[Wed Jul 12 06:08:41 rich@neb /var/log/nginx] nginx -t
nginx: the configuration file /usr/local/etc/nginx/nginx.conf syntax
Just wondering where the best location is for www on FreeBSD.
The default nginx.conf reports /usr/local/www/nginx
But then it reports this:
[Wed Jul 12 05:10:09 rich@neb /usr/local/www/nginx] ll
dr-xr-xr-x  2 root  wheel  5 Jul  7 10:23 .
drwxr-xr-x  3 root  wheel  4 Jul  7 10:23 ..
-rw-r--r
nginx.conf sets the user and admin, but that coughs up an error when trying to
run as root. This is why it’s so confusing.
> On Jul 10, 2017, at 9:27 PM, li...@lazygranch.com wrote:
>
> I don't have server access at the moment, but I think nginx under FreeBSD
> runs under user www.
__
Only root can bind ports smaller than 1024. You should start your nginx service with the sudo prefix.
On 11 July 2017 at 09:20:40, Viaduct Lists (li...@viaduct-productions.com) wrote:
Hi there. Looking to get port 80 serving. Changed to root, but the error keeps the user from running:
Hi there.
Looking to get port 80 serving. Changed to root, but the error keeps the user
from running:
nginx: [warn] the "user" directive makes sense only if the master process runs
with super-user privileges, ignored in /usr/local/etc/nginx/nginx.conf:2
nginx: the configuration file /usr/loca
I'd suggest the online guides on anti-DDoSing for NGINX. They cover limiting
the number of connections, etc. Of course in reality these schemes would just
limit some kid in the basement from flooding your server rather than a
real DDoS attack. But better than nothing, plus what is in those
Simply to reduce the attack surface, I would not use PHP if all that is served
is static pages.
If you are just serving static pages, you may be able to reduce your verbs to
"head" and "get". That is, avoid "post". Again, attack-surface reduction.
I put PHP in a "map" search and it is a favorite
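A minimal sketch of the GET/HEAD-only idea mentioned above:
# server context: answer only GET and HEAD, drop everything else with no response
if ($request_method !~ ^(GET|HEAD)$ ) {
    return 444;
}
limit_except GET { deny all; } inside a location is the other common way (allowing GET implies HEAD), though that answers with a 403 instead of closing the connection.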
Ah but I want Google to look, but just return links to pages, not images. There are all those hits that pretend to be Google, because hey, why not. ;-) I block a large number of bots simply by the firewall. I started
The IP addresses from the Google app aren't those of Google. They are ISPs generally. What bugs me is a fair number of these IP addresses never read my web pages. Easy enough to see from access.log. They just look
I want to block by referrer. I provided a more "normal" record so that the user
agent and referrer location was obvious by context.
My problem is I'm not creating the match expression correctly. I've tried
spaces, parens. I haven't tried quotes.
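Quoting the regex is usually the missing piece when the value contains spaces or parentheses; a minimal sketch with made-up hostnames:
# http context
map $http_referer $bad_referer {
    default                   0;
    "~*spam\.example\.com"    1;   # made-up referrer
    "~*seo\.junk\.example"    1;   # made-up referrer
}
# server context
if ($bad_referer) {
    return 444;
}
The valid_referers directive is the other option, but a map keeps it consistent with the other blocking maps.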
Original Message
From: Robert Paprocki
Se
If the secret page is on a different subdomain, could it be restricted to one
IP?
I suppose I'm stating the obvious, but if you are going to implement blocking
schemes with either simple map matches or a full blown WAF like Naxsi, you will
need a test suite. For a very simple website, you can just crawl it with wget
and see what you broke. But if you have forms, databases, e
I had run Naxsi with Doxi. Trouble is when it caused problems, it was really
hard to figure out what rule was the problem. I suppose if you knew what each
rule did, Naxsi would be fine.
That said, my websites are so unsophisticated that it is far easier for me just
to use maps.
Case in point.
Reading a blog from the person that set up the website for Emmanuel Macron, I
came across this nginx tip. I would return 444 and add it to my user agent map.
But in the simplest form:
-
# Block WordPress Pingback DDoS attacks
if ($http_user_agent ~* "WordPress") {
    return 444;
}
Well this is interesting. Since this situation should never happen (I think) in real life, should this code always be implemented? Any downsides?
https://httpstatuses.com/444
A non-standard status code used to instruct nginx to
I would return nothing, that is the 444 code. I have scripts that process access.log for 444, then see if they come from locations without eyeballs such as data centers, VPS, etc. The entire IP space then goes in
Beats me. I thought the 404 is what you get with the deny access. I'm sure my nginx skills are worse than yours. ;-)
At one time I had a long list of deny addresses on nginx, but nginx does some processing before f
I've used this for traversal tests, but my experience is the false positive rate is very high. I ended up writing some rules to filter the test.
My experience with deny in nginx is the url isn't hidden. That is I think a crawler will see the "secret" location. Can you set this up for the 444 code, that is no reply?
Rethinking this, I suppose if the webser
You would probably want to also limit the number of connections per IP address,
else one IP could lock up the entire site.
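A minimal sketch of per-address connection limiting; the zone name and the limit of 10 are arbitrary:
# http context: shared-memory zone keyed by client address
limit_conn_zone $binary_remote_addr zone=peraddr:10m;
# server or location context: cap simultaneous connections per IP
limit_conn peraddr 10;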
Original Message
From: Valentin V. Bartenev
Sent: Tuesday, April 4, 2017 1:58 PM
To: nginx@nginx.org
Reply To: nginx@nginx.org
Subject: Re: Limit number of connections t
FYI, benchmark mentioned in the video.
https://github.com/wg/wrk
Wouldn't a number of test machines on the Internet make more sense than
flogging nginx locally on your network?
With VPS time being sold by the hour, seems to me you should get one VPS tester
running acceptably, then clone a do
Are you trying to block baiduspider from your html email?
I think you should review the commented out lines. Very old school, but you may
want to just print your conf file and line up curly braces. Perhaps copy the
conf file, delete commented lines, and then see if it makes sense. It looks to
Take a look at this:
http://ask.xmodulo.com/block-specific-user-agents-nginx-web-server.html
Personally, I would use the map feature since eventually there will be other
user agents to block.
I use three maps. I block based on requests, referrals, and user agents. The
user agent is kind of o
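Assuming three maps along the lines of the sketches above, producing flags such as $block_uri, $block_ua and $bad_referer, one way to fold them into a single decision is to map their concatenation:
# http context: block unless all three flags are 0
map "$block_uri$block_ua$bad_referer" $block_any {
    default    1;
    "000"      0;
}
# server context
if ($block_any) {
    return 444;
}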
Here is my philosophy. A packet arrives at your server. This can be broken down
into two parts: who are you and what do you want. The firewall does a fine job
of stopping the hacker at the who are you point.
When the packet reaches Nginx, the what do you want part comes into play. Most
likely
This is an interesting bit of code. However, if you are being DDoSed, this just
stops nginx from replying. It isn't like nginx is isolated from the
attack. I would still rather block the IP at the firewall and prevent nginx
from doing any action.
The use of $bot_agent opens up a lot of p
By the time you get to UA, nginx has done a lot of work.
You could 444 based on UA, then read that code in the log file with fail2ban or
a clever script. That way you can block them at the firewall. It won't help
immediately with the sequential number, but that really won't be a problem.
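One way to make those 444s easy for fail2ban or a script to harvest is a conditional access_log; the file name and format are illustrative, and the if= parameter needs nginx 1.7.0 or later:
# http context: log only the requests that were closed with 444
map $status $dropped {
    default    0;
    444        1;
}
access_log /var/log/nginx/dropped.log combined if=$dropped;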
I'm no fail2ban guru. Trust me. I'd suggest going on serverfault. But my other
post indicates semrush resides on AWS, so just block AWS. I doubt there is any
harm in blocking AWS since no major search engine uses them.
Regarding search engines, the reality is only Google matters. Just look at y
They claim to obey robots.txt. They also claim to use consecutive IP
addresses.
https://www.semrush.com/bot/
Some dated posts (2011) indicate semrush uses AWS. I block all of AWS IP space
and can say I've never seen a semrush bot. So that might be a solution. I got
the AWS IP space from s
That attack wasn't very distributed. ;-)
Did you see if the IPs were from an ISP? If not, I'd ban the service using the
Hurricane Electric BGP as a guide. At a minimum, you should be blocking the
major cloud services, especially OVH. They offer free trial accounts, so of
course the hackers abu
I find Naxsi hard to debug. For me, it generated many false positives. YMMV
There is nothing in my html that would generate that request, though the web page address is perfectly valid. I thought it might be some iOS thing. You know, all the stuff Safari generates. I'm going to ignore it. I haven
fwiw,
I use the map approach discussed here.
I've a list of a hundred or so 'bad bots'.
I reply with a 444. Screw 'em.
IMO, the performance hit of blocking them is far less than the performance
havoc they wreak if allowed to (try to) scan your site, &/or the inevitable
flood of crap from you
Comparing strings is CS101. If map is a linear search, that should be something to improve.
I'm assuming you read the code
I'd be shocked if the map function doesn't use a smart search scheme rather than check every item.
You can block some of those bots at the firewall permanently.
I use the nginx map feature in a similar manner, but I don't know if map is
more efficient than your code. I started out blocking similar to your scheme,
but the map feature looks clear to me in the conf file.
Majestic and Sogou s
Makes perfect sense!
Original Message
From: Maxim Dounin
Sent: Wednesday, November 9, 2016 2:02 AM
To: nginx@nginx.org
Reply To: nginx@nginx.org
Subject: Re: Unexptected return code
Hello!
On Tue, Nov 08, 2016 at 11:27:36PM -0800, li...@lazygranch.com wrote:
> I only serve static pages,
Is that 2.2 million CIDRs, or actual addresses?
I use IPFW with tables for about 20k CIDRs. I don't see any significant server
load. It seems to me nginx has a big enough task that it makes sense to offload
the blocking to something that is more tightly integrated to the OS.
At a bare minimum,
I don't know how to state this without being insulting, but Kodi is designed to
be used by dumb people. That is how I use it. It seems pointless to me to try
to hack Kodi into doing something it wasn't meant to do. That is why I called
that example an edge case.
There is a YouTube plugin for K
Apparently there is a scheme to feed urls to kodi.
https://m.reddit.com/r/kodi/comments/3lz84g/how_do_you_open_a_youtube_video_from_the_shell/
Block/ban as you see fit. ;-) These people are edge users of Kodi.
But you may want to search the interwebs to see if someone is attempting to
write
Kodi is the renamed xbmc. I use it myself, but I never "aimed" it at a website.
I just view my own videos or use the kodi plug-ins. You can install it yourself
on a PC and see it is intended to be just a media player. It really isn't any
different than seeing VLC as the agent.
Perhaps someone
If you get hammered, even serving the 403 page is actually noticeable traffic.
-
Nginx rate limiting works very well.
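A minimal sketch of request-rate limiting; the zone name, 5 r/s and the burst value are arbitrary:
# http context: shared zone keyed by client address
limit_req_zone $binary_remote_addr zone=perip:10m rate=5r/s;
# server or location context: allow short bursts, reject the rest (503 by default)
limit_req zone=perip burst=10 nodelay;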