On Tue, 7 Jun 2022 at 14:15, Sergey Kandaurov wrote:
> > On 7 Jun 2022, at 13:41, Peter Volkov wrote:
> > After we enabled HTTP/2 in nginx, some old software started to fail. So
> we would like to have HTTP/2 enabled in general but disabled for some
> specific IP:PORT. I
both ports I see: * ALPN: offers h2. Is it possible
to disable HTTP/2 for a specific IP:PORT?
Thanks in advance,
--
Peter.
___
nginx mailing list -- nginx@nginx.org
To unsubscribe send an email to nginx-le...@nginx.org
Hi, any ideas here?
--
Peter.
On Wed, Oct 13, 2021 at 1:12 PM Peter Volkov wrote:
> Hi.
>
> We use Nginx as a reverse proxy for our service that manages CORS by
> itself. Yet we have problems with errors that Nginx generates itself, e.g.
> 413 Request Entity Too Large. Such err
the real reason for this problem. So we would like to add permissive CORS
headers to all error pages that Nginx generates. Does there exist such a
list of error codes that Nginx generates? Like wrong headers, bad requests,
whatever?
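For what it's worth, nginx's `add_header ... always` parameter (available since 1.7.5) attaches a header to error responses as well, so a sketch along these lines might cover nginx-generated errors (the upstream name is hypothetical):

```nginx
server {
    listen 80;
    location / {
        # "always" makes the header appear on 4xx/5xx responses too,
        # including errors generated by nginx itself (e.g. 413)
        add_header Access-Control-Allow-Origin "*" always;
        proxy_pass http://backend;   # hypothetical upstream
    }
}
```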
Thanks in advance for your help,
--
Peter
Err, after a few hours of debugging and writing this email, I've realised
that I have `gzip off;` in the http {} block of the configuration. After enabling
gzip in the http block everything works fine. Is it correct behaviour that no
warning is issued with such a configuration?
--
Peter.
On Tue, May 11,
close after body
< HTTP/1.0 200 OK
< Server: UDP to HTTP tool with smart buffering
< Accept-Ranges: none
< Content-type: application/octet-stream
< Cache-Control: no-cache
Thanks in advance for any help,
--
Peter.
From a shell on your nginx host you can run something like netstat -ant | egrep
"ESTAB" to see all the open TCP connections. If you run that command line under
watch you will see it update every two seconds, etc.
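As a concrete sketch (Linux net-tools flags; newer systems may prefer `ss -ant`):

```shell
# Refresh the list of established TCP connections every two seconds
watch -n 2 'netstat -ant | grep ESTAB'
```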
FWIW A long time ago I did a bunch of experiments with different load balancer
str
Gary,
This was interesting to read. There was one thing that wasn’t obvious to me
however.
What was the high level problem that you were solving with this specific
configuration?
Curiously
Peter
Sent from my iPhone
> On Oct 30, 2020, at 3:16 PM, garyc...@yahoo.com
> wrote:
>
for your
business.
Peter
> On Aug 24, 2020, at 2:54 PM, lists wrote:
>
> I can't find it, but someone wrote a script to decode that style of hacking.
> For the hacks I was decoding, they were RDP hack attempts. The hackers just
> "spray" their attacks. Often t
Why are you doing an nginx POC?
To be blunt, nginx is the most powerful, flexible web server/reverse
proxy/application delivery software product that exists. If it has an obvious
competitor it’s the F5 BigIP LTM/WAF device - and F5 owns nginx. So what does
this mean? It means that if you don’t
Why do you want to do this at all?
What is the real underlying problem that you are attempting to solve?
> On Nov 11, 2019, at 8:29 AM, Kostya Velychkovsky
> wrote:
>
> I use Linux, and had a bad experience with Linux shaper (native kernel QoS
> mechanism - tc ), it consumed a lot of CPU an
Is your web server on the internet? If so then see what redbot shows. It’s an
amazing tool to debug nuanced http issues
Sent from my iPhone
> On Oct 9, 2019, at 1:52 AM, Ken Wright wrote:
>
> Sorry to be taking up so much bandwidth lately, but I'm seeing some
> weird behavior from nginx.
>
>
I’m wondering if you are overthinking this. You said that the memory was reused
when the workload increased again. Linux memory management is unintuitive. What
would happen if you used a different metric, say # active connections, as your
autoscaling metric? It sounds like this would behave “bet
I’d suggest that you use wrk2, httperf, ab or similar to run a synthetic test.
Can your site handle one request every five seconds? One request every second?
Five every second? ... is your backend configured to log service times? Is your
nginx configured to log service times? What do you see? By
Hi All,
I had GeoIP working on nginx 1.14.x. I upgraded to nginx 1.16.x and the whole thing
broke, so I decided to just upgrade to GeoIP2. I have the following below in
nginx.conf, which I saw on the nginx page.
load_module "/usr/local/libexec/nginx/ngx_http_geoip2_module.so";
load_module "/usr/local/
Andreas,
Do you know of any large, high traffic sites that are using HSTS today?
Peter
> On Jun 5, 2019, at 12:56 PM, A. Schulze wrote:
>
>
>
> Am 05.06.19 um 14:54 schrieb Sathish Kumar:
>> Hi Team,
>>
>> We would like to fix the HTTPS pinning vuln
Mik,
I’m not going to get into the openbsd question, but I can tell you some of the
different things that I have done to solve this kind of problem in the past.
Your environmental constraints will impact which is feasible:
1. Use tcpdump to capture packets
2. Use netcat as an intercepting proxy
Increasing # ephemeral ports
Adjusting ulimit
3. Tuning specific to your workload:
UDP and TCP buffer sizes - should match the BDP
NIC tuning - IRQ coalescing, 10G-specific tuning; see the CDN, Mellanox, Red Hat, and HP
suggestions for low-latency tuning.
That’s a start.
Peter
Sent from my iPhone
> On Apr 19, 2
everything.
Peter
Sent from my iPhone
> On Apr 12, 2019, at 10:57 PM, lists wrote:
>
> Perhaps a dumb question, but if all you are going to do is return a 403, why
> not just do this filtering in the firewall by blocking the offending IP
> space. Yeah I know a server should a
,
Peter
Sent from my iPhone
> On Mar 23, 2019, at 8:17 PM, Hemant Bist wrote:
>
> Hi,
> I want to know if this a right way to make the change ( or if there is a
> better /recommended method). So far we have only tweaked the configuration of
> nginx which scales very nicely fo
?
Curious,
Peter
Sent from my iPhone
> On Mar 12, 2019, at 9:57 PM, Maxim Dounin wrote:
>
> Hello!
>
>> On Tue, Mar 12, 2019 at 02:09:06PM -0400, wkbrad wrote:
>>
>> First of all, thanks so much for your insights into this and being patient
>> with me. :) I
the-curious-case-of-the-crooked-tcp-handshake
<https://labs.ripe.net/Members/gih/the-curious-case-of-the-crooked-tcp-handshake>
You can adjust the net.inet.tcp.finwait2_timeout
and similar and see if that changes the length of your three second effect to
something else.
Hope this helps,
Pe
Satish,
The browser (client-side) cache isn’t related to the nginx reverse proxy cache.
You can tell Chrome to not cache html by adding the following to your location
definition:
add_header Cache-Control 'no-store';
You can use Developer Tools in Chrome to check that it is working.
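In context, that directive might sit in a location like this (the location pattern and upstream are assumptions, not taken from the original config):

```nginx
# Sketch: stop the browser caching HTML. add_header only affects the
# response sent to the client; it does not change how nginx's own
# proxy cache treats the upstream response.
location ~ \.html$ {
    add_header Cache-Control 'no-store';
    proxy_pass http://backend;   # hypothetical upstream
}
```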
P
+1 to the openresty suggestion
I’ve found that whenever I want to do something gnarly or perverse with nginx,
openresty helps me do it in a way that’s maintainable and with any ugliness
minimized.
It’s like nginx with super-powers!
Sent from my iPhone
> On Feb 11, 2019, at 1:34 PM, Robert Pap
behavior.
Peter
> On 11 Feb 2019, at 2:00 PM, Peter Booth wrote:
>
> You should be able to answer this by tailing the log of your nginx and orig
> server at the same time.
>
> It would be helpful if you shared an (anonymized) section of both logs. When
> I say fast or slow
>
You should be able to answer this by tailing the log of your nginx and orig
server at the same time.
It would be helpful if you shared an (anonymized) section of both logs. When I
say fast or slow
I might mean something very different to what you hear.
> On 11 Feb 2019, at 10:06 AM, joao.pere
Open this and you will see that a request to https://digitalkube.com/ returns a
301 pointing back to itself.
Check your CDN configuration
https://redbot.org/?uri=https%3A%2F%2Fdigitalkube.com%2F
Sent from my iPhone
> On Jan 28, 2019, at 11:47 AM, Gary wrote:
>
> Log files? Nginx.conf file? Y
request that with curl
Also, your config suggests that your web server might be internet visible.
If it is, I would suggest that you try accessing these test URLs, and also directly
accessing your IIS using the redbot.org HTTP validator.
Good luck,
Peter
> On 21 Jan 2019, at 9:23 AM, petrose
If you use the openresty nginx distribution then you can write a few lines of
Lua to implement your custom logic.
Sent from my iPhone
> On Jan 13, 2019, at 9:13 AM, shahzaib mushtaq wrote:
>
> Hi,
>
> We've a location like /school for which we want to set browser cache lifetime
> as 'current
Is your nginx/Apache site visible on the internet without any authentication?
If so, I recommend that you access your site directly (not through Cloudflare)
with redbot.org, which is the best HTTP debugger ever, for both the nginx and
Apache versions of the site, and see how they compare.
Why is
1. What does GET / return?
2. You said that nginx was configured as a reverse proxy. Is / proxied to a
back-end?
3. Does GET / return the same content to different users?
4. Is the user-agent identical for these suspicious requests?
Sent from my iPhone
> On Jan 10, 2019, at 11:19 PM, gnusys wr
How do you know that this is an attack and not “normal traffic?”
How are these requests different from regular requests?
What do the weblogs say about the "attack requests"?
> On 10 Jan 2019, at 10:30 PM, gnusys wrote:
>
> My Current settings are higher except the worker_process
>
> worker_pro
Your web server logs should have the key to solving this.
Do you know what url was being requested? Do the URLs look valid?
Are there requests all for the same resource?
Are the requests coming from a single IP range?
Are the requests all coming with the same user-agent?
Does the time this starte
The important question here is not the connections in FIN_WAIT. It’s “why do
you have so many sockets in ESTABLISHED state?”
First thing to do is to run
netstat -ant | grep tcp and see where these connections are to.
Do you have a configuration that is causing an endless loop of requests?
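One way to answer the "where are these connections to?" question, as a sketch (the awk split assumes IPv4 addresses; IPv6 colons would need different handling):

```shell
# Tally established connections by remote host; a single remote address
# with thousands of sockets suggests a request loop
netstat -ant | awk '$6 == "ESTABLISHED" { split($5, a, ":"); print a[1] }' \
    | sort | uniq -c | sort -rn | head
```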
Sent
peter wright is no longer with the company
thousands of requests ended up
generating only one request for the backend, and the site stayed up under such
spiky loads.
My tip is to start simple and add one feature at a time and understand your web
server logs, which contain lots of information.
Peter
> On 2 Nov 2018, at 10:45 AM, yf
So this is a very interesting question. I started writing dynamic websites in
1998. Most developers don’t want to generate static sites. I think their
reasons are more emotional than technical. About seven years ago I had two jobs
- the day job was a high traffic retail fashion website. the side
issue is intermittent do you mean that you make
the same request and get different results?
As for listening to production logging, it needn’t be an issue
> On 7 Oct 2018, at 4:57 PM, Jane Jojo wrote:
>
> Thanks for this Peter. I’ll look at redbot.
>
> Do you by any chanc
You need to understand what requests are being received, what responses are
being sent and the actual keys being used to write to your cache.
This means intelligent request logging, possibly use of redbot.org, and
examination of your cache. I used to use a script that someone had posted here
y
One more approach is to not change the contents of resources without also
changing their name. One example would be the cache_key feature in Rails, where
resources have a path based on some ID and their updated_at value. Whenever you
modify a resource it automatically expires.
Sent from my iPho
show at known time of
the day every week.
nginx proxy_cache was invaluable at helping the site stay up and responsive
when hit with enormous spikes of requests.
This is nuanced, subtle stuff though.
Is your site something that you can disclose publicly?
Peter
> On 12 Sep 2018, at 7:23
On Wed, Sep 5, 2018 at 3:25 PM, Maxim Dounin wrote:
> On Wed, Sep 05, 2018 at 09:58:54AM +0300, Peter Volkov wrote:
>
> > Hi. Could you, please, explain. Why nginx sends 301 redirect for the
> > following vhost:
> >
> > server {
> > listen 80;
>
nently
nginx
--
Peter.
So it's very easy to get caught up in the trap of having unrealistic mental
models of how web servers work. If your host is a
recent (< 5 years) single-socket host then you can probably support 300,000
requests per second for your robots.txt file. That's because the f
I've tried chef, puppet and ansible at three different shops. I wanted to like
chef and puppet because they are Ruby-based (which I like) but they seemed
clunky, ugly, and heavyweight. Ansible seemed to solve the easy problems. When
I had a startup I just used Capistrano for deployments, with erb
gh in
the event of an error? Or is there more to it than that?
Sometimes people build sites that are "more dynamic" than they need to be because
they didn't
consider a static site that gets periodically regenerated.
Peter
> On 28 Jun 2018, at 9:27 AM, Friscia, Michael wr
How large is a large POST payload?
Are the nginx and upstream systems physical hosts in same data center?
What are approx best case / typical case / worst case latency for the post to
upstream?
Sent from my iPhone
> On Jun 22, 2018, at 2:40 PM, scott.o...@oracle.com wrote:
>
> I have an nginx p
Your question raises so many other questions:
1. The static content - jpg, png, tiff, etc. It looks as though you are serving
them from your backend and caching them. Are they also being built on demand
dynamically? If not, then why cache them? Why not deploy them to nginx and
serve them directly?
Sounds weird.
1. It doesn't make sense for your cache to be on a tmpfs share. Better to use a
physical disk and allow Linux's page cache to do its job.
2. How big are the files in the larger cache? Min/median/max?
Sent from my iPhone
> On Jun 20, 2018, at 7:38 AM, rihad wrote:
>
> Have you be
Is your client running on a different host than your server?
> On 8 Jun 2018, at 5:35 AM, prabhat wrote:
>
> I am taking performance data on nginx.
> The client I used is h2load
>
> Request per second using h2 is much higher than h2c. But I think it should
> not be as h2 is having the overhead o
Don't.
You should let every tier do its job. Just because nginx has GeoIP
functionality doesn't mean that you should use it.
If you are lucky enough to have a hardware load balancer in front of nginx then
do the blocking there, so you reduce the
load on your nginx. The Golden Rule of keeping websi
If you can dump your http traffic you will probably see headers with names
like:
X-Real-IP
X-Forwarded-For
Sent from my iPhone
> On May 23, 2018, at 11:25 PM, Frank Liu wrote:
>
> Since only load balancer sees the client IP, it has to pass that information
> to nginx. You need to talk to yo
5. Do you use keepalive?
Sent from my iPhone
> On May 20, 2018, at 2:45 PM, Peter Booth wrote:
>
> Rate limiting is a useful but crude tool that should only be one of four or
> five different things you do to protect your backend:
>
> 1 browser caching
> 2 cDN
> 3
Rate limiting is a useful but crude tool that should only be one of four or
five different things you do to protect your backend:
1 browser caching
2 CDN
3 rate limiting
4 nginx caching reverse proxy
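A minimal sketch of item 3 (zone name, size, and rates invented for illustration):

```nginx
http {
    # 10 MB shared zone tracking clients by IP, 10 requests/second each
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;
    server {
        location / {
            # allow short bursts of 20 before rejecting (503 by
            # default; limit_req_status can change it)
            limit_req zone=perip burst=20;
        }
    }
}
```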
What are your requests? Are they static content or proxied to a back end?
Do users login?
Is i
uri";;
add_header Link "<$canonical_url>; rel=\"canonical\"";
proxy_pass http://apache$request_uri;
}
This snippet shows a key made of three parts. The real version has seven parts.
Good luck!
Peter
> On 14 May 2018, at 12:06
I'm guessing that you have a script that keeps executing curl. What you can do
is use curl -K ./fileWithListOfUrls.txt
and the one curl process will visit each url in turn, reusing the socket (aka
HTTP keep-alive).
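A runnable sketch of that idea (file:// URLs stand in for real ones so it works offline; with http(s) URLs pointing at one host, the single curl process reuses its keep-alive connection):

```shell
# One curl invocation fetches every url listed in the config file
cat > fileWithListOfUrls.txt <<'EOF'
url = "file:///dev/null"
url = "file:///dev/null"
EOF
curl -sK fileWithListOfUrls.txt
```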
That said, curl isn't a great workload simulator and, in the long run, you can
get
Does this imply that that different behavior *could* be achieved by first
defining virtual IP addresses (additional private IPs defined at the OS) which
were bound to same physical NIC, and then defining virtual hosts that reference
the different VIPs, in a similar fashion to how someone might c
connections, cache
hit ratios etc is important to understand “what is normal?” It’s easy for our
mental model of how a site works to differ markedly from reality.
Sent from my iPhone
> On Apr 11, 2018, at 2:04 AM, Jeff Abrahamson wrote:
>
>> On Wed, Apr 11, 2018 at 01:17:14AM
will cause google and bing and other search engines to scrape in a
pathological manner
Sent from my iPhone
> On Apr 11, 2018, at 2:04 AM, Jeff Abrahamson wrote:
>
>> On Wed, Apr 11, 2018 at 01:17:14AM -0400, Peter Booth wrote:
>> There are some very good reasons for do
Jeff,
There are some very good reasons for doing things in what sounds like a heavily
inefficient manner.
The first point is that there are some big differences between application
code/business logic and monitoring code:
Business logic, or what your nginx instance is doing is what makes you mon
John,
I think that you need to understand what is happening on your host throughout
the duration of the test. Specifically, what is happening with the tcp
connections. If you run netstat and grep for tcp and do this in a loop every
say five seconds then you’ll see how many connections peak get
desired request
distribution without triggering the ddos protection. Wrk2, Tsung, httperf are
candidates, as well as the cloud based load generator services. Also see Neil
Gunther’s paper on how to combine multiple jmeter instances to replicate real
world traffic patterns.
Peter
Sent from my iPhone
processes.
>
> Do you have any suggestions for differentiating between the two issues that
> might prevent memory from being returned to the system?
>
> Thanks!
>
>> On Thu, Mar 15, 2018 at 1:06 PM Peter Booth wrote:
>> Two questions:
>>
>> 1. how a
Two questions:
1. how are you measuring memory consumption?
2. How much physical memory do you have on your host?
Assuming that you are running on Linux, can you use pidstat -r -t -u -v -w -C
"nginx"
to confirm the process's memory consumption,
and cat /proc/meminfo to view a detailed descrip
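A sketch of those two checks (pidstat is part of the sysstat package; the system-wide summary lives in /proc/meminfo):

```shell
# Per-process memory of the nginx workers, sampled once a second, 5 times
pidstat -r -C nginx 1 5
# System-wide summary; MemAvailable is usually the number that matters
grep -E 'MemTotal|MemAvailable|Cached' /proc/meminfo
```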
size of your problem space.
Peter
Sent from my iPhone
> On Mar 13, 2018, at 5:58 PM, Kenny Meyer wrote:
>
> Hi Roman,
>
>> Are there any errors in error.log?
> No errors…
>
>> And what happens if you switch www.example.com and staging.example.com?
> Then I get
happening and use ss or
tcpdump
to confirm that no request is sent to your staging destination.
I'm assuming that both www.example.com and staging.example.com are hosted
on different hosts, different IPs and are both functional.
Peter
> On Mar 13, 2018, at 5:58 PM, Kenny Meyer wrote:
>
I agree that avoiding if is a good thing. But avoiding duplication isn’t always
good.
Have you considered a model where your configuration file is generated with a
templating engine? The input file that you modify to add/remove/change
configurations could be free of duplication but the conf fi
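A toy illustration of the templating idea, with sed standing in for a real engine like Jinja2 or ERB (template and hostnames invented):

```shell
# One duplication-free template expanded into repetitive server blocks
cat > vhost.tmpl <<'EOF'
server {
    listen 80;
    server_name __NAME__;
    root /var/www/__NAME__;
}
EOF
for name in example.com example.org; do
    sed "s/__NAME__/$name/g" vhost.tmpl
done > generated.conf
grep -c 'server_name' generated.conf   # prints 2
```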
rs ...
But the bottom line is separation of concerns. Nginx should not use fsync
because it isn’t nginx's business.
My two cents,
Peter
> On Feb 28, 2018, at 4:41 PM, Aziz Rozyev wrote:
>
> Hello!
>
> On Wed, Feb 28, 2018 at 10:30:08AM +0100, Nagy, Attila wrote:
>
>&g
100GB of cached files sounds enormous. What kinds of files are you caching? How
large are they? How many do you have?
If you look at your access log what hit rate is your cache seeing?
Sent from my iPad
> On Feb 16, 2018, at 3:16 AM, Andrzej Walas
> wrote:
>
> After this inactive logs I have
I think that part of the power and challenge of using nginx’s caching is that
there are many different ways
of achieving the same or similar results, but some of the approaches will be
more awkward than others.
I think that it might help if you could express what the issue is that you are
try
The tech empower web framework benchmark is a set of six micro benchmarks
implemented with over 100 different web frameworks. It's free, easy to set up,
and comes as prebuilt docker containers.
Sent from my iPhone
> On Jan 26, 2018, at 2:27 PM, leeand00 wrote:
>
> Does anyone have a suggestion
So some questions:
What hardware is this? Are they 16 "real" cores or hyper-threaded cores?
Do you have a test case set up so you can readily measure the impact of a change?
Many tunings that involve NUMA will only show substantial results in specific
app
What does cat /proc/cpuinfo | tail -28 ret
Perhaps you should use pidstat to validate which processes are running on the
two busy cores?
> On Jan 11, 2018, at 6:25 AM, Vlad K. wrote:
>
> On 2018-01-11 11:59, Lucas Rolff wrote:
>> Now, in your case with php-fpm in the mix as well, controlling that
>> can be hard ( not sure if you can pin
> On Fri, Jan 5, 2018 at 6:28 AM, Wade Girard wrote:
>> Hi Peter,
>>
>> Thank You.
>>
>> In my servlet I am making https requests to third party vendors to get data
>> from them. The requests typically take 4~5 seconds, but every now any then
>> on
that behave differently”
It would probably help us if you explained a little more about your test, why
the sleep is there and what your goals are?
Peter
> On Jan 4, 2018, at 11:45 PM, Wade Girard wrote:
>
> I am not sure what is meant by this or what action you are asking me to tak
Are you running apache bench on the same or a different host?
How big is the javascript file? What is your ab command line?
If your site is to be statically published (which is a great idea),
why are you using SSL anyway?
> On 4 Jan 2018, at 6:12 PM, eFX News Development wrote:
>
> Hello! Thanks for
Take a look at the stream directive in the nginx docs. I've used that to proxy
an https connection to a backend when I needed to make use of preexisting SSO.
Sent from my iPhone
> On Dec 6, 2017, at 5:47 PM, Nicolas Legroux wrote:
>
> Hi,
>
> I'm wondering if it's possible to do what's descri
First step:
Use something like http://www.kloth.net/services/nslookup.php
to check the IP addresses returned for all six names (with and without www for
the three domains)
Do these look correct?
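The same check can be scripted from a shell (hostnames here are placeholders; `dig +short` works too, if installed):

```shell
# Resolve each name and print whatever records come back
for host in example.com www.example.com; do
    getent hosts "$host" || echo "$host: no record"
done
```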
Sent from my iPhone
> On Dec 6, 2017, at 5:27 PM, qwazi wrote:
>
> I'm new to nginx but needed a
017, at 1:11 AM, Peter Booth wrote:
>
> In a situation where you are confident that the workload is coming from a
> DDOS attack and not a real user.
>
> For this example the limit is very low and nodelay wouldn’t seem appropriate.
> If you look at the techempower benchmark res
In a situation where you are confident that the workload is coming from a DDOS
attack and not a real user.
For this example the limit is very low and nodelay wouldn't seem appropriate.
If you look at the techempower benchmark results you can see that a single core
VM should be able to serve ov
So what exactly are you trying to protect against?
Against “bad people” or “my website is busier than I think I can handle?”
Sent from my iPhone
> On Nov 30, 2017, at 6:52 AM, "tongshus...@migu.cn"
> wrote:
>
> a limit of two connections per address is just a example.
> What does 2000 reque
There are many things that *could* cause what you’re seeing - say at least
eight. You might be lucky and guess the right one- but probably smarter to see
exactly what the issue is.
Presumably you changed your upstream webservers to do this work, replacing ssl
with unencrypted connections? Do y
Can you count the number of files that are in your cache and whether or not
it's changing with time?
Then compare with the number of unique cache keys (from your web server log)
When the server starts returning a MISS - does it only do this for newer
objects that haven’t been requested before?
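A quick way to get those two counts (the cache path and log file name are assumptions; use whatever `proxy_cache_path` points at and however you log your cache key):

```shell
# Number of objects currently in the cache
find /var/cache/nginx -type f | wc -l
# Number of distinct cache keys seen, if keys are logged one per line
# to a (hypothetical) cache_keys.log
sort -u cache_keys.log | wc -l
```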
D
$50k F5 BigIP LTM+WAF at less than 1/10 the
cost.
But all of these features need to be used delicately, if you want to avoid
rejecting valid requests.
Peter
Sent from my iPhone
> On Nov 20, 2017, at 9:28 AM, Stephan Ryer wrote:
>
> Thank you very much for clearing this out. All I n
You need to understand, step-by-step, exactly what is happening.
Here is one (of many) ways to do this:
1. Open the Chrome browser
2. Right-click on the background and select Inspect; this will open the
developer tools page
3. Select the "Network" tab, which shows you the HTTP requests issued for
This is true in general, but with a single exception that I know of.
It’s common for nginx to proxy requests to a Rails app or Java app on
an app server and for the app server to implement the session logic
This is an open-resty session implementation that sits within the nginx process.
https:/
data.
Are there any plans within nginx to report higher-resolution timings?
Peter
> On Oct 29, 2017, at 9:35 AM, yang chen wrote:
>
> Thanks for your reply, why calling the ngx_event_expire_timers is unnecessary
> when ngx_process_events handler returns so quickly that the
There are a few approaches to this but they depend upon what you’re trying to
achieve. Are your requests POSTs or GETs? Why do you have the mirroring
configured?
If the root cause is that your mirror site cannot support the same workload as
your primary site, what do you want to happen when yo
in
use)
If you look at all of the lines it's easier to see that there is no trend of
memory increasing over time:
NewiMac:Records peter$ cat phpOutput.txt | grep php-fpm | awk '{print $11,$6}'
| head -15 | awk '{print $2}' | average -M
11852
NewiMac:Records peter$ cat phpO
w many milliseconds are
spent building every request
See https://lincolnloop.com/blog/tracking-application-response-time-nginx/
<https://lincolnloop.com/blog/tracking-application-response-time-nginx/>
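The approach in that article boils down to logging two standard nginx variables; a sketch (the format name is invented):

```nginx
# Millisecond-resolution service times in the access log:
# $request_time is total time, $upstream_response_time is backend time
log_format timing '$remote_addr "$request" $status '
                  'rt=$request_time urt=$upstream_response_time';
access_log /var/log/nginx/access.log timing;
```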
It’s better that you email me off-list for further discussion
Peter
peter _ booth @ m
Agree,
Can you email me offline. I might have a few ideas on how to assist.
Peter
peter _ booth @ me.com
> On Oct 16, 2017, at 3:55 PM, agriz wrote:
>
> Sir,
>
> Thank you for your reply.
>
> This is a live server.
> It is an NPO (non profit organisation).
You said this
> On Oct 16, 2017, at 3:30 PM, Peter Booth wrote:
>
> If i change the values, it hangs with 3k or 5k visitors.
> This one handle 5k to 8k
What hangs? The host, the nginx worker processes, the PHP, or the MySQL?
You need to capture some diagnostic information over
Advice
- instead of tweaking values, first work out what is happening,
locate the bottleneck, then try adjusting things when you have a theory
The first question you need to answer:
For your test, is your system as a whole overloaded?
As in, for the duration of the test, is the #req/sec supported constant?
error
causes.
Peter
> On Oct 12, 2017, at 4:52 AM, Dingo wrote:
>
> I found the solution, but I don't understand what it does. When I add:
>
> proxy_cache_key "$host$uri$is_args$args";
>
> To a location block it magically works. I have no clue what happens
at it has sufficient memory and that no major page faults are occurring
(sar -B should return 0.0)
Peter
> On Oct 6, 2017, at 3:05 AM, rnmx18 wrote:
>
> Hi,
>
> To realize a distributed caching layer based of disk-speed and storage, I
> have prepared the following configur
I can say that Maxim's idea of using tcp proxying with the stream module is
very simple to configure - just a couple of lines, and tremendously useful.
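A minimal sketch of what that looks like (addresses invented):

```nginx
# stream {} is a top-level block, alongside http {}
stream {
    server {
        listen 443;
        # TCP passthrough: TLS is not terminated here, so the backend's
        # existing certificates and SSO keep working
        proxy_pass 192.0.2.10:443;   # hypothetical backend
    }
}
```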
Sent from my iPhone
> On Oct 4, 2017, at 3:24 PM, pan...@releasemanager.in
> wrote:
>
> Maxim,
>
> totally agree on your statement and op
I found it useful to define a dropCache location that will delete the cache on
request. I did this with a shell script that I invoked with lua (via openresty)
but I imagine there are multiple ways to do this.
Sent from my iPhone
> On Oct 4, 2017, at 11:39 AM, Maxim Dounin wrote:
>
> Hello!
>
the request to nginx which is in front of another
back-end?
If so, what is the back-end?
How much data is being sent in the POST?
Who creates the JSON doc?
Peter
Are you familiar with the material in
https://www.codeproject.com/Articles/648526/All-about-http-chunked-responses
<ht
Lots of questions:
What are the upstream requests?
Are you logging hits and misses for the cache - what's the hit ratio?
What size are the objects that you are serving?
How many files are there in your cache?
What OS and what hardware are you using? If it's Linux can you show the results
of the f
What is your ultimate goal? You say that you want to replay 0.05% of traffic
into a test environment.
Are you wanting to capture real world data on a one off or ongoing basis?
You say that this particular proxy is very busy. How busy? Is it hosted on a
physical host or a virtual machine?
If