Hi
Yes, use ngx.exec in lua
On 09/01/17 07:23, pavelvasev wrote:
Have you found a solution for this, Richard?
Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,228856,271867#msg-271867
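For anyone finding this in the archive, a minimal sketch of the ngx.exec approach - an internal redirect from Lua, with hypothetical location names:

location /old {
    content_by_lua_block {
        -- internal redirect: handled inside nginx, the client never
        -- sees /new and no extra round trip is made
        return ngx.exec("/new")
    }
}

location /new {
    root /var/www;
}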
Hello
I'm trying to enable this option on a proxy_pass location:
proxy_ssl_trusted_certificate /etc/ssl/certs/ca-certificates.crt;
proxy_ssl_verify on;
proxy_ssl_verify_depth 9;
/etc/ssl/certs/ca-certificates.crt is compiled by update-ca-certificates
(http://manpages.ubuntu.com/manpa
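For context, a minimal sketch of how those directives sit together in a proxy_pass location (the backend name here is hypothetical):

location / {
    proxy_pass https://backend.example.com;
    proxy_ssl_trusted_certificate /etc/ssl/certs/ca-certificates.crt;
    proxy_ssl_verify on;
    proxy_ssl_verify_depth 9;
    # also verify the name in the backend certificate, not just the chain
    proxy_ssl_name backend.example.com;
}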
Bartenev wrote:
On Sunday 06 December 2015 01:28:15 Richard Kearsley wrote:
Hi
Since 1.9.5,
*) Change: now the "output_buffers" directive uses two buffers by default.
The two buffers do not work with thread_pool/aio - the connection is
closed at 32,768 bytes (the size of one buffer)
I'
Hi
Since 1.9.5,
*) Change: now the "output_buffers" directive uses two buffers by default.
The two buffers do not work with thread_pool/aio - the connection is
closed at 32,768 bytes (the size of one buffer)
These messages are shown in the error log:
[alert] 126931#126931: task #106 already active
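A hedged workaround to test while this is investigated: pin output_buffers back to the single-buffer pre-1.9.5 default in the affected location and see whether the early close goes away (the location is hypothetical):

location /files/ {
    aio threads;
    # pre-1.9.5 default was a single 32k buffer
    output_buffers 1 32k;
}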
installed in
/usr/sbin)? Depending on how you are starting it, the wrong executable
may be in use.
Kind Regards
Andrew
On 27 May 2015, at 13:22, Richard Kearsley wrote:
Hi
First time trying aio threads on linux, and I am getting this error
[emerg] 19909#0: unknown directive "thread_poo
Hi
First time trying aio threads on linux, and I am getting this error
[emerg] 19909#0: unknown directive "thread_pool" in
/usr/local/nginx/conf/nginx.conf:7
Line 7 reads:
thread_pool testpool threads=64 max_queue=65536;
Everything indicates it was built --with-threads, so I'm not sure where
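For reference, the directive only exists when the binary was built --with-threads, so it's worth comparing `nginx -V` output of the binary that is actually being launched. The intended usage looks like this sketch (pool name taken from the config above, location hypothetical):

# main (top-level) context of nginx.conf
thread_pool testpool threads=64 max_queue=65536;

http {
    server {
        location /files/ {
            # offload blocking file reads to the named pool
            aio threads=testpool;
        }
    }
}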
Hi
The error you supplied appears to be coming from the backend itself,
so the proxy_pass is actually working.
Check your backend logs to find out whether the requested url is what
you expected.. and why the url is invalid
Richard
On 27/08/14 18:37, ricardo.ekm wrote:
Hi All,
I'm trying t
Hi
It seems that if I have 2 server {} sections, one with spdy enabled and
one without, spdy is still accepted on the second
server
{
    server_name "";
    listen 80;
    listen 443 ssl spdy;
}
server
{
    server_name "something.com";
    listen 80;
    listen 443 ssl
Hi
Tested 1.6.1 and 1.7.4
Speed is back to normal
Many thanks!
Richard
On 16/08/14 23:19, Valentin V. Bartenev wrote:
On Saturday 16 August 2014 01:27:19 Richard Kearsley wrote:
attached
Thank you for the report.
Please, try the following patch:
diff -r f1e05e533c8b src/http
Hi
I have been tracing an issue for the past couple of days and have
narrowed down the case to when spdy is being used with aio
Testing using a 1GB file download in chrome and firefox, http and https
download as normal
using spdy, only the first ~250k is downloaded and then a wait of
exactly 60
On 17/06/14 16:12, Eric Feldhusen wrote:
Option A and that's what I figured as well.
If you don't care about sending the upstream response back to the
client, or want to pick one of the two responses to send back,
then you can use the nginx lua module to perform some obscure
functionality... it
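Something along these lines - an untested sketch using subrequests, with hypothetical mirror locations and addresses, and GET-only (request bodies would need extra handling):

location / {
    content_by_lua_block {
        local uri  = ngx.var.uri
        local args = ngx.var.args
        -- fire both subrequests in parallel
        local res_a, res_b = ngx.location.capture_multi{
            { "/mirror_a" .. uri, { args = args } },
            { "/mirror_b" .. uri, { args = args } },
        }
        -- pick one of the two responses to send back (here: mirror A)
        ngx.status = res_a.status
        ngx.print(res_a.body)
    }
}

location /mirror_a/ { internal; proxy_pass http://10.0.0.1/; }
location /mirror_b/ { internal; proxy_pass http://10.0.0.2/; }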
On 17/06/14 15:13, Eric Feldhusen wrote:
I have a need to adjust an nginx install that currently reverse
proxies to a single server, so that it sends all requests it receives
to two different upstream servers.
do you mean
a) send each request to both?
b) send each request to one or the other (like l
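If it's (b), that's standard upstream load balancing - a sketch with hypothetical addresses:

upstream backends {
    # round-robin between the two by default
    server 10.0.0.1;
    server 10.0.0.2;
}

server {
    listen 80;
    location / {
        proxy_pass http://backends;
    }
}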
On 16/06/14 12:49, shahzaib shahzaib wrote:
8MB/s w/r should not be an issue for 12x3TB SATA HDDs. Maybe I need to
tweak some nginx buffers or kernel settings in order to reduce the high io wait?
if you have a high number of concurrent connections and/or use
limit_rate, then expect hdd (sata or sas) t
On 26/03/14 14:09, stremovsky wrote:
I think it can be a great feature for big production environments !
Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,248429,248722#msg-248722
exactly..
I noticed a few updates to SNI in the latest releases, do any of them
take us closer to this?
Hi
I came across this 'issue' on the lua module about having the ability to
control which SSL certificate is used based on a Lua module handler:
https://github.com/chaoslawful/lua-nginx-module/issues/331
I believe at the moment, this phase isn't exposed so there is no way to
hand it off to a mo
On 01/02/14 10:48, Jonathan Matthews wrote:
No.
No.
he's right
but this can make powerdns a little more bearable
https://github.com/fredan/luabackend
Hi
I was watching this video by the Fastly CEO http://youtu.be/zrSvoQz1GOs?t=24m44s
He talks about the nginx ssl handshake versus apache and comes to the
conclusion that apache was more efficient at mass handshakes due to
nginx blocking while it calls back to openssl
I was hoping to get other peop
On 17/12/13 13:04, Maxim Dounin wrote:
Hello!
On Tue, Dec 17, 2013 at 12:58:43PM +, Richard Kearsley wrote:
Hi
If 'gzip off;' is set on the front-end but a proxy_pass backend gives a
gzipped response, will the front-end decompress it before proxying to the client?
No.
But if you wan
Hi
If 'gzip off;' is set on the front-end but a proxy_pass backend gives a
gzipped response, will the front-end decompress it before proxying to the client?
Cheers
Richard
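(For the archive: if decompressing on the front-end is actually needed, the stock gunzip filter module - nginx must be built --with-http_gunzip_module - is the usual route. A minimal sketch, backend name hypothetical:)

location / {
    proxy_pass http://backend;
    # decompresses gzipped responses, but only for clients
    # that don't advertise gzip support themselves
    gunzip on;
}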
On 05/11/13 16:27, Tim Düsterhus wrote:
This sounds like you want to use `include`; I use it myself for general
settings, valid for any domain:
fair point
would it work like this (an include in an include?)
http
{
    include www.example.com.conf;
    include www.test.com.conf;
    include ww
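For what it's worth, nested includes do work - a sketch of that layout, file names hypothetical:

# nginx.conf
http
{
    include www.example.com.conf;
    include www.test.com.conf;
}

# www.example.com.conf - a complete server block which itself
# includes the shared settings (an include inside an include)
server
{
    server_name www.example.com;
    listen 443 ssl;
    ssl_certificate www.example.com.cer;
    ssl_certificate_key www.example.com.key;
    include general-settings.conf;
}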
On 05/11/13 13:50, Jonathan Matthews wrote:
Please show a duplicated (i.e. operationally inefficient) config that
you wish to aggregate, as I don't understand the result you're aiming
for. J
something like this is the only way I see to do it currently:
http
{
    server
    {
        listen 8
Hi
I was wondering if there's any way to have a configuration like this?
server
{
    listen 80;
    listen 443 ssl;
    ssl_certificate www.example.com.cer;
    ssl_certificate_key www.example.com.key;
    ssl_certificate www.test.com.cer;
    ssl_certif
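(At the time of this thread, nginx picks exactly one certificate per server block, selected by SNI via server_name, so the usual arrangement is one server block per certificate - sketched:)

server
{
    listen 443 ssl;
    server_name www.example.com;
    ssl_certificate www.example.com.cer;
    ssl_certificate_key www.example.com.key;
}
server
{
    listen 443 ssl;
    server_name www.test.com;
    ssl_certificate www.test.com.cer;
    ssl_certificate_key www.test.com.key;
}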
On 20/10/13 11:00, talkingnews wrote:
It says to replace "raring" (or whatever) with the latest release.
But if I look at http://nginx.org/packages/ubuntu/dists/ I see that
raring is the only release listed.
So, how can I best ensure that I can apt-get update nginx without having to
completely remove
On 11/10/13 12:25, Maxim Dounin wrote:
Closest to what you ask about I can think of is the
$request_completion variable. Though it marks not only timeouts but
whether a request was completely served or not.
http://nginx.org/en/docs/http/ngx_http_core_module.html#var_request_completion
thank
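A sketch of wiring that into a log format - $request_completion logs "OK" when the request completed and an empty string otherwise:

log_format timing '$remote_addr "$request" $status $bytes_sent '
                  'completed=$request_completion';
access_log logs/access.log timing;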
Hi
I would like to log an indication of whether a request ended because of
a client timeout - in the access.log
e.g.
log_format normal '$remote_addr - $remote_user [$time_local] '
                  '"$request" $status $bytes_sent '
                  '"$http_referer" "$http_user_agent" "$client_send_timed_out"';
where $client_
On 04/10/13 12:04, Indo Php wrote:
allow 127.0.0.1;
deny all;
the url will only work if requested from the server itself...
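In full, something like this sketch (location and backend hypothetical):

location /admin/ {
    allow 127.0.0.1;
    deny all;
    proxy_pass http://127.0.0.1:8080;
}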
Ah, let me guess - is the keepalive number "per worker"?
On 03/09/13 13:42, Richard Kearsley wrote:
Hi
I seem to have an issue where the upstream keepalives aren't being
re-used
It shouldn't ever need more than 500 connections to the upstream, but
it keeps making more?
Hi
I seem to have an issue where the upstream keepalives aren't being re-used
proxy_http_version 1.1;

upstream dev1 {
    server 10.0.0.11 max_fails=0;
    keepalive 1024;
}

location /
{
    proxy_pass http://dev1;
    proxy_set_header Connection "";
}
On a separate server I run 'ab -n 500 -c
Hi
I'm using the upstream module - with the sole purpose of enabling
keepalives to my backend
I don't want to use any of the other features, I only have 1 server in
the upstream {}
Does that mean max_fails is still being used? (defaults to 1?) and
fail_timeout etc..? they both have default values
Wh
On 06/08/13 04:02, Dennis Jacobfeuerborn wrote:
Since I determine the reason for the denied access in lua, a way to do
it there would also help. I already tried "ngx.status = 403"
followed by a "ngx.exec('/reason1')" but while the right page is
displayed, the status code returned gets reset to
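One hedged alternative, assuming /reason1 only ever needs to be served as the 403 body: let error_page keep the status and have Lua abort with 403 (the deny check below is a placeholder):

error_page 403 /reason1;

location = /reason1 {
    internal;
    root /var/www/errors;
}

location / {
    access_by_lua_block {
        local denied = true  -- placeholder for the real check
        if denied then
            -- error_page serves /reason1 and the 403 status is kept
            return ngx.exit(ngx.HTTP_FORBIDDEN)
        end
    }
}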
On 05/08/13 21:13, Rangel, Raul wrote:
The filesystem is AUFS. It's mounted inside of a docker container.
So my assumption is that AUFS does not support writev? So I need to somehow
mount a different filesystem?
Hi
I can't comment about AUFS, but you can change where those temp files
are stored - see the sketch below.
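The relevant directives, with hypothetical paths - point them at a volume mounted from outside the AUFS overlay:

proxy_temp_path       /data/nginx/proxy_temp;
client_body_temp_path /data/nginx/client_body_temp;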
Hi
There's no size limit; it will keep growing until your disk is full.
Here's a script I use to rotate the log - run it from cron every hour.
Hope it helps.
#!/bin/sh
PID=`cat /usr/local/nginx/logs/nginx.pid`
LOG="/usr/local/nginx/logs/access.log"
NOW=$(date +"%Y-%m-%d-%H-%M")
NEWLOG="${LOG}.${NOW}"
mv "${LOG}" "${NEWLOG}"
# USR1 tells the master process to reopen its log files
kill -USR1 "${PID}"
the port in proxy_pass is not for listening/accepting incoming
connections - it is for connecting outwards to another server/service
You must have something else (another httpd, probably not nginx)
listening on 8009..?
On 23/07/13 17:39, imran_kh wrote:
Hello,
I am using Nginx web server
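The distinction, sketched with hypothetical addresses:

server {
    # nginx accepts client connections here
    listen 80;

    location / {
        # ...and connects outward to whatever is listening on 8009
        proxy_pass http://127.0.0.1:8009;
    }
}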
It's not nice or clean. It's an ugly hack.
appa
On Mon, Jul 8, 2013 at 3:06 PM,
Richard Kearsley <rkears...@blueyonder.co.uk>
wrote:
Hi
I'
Hi
I'm trying to set up spdy so that I can choose whether or not to use it
based on the server location that's accessed
As I understand, the underlying protocol (http/https/spdy) is
established first before any request can be sent (e.g. before we know
which location it will match)
I know this
Hi
I already checked there, I'm getting a different error ("mp4 atom too
large" != "mp4 moov atom is too large")
My error message seems to have been added in this patch
http://nginx.org/download/patch.2012.mp4.txt
In any case, the example given there is a reasonable one, as
'12583268' i
=../../../lua-nginx-module
before u ask :)
On 28/06/13 17:11, Richard Kearsley wrote:
Hi
I use ngx_http_mp4_module quite heavily, and very occasionally I see
this error for a few files:
mp4 atom too large:723640794
With the number differing.. Is the number the size of the atom in bytes?
If so
Hi
I use ngx_http_mp4_module quite heavily, and very occasionally I see
this error for a few files:
mp4 atom too large:723640794
With the number differing.. Is the number the size of the atom in bytes?
If so, 723640794 is around 690MB and the mp4 file is only around 150MB
The same file works
Hi
I'm pretty sure I have found the cause:
all the videos I see it happening on have short audio (the audio stops
before the video ends)
On 22/06/13 16:06, Richard Kearsley wrote:
Hi
I've been able to test a few videos myself and can see it happening
Just to be clear, 99%+ seem to be fine an
The question is, which?
Thanks
On 22/06/13 15:41, Maxim Dounin wrote:
Hello!
On Sat, Jun 22, 2013 at 10:40:53AM +0100, Richard Kearsley wrote:
Hi
I’m using the mp4 module quite heavily, and very occasionally (once
every minute or so on a busy website) there is an error written to
error.log and st
Hi
nginx version: nginx/1.4.1
built by gcc 4.2.1 20070831 patched [FreeBSD]
TLS SNI support enabled
configure arguments: --with-debug --with-http_ssl_module
--with-http_stub_status_module --with-file-aio --with-http_flv_module
--with-http_mp4_module --with-http_geoip_module
--add-module=../../
Hi
I’m using the mp4 module quite heavily, and very occasionally (once
every minute or so on a busy website) there is an error written to
error.log and status 500 returned in the access log
[error] 42078#0: *5510811 start time is out mp4 stts samples in ...
(mostly this error)
[error] 42072#0
On 06/05/13 18:47, Gee wrote:
The frustrating thing here is that /tmp/fastcgi.socket does actually
exist. I tried 'touch' and making sure 'wheel' has the appropriate
permissions. The result of 'ls -la /tmp/fastcgi.socket' revealed
nothing awry.
Does anyone have any ideas/hints?
To try
Hi
I read here that keepalives to backend can be enabled with the upstream
module
(http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive)
But can they be used without defining an upstream block? Just a simple
proxy_pass as the backend is a variable in my case: 'proxy_pass $proxy
Hi
Are you sure it's not the linux file/buffer cache that's using all your
ram? (does ps/top show nginx or the worker processes using it directly?)
Linux and most/all other unix variants will fill up unused ram with
cached versions of the most recently used files so they don't have to be
read f
Hi
Is the max value specified in `open_file_cache` on a per-worker basis?
e.g. if I set it to 20,000, will it cache 80,000 open fds with 4 workers?
Thanks
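(As far as I know the open file cache is kept separately in each worker process, so yes - 4 workers could hold up to 4 x 20,000 descriptors. For reference, a sketch:)

http {
    # each worker keeps its own cache of open fds and metadata
    open_file_cache max=20000 inactive=60s;
    open_file_cache_valid 30s;
}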
any hacker would need to be inside your server or have some
administrative access to the network to find those ips
On 06/04/13 15:01, Larry wrote:
My concern is that a hacker is able to know my other ips over europe.
My host is not a problem. The real deal is the outgoing packets I don't want
extern
If you run wireshark on your main box, you will be able to see the ips
it connects to (but not the urls because of https). However they would
need to be logged into your box to run wireshark and at this point they
could just run a netstat command to find the ips it is connected to.
If you mean c
Hi
That's a good idea but I think it's not possible.
The cache key is set before the request is sent to the backend, but the
content length is only known after the backend responds (catch 22)
On 04/04/13 12:59, ntib1 wrote:
Hi,
I'd like to put $content_length in proxy_cache_key in order nginx to
Hi
I'm trying to tune 'kern.maxbcache' with hope of increasing
'vfs.maxbufspace' so that more files can be stored in buffer memory on
freebsd 9.1
It's suggested to tune this value here
http://serverfault.com/questions/64356/freebsd-performance-tuning-sysctls-loader-conf-kernel
and here http:/
Hi
Many (MANY) people use php-fpm and it's fine
If you really need extra performance you should test it yourself on your
own application (not hard to do) and see if proxying to apache actually
gives any benefit
what was the article that you read?
you should probably do your own tests to work out the fastest way to do
it if you really need as many dynamic requests as possible
My feeling at this point (after using nginx for 3+ years) is that I
would avoid using apache - KISS!
On 17/02/13 13:42, mottws