I wrote a module for nginx four years ago that limits the number of
connections to upstream servers, so that they do not become too busy to
work efficiently. I now consider this module stable, and would like it to
be merged into official nginx if possible.
The code is here:
https://github.
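For reference, stock nginx has since gained a related mechanism: the
max_conns parameter on upstream server entries (in open-source nginx since
1.11.5). A minimal sketch, with placeholder hostnames:

```nginx
# Cap concurrent connections per upstream server (stock nginx >= 1.11.5).
upstream backend {
    server app1.example.com max_conns=64;
    server app2.example.com max_conns=64;
    # Excess requests are queued only with the commercial "queue"
    # directive; otherwise they go to the next server or fail.
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}
```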
Why not use Lua: collect the data from your variables, run the query
(optionally storing the results in a Lua cache), and process it all in
real time, non-blocking and without any extra module?
Examples:
http://stackoverflow.com/questions/25955869/how-do-i-use-mysql-for-dynamic-doc-root-with-nginx
https
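The idea above can be sketched with the lua-nginx-module (OpenResty),
using a shared dict as the cache. query_backend() is a placeholder for
your own non-blocking lookup (e.g. via lua-resty-mysql); cosockets are
available in access_by_lua_block, which is why the lookup lives there
rather than in set_by_lua:

```nginx
http {
    lua_shared_dict my_cache 10m;   # name and size are arbitrary

    server {
        location / {
            set $docroot "";        # filled in by the Lua block below
            access_by_lua_block {
                local cache = ngx.shared.my_cache
                local host  = ngx.var.host
                local root  = cache:get(host)
                if not root then
                    -- placeholder for your non-blocking backend query
                    root = query_backend(host)
                    cache:set(host, root, 60)   -- cache for 60 seconds
                end
                ngx.var.docroot = root
            }
            root $docroot;
        }
    }
}
```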
Hi,
I'm new to nginx development and have to work on a custom nginx module.
The module is designed to provide a list of variables for the user to use
in the nginx.conf file. Whenever one of those variables is used, the
module makes a UDP request to a helper server to get the correct value of
that variable.
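As a rough illustration of the helper side of such a setup, here is a
minimal UDP server and client in Python. The protocol (variable name in,
value out) and the variable table are assumptions for the sketch, not the
actual module's wire format:

```python
import socket
import threading

# Hypothetical variable table the helper would answer from.
VALUES = {"my_var": "hello", "region": "eu-west"}

def serve_one(sock):
    """Answer a single 'variable name -> value' UDP query."""
    data, addr = sock.recvfrom(1024)
    name = data.decode().strip()
    reply = VALUES.get(name, "")        # empty string for unknown names
    sock.sendto(reply.encode(), addr)

def lookup(server_addr, name, timeout=1.0):
    """Client side: roughly what the nginx module would do per request."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)           # never block the worker forever
        s.sendto(name.encode(), server_addr)
        reply, _ = s.recvfrom(1024)
        return reply.decode()

if __name__ == "__main__":
    srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    srv.bind(("127.0.0.1", 0))          # ephemeral port
    addr = srv.getsockname()
    threading.Thread(target=serve_one, args=(srv,), daemon=True).start()
    print(lookup(addr, "my_var"))       # -> hello
```

Note the timeout on the client side: an nginx variable handler must not
wait indefinitely on a lost datagram.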
PHP-FPM can generate its own log files.
The default behavior of sending errors back through the FastCGI channel
can be overridden with proper error logging on the PHP-FPM side.
2048 bytes per log line is more than enough on the web-server side.
Do your homework: read the PHP docs. If you are sti
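In php-fpm pool-config terms, that separation looks roughly like this
(paths are examples):

```ini
; php-fpm pool config (e.g. www.conf) -- paths are examples
catch_workers_output = yes                  ; capture workers' stderr
php_admin_flag[log_errors]  = on
php_admin_value[error_log]  = /var/log/php-fpm/www-error.log
; with display_errors off, errors land in the file above instead of
; being sent back to nginx over the FastCGI connection
php_admin_flag[display_errors] = off
```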
Are you sure your requests are processed by the right block?
Are you sure the configuration is being loaded? Since v1.9.2, you can use
the -T command-line parameter to show the loaded configuration.
When reloading the configuration by sending the HUP signal, ensure no
error message pops up in the error log.
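Concretely (log path depends on your build):

```shell
nginx -t    # test the configuration and report any parse errors
nginx -T    # same test, plus a dump of the full configuration as loaded
# after `nginx -s reload` (which signals the master with HUP):
tail -n 50 /var/log/nginx/error.log
```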
If you're allowing user-generated output to be written directly to your
logs without any sort of sanitization, you've got bigger problems to worry
about :p Again, it doesn't really make sense to have your FastCGI errors
sent here: why can't your FastCGI process log elsewhere, leaving the nginx
error log alone?
Hmm, I understand that limitation. But an attacker or a misbehaving
application can hide the important information we need to identify the
source of the problem.
What about limiting the FastCGI output to 1024 bytes and appending this
info, itself capped at 1024 bytes:
client: 127.0.0.1, server: example.com, u
Already tried it; same result. The main page hits the cache, everything
else bypasses it.
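When debugging this, it helps to expose $upstream_cache_status, which
reports HIT, MISS, BYPASS, EXPIRED, etc. per response. A sketch, assuming
an existing cache zone and a placeholder upstream:

```nginx
location / {
    proxy_cache my_cache;                 # assumes an existing cache zone
    proxy_pass  http://backend;           # placeholder upstream
    # Debug aid: show the cache status of every response.
    add_header  X-Cache-Status $upstream_cache_status always;
}
```

Checking this header for the requests that bypass should tell you whether
they are true BYPASSes (a proxy_cache_bypass condition fired) or MISSes.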
Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,267584,267590#msg-267590
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/ngi