Hi, I have a FastCGI server listening on a Unix domain socket (UDS) for HTTP
requests from NGINX.
For some reason, the requests stopped reaching the FastCGI server.
netstat -nap shows one socket in LISTENING state (as expected), and
one in CONNECTED state (not sure there should be such a session hanging
around),
Thank you B.R. I wonder why 505 was not supported.
Posted at Nginx Forum:
http://forum.nginx.org/read.php?2,260754,260768#msg-260768
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
Thank you Francis. The body content did the trick ... not as aesthetically
pleasing to the eyes as NGINX's "hard-coded reason phrase", but it is
better than a blank page.
I did not understand what you meant by a config to control the reason
phrase.
Thanks again.
I have my FCGI server send "HTTP/1.1 505 Version Not Supported\r\nStatus:
505 Version Not Supported\r\n\r\n".
In nginx.conf, I have:
fastcgi_intercept_errors on;
error_page 505 /errpage;
location /errpage {
    try_files /version_not_supported.html =505;
}
Hi, does NGINX support the generation of the error message for HTTP error
code 505? For example, I see
"401 Authorization Required" when running nginx 1.6.2
but I don't see anything for 505. NGINX would return "505 OK" in the HTTP
response.
Thank you.
I also tried
fastcgi_pass_header Status;
along with the fastcgi_intercept_errors directive. NGINX still returned 200
OK instead of the 400 sent by the fastcgi server.
Hi, I expected fastcgi_intercept_errors to return a static error page AND to
include the HTTP error code (e.g. 400) in the HTTP response header.
From what I see, it returns the static error page but with 200 OK.
Is it the expected behavior?
If yes, is there a way to have nginx return the error code as well?
Thank you Maxim, that was what I was looking for. However, it is still not
returning the static error page. Does nginx expect a certain response format
from the fcgi server? I tried:
"HTTP/1.1 400 Bad Request\r\nStatus: 400 Bad Request\r\n";
and
"HTTP/1.1 400 Bad Request";
The nginx.conf has:
Hi, I would like nginx to map a fastcgi error response to a static error
page, and include the HTTP error code in its HTTP response header; e.g.
1. have nginx return the proper error code in its header to the client.
2. have nginx return the proper error page based on the fastcgi_pass
server's response.
Hi, FastCGI support is built into NGINX. Can someone from the NGINX
organization confirm that there is no plan to retire the FastCGI support in
NGINX? Thank you!
Thank you, that did the trick.
Hi, given
client --(tcp)-->nginx --(fcgi)-->fcgi server --(tcp)--> back-end server
if the client initiates a TCP disconnect, is there a way for NGINX to
propagate the termination to the fcgi server?
Or if the back-end server disconnects, how can the fcgi server communicate
the disconnect all the way back?
Hi, I would like nginx to serve all requests of a given TCP connection to
the same FCGI server. If a new TCP connection is established, then nginx
would select the next UDS FCGI server in round-robin fashion.
Can this be achieved with NGINX, and if yes, how?
I thought turning on fastcgi_keep_conn
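As far as I know, upstream selection in nginx happens per request, not per client TCP connection, so plain round-robin cannot give connection-level stickiness. A hash-based balancing method at least pins a given client to one backend (a sketch; the socket paths are placeholders):

```nginx
upstream fcgi_backends {
    ip_hash;                          # pin each client IP to one backend
    server unix:/var/run/fcgi1.sock;  # placeholder sockets
    server unix:/var/run/fcgi2.sock;
}

server {
    location / {
        fastcgi_pass fcgi_backends;
        include fastcgi_params;
    }
}
```

Note this is per client IP rather than per TCP connection, so it is only an approximation of the behavior asked about.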
Hi, is there a way in nginx to set a limit on the number of "buffered"
connections (I am referring to clients' requests being buffered on disk)?
I was not able to find a directive for this but wanted to confirm, thank
you.
Hi, how do I get a patch for the fastcgi_request_buffering directive support
for nginx version 1.6.2 or any other version going forward? Thank you.
I was using 1.7.9 and it was crashing, so I now use the stable version
1.6.2 per http://nginx.org/en/download.html.
Whichever version I use, I will need the fastcgi_request_buffering directive
patch. Thanks.
Hi Kurt, where can I get a patch for nginx version 1.6.2 (the 'official'
stable version as of today)? Thank you!
Hi, the situation that I am trying to solve is what happens if the client's
request is larger than the configured client_max_body_size. Turning off
buffering by nginx should resolve the problem as nginx would forward every
packet to the back-end server as it comes in. Did I misunderstand the
purpose of the directive?
Thanks Kurt.
The patch compiled and got installed fine. I no longer get an unknown
directive error msg. However, the client's POST request of 1.5M of data
still gives me this error "413 Request Entity Too Large"
even though I added "fastcgi_request_buffering off;"
location / {
includ
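One detail worth noting here: as far as I can tell, fastcgi_request_buffering off (from the patch) only stops nginx from spooling the request body to disk; client_max_body_size is still enforced, which would explain the 413. Something like the following should cover both (values and socket path are placeholders):

```nginx
location / {
    client_max_body_size 100m;            # raise (or set 0 to disable) the limit
    fastcgi_request_buffering off;        # requires the patched nginx
    fastcgi_pass unix:/var/run/app.sock;  # placeholder socket path
    include fastcgi_params;
}
```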
Thanks Kurt.
In the meantime, is there a way to access the patch? I was not able to
access the link to a patch mentioned in this email thread
http://trac.nginx.org/nginx/ticket/251
Thanks.
Hi, how can I tell nginx not to buffer clients' requests? I need this
capability to upload files larger than nginx's max buffering size. I got
an nginx unknown directive error when I tried the fastcgi_request_buffering
directive. Is the directive supported and am I missing a module in my nginx
build?
Hi, I would like to have the auth_request fastcgi auth server to send some
custom variables to the fastcgi back-end server. For example, the Radius
server returned some parameters which the fastcgi auth server needs to send
to the fastcgi back-end server.
location / {
    auth_request /auth;
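One way to do this (a sketch, not tested against the poster's setup; the variable name, header name, and socket paths are hypothetical) is to capture response headers from the auth subrequest with auth_request_set and re-inject them as FastCGI params:

```nginx
location / {
    auth_request /auth;
    # Capture a header the auth app returned (hypothetical header name)
    auth_request_set $auth_user $upstream_http_x_auth_user;
    # Forward it to the back-end FastCGI app
    fastcgi_param X_AUTH_USER $auth_user;
    fastcgi_pass unix:/var/run/backend.sock;  # hypothetical socket
    include fastcgi_params;
}

location = /auth {
    internal;
    fastcgi_pass unix:/var/run/auth.sock;     # hypothetical socket
    # auth subrequests should not forward the request body
    fastcgi_pass_request_body off;
    fastcgi_param CONTENT_LENGTH "";
    include fastcgi_params;
}
```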
In case it will help someone else, the problem turned out to be in the
FastCGI auth server's printf, the last "statement" of the HTTP header should
end with \n\n instead of \r\n.
The following was wrong:
printf("Content-type: text/html\n\n"
       "Set-Cookie: name=AuthCookie\r\n"
       "FastCGI 9010: Hello!\n");
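For reference, a corrected version along the lines the poster describes might look like this (a minimal sketch; the cookie name and greeting are just the example values from the post). The key point is that every header line must come before the single blank line that terminates the header block, otherwise the Set-Cookie line lands in the body:

```c
/* Returns the full FastCGI response: every header line first, each
   terminated by \r\n, then one blank line, then the body.  nginx also
   accepts bare \n line terminators, which is what fixed the poster's app. */
static const char *auth_response(void) {
    return "Content-type: text/html\r\n"
           "Set-Cookie: name=AuthCookie\r\n"
           "\r\n"
           "FastCGI 9010: Hello!\n";
}
```

The app would then emit it in one shot, e.g. printf("%s", auth_response()).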
Thank you Maxim, it is much better in the sense that I am not getting an
error at NGINX start time, but the FastCGI back-end server listening on port
9000 does not seem to get the cookie set by the FastCGI auth server, nor any
data from a POST request body or data generated by the FastCGI auth app.
O
Hi,
Question 1:
I would like to have a FastCGI authentication app assign a cookie to a
client, with the auth app called via auth_request. The steps are as
follows:
1. Client sends a request.
2. NGINX auth_request forwards the request to a FastCGI app to
authenticate.
3. The authenticat
Thanks Sergio, that was helpful!
Hi, I am a newbie at nginx and looking at its authentication capabilities.
It appears that when using auth_request, every client request would still
require an invocation of the auth_request fastcgi or proxy_pass server.
Looking at auth_pam, I am not clear on how it works:
1. How does nginx pass t