Hi,
Thanks for the quick reply.
Yes, I know that ngx_http_grpc_module already supports gRPC proxying. The
fact is that we have a legacy product running a pretty old nginx version,
and we hope it can be used to proxy gRPC as well. So, can the stream
module proxy the gRPC request/response?
If yes, what are the drawbacks compared with the gRPC support in the http
module?
thanks,
Allen
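For reference, a minimal sketch of what such a setup might look like, assuming the stream module is compiled in; the addresses and port numbers below are made up. Since gRPC is HTTP/2 over TCP, a plain layer-4 proxy can relay it without understanding the frames:

```nginx
# Hypothetical TCP passthrough for gRPC traffic (illustrative only).
stream {
    upstream grpc_backend {
        server 10.0.0.10:50051;   # example backend address
    }

    server {
        listen 50051;
        proxy_pass grpc_backend;
    }
}
```

The main drawbacks compared with grpc_pass in the http module: no per-request load balancing (all streams multiplexed on one client TCP connection go to the same backend), no gRPC/HTTP/2-aware error handling or header manipulation, and TLS has to be terminated or passed through at the TCP level.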
Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,293868,293868#msg-293868
___
nginx mailing list -- nginx@nginx.org
To unsubscribe send an email
Hi,
Can somebody show the minimal configuration to support large file uploads
with multipart/form-data Content-Type?
When I upload an 8GB file with the following global configs, I always get a
600s timeout with the error: upstream timed out (110: Connection timed out)
while reading response header
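A sketch of the directives usually involved in large uploads; the values are illustrative, not recommendations:

```nginx
# Illustrative settings for large multipart uploads.
client_max_body_size    10g;    # allow request bodies up to 10 GB
client_body_timeout     600s;   # max gap between two successive body reads
proxy_read_timeout      3600s;  # wait longer for the upstream's response
proxy_request_buffering off;    # optionally stream the body to the upstream
                                # instead of buffering it on disk first
```

The "upstream timed out ... while reading response header" error after 600s points at proxy_read_timeout: nginx had finished (or was still) sending the request and gave up waiting for the upstream to answer.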
Hi,
Thanks for the reply!
So, if there is no error and neither the downstream nor the upstream
actively closes the connection, nginx won't time out and close the TCP
connection with the downstream or the upstream at all? Is that correct?
And I suppose the SO_KEEPALIVE option is turned on by default?
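For what it's worth, SO_KEEPALIVE follows the OS default (usually off) unless it is requested explicitly on the listening socket; the timings below are examples taken from the documented syntax:

```nginx
stream {
    server {
        # Enable TCP keepalive on this socket: idle time 30m,
        # OS-default probe interval, 10 probes before giving up.
        listen 28002 so_keepalive=30m::10;
        proxy_pass 10.0.0.10:28002;   # example backend
    }
}
```

Also note that the stream module's proxy_timeout (10 minutes by default) closes a session that sits idle in both directions, even when no error occurred.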
Hi,
As we know, there are some keepalive options in the nginx http modules to
reuse TCP connections, but are there corresponding options in the nginx
stream module to achieve the same?
How does nginx persist the TCP connection with the downstream?
How does nginx persist the TCP connection with the upstream?
What is the "s
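For comparison, upstream-side connection reuse in the http module is configured like this (a sketch; the upstream name and address are made up). As far as I know, older stream-module versions have no equivalent "keepalive" directive for upstreams, so each proxied session uses its own upstream TCP connection:

```nginx
http {
    upstream backend {
        server 10.0.0.10:8080;
        keepalive 16;               # keep up to 16 idle connections per worker
    }

    server {
        location / {
            proxy_pass http://backend;
            proxy_http_version 1.1;          # required for upstream keepalive
            proxy_set_header Connection "";  # strip the default "close"
        }
    }
}
```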
Looking into the big-loop code, could it happen that the worker process
closes a keepalive connection before consuming the pending read events?

    for ( ;; ) {
        if (ngx_exiting) {
            if (ngx_event_no_timers_left() == NGX_OK) {
                ngx_log_error(NGX_LOG_NOTICE, cycle->log, 0, "exiting");
Hi Maxim Dounin,
Is it possible that nginx closes the keepalive connection while there is
input data queued?
As we know, if a stream socket is closed when there is input data queued,
the TCP connection is reset rather than being cleanly closed.
Br,
Allen
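On the client side nginx mitigates exactly this with lingering close: rather than calling close() with input pending, it can first stop processing and drain remaining data. The behavior is tunable; the sketch below spells out the documented defaults:

```nginx
# http-module defaults shown explicitly. Lingering close waits for
# (and discards) remaining client data before closing the socket,
# which avoids resetting the connection while input is still queued.
lingering_close   on;     # also accepts "always" and "off"
lingering_time    30s;    # max total time spent lingering
lingering_timeout 5s;     # max wait between two successive reads
```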
Can someone elaborate on this a little bit?
"NGINX supports WebSocket by allowing a tunnel to be set up between both
client and back-end servers."
What is the "tunnel" here?
Does it mean the client will talk to the back-end server directly after the
HTTP Upgrade handshake?
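The "tunnel" is still proxied through nginx; the client never talks to the backend directly. After the Upgrade handshake nginx simply relays bytes in both directions between the two TCP connections. The commonly documented configuration looks like this ("backend" is an example upstream name):

```nginx
location /ws/ {
    proxy_pass http://backend;
    proxy_http_version 1.1;             # Upgrade requires HTTP/1.1
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```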
A non-root process needs to signal a reload to the nginx master (running
as root) without sudo.
I've tried using setcap and setpriv with CAP_KILL; neither works.
# getcap nginx/sbin/nginx
nginx/sbin/nginx = cap_kill+ip
# su user01 -s /bin/sh -c 'nginx/sbin/nginx -s reload'
nginx: [alert] kill(68, 1) failed (1:
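One thing worth checking: the getcap output above shows "cap_kill+ip", i.e. the capability is in the permitted and inheritable file sets but not the effective set, so for a non-capability-aware binary it is not active after exec. A sketch of two possible workarounds (paths and user name are examples):

```
# Raise the effective bit as well, so the capability takes effect at exec:
setcap cap_kill+ep nginx/sbin/nginx

# Or, if sudo is acceptable after all, a narrowly scoped sudoers rule
# (e.g. in /etc/sudoers.d/nginx-reload):
user01 ALL=(root) NOPASSWD: /usr/local/nginx/sbin/nginx -s reload
```

Note also that `nginx -s reload` needs read access to the pid file, independent of the permission to send the signal.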
Hi,
I found that most of the time using "r" after ngx_http_free_request()
won't cause any problem; the core dump only happens once in a while under
high load.
I am wondering whether ngx_pfree() does not return the memory back to the
OS when it's called?
To be more self-assured: can somebody confirm that "r" is no longer
accessible after this?
ngx_http_free_request(r, 0);
Thank you in advance!
I was wrong: the request object was created on the fly from the pool
object. Here the pool was destroyed before "r" was referenced, which
caused the core dump.
nginx version: 1.17.8
debug log:
2020/09/03 14:09:21 [error] 320#320: *873195 upstream timed out (110:
Connection timed out) wh
Hi Francis,
Thanks for the reply!
W.r.t. http://nginx.org/r/proxy_buffering, the doc does not mention
whether the buffering applies to the header, the body, or both. I'm
wondering if nginx can postpone sending the upstream header in any way?
Otherwise the client will get the wrong status code in this case.
Will nginx buffer the header until the whole body has been received?
If not, what if an error happens in the middle of receiving the body?
nginx then has no chance to resend an error status.
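As far as I understand the proxy module, buffering applies to the response body; the header is parsed as soon as it arrives and is forwarded without waiting for the body, so nginx cannot "take back" a 200 if the body transfer fails later. It signals the failure by closing the connection mid-body instead, so the client sees a truncated response rather than a new status code. The related knobs, with illustrative values ("backend" is a made-up upstream name):

```nginx
location / {
    proxy_pass http://backend;
    proxy_buffering on;      # buffer the response *body* (the default)
    proxy_buffer_size 8k;    # buffer used for the response *header*
    proxy_buffers 8 8k;      # buffers used for the body
}
```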
I understand that nginx proxies the header first and then the body. In
the case where the connection with the upstream is broken during the
transfer of the body, what status code will the client get? Since nginx
first proxies the 200 OK from the upstream to the client, but will nginx
send another 5xx
Patrick Wrote:
---
> On 2019-07-07 22:39, allenhe wrote:
> > Per my understanding, the reloading would only replace the old
> > workers with new ones, while during testing (constantly reloading),
> > I found the output
Hi,
Per my understanding, reloading only replaces the old workers with new
ones. But during testing (constantly reloading), I found the output of
"ps -ef" shows multiple masters and shutting-down workers which fade away
very quickly, so I guess the master process may undergo the s
Hi,
I found that this is valid, and want to know what scenario it's used for:
deny 0.0.0.1;
Thanks,
Allen
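For context, allow/deny accept any single address or CIDR range, and 0.0.0.1 is simply parsed as an ordinary (if unusual) single address; rules are evaluated top to bottom until one matches. A typical usage sketch (the range is an example):

```nginx
location /admin/ {
    allow 192.168.1.0/24;   # permit the internal network
    deny  all;              # reject everyone else with 403
}
```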
Nginx version: 1.13.6.1
1) In our use case, nginx is reloaded constantly. You will see lots of
worker processes hanging at "nginx: worker process is shutting down" after
a couple of days:
58 root 0:00 nginx: master process ./openresty/nginx/sbin/nginx
-p /opt/applicatio
1029 nobody 0:2
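Old workers stay in "shutting down" until every in-flight connection finishes, which with long-lived connections (WebSocket, streaming) can take arbitrarily long. Since nginx 1.11.11 there is a directive to cap that; the value below is an example:

```nginx
# Force old workers to close remaining connections and exit at most
# 30 seconds after a reload asked them to shut down (main context).
worker_shutdown_timeout 30s;
```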
I see. So in this case the request was completely sent in a single write
without blocking, so there is no need to schedule a write timer anymore;
otherwise it would be necessary.
Thanks for the explanations!
B.t.w., have you ever seen the worker process listening on the socket?
I understand that the connection-establish timer, write timer, and read
timer should be set up and removed in order, but where is the write timer?
Are there lines in the logs saying "I'm going to send the bytes", "the
sending is on-going", and "the bytes have been sent out"?
But it looks to me that the timer was set for the write, not the read.
Also, isn't the subsequent message telling us that nginx was interrupted
while sending the request?
Hi,
Nginx hangs at proxying the request body to the upstream server, but with
no error indicating what's happening until the client closes the front-end
connection. Can somebody here give me a clue? The following is the debug
log snippet:
2019/04/12 14:49:38 [debug] 92#92: *405 epoll add connecti
Hi,
My nginx is configured with:
proxy_next_upstream error timeout http_429 http_503;
But I find it won't try the next available upstream server when the
following error is returned:
2019/04/05 20:11:41 [error] 85#85: *4903418 recv() failed (104: Connection
reset by peer) while reading response hea
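One thing worth checking: by default nginx does not retry requests with a non-idempotent method (POST, LOCK, PATCH) on another server, even when the failure matches a condition listed in proxy_next_upstream. Since 1.9.13 this can be overridden explicitly; a sketch:

```nginx
proxy_next_upstream error timeout http_429 http_503 non_idempotent;
proxy_next_upstream_tries 3;    # optional cap on attempts (example value)
```

Also note that the http_429 condition itself requires nginx 1.11.13 or newer; on older versions it is rejected at configuration parse time.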
Hi,
I understand it is the master process that listens on the bound socket,
since that's what I see from the netstat output most of the time:
tcp    0    0 0.0.0.0:28002    0.0.0.0:*    LISTEN    12990/nginx: master
while sometimes I found the worker process also doing the same t