Hello!
On Mon, Oct 02, 2023 at 03:25:15PM +0530, Devarajan D via nginx wrote:
> Dear Maxim Dounin, Team & Community,
>
> Thank you for your suggestions. It would be helpful if you could
> suggest the following.
>
> > In general, $request_time minus $upstream_response_time is the
> > slowness introduced by the client.
>
> 1. It's true most of the time. But clients are not willing to
> accept unless
Hello!
On Sun, Oct 01, 2023 at 08:20:23PM +0530, Devarajan D via nginx wrote:
> Currently, there is no straightforward way to measure the time
> taken by the client to upload the request body.
>
> 1. A variable similar to request_time, upstream_response_time can
> be helpful to easily log this time taken by the client. So it will
> be easy to prove t
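The request above can be approximated with existing variables. Assuming a reverse-proxy setup with request buffering enabled (the default), nginx reads the whole request body before contacting the upstream, so the gap between $request_time and $upstream_response_time includes the body upload time; logging both side by side makes that gap easy to compute offline. A minimal sketch (format name, ports and paths are illustrative, not from the thread):

```nginx
log_format client_timing '$remote_addr "$request" '
                         'rt=$request_time '            # total request time, seconds
                         'urt=$upstream_response_time'; # upstream share, seconds

server {
    listen 8000;
    location / {
        proxy_pass http://127.0.0.1:8001;
        proxy_request_buffering on;   # default; body is read before proxying
        access_log /var/log/nginx/client_timing.log client_timing;
    }
}
```

The difference rt minus urt then bounds the client-attributable time, subject to the caveat quoted above ("true most of the time").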
ting nginx with `curl -i localhost:8000`, I see these response
headers:

X-Trip-Time: 0.001
X-Addr: 127.0.0.1:8001
X-Status: 200
X-Process-Time: -

`cat app.log` shows that upstream was hit successfully, and `cat nginx.log`
shows that nginx knows the $upstream_response_time at log time, as I get
this log:

127.0.0.1:8001 200 0.004

Why does nginx substitute the request time and relevant response metadata
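A likely explanation, hedged here because the replies are truncated: a response header such as X-Process-Time is expanded when nginx sends the headers to the client, before the upstream exchange has been fully timed, while the access log is written after the request completes, so only the log sees the final $upstream_response_time. A sketch of the setup as described (names and paths are assumptions):

```nginx
log_format upstream_timing '$upstream_addr $upstream_status $upstream_response_time';

server {
    listen 8000;
    location / {
        proxy_pass http://127.0.0.1:8001;
        # Expanded at header-send time; may render empty or as "-".
        add_header X-Process-Time $upstream_response_time;
        # Written at request completion; shows the real value, e.g.
        # "127.0.0.1:8001 200 0.004".
        access_log /var/log/nginx/nginx.log upstream_timing;
    }
}
```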
I've recently started seeing an issue where the reported response_time and
typically the reported upstream_response_time in the nginx access log are
drastically different from the reported response on the application servers
themselves. For example, on some requests the typical average response_time
would be around 5ms with an upstream_response_time of 4ms. During these
transient periods of high load (approxim
I use nginx (1.15.3) as a reverse proxy, and encounter a problem that
$upstream_response_time is larger than $request_time in log files.
According to the nginx documentation,
$upstream_response_time
keeps time spent on receiving the response from the upstream server; the
time is kept in seconds with millisecond resolution.
Hello!
On Tue, Mar 07, 2017 at 04:38:04PM -0500, Jonathan Simowitz via nginx wrote:
> Hello,
>
> I have an nginx server that runs as reverse proxy and I would like to pass
> the $upstream_response_time value in a header. I find that when I do the
> value is actually a linux timestamp with millisecond resolution instead of
> a value of seconds with millisecond resolution. Apparently this is
Cache: $upstream_cache_status";
more_set_headers "X-RTT: $upstream_response_time ms";
}
# common configuration
...
The problem is, the request was served from the nginx cache, so no request
was sent to the upstream, and the response headers were:
age:94
cache-control:max-a
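When a response is served from the cache, no upstream request is made, so $upstream_response_time is empty and the X-RTT header built from it comes out empty or missing. (Incidentally, the variable is in seconds with millisecond resolution, so the "ms" label in the excerpt is misleading.) One workaround, sketched here with made-up names: branch on $upstream_cache_status, which is populated even on cache hits.

```nginx
# http-level fragment; falls back to the cache status when there is
# no upstream timing to report.
map $upstream_cache_status $rtt_value {
    HIT     "hit";
    default $upstream_response_time;
}
```

with `more_set_headers "X-RTT: $rtt_value";` used in place of the original directive.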
Hello!
On Fri, Nov 20, 2015 at 07:01:45PM +0100, B.R. wrote:
> http://nginx.org/en/docs/http/ngx_http_upstream_module.html#var_upstream_response_time
>
> Does this represent the time from the end of the request forwarded to the
> upstream until it starts to answer (kind of a TTFB)?
> Or does it include the time until the whole response has been received,
> excluding the time taken to fo
The $upstream_status variable might contain wrong data if the
"proxy_cache_use_stale" or "proxy_cache_revalidate" directives were used.

On MISS requests the variables "$upstream_status" and
"$upstream_response_time" are eventually returning wrong data, like the
example below:

$upstream_status - "504, 504, 200" or even "-, 200"
$upstream_response_time -
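For background on the comma-separated values above (per the nginx upstream module documentation): when nginx contacts several upstream servers while processing one request, $upstream_addr, $upstream_status and $upstream_response_time each record one entry per attempt, separated by commas, and by colons when an internal redirect switches to another server group. So "504, 504, 200" means two failed attempts followed by a success. Logging all three together keeps the per-attempt entries aligned (format name and path are illustrative):

```nginx
log_format upstreams '$remote_addr "$request" '
                     'addr=$upstream_addr '
                     'status=$upstream_status '
                     'time=$upstream_response_time';
access_log /var/log/nginx/upstreams.log upstreams;
```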
On 11.02.2014 11:04, Ruslan Ermilov wrote:
On Mon, Feb 10, 2014 at 02:17:30PM -0800, Jeroen Ooms wrote:
> I am using
>
> add_header x-responsetime $upstream_response_time;
>
> to report response times of the back-end to the client. I was
> expecting to see the back-end response time (e.g. 0.500 for half a
> second), however the headers that I am getting contain an epoch
> timestamp, e.g:
>
> x-responsetime
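The resolution is cut off in these excerpts, but one plausible reading (an assumption, not confirmed by the truncated thread): the elapsed time is only finalized when the request completes, so a header expanded when the response headers are sent can expose an intermediate raw timestamp rather than a duration. Recording the value in the access log, which is written after completion, yields the expected elapsed form:

```nginx
# Format name and log path are illustrative.
log_format backend_time '"$request" $upstream_response_time';
access_log /var/log/nginx/backend_time.log backend_time;
```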
Hi again everyone!
Just posting a status update (because I hate coming across old threads with
reports of a problem I'm experiencing, and there is no answer!) What I've
found so far is starting to look like a Linux kernel bug that was fixed for
ipv6, but still remains for ipv4! Here's the relevant
Hi Andrei!
On Tue, Mar 19, 2013 at 2:49 AM, Andrei Belov wrote:
> Hello Jay,
>
> If I understand you right, the issue can be reproduced in the following cases:
>
> 1) client and server are on different EC2 instances, public IPs are used;
> 2) client and server are on different EC2 instances, private I
Hi Maxim,
On Tue, Mar 19, 2013 at 7:19 AM, Maxim Dounin wrote:
> Hello!
>
> As far as I understand, tcp_max_syn_backlog configures global
> cumulative limit for all listening sockets, while somaxconn limits
> one listening socket backlog. If any of the two is too small -
> you'll see SYN packet
Hello!
On Mon, Mar 18, 2013 at 02:19:26PM -0700, Jay Oster wrote:
> On Sun, Mar 17, 2013 at 4:42 AM, Maxim Dounin wrote:
>
> > On "these hosts"? Note that listen queue aka backlog size is
> > configured in _applications_ which call listen(). At a host level
> > you may only configure somaxcon
Hello Jay,
On Mar 19, 2013, at 2:09 , Jay Oster wrote:
> Hi again!
>
> On Sun, Mar 17, 2013 at 2:17 AM, Jason Oster wrote:
> Hello Andrew,
>
> On Mar 16, 2013, at 8:05 AM, Andrew Alexeev wrote:
>> Jay,
>>
>> You mean you keep seeing SYN-ACK loss through loopback?
>
> That appears to be the
Hi Maxim,
On Sun, Mar 17, 2013 at 4:42 AM, Maxim Dounin wrote:
> Hello!
>
> On "these hosts"? Note that listen queue aka backlog size is
> configured in _applications_ which call listen(). At a host level
> you may only configure somaxconn, which is maximum allowed listen
> queue size (but an
Hello!
On Sun, Mar 17, 2013 at 02:23:20AM -0700, Jason Oster wrote:
[...]
> > 1) A trivial one. Listen queue of your backend service is
> > exhausted, and the SYN packet is dropped due to this. This
> > can be easily fixed by using bigger listen queue, and also
> > easy enough to track as t
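The "bigger listen queue" fix suggested above can be sketched, for an nginx-fronted backend, as raising both the per-socket backlog and the kernel caps it is limited by (values and addresses here are illustrative, not from the thread):

```nginx
# Kernel-side caps, set via sysctl (shown as comments for context):
#   net.core.somaxconn = 4096            # per-socket accept queue ceiling
#   net.ipv4.tcp_max_syn_backlog = 4096  # global half-open connection limit
server {
    # Request a larger accept queue for this listening socket; the
    # effective size is still capped by net.core.somaxconn.
    listen 127.0.0.1:8001 backlog=4096;
    location / {
        return 200 "ok\n";
    }
}
```

The same applies to the backend application itself: whatever value it passes to listen() is silently capped by somaxconn, so both must be raised together.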
Hi again, Maxim!
On Mar 16, 2013, at 4:39 PM, Maxim Dounin wrote:
> Hello!
>
> On Sat, Mar 16, 2013 at 01:37:22AM -0700, Jay Oster wrote:
>
>> Hi Maxim,
>>
>> Thanks for the suggestion! It looks like packet drop is the culprit here.
>> The initial SYN packet doesn't receive a corresponding SY
and on all instance types. This "single server" test is the first time the
software has been run with nginx load balancing to upstream processes on the
same machine.
>> On Fri, Mar 15, 2013 at 1:20 AM, Maxim Dounin wrote:
>> Hello!
>>
>> On Thu, Mar 14, 2013 at
Hello!
On Sat, Mar 16, 2013 at 01:37:22AM -0700, Jay Oster wrote:
> Hi Maxim,
>
> Thanks for the suggestion! It looks like packet drop is the culprit here.
> The initial SYN packet doesn't receive a corresponding SYN-ACK from the
> upstream servers, so after a 1-second timeout (TCP Retransmissio
Hello!
On Thu, Mar 14, 2013 at 07:07:20PM -0700, Jay Oster wrote:
[...]
> The access log has 10,000 lines total (i.e. two of these tests with 5,000
> concurrent connections), and when I sort by upstream_response_time, I get a
> log with the first 140 lines having about 1s on the upstream_response_time,
> and the remaining 9,860 lines show 700m
'"$http_referer" '            ## User's Referer
'"$http_user_agent" '         ## User's Agent
'$request_time '              ## NginX Response
'$upstream_response_time '