Hi,
The next mainline release (1.27.5) is planned this week.
The next stable release (1.28.0) is planned next week.
> On 14 Apr 2025, at 10:06 AM, Vishwas Bm wrote:
>
> Is this planned for release this week?
>
> Thanks & Regards,
> Vishwas
>
> On Tue, Mar 18,
>
>
>
> Regards,
> Vishwas
> ___
> nginx mailing list
> nginx@nginx.org
> https://mailman.nginx.org/mailman/listinfo/nginx
Roman Arutyunyan
a...@nginx.com
for NGINX.
[1] https://github.com/nginx/nginx
[2] https://github.com/nginx/nginx-tests
[3] https://github.com/nginx/nginx.org
[4] https://github.com/nginx/nginx/blob/master/SECURITY.md
[5] https://www.f5.com/company/blog/nginx/meetup-recap-nginxs-commitments-to-the-open-source-community
On behalf o
Hello,
> On 11 Jul 2024, at 1:12 PM, Roman Arutyunyan wrote:
>
> Hi Sébastien,
>
>> On 9 Jul 2024, at 5:52 PM, Sébastien Rebecchi
>> wrote:
>>
>> Hi!
>>
>> We are using nginx a lot in our company for high HTTP/2 workloads.
>>
>>
wn (12 worker
> processes, when reload signal is sent then it takes more than 3 minutes until
> the last worker is down), which is a problem in our case.
Yes, this work started in April and was suspended when we switched to other
important tasks.
We will finish it shortly.
Thanks f
Hi,
> On 27 Jun 2024, at 10:17 AM, Riccardo Brunetti Host
> wrote:
>
>
>
>> On 26 Jun 2024, at 17:56, Roman Arutyunyan <a...@nginx.com> wrote:
>>
>> Hi,
>>
>>> On 26 Jun 2024, at 7:21 PM, Riccardo Brunetti Host
>>
Hi,
> On 26 Jun 2024, at 7:21 PM, Riccardo Brunetti Host
> wrote:
>
> Hello, thanks for the answer.
>
>> On 26 Jun 2024, at 16:45, Roman Arutyunyan <a...@nginx.com> wrote:
>>
>> Hi,
>>
>>> On 26 Jun 2024, at 6:15 PM,
his will mislead clients by offering them to switch to unsupported
http/3.
> Nginx version: nginx/1.26.1 on ubuntu 22.04
>
> Thanks.
> Riccardo
Roman Arutyunyan
a...@nginx.com
header should not abort the request. Please check the
error log for the real reason why this is happening.
> On Fri, Apr 26, 2024 at 8:20 AM Roman Arutyunyan wrote:
> >
> > Hi,
> >
> > > On 25 Apr 2024, at 8:10 AM, Saint Michael wrote:
ctive (which is also the default)
explicitly enables skipping them, and this fact is reported in the log.
Turn it off and those characters (the dot in your case) will pass.
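The truncated line above presumably refers to the ignore_invalid_headers directive; assuming so, a minimal sketch:

```nginx
server {
    listen 80;
    # Assumption: the directive discussed above is ignore_invalid_headers.
    # With the default "on", nginx silently drops request headers whose names
    # contain characters it considers invalid (such as a dot), logging the fact.
    # Setting it to "off" lets such headers through.
    ignore_invalid_headers off;
}
```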
---
Roman Arutyunyan
a...@nginx.com
Hi,
> On 20 Apr 2024, at 6:12 PM, Marcin Wanat wrote:
>
> Hi,
>
> I discovered a patch for QUIC that enables the use of sendmmsg() with
> GSO, authored by Roman Arutyunyan:
>
> https://mailman.nginx.org/pipermail/nginx-devel/2023-July/4ZTXGDMY2LC4VRZRBNBXGULYHS5DMR3
Changes with nginx 1.26.0                                        23 Apr 2024
*) 1.26.x stable branch.
Roman Arutyunyan
a...@nginx.com
entails "directly" in "allows passing the accepted connection
> directly to any configured listening socket"?
In case of "pass" there's no proxying, hence zero overhead.
The connection is passed to the new listening sock
rovements.
Thanks to Piotr Sikora.
*) Bugfix: unexpected connection closure while using 0-RTT in QUIC.
Thanks to Vladimir Khomutov.
Roman Arutyunyan
a...@nginx.com
t_header Content-Length "";
> > proxy_set_header X-Original-URI $request_uri;
> > }
> >
> > location @error401 {
> > return 302 /login;
> > }
> >
> >
ries. They are
> > libcrypto.{a|so}, and libssl.{a|so}. Those artifacts are usually
> > placed in a lib/ directory, not in separate ssl/ and crypto/
> > directories. (Two separate directories may be a BoringSSL-ism).
> >
> > So I believe the proper flag would be
ication.
>
> I have tried to enable keepalive-related parameters as per the nginx config
> above and also checked the OS's TCP tunables, and I could not find any
> related settings that make NGINX kill the TCP connection.
>
> Anyone encountering the same issues?
--
Roman Arutyunyan
t to open source
software and the Internet itself.
We wish you the best of luck and would be pleased to work with you again
in future.
--
Roman Arutyunyan
___
nginx mailing list
nginx@nginx.org
https://mailman.nginx.org/mailman/listinfo/nginx
/src/http/modules/ngx_http_ssl_module.c
As you can see in the ngx_http_ssl_servername() code, it already assumes that
c->data references an ngx_http_connection_t object, so you can do the same.
> Regards,
> Gabriel
>
> On Wed, Feb 7, 2024 at 11:29 AM Roman Arutyunyan wrote:
>
> > Hi,
> >
that for HTTP/1 as well.
You need to know the current connection stage to tell this.
ngx_http_v3_init_session() is called right before initializing QUIC streams for
the session.
When exactly do you call your function?
[..]
--
Roman Arutyunyan
ror “unknown
> directive stream”?
> Does open source version of NGNX support stream directive? If yes, how to
> include it in the yocto build?
Stream support in nginx is enabled by the "--with-stream" configuration option.
Apparently your nginx was built without Stream support
) and
leaves the server address (c->local_sockaddr) unchanged.
The behavior is the same for Stream and HTTP and is explained by the fact that
initially the module only supported HTTP fields like
X-Real-IP and X-Forwarded-For, which carry only client address.
Indeed it does look inconsistent in
no buffer
> is ever available in this phase
>
> any input, pointers, or suggestions are really welcomed
If you want to register a content phase handler, assign it to cscf->handler.
A good example is ngx_stream_return() in src/stream/ngx_stream_return_module.c.
--
Roman Arutyunyan
are open for feedback about QUIC in Stream, application protocols and
the features you expect in Stream.
[1] https://nginx.org/en/docs/stream/ngx_stream_core_module.html
[2] https://hg.nginx.org/nginx-quic/
[3] https://www.rfc-editor.org/rfc/rfc92
set_header Host $host;
> proxy_pass_header Authorization;
> # proxy_set_header X-Scheme $scheme;
> # proxy_set_header Upgrade $http_upgrade;
> # proxy_set_header
em by
address/port.
Also, please notice that nginx-de...@nginx.org
is a mailing list for development questions.
You should send user questions to nginx@nginx.org
instead.
Roman Arutyunyan
a...@nginx.com
ten port, no longer default to port 80?
Thanks for reporting this.
Indeed, default listen is broken in nginx-quic branch.
Please try the attached patch which should fix the problem.
--
Roman Arutyunyan
# HG changeset patch
# User Roman Arutyunyan
# Date 1667307635 -14400
#
Hi,
On Sun, Jul 10, 2022 at 11:35:48AM +0300, Maxim Dounin wrote:
> Hello!
>
> On Fri, Jul 08, 2022 at 07:13:33PM +, Lucas Rolff wrote:
>
> > I’m having an nginx instance where I utilise the nginx slice
> > module to slice upstream mp4 files when using proxy_cache.
> >
> > However, I have
o help with the testing of the patch.
We'll be happy to get feedback from you.
> Best regards,
>
> Noam
>
> Posted at Nginx Forum:
> https://forum.nginx.org/read.php?2,292104,292104#msg-292104
Hi Ryan,
We have committed the patch:
https://hg.nginx.org/nginx-quic/rev/6bd8ed493b85
The issue should be fixed now. Please report back if it’s not.
—
Roman Arutyunyan
a...@nginx.com
> On 30 Jan 2021, at 18:40, Roman Arutyunyan wrote:
>
> Hi Ryan,
>
> We have found a probl
Hi Ryan,
We have found a problem with POSTing request body and already have a patch
that changes body parsing and fixes the issue. It will be committed after
internal review.
Hopefully it’s the same issue. Until then you can just check out older code.
—
Roman Arutyunyan
a...@nginx.com
> On
Hi Ryan,
Thanks for reporting this.
Do you observe any errors in nginx error log?
—
Roman Arutyunyan
a...@nginx.com
> On 27 Jan 2021, at 19:55, Ryan Gould wrote:
>
> hello all you amazing developers,
>
> i check https://hg.nginx.org/nginx-quic every day for new updates bein
Hello,
> On 8 Nov 2020, at 15:42, Ryan Gould wrote:
>
> hello team,
>
> i have found that https://hg.nginx.org/nginx-quic (current as of 06 Nov 2020)
> is having some trouble properly POSTing back to PayPal using php 7.3.24 on
> a Debian Buster box. things work as expected using current mainlin
.0.0.1:8081;
> proxy_cache one;
> add_header X-Cache-Status $upstream_cache_status always;
> }
> }
>
> server {
> listen 8081;
>
> location / {
> add_header cache-control "max-age=5, stale-while-revalidate=10"
>always;
>
> if ($connection = "3") {
> return 204;
> }
>
> return 404;
> }
> }
The fix was committed:
https://hg.nginx.org/nginx/rev/7015f26aef90
--
Roman Arutyunyan
Hi,
> On 8 Jul 2020, at 07:14, webber wrote:
>
> Hi,
>
> Thanks for your reply. I have tried the patch that removes the original
> Accept-Ranges in the slice filter module, but it does not work as I expected.
> Because I think response header `Accept-Ranges` should be added if client
> send a range
_set(&r->headers_out.accept_ranges->value, "bytes");
>
> return ngx_http_next_header_filter(r);
> ```
>
> I am confused if it is a bug?
>
> Posted at Nginx Forum:
> https://forum.nginx.org/read.php?2,288569,288569#msg-288569
--
Roman Arutyunyan
e a variable that returns the sum of all upstream response sizes
from all subrequests.
Also, there can be multiple upstream servers. And each slice can be fetched
from a different one.
> Thank you
>
> Roman Arutyunyan 于2019年11月26日周二 下午9:10写道:
>
> > Hi,
> >
> > On
f you want combined numbers, use client-side variables like $bytes_sent
instead.
>
> Thank you
--
Roman Arutyunyan
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
g, you should use nginx logging API.
> Posted at Nginx Forum:
> https://forum.nginx.org/read.php?2,285729,285733#msg-285733
PS: it is better to ask development questions in the nginx-devel mailing list:
http://mailman.nginx.org/mailman/listinfo/nginx-devel
--
Roman Arutyunyan
osted at Nginx Forum:
> https://forum.nginx.org/read.php?2,285729,285729#msg-285729
--
Roman Arutyunyan
; the remote client IP address.
>
>
>
> *TCPDUMP O/P from LB:*
>
>
>
> 11:49:51.999829 IP 10.43.18.116.2231 > 10.43.18.107.2231: UDP, length 43
>
> 11:49:52.000161 IP 10.43.18.107.2231 > 10.43.18.172.2231: UDP, length 43
>
>
>
> *TCPDUMP O/P from U
equest to upstream, client: 202.111.0.51, server: , request: "GET
> /inetmanager/v1/configinfo HTTP/1.1", upstream:
> "http://202.111.0.40:1084/inetmanager/v1/configinfo";, host:
> "202.111.0.37:1102"
> 2019/04/12 14:50:17 [debug] 92#92: *405 finalize http u
the nginx was interrupted while sending the
> request?
It is true this message is a bit misleading in this case. Sending the request
was the last thing that nginx did on the upstream connection. If there was any
activity on the read side after that, the message would be different.
--
am connection is closed too while
> sending request to upstream, client: 202.111.0.51, server: , request: "GET
> /inetmanager/v1/configinfo HTTP/1.1", upstream:
> "http://202.111.0.40:1084/inetmanager/v1/configinfo";, host:
> "202.111.0.37:1102"
>
palive=30s:30s:3 backlog=64999;
> proxy_pass $backend_svr:443;
> limit_conn perserver 255;
> ssl_preread on;
> }
The problem is that limit_conn is executed at an earlier phase than ssl_preread.
The $ssl_preread_server_name variable is still empty at that moment.
You bas
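As an illustration of that phase ordering (a sketch only; $backend_svr and the zones are taken from the quoted config, and a client-address key is one hedged alternative):

```nginx
stream {
    # Evaluated at the preaccess phase, before ssl_preread has parsed the
    # TLS ClientHello (preread phase), so this key is still empty when the
    # limit is checked:
    limit_conn_zone $ssl_preread_server_name zone=perserver:10m;

    # Keying on the client address works: it is known before any phase runs.
    limit_conn_zone $binary_remote_addr zone=perip:10m;

    server {
        listen 443;
        ssl_preread on;
        limit_conn perip 255;
        proxy_pass $backend_svr:443;
    }
}
```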
f error if header is not yet
sent.
PS: it is better to send development questions to nginx-de...@nginx.org
mailing list
--
Roman Arutyunyan
ank you,
>
> Ottavio
>
> --
> There is no more strength in normality, there is only monotony
--
Roman Arutyunyan
Hi,
On Thu, Dec 06, 2018 at 04:01:36PM +0300, Roman Arutyunyan wrote:
[..]
> This should solve the issue:
>
> location ~ /test/($regular|expression)$ {
> proxy_pass http://127.0.0.1:8010/test/$name;
Sorry, the right syntax is of course this:
uot; 403
> 153 "-" "curl/7.52.1" "-"
> 127.0.0.1 - - [04/Dec/2018:17:44:17 +] "GET /test/regular
> HTTP/1.1" 200 8 "-" "curl/7.52.1" "-"
>
> 127.0.0.1 - - [04/Dec/2018:17:44:19 +] "GET /test/ HTTP/1.0&q
nginx tries to be transparent and not introduce any changes into the response
and behavior of the origin unless explicitly requested.
> Thanks!
>
> On 14/11/2018, 17.36, "nginx on behalf of Roman Arutyunyan"
> wrote:
>
> Hi,
>
> On Wed, Nov 14, 2018 at
ile to contain the "Accept-Ranges: bytes" header, to be able to do range
> requests to it?
The "proxy_force_ranges" directive enables byte ranges regardless of the
Accept-Ranges header.
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_force_ranges
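A minimal sketch of its use (the location and upstream name are hypothetical):

```nginx
location /video/ {
    proxy_pass http://backend;
    # Serve byte ranges even when the upstream response carries no
    # Accept-Ranges header.
    proxy_force_ranges on;
}
```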
--
Roman Arutyunyan
k. is there something else i can do to work around this?
--
Roman Arutyunyan
sponse is
cached, previous one is discarded.
--
Roman Arutyunyan
cached in the first client's context, and proxy_cache_lock is
enabled, then this second client will wait for the file to be fully cached by
nginx and only receive it from the cache after that.
[..]
--
Roman Arutyunyan
Hi,
On Thu, Sep 27, 2018 at 07:55:40PM +0300, Roman Arutyunyan wrote:
> Hi,
>
> On Thu, Sep 27, 2018 at 02:51:25PM +0200, Marcin Wanat wrote:
> > Hi,
> >
> > i am using latest (1.15.4) nginx with stream module.
> >
> > I am trying to create config with
between each of them up to max_conns limit ?
>
>
> Regards,
> Marcin Wanat
--
Roman Arutyunyan
ke sure this was actually the case.
> Thanks,
> Eylon Saadon
>
> On Thu, Aug 30, 2018 at 5:28 PM Roman Arutyunyan wrote:
>
> > Hi,
> >
> > On Thu, Aug 30, 2018 at 05:19:53PM +0300, Eylon Saadon wrote:
> > > hi,
> > > thanks for the quick re
e response makes
it look like the response is proxied to multiple clients simultaneously.
--
Roman Arutyunyan
keepalive_timeout 0;
> proxy_pass https://test_server$request_uri;
> }
> }
>
>
> Thanks,
>
> eylon saadon
>
>
> On Thu, Aug 30, 2018 at 4:52 PM Roman Arutyunyan wrote:
>
> > Hi,
> >
> > On Thu, Aug 30, 2018 at 04:34:29P
file and tcp_nopush, it's possible that the response is not
pushed properly because of a mirror subrequest, which may result in a delay.
Turn off sendfile and see if it helps.
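A sketch of that check (the location and mirror URI are hypothetical; this is a diagnostic step, not a recommended permanent setting):

```nginx
location / {
    mirror /mirror;
    # Temporarily disable sendfile to see whether the delay comes from the
    # sendfile/tcp_nopush interaction with mirror subrequests.
    sendfile off;
}
```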
--
Roman Arutyunyan
listen 8082;
>client_max_body_size 20M;
>
>location /console {
> proxy_pass http://console
>}
>
>.
>.
>.
>
>
> }
>
>
>
> Thank you all for your time,
> Joseph Wonesh
>
> --
> This message is private
erver and do a
bunch of other things.
--
Roman Arutyunyan
Hi,
On Fri, Jun 08, 2018 at 05:15:57AM -0400, neuronetv wrote:
> Roman Arutyunyan Wrote:
> ---
>
> > Something like this should work:
> >
> > application /src {
> > live on;
> > exec_push ffmpeg
; http://198.91.92.112:90/mobile/index.m3u8. If I paste this url into google
> chrome it plays but it's small. Is there any way to modify this url so
> chrome plays a larger image? I know google chrome has a zoom function under
> settings but I'd like to do this with m
the same session and upstream connection are
reused for multiple client packets.
--
Roman Arutyunyan
this rewrite is proxy_pass, then just pass it
a modified uri.
map $request_uri $new_uri {
~^/prd-solr/(.*)$ /solr/$1;
}
location /mirror {
internal;
proxy_pass http://cloud$new_uri;
}
> Best regards,
> Jurian
>
>
> On 28-05-18 17:50, Roman Arutyunyan wrote:
http://cloud;
> }
Mirror requests can be rewritten. But keep in mind that a mirror subrequest
has a different URI than the original request. In your case it's /mirror.
So unless 'prd-solr' matches it, rewrite will not happen.
Normally $request_uri is use
in your case) does not send the PROXY protocol
header. Remove the "proxy_protocol" parameter from "listen" to fix this.
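To illustrate (port and surrounding directives are hypothetical):

```nginx
server {
    # With "proxy_protocol", nginx expects every incoming connection to start
    # with a PROXY protocol header; clients that do not send one will fail.
    # Dropping the parameter restores plain listening:
    # listen 443 proxy_protocol;
    listen 443;
}
```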
> ________
> From: nginx on behalf of Roman Arutyunyan
>
> Sent: Monday, May 7, 2018 3:55:59 PM
> To: nginx@nginx.
> anything else I need to do.
For details it's better to look into error.log.
--
Roman Arutyunyan
configuration "staging.example.com" is resolved at runtime because
the entire URL contains a variable ($request_uri). However, if you put an IP
address instead, you will not need a resolver.
> > On 13 Mar, 2018, at 19:36, Roman Arutyunyan wrote:
> >
> > On Tue, Mar 13, 2018 at 06:
may not see it if, for example,
your log level is too high.
> > On 13 Mar, 2018, at 18:34, Roman Arutyunyan wrote:
> >
> > Hi Kenny,
> >
> > On Tue, Mar 13, 2018 at 05:37:52PM -0300, Kenny Meyer wrote:
> >> Hi,
> >>
> >> I’m having
> What could be the error?
The configuration looks fine.
Are there any errors in error.log?
And what happens if you switch www.example.com and staging.example.com?
--
Roman Arutyunyan
n ALPN extension, if present.
> >
> > Any feedback is appretiated.
> >
>
> I have just tested this patch and can confirm it's working perfectly fine.
We have committed the patch.
http://hg.nginx.org/nginx/rev/79eb4f7b6725
Thanks for cooperation.
--
Roman Arutyunyan
location / {
autoindex on;
autoindex_format xml;
xslt_stylesheet conf/sort.xslt;
root html;
}
sort.xslt:
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
127.0.0.1:61021 connected to
127.0.0.1:8001
> 3. I think that $tcpinfo_* aren't supported in stream. Is there any reason
> for this?
There's a number of http module features still missing in stream.
This is one of them.
--
Roman Arutyunyan
"cookie",
> "securitytoken=eyJraWQiOiJ...Swnq3xjEvXodQ");
>
> xhr.send(data);
>
> xhr.onreadystatechange = processRequest;
>
> function processRequest(e) {
> if (xhr.readyState == 4 && xhr.status == 200) {
>
is evaluated at an early request processing stage
(rewrite phase) and no output is normally created by this time.
--
Roman Arutyunyan
I'm doing, let me know
> which details would be helpful.
Probably your UDP session is not considered finished by nginx.
Did you specify proxy_responses and proxy_timeout in your config?
If proxy_responses is unspecified and proxy_timeout is large, it ma
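For a request/response UDP protocol such as DNS, a sketch of the two directives mentioned above (addresses and values are hypothetical):

```nginx
stream {
    server {
        listen 53 udp;
        proxy_pass 10.0.0.1:53;
        # End the session after one response packet instead of holding it
        # open until proxy_timeout expires.
        proxy_responses 1;
        proxy_timeout 5s;
    }
}
```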
tried.
The easiest way is to send the status line + header bigger than
proxy_buffer_size bytes. Another way is to send a null byte somewhere in the
response header. You can also try sending broken line and header termination:
CR followed by a non-LF byte.
--
Roman Arutyunyan
x.org/read.php?2,278113,278113#msg-278113
--
Roman Arutyunyan
reuse */
>
> ngx_chain_update_chains(r->pool, &ctx->free, &ctx->busy, &out,
> (ngx_buf_tag_t) &ngx_http_foo_filter_module);
>
> return rc;
> }
>
> Posted at Nginx Forum:
> https://forum.nginx.org/read.php?2,278094,278094#msg-278094
--
Roman Arutyunyan
t; For testing I use plain curl GET requests (without ETag, Vary, etc. headers)
> - always the same.
It's not only disk size that matters.
Cache entries may also be evicted when approaching the keys_zone size.
Try increasing the zone size.
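For reference, a sketch of the relevant directive (path and sizes are hypothetical; per the nginx docs, one megabyte of keys_zone stores roughly 8000 keys):

```nginx
# Entries are evicted when the keys_zone fills up, not only when max_size
# is reached on disk.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=one:100m max_size=10g;
```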
--
Roman Arutyunyan
eciate feedback on
> whether your phone can play the video (or not) at:
> http://198.91.92.112/hls.html. Please let me know if it plays for you
> or not and what kind of phone you have. Thanks again.
--
Roman Arutyunyan
--
>
>
>
> Regards,
> Yuan Man
> Trouble is a Friend.
>
>
>
> At 2017-10-26 20:22:13, "Roman Arutyunyan" wrote:
> >Hi,
> >
> >On Thu, Oct 26, 2017 at 03:15:02PM +0800, 安格 wrote:
> >> Dear All,
> >>
be processed until the
previous request and all its subrequests (including mirror subrequests) finish.
So if you use keep-alive client connections and your mirror subrequests are
slow, you may experience some performance issues.
--
Roman Arutyunyan
's (for dns load balancing). But I always want to route a user,
> targetting for some url, to the same container.
In the commercial version of nginx we have the sticky module, which can be used
to solve your issue:
http://nginx.org/en/docs/http/ngx_http_upstream_module.html#sticky
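A sketch of its use (server names are hypothetical; the directive is available only in the commercial subscription):

```nginx
upstream backend {
    server app1.example.com;
    server app2.example.com;
    # Pin each client to one server via a session cookie.
    sticky cookie srv_id expires=1h path=/;
}
```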
elects a server based on the
position in the list. However, the consistent hash balancer
(hash $arg_test consistent) makes a selection based on the server name/ip
specified in the "server" directive.
--
Roman Arutyunyan
lt) to make the requests which
wait longer proceed with uncached proxying. In fact, once this timeout expires,
you will have the last chance to check if the resource is already unlocked.
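A sketch of the directives involved (the timeout value is hypothetical):

```nginx
proxy_cache_lock on;
# Requests that have waited this long stop waiting and proceed to the
# upstream, with caching of their response disabled.
proxy_cache_lock_timeout 5s;
```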
--
Roman Arutyunyan
--with-google_perftools_module --with-debug --with-cc-opt='-O2 -g -pipe
> -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong
> --param=ssp-buffer-size=4 -grecord-gcc-switches
> -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -m64 -mtune=generic'
> --with-ld-opt='-Wl,-z,relro -specs=/usr/lib/rpm/redhat/redhat-hardened-ld
> -Wl,-E'
>
> Posted at Nginx Forum:
> https://forum.nginx.org/read.php?2,276344,276344#msg-276344
--
Roman Arutyunyan
r with this, i looked already.
> >
> >Posted at Nginx Forum:
> >https://forum.nginx.org/read.php?2,276322,276334#msg-276334
to nginx standard slice module. Besides, it looks like they
heavily patched nginx (at least the mp4 module).
--
Roman Arutyunyan
orum.nginx.org/read.php?2,276322,276322#msg-276322
--
Roman Arutyunyan
u want to keep the address while proxying a TCP connection, you can use
the PROXY protocol.
http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_protocol
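A minimal sketch (port and upstream name are hypothetical; the backend must be configured to accept the PROXY protocol header):

```nginx
stream {
    server {
        listen 12345;
        # Prepend a PROXY protocol header carrying the original client
        # address to each upstream connection.
        proxy_protocol on;
        proxy_pass backend.example.com:12345;
    }
}
```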
--
Roman Arutyunyan
ailable as part of our commercial subscription."
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_purge
--
Roman Arutyunyan
ODELAY
> > option before an SSL handshake.
> >
> >
> > --
> > Maxim Dounin
> > http://nginx.org/
--
Roman Arutyunyan
was released.
If it does not help, please provide more complete config (the part you sent
earlier does not even have proxy_pass).
--
Roman Arutyunyan
quot;max_fails=0" statement is essentially
> being ignored causing the error message "no live upstreams while connecting
> to upstream" in my logs.
That's what happens when bar.com is in fact unavailable too,
even though it's not globally marked as down.
[..]
--
Ro
from the cache, if
> proxy_cache_background_update is on - so something must be wrong with my
> config?!?
Can you check if the update request comes to your backend when user gets the
old cached response?
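For comparison, a sketch of a working setup (location, cache zone, and upstream name are hypothetical; note that background updates only happen for responses that may be served stale):

```nginx
location / {
    proxy_pass http://backend;
    proxy_cache one;
    # Required for background updates: allow serving the stale entry
    # while it is being refreshed.
    proxy_cache_use_stale updating;
    proxy_cache_background_update on;
    add_header X-Cache-Status $upstream_cache_status always;
}
```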
[..]
--
Roman Arutyunyan