> On 26 Nov 2018, at 01:20, hmac wrote:
>
> Ohh, forgot to add. It does actually load; it just takes around 60 seconds
> for what normally takes 4 seconds.
>
And what if you try to re-run with "worker_processes 1;" ?
--
Sergey Kandaurov
includes the ssl handshake
> time?
No.
> If not, is there any method to get the duration of ssl handshake in Nginx?
I'm not aware of one.
--
Sergey Kandaurov
___
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx
re a way (i mean a configurable way) to tell NGINX to write just the
> headers, so that header goes out in a single TLS record?
Yes, there's a way to send headers separately.
See http://nginx.org/r/postpone_output for details.
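As an editor's illustration of the suggestion above (server name and upstream are placeholders, not from the thread), setting postpone_output to 0 disables output postponing, so the response header is flushed as soon as it is ready rather than coalesced with body data:

```nginx
server {
    listen 443 ssl;
    server_name example.com;        # placeholder name

    location / {
        # send output as soon as it is available, so the header
        # is not merged with body data into the same TLS record
        postpone_output 0;
        proxy_pass http://backend;  # placeholder upstream
    }
}
```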
--
Sergey Kandaurov
ngx_http_ssl_module);
-if (ngx_ssl_create_connection(&sscf->ssl, c, NGX_SSL_BUFFER)
+if (ngx_ssl_create_connection(&sscf->ssl, c, 0)
!= NGX_OK)
{
ngx_http_close_connection(c);
--
Sergey Kandaurov
dule,
that is, specified in the upstream{} block. See for details:
http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive_timeout
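A minimal sketch of the upstream{} placement being described (upstream name, address, and values are illustrative). Note that upstream keepalive also requires HTTP/1.1 and a cleared Connection header on the proxy side:

```nginx
upstream backend {                # name and address are illustrative
    server 192.0.2.1:8080;

    keepalive 16;                 # cache up to 16 idle connections per worker
    keepalive_timeout 60s;        # close cached connections idle for 60s
}

server {
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;          # needed for upstream keepalive
        proxy_set_header Connection "";
    }
}
```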
--
Sergey Kandaurov
> On 23 Jan 2019, at 05:37, Roar wrote:
>
> Thanks Sergey Kandaurov.
> The second problem is that I set grpc_read_timeout and grpc_send_timeout but
> they seem not to take effect. I tested many times and found that if the
> read_timeout is less than the default 60s, then it wo
ug] 36109#0: *10 http finalize request: 408,
> "/utoProto.idProduce.IdProduce/getUniqueIds?" a:1, c:1
the request is finalized with "408 Request Time-out"
> 2019/01/24 10:50:53 [debug] 36109#0: *10 http terminate request count:1
> 2019/01/24 10:50:53 [debug] 36109#0: *10 http terminate cleanup count:1
> blk:0
> 2019/01/24 10:50:53 [debug] 36109#0: *10 http posted request:
> "/utoProto.idProduce.IdProduce/getUniqueIds?"
> 2019/01/24 10:50:53 [debug] 36109#0: *10 http terminate handler count:1
> 2019/01/24 10:50:53 [debug] 36109#0: *10 http request count:1 blk:0
> 2019/01/24 10:50:53 [debug] 36109#0: *10 http2 close stream 1, queued 0,
> processing 1, pushing 0
> 2019/01/24 10:50:53 [debug] 36109#0: *10 http2 send RST_STREAM frame sid:1,
> status:1
> 2019/01/24 10:50:53 [debug] 36109#0: *10 http close request
> 2019/01/24 10:50:53 [debug] 36109#0: *10 http log handler
> 2019/01/24 10:50:53 [debug] 36109#0: *10 free: 7FBA2E802400
> 2019/01/24 10:50:53 [debug] 36109#0: *10 free: 7FBA2D837800
> 2019/01/24 10:50:53 [debug] 36109#0: *10 free: 7FBA2E02EA00, unused: 0
> 2019/01/24 10:50:53 [debug] 36109#0: *10 free: 7FBA2E800400, unused: 0
> 2019/01/24 10:50:53 [debug] 36109#0: *10 free: 7FBA2E00A600, unused:
> 2778
> 2019/01/24 10:50:53 [debug] 36109#0: *10 free: 7FBA2E02E600, unused:
> 711
> 2019/01/24 10:50:53 [debug] 36109#0: *10 post event 7FBA2F01A938
> 2019/01/24 10:50:53 [debug] 36109#0: posted event 7FBA2F01A938
> 2019/01/24 10:50:53 [debug] 36109#0: *10 delete posted event
> 7FBA2F01A938
> 2019/01/24 10:50:53 [debug] 36109#0: *10 http2 handle connection handler
> 2019/01/24 10:50:53 [debug] 36109#0: *10 http2 frame out: 7FBA2F003268
> sid:0 bl:0 len:4
> 2019/01/24 10:50:53 [debug] 36109#0: *10 writev: 13 of 13
> 2019/01/24 10:50:53 [debug] 36109#0: *10 http2 frame sent: 7FBA2F003268
> sid:0 bl:0 len:4
> 2019/01/24 10:50:53 [debug] 36109#0: *10 free: 7FBA2F003000, unused:
> 2760
> 2019/01/24 10:50:53 [debug] 36109#0: *10 reusable connection: 1
> 2019/01/24 10:50:53 [debug] 36109#0: *10 event timer add: 3:
> 18:267073812
> 2019/01/24 10:50:53 [debug] 36109#0: worker cycle
> 2019/01/24 10:50:53 [debug] 36109#0: kevent timer: 18, changes: 0
--
Sergey Kandaurov
after 1000 requests have been processed,
> nginx will close the tcp connection, because I can find `TIME_WAIT` on the
> nginx side.
> - then gRPC client will report tens of thousands of `TransientFailure` at
> same time
>
See http://ng
h
> case it gives a standard nginx 502 error page rather than a custom page.
Hello,
you may want to try recursive error pages in location / {}
with error_page 502 in @server_b.
See for details: http://nginx.org/r/recursive_error_pages
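A sketch of how this suggestion might be wired up (upstream names, locations, and paths are hypothetical, not from the thread):

```nginx
server {
    # allow an error_page inside @server_b to fire after the first
    # error_page redirect from location /
    recursive_error_pages on;

    location / {
        proxy_pass http://server_a;          # placeholder upstream
        error_page 502 = @server_b;
    }

    location @server_b {
        proxy_pass http://server_b;          # placeholder upstream
        error_page 502 /custom_502.html;     # custom page if B also fails
    }

    location = /custom_502.html {
        root /var/www/errors;                # placeholder path
        internal;
    }
}
```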
--
Sergey Kandaurov
> On 3 May 2019, at 02:12, jarstewa wrote:
>
> Is there an equivalent of max_fails
> (http://nginx.org/en/docs/http/ngx_http_upstream_module.html#max_fails) if
> I'm using proxy_pass without an upstream block?
>
Not that I'm awa
> On 22 Aug 2019, at 15:05, aiaa5505 wrote:
>
> [error] 8#8: *256283 upstream rejected request with error 0 while reading
> response header from upstream, HTTP/2.0
>
It is not something nginx gRPC proxy currently supports.
See https://trac.nginx.org/nginx/ticket/1792 for details and proposed
with the response.
Instead, you may want to disable range processing on the backend
by removing the Range request header while proxying requests:
proxy_set_header Range "";
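In context, the suggested header removal might look like this (location and upstream are illustrative):

```nginx
location /media/ {
    proxy_pass http://backend;     # placeholder upstream

    # strip the client's Range header so the backend never sees a
    # range request and always responds with the full body
    proxy_set_header Range "";
}
```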
--
Sergey Kandaurov
gt; error_log /var/log/nginx/website-error.log;
>
> Because I use 404 for doing url rewriting and so they are not errors...
>
> error_page 404 = /url_rewriting.php;
>
http://nginx.org/r/log_not_found
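A sketch combining the poster's rewrite with the referenced directive (a minimal example, assuming the 404-based rewriting shown above):

```nginx
location / {
    # 404s are used to drive URL rewriting here, so they are not errors
    error_page 404 = /url_rewriting.php;

    # keep those rewrite-driven "file not found" events out of the error log
    log_not_found off;
}
```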
--
Sergey Kandaurov
e off by
> default, so why does nginx care about that certificate?
That's the opposite: nginx received a certificate_unknown alert message
from a client for some reason during the handshake.
--
Sergey Kandaurov
ficate_key /etc/letsencrypt/live/domain.net/privkey.pem; #
> managed by Certbot
>
>
> And on the off chance you need it, heres proxy.conf
>
> client_max_body_size 10m;
> client_body_buffer_size 128k;
> proxy_bind $server_addr;
Try removing this directive, that's lik
ile client already closed and server stay in CLOSE_WAIT.
Hello.
This is a known issue. It would be nice if
you could try and report back if this patch helped you.
# HG changeset patch
# User Sergey Kandaurov
# Date 1534236841 -10800
# Tue Aug 14 11:54:01 2018 +0300
# Node ID b71df78c7dd
ebugging purpose
(you might want to decrypt it first).
--
Sergey Kandaurov
T, c->log, 0,
"peer shutdown SSL cleanly");
return NGX_DONE;
--
Sergey Kandaurov
?
I'd replicate this check in ngx_ssl_handshake().
And probably for SSL_read_early_data, SSL_shutdown, SSL_peek
(ok, we don't use SSL_peek), but this is a moot point.
--
Sergey Kandaurov
/2 connection
> [crit] ... SSL_read() failed (SSL: error:14191044:SSL
> routines:tls1_enc:internal error) while processing HTTP/2 connection
There was a TLS record decryption error for some reason.
Not much detail beyond that.
--
Sergey Kandaurov
tes
$ ./objs/nginx -V
nginx version: nginx/0.7.65
TLS SNI support enabled
--
Sergey Kandaurov
LS, and if a server_name extension is
sent, then the extension SHOULD contain the same name that was
negotiated in the application protocol. If the server_name is
established in the TLS session handshake, the client SHOULD NOT
attempt to request a different server nam
quot;grpcs://Z.Z.Z.Z:PORT", host: "fqdn1:PORT"
"error 2" means that the backend responded with RST_STREAM(INTERNAL_ERROR),
that is, it effectively refused to process the request.
You may want to consult the backend error log to find out the reason.
--
Sergey Kandaurov
:00 [debug] 1374#0: kevent timer: 3000, changes: 1
2020/08/20 23:32:02 [debug] 1374#0: kevent events: 1
2020/08/20 23:32:02 [debug] 1374#0: kevent: 7: ft:-2 fl:0025 ff:
d:49039 ud:62F0E538
2020/08/20 23:32:02 [debug] 1374#0: *1 SSL shutdown handler
2020/08/20 23:32:02 [debug] 1374#0: *1 SSL_shutdown: -1
2020/08/20 23:32:02 [debug] 1374#0: *1 SSL_get_error: 1
2020/08/20 23:32:02 [crit] 1374#0: *1 SSL_shutdown() failed (SSL:
error:1409F07F:SSL routines:ssl3_write_pending:bad write retry) while
processing HTTP/2 connection, client: 127.0.0.1, server: 127.0.0.1:8080
--
Sergey Kandaurov
> On 22 Aug 2020, at 01:30, Maxim Dounin wrote:
>
> Hello!
>
> On Thu, Aug 20, 2020 at 11:47:08PM +0300, Sergey Kandaurov wrote:
>
>>
>>> On 20 Aug 2020, at 22:16, Maxim Dounin wrote:
>>>
>>> Hello!
>>>
>>> On Thu, A
> On 26 Aug 2020, at 11:57, Xu Yang wrote:
>
> Hi all,
>This is a patch for HTTP/2 GOAWAY frame processing; please refer to the
> details.
> Thanks.
Please see a more complete patch below.
# HG changeset patch
# User Sergey Kandaurov
# Date 1598889483 -10800
# Mon Au
if it is something
from Microsoft, then I heard that it still prefers to use TLSv1.
Then you might want to look at this thread as somewhat related:
http://mailman.nginx.org/pipermail/nginx/2018-November/057154.html
--
Sergey Kandaurov
nt clients such as ngtcp2 or kwik.
It should also be explicitly enabled on the server with "ssl_early_data on".
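A minimal server-side sketch of that setting (paths are placeholders):

```nginx
server {
    listen 443 ssl;
    ssl_certificate     /path/to/cert.pem;   # placeholder paths
    ssl_certificate_key /path/to/key.pem;
    ssl_protocols TLSv1.3;

    # accept TLS 1.3 0-RTT (early) data; note that requests sent as
    # early data are replayable, so enable with care
    ssl_early_data on;
}
```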
--
Sergey Kandaurov
> On 7 Oct 2020, at 10:33, Sergey Kandaurov wrote:
>
>>
>> On 7 Oct 2020, at 06:02, Ryan Gould wrote:
>>
>> hello all you amazing developers,
>>
>> i found some old 2013 references to this error relating to SPDY, but have
>> not seen any
relevant configuration details. How to obtain debug log:
http://nginx.org/en/docs/debugging_log.html
--
Sergey Kandaurov
using kqueue but not so with epoll.
Please try this patch.
# HG changeset patch
# User Sergey Kandaurov
# Date 1602503831 -3600
# Mon Oct 12 12:57:11 2020 +0100
# Branch quic
# Node ID d791f11d1625d6f99d0d0c3272fd4c98d4816f21
# Parent d14e15c33548a4432b682b9bbb4b6ba8df82c0b3
QU
ng by curl
> command propely. I also tried to enable the proxy_ssl_server_name, but
> didn't help.
I'd check what's actually sent in SNI (upstream SSL server name).
You may want to explore debug messages for further insights.
http://nginx.org/en/docs/debugging_log.html
--
Sergey Kandaurov
g log:
http://nginx.org/en/docs/debugging_log.html
--
Sergey Kandaurov
x_rtmp_eval, so that I can contact them to fix the source
> code?
>
nginx-rtmp-module is a third-party nginx module.
You may want to report about a build issue to the module author(s).
This is a new error reported in recent gcc versions,
that's why it may not trigg
so http://nginx.org/en/docs/http/websocket.html
--
Sergey Kandaurov
0
>
> VERSION-rtmp= 1.2.1
> @@ -122,6 +122,8 @@ SUBST_VARS= NGINX_DIR
> .for i in ${MODULE_PACKAGES}
> PREFIX$i= ${NGINX_DIR}/modules
> .endfor
> +
> +CFLAGS+= -DTLS1_3_VERSION=0x0304
>
That is the culprit.
It hijacks an established API expected in ng
coming HTTP/3) the limit also applies to the actually
processed request body chunks in the corresponding protocols,
if the "Content-Length" request header was not specified in a request.
--
Sergey Kandaurov
quot; 404 555 "-"
> "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko)
> Chrome/89.0.4389.72 Safari/537.36" "-" "h3-29"
>
It is using HTTP/1.1 (for whatever reason).
Usually that means a failure to negotiate HTTP/3.
--
Sergey Kandaurov
ng.
http://nginx.org/en/docs/http/ngx_http_upstream_module.html#max_fails
--
Sergey Kandaurov
s/http-bind";));
>
> to websocket:
>
>let [conn, setConn] = React.useState(new
> Strophe.Connection("wss://grasp.deals/xmpp-websocket"));
>
> I get this error:
>
>WebSocket connection to 'wss://grasp.deals/xmpp-websocket' failed: Error
> during
t; If nginx doesn't support bootstrapping WebSockets with HTTP/2, what should
> I do?
Are you actually using HTTP/2?
What if you try disabling http2 in the listen directive, to be sure?
"Unexpected response code: 403" could mean misconfigu
nd.
As of now, nginx-quic supports QUIC and HTTP/3 termination only,
it doesn't support communicating with a QUIC (and thus HTTP/3) backend.
[..]
--
Sergey Kandaurov
CERTIFICATE) while SSL handshaking, client: 127.0.0.1, server:
> 0.0.0.0:19099
The error indicates an empty value.
This is because "set" variables are not yet handled while SSL handshaking.
You might want to replace it with e.g. geo or map, which use a global context.
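Assuming the variable was fed into "ssl_certificate" (variables are supported there since nginx 1.15.9), a map-based variant might look like the following sketch; all names and paths are hypothetical:

```nginx
# maps live in the http{} (global) context and are evaluated lazily,
# so the variable already has a value during the SSL handshake,
# unlike "set", which runs in the rewrite phase
map $ssl_server_name $cert {
    default       /etc/nginx/certs/default.pem;   # placeholder paths
    example.com   /etc/nginx/certs/example.pem;
}

map $ssl_server_name $cert_key {
    default       /etc/nginx/certs/default.key;
    example.com   /etc/nginx/certs/example.key;
}

server {
    listen 443 ssl;
    ssl_certificate     $cert;
    ssl_certificate_key $cert_key;
}
```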
T
ation timed out) while sending response to client"
--
Sergey Kandaurov
run/php/php-fpm.iserv-helloworld.sock;
>}
>
> }
>
> [4] https://bz.apache.org/bugzilla/show_bug.cgi?id=53332
> [5] https://bz.apache.org/bugzilla/show_bug.cgi?id=57087
> [6] https://bugs.php.net/bug.php?id=60826
> [7] https://trac.nginx.org/nginx/ticket/1344
> [8] h
file or
> directory:fopen('/etc/letsencrypt/live/$host/fullchain.pem','r')
> error:2006D080:BIO routines:BIO_new_file:no such file)
The error suggests your nginx version is too old.
Variables support in the "ssl_certificate"
> On 5 Dec 2021, at 18:24, wordlesswind wrote:
>
> Hello,
>
>
> I noticed that the certificate of quic.nginx.org has expired.
>
It should be fixed now.
--
Sergey Kandaurov
ons KTLS;
>..
> }
> #
> What am I doing wrong?
>
Make sure you have enabled sendfile in the configuration.
Note that Linux 4.18 as distributed with CentOS 8
implements no KTLS for TLSv1.3 ciphers,
and only a quite limited set of ciphers for TLSv1.2.
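A sketch of the combination being discussed (paths are placeholders; "ssl_conf_command Options KTLS" is the ssl_conf_command shown truncated in the quoted config):

```nginx
http {
    sendfile on;                      # KTLS only takes effect with sendfile

    server {
        listen 443 ssl;
        ssl_certificate     /path/to/cert.pem;   # placeholder paths
        ssl_certificate_key /path/to/key.pem;
        ssl_protocols TLSv1.2 TLSv1.3;

        # ask OpenSSL (built with KTLS support) to use kernel TLS
        ssl_conf_command Options KTLS;
    }
}
```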
--
Sergey Kandaurov
___
nginx mailing list -- nginx@nginx.org
To unsubscribe send an email to nginx-le...@nginx.org
ne is applied.
To specify several values: --with-openssl-opt="opt1 opt2"
It's also useful to know the actually negotiated ciphersuite.
--
Sergey Kandaurov
t specify OpenSSL version, so I assume this
belongs to https://trac.nginx.org/nginx/ticket/2071#comment:1
--
Sergey Kandaurov
CLIBSSL \
> --with-http_realip_module \
> --with-http_auth_request_module \
> --with-http_gzip_static_module \
> --with-http_v2_module \
> --with-http_sub_module \
> --with-libatomic \
> --with-file-aio \
> --with-http_xslt_module \
> --with-http_flv_module \
> --with-
ror while reading back
from upstream. This means there is no next-upstream logic for UDP.
The waiting time can be shortened if the peer reports
back with an ICMP error such as "port unreachable".
In this case, it is seen as a recv() error immediately,
without waiting for the connection timeout.
Any wa
s
> simply unavailable with udp?
>
> Thanks
>
I believe there's no such option for UDP with passive checks.
--
Sergey Kandaurov
> On 6 May 2022, at 03:54, Alan Jackson wrote:
>
> Hi,
>
> I'm trying to use rate limiting on an nginx mail proxy->nginx fastcgi backend
> to restrict the number of concurrent connections from a client's IP.
> Unfortunately, I can't use proxy_protocol on the mail proxy side due to the
> ngi
ated for a given locale,
overwritten by intermediaries, or discarded.
So it should be fine for it to be absent.
--
Sergey Kandaurov
Make sure the "http2" option is not set on a particular IP:1443 elsewhere,
as "http2" applies to all virtual servers sharing that IP:PORT.
--
Sergey Kandaurov
>
> Thank you.
>
It's roughly 500 bytes per upstream server in a simple case.
The exact numbers depend on nginx version and configuration.
Also, currently an upstream zone cannot be configured to be smaller
than 8 page sizes.
--
Sergey Kandaurov
t;
> In my ngx_http_form_read, I have the following:
> {
>#if defined(ngx_version) && nginx_version >= 8011
>r->main->count--;
>#endif
>
> //form parsing data
> // no return, this is a void?
>
> }
>
I see numerous issues in your c
of standard support
of the relevant Linux distribution.
For details on Ubuntu, you can check https://wiki.ubuntu.com/Releases
--
Sergey Kandaurov
.1 on insecure.example.com?
>
> Is this an OpenSSL 3 issue? Does it work with OpenSSL 1.1.1?
>
TLS 1.0 and 1.1 are de-facto disabled by default in OpenSSL 3.0+.
See for more details: https://trac.nginx.org/nginx/ticket/2250
--
Sergey Kandaurov
further updates of this code, including features,
potential changes in behaviour, bug fixes, and refactoring.
There are still rough edges but basically it works.
> Can anyone help me with that? If this goes forward, we'll be happy to share
> anything useful we find on our side as we
/ngx_event_quic_transport.h:314:49: error: field ‘level’ has
> incomplete type
> 314 | enum ssl_encryption_level_t level;
> | ^
Make sure to provide correct OpenSSL path(s).
--
Sergey Kandaurov
ing next available client Connection ID,
but there were none, as seen in the above error.
Hope that helps.
--
Sergey Kandaurov
> On 28 Sep 2022, at 01:04, Maxim Dounin wrote:
>
> Hello!
>
> On Tue, Sep 27, 2022 at 04:05:54PM +0400, Sergey Kandaurov wrote:
>
>>> On 27 Sep 2022, at 14:11, João Sousa Andrade
>>> wrote:
>>>
>>> Thank you for the clarification Serg
On Tue, Nov 01, 2022 at 05:02:51PM +0400, Roman Arutyunyan wrote:
[..]
> # HG changeset patch
> # User Roman Arutyunyan
> # Date 1667307635 -14400
> # Tue Nov 01 17:00:35 2022 +0400
> # Branch quic
> # Node ID 40777e329eea363001186c4bf609d2ef0682bcee
> # Parent 598cbf105892bf9d7acc0fc3278b
instructions:
https://quic.nginx.org/packages.html
--
Sergey Kandaurov
1 && alloc % 512 != 0 && alloc < 8192) {
> -
> +if (items == 1 && alloc % 512 != 0 && alloc < 8192
> +&& !ctx->state_allocated)
> +{
> /*
> * The zlib deflate_state allocation, it takes about 6K,
>
arallel’ SSL mode, what’s the correct usage for
> ‘ssl_trusted_certificate'?
>
The directive specifies a file with trusted CA certificates.
See for details:
http://nginx.org/r/ssl_trusted_certificate.
--
Sergey Kandaurov
force Nginx to verify the upstream ssl
> certificate against the server hostnames provided in the upstream server
> block, instead of the pattern present in the proxy_pass directive?
Use the proxy_ssl_name directive to override.
See for more details: http://nginx.org/r/proxy_ssl_name
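As an illustration of that override (address, name, and CA bundle path are hypothetical), verification can be pinned to an explicit name instead of whatever proxy_pass contains:

```nginx
location / {
    proxy_pass https://192.0.2.5:8443;        # address is illustrative

    proxy_ssl_verify on;
    proxy_ssl_trusted_certificate /etc/nginx/ca.pem;   # placeholder CA bundle

    # verify the upstream certificate against this name instead of the
    # host derived from the proxy_pass URL
    proxy_ssl_name backend.example.com;
}
```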
--
Sergey Kandaurov
irst;
~\Q192.0.2.2:8000\E$ second;
}
proxy_ssl_name $name;
Note well that $upstream_addr may contain multiple addresses; use it
with special care. See for details: http://nginx.org/r/$upstream_addr
--
Sergey Kandaurov
ample.com {
    server 10.0.0.76:9000;
}
server {
    listen 443;

    proxy_pass $ssl_preread_server_name;
}
OTOH, you may still want map{} to provide a default value
if the client didn't send SNI, or something, e.g.:
map $ssl_preread_server_name $name {
eived early) while SSL
> handshaking, client: 10.210.128.122, server: 0.0.0.0:443
>
--
Sergey Kandaurov
solver
OTOH, if that name is used within fastcgi_pass literally,
it would be resolved at startup by the system resolver instead.
--
Sergey Kandaurov
ted 5a3ab1b5804b, 46ddff109e72, and 924b6ef942bf and they have the
> same problem.
>
> Configuration is pretty much default with HTTPS and HTTP/2 server blocks.
Please check if reverting 12cadc4669a7 helps.
--
Sergey Kandaurov
l error:SSL alert number 80) while reading response header from
> upstream”
>
You received an “internal_error” TLS alert from the peer.
You may want to check the error logs on the upstream side.
--
Sergey Kandaurov
ot always
contain up-to-date and correct information.
See reference documentation:
http://nginx.org/r/proxy_cache_valid
--
Sergey Kandaurov
og_format`, the entry in the
> access_log is always 0. As far as I can tell, a lot of data went through the
> websocket, so clearly `$body_bytes_sent` does not include data sent over a
> websocket.
>
> [..]
Make sure to run a recent enough version of nginx, at
eading client request headers,
>
> Is there anyway to ignore the time check?
No way, but you may want to try “optional_no_ca” if it’s also not trusted.
--
Sergey Kandaurov
ybe OpenSSL source has to be
> edited?
This may be caused by TLSv1.3 version draft mismatch as found
in CH supported_versions. You may want to update OpenSSL.
--
Sergey Kandaurov
lhost",
all of them will be used in a round-robin fashion.
See for details:
http://nginx.org/r/proxy_pass
--
Sergey Kandaurov
.1/.openssl/
>
> The make DESTDIR=/opt/test install works fine in nginx 1.13.x with OpenSSL
> 1.0.2p
> I am not sure the change is caused by nginx 1.15.3 or openssl-1.1.1 to be
> honest
What effect do you expect from DESTDIR?
Starting from OpenSSL 1.1.0, it is used there as instal
1.0, it is used there as install prefix. ==> This
> may be an after effect of this
As previously noted. You can also find this note in CHANGES:
*) The INSTALL_PREFIX Makefile variable has been renamed to
DESTDIR. That makes for less confusion on what this variable
is for. Also,
.
You could try to kldload the sem.ko module, or better, upgrade your system
canonically, which means rebuilding nginx, as said in another mail.
See http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/makeworld.html
for the canonical way to update your system.
--
Sergey Kandaurov
pluk...@nginx
rther increase it by setting kern.nbuf in /boot/loader.conf.
With your current maxbufspace value, it's kern.nbuf=210024 now.
--
Sergey Kandaurov
pluk...@nginx.com
> [iostat output elided: device sda at ~560 r/s, ~35 MB/s read, 100% %util]
>
You are likely hitting the IOPS limit. 550 r/s is quite enough to saturate
4x SATA 7200 in RAID10. There are reads of 64 of something per request
on average. I
rc;
> ngx_str_t referer;
>
> -referer.len = len - 7;
> +referer.len = len;
> referer.data = ref;
>
> rc = ngx_regex_exec_array(rlcf->regex, &referer, r->connection->log);
Committed, thanks!
--
Sergey Kandaurov
p
On 03.10.2014 13:42, Grzegorz Kulewski wrote:
Hello,
Is it true that a GET request that satisfies proxy_cache_bypass (and generates
BYPASS cache status in the access log) should also refresh proxy cache for that
URL?
There are several tutorials on the Internet that suggest it works. Also
On Oct 16, 2014, at 1:38 PM, igorb wrote:
> [...]
> So what is wrong with the usage of try_files in the initial regexp-based
> location config?
That is because a location defined with a regular expression has no fixed
length to make a replacement in try_files, which is what alias does.
-
it was there in 0.0.10.
http://hg.nginx.org/nginx/rev/0d08eabe5c7b
--
Sergey Kandaurov
On 29.04.2015 00:13, itpp2012 wrote:
Hmm, following: http://nginx.com/resources/admin-guide/tcp-load-balancing/
I get an nginx: [emerg] "stream" directive is not allowed here,
even though it's within the http context
You need to define it in the main context.
See http://nginx.org/r/stream for deta
nginx with zlib library sources specified
manually with the --with-zlib option, and that's an issue in zlib, not nginx.
If such a warning bothers you, you may want to look at this change:
https://github.com/madler/zlib/commit/e54e12
--
Sergey Kandaurov
origin fetches were actually done.
Depending on fastcgi_cache_revalidate setting, EXPIRED is either
simply due to an outdated cached response, or a failed revalidation.
Either way, a full response is served from the upstream (origin) server.
--
Sergey Kandaurov
cgi parser: 0
> http fastcgi header: ":: "
So, the header field name as generated by php
(and previously guessed by Valentin)
is invalid per RFC 7230, which is in turn referenced in RFC 7540.
: field-name = token
: token = 1*tchar
: tchar = "!" / "#" / "$" / "%" / "&" / "'" / "*" / "+" / "-" / "." /
: "^" / "_" / "`" / "|" / "~" / DIGIT / ALPHA
Not much to discuss.
--
Sergey Kandaurov
x_http_v2_prefix(3);
+prefix = ngx_http_v2_prefix(4);
}
value = ngx_http_v2_parse_int(h2c, &pos, end, prefix);
--
Sergey Kandaurov
On 12.12.2015 18:05, chi...@gmail.com wrote:
An exciting feature for ngx_http_slice_module. I tested it, but it creates
too many upstream connections; could I limit the connection count?
You could cache upstream connections. See for details:
http://nginx.org/r/keepalive
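A sketch of how cached upstream connections might be combined with the slice module (upstream name, address, and slice size are illustrative):

```nginx
upstream origin {                     # name and address are illustrative
    server 192.0.2.10:80;
    keepalive 32;                     # pool of idle connections to reuse
                                      # between per-slice subrequests
}

server {
    location / {
        slice 1m;                     # fetch in 1 MB subrange requests
        proxy_pass http://origin;

        # upstream keepalive needs HTTP/1.1 and an empty Connection header
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Range $slice_range;
    }
}
```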
rypted ClientHello: no
Cert subject: CN = localhost
Cert issuer: CN = localhost
HTTP/1.1 200 OK
Server: nginx/1.25.2
Date: Mon, 21 Aug 2023 14:58:40 GMT
Content-Type: text/plain
Content-Length: 23
Connection: close
X25519Kyber768Draft00
--
Sergey Kandaurov
a special treatment to preserve the SP before an empty reason
phrase. The patch below should help; although it doesn't look
efficient and could be polished, I think it is quite enough for
valid use cases.
# HG changeset patch
# User Sergey Kandaurov
# Date 169323
> On 29 Aug 2023, at 08:33, Maxim Dounin wrote:
>
> Hello!
>
> On Mon, Aug 28, 2023 at 08:59:28PM +0400, Sergey Kandaurov wrote:
>
>>
>>> On 26 Aug 2023, at 18:21, Jérémy Lal wrote:
>>>
>>> Hi,
>>>
>>> https://bugs.d
> On 31 Aug 2023, at 14:28, Maxim Dounin wrote:
>
> Hello!
>
> On Wed, Aug 30, 2023 at 04:20:15PM +0400, Sergey Kandaurov wrote:
>
>>> On 29 Aug 2023, at 08:33, Maxim Dounin wrote:
>>>
>>> On Mon, Aug 28, 2023 at 08:59:28PM +0400, Sergey Kandaur
ules/ngx_http_uwsgi_module.c
> @@ -1381,7 +1381,10 @@ ngx_http_uwsgi_process_header(ngx_http_r
> }
>
> u->headers_in.status_n = status;
> -u->headers_in.status_line = *status_line;
> +
> +if (status_line->len > 3) {
> +u->headers_in.status_line = *status_line;
> +}
>
> } else if (u->headers_in.location) {
> u->headers_in.status_n = 302;
>
>
After discussion in the adjacent thread,
I think the change is fine.
--
Sergey Kandaurov