I saw 4k, 16k, and 32k buffer sizes in the response chain; why not keep all
buffers the same size? Are these buffer sizes relevant to the chunked HTTP
transfer encoding?
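For what it's worth, chunked transfer encoding does not depend on any
particular buffer size: each piece of data that reaches the chunked filter is
framed as a hex length line, the data itself, and a trailing CRLF. A
standalone illustration of that framing (plain C, not nginx code):

#include <stdio.h>
#include <string.h>

/* Write one HTTP/1.1 chunk: "<hex length>\r\n<data>\r\n". */
static void emit_chunk(const char *data, size_t len)
{
    printf("%zx\r\n", len);          /* chunk-size line in hex */
    fwrite(data, 1, len, stdout);    /* chunk data */
    printf("\r\n");
}

int main(void)
{
    const char *body = "hello world";

    emit_chunk(body, strlen(body));  /* one chunk per buffer, whatever its size */
    printf("0\r\n\r\n");             /* zero-length chunk terminates the body */
    return 0;
}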
Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,294421,294421#msg-294421
___
Hi,
Never mind, I figured it out myself. The subrequest enters the fail label,
which returns NGX_ERROR, and that is what caused the above-mentioned error.
Thanks.
Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,294374,294392#msg-294392
___
Hi,
I also added the following calls in my header filter to make sure that the
modified response is sent to the client with chunked transfer encoding.
ngx_http_clear_content_length(r);
ngx_http_clear_accept_ranges(r);
ngx_http_clear_etag(r);
ngx_table_elt_t *header_entry =
    ngx_list_push(&r->headers_out.headers);
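For context, these calls usually sit in a header filter shaped roughly like
the sketch below, assuming the usual ngx_http_next_header_filter chaining;
the module and header names here are made up for illustration:

static ngx_int_t
ngx_http_my_header_filter(ngx_http_request_t *r)
{
    ngx_table_elt_t  *h;

    /* Dropping Content-Length makes nginx fall back to chunked
       transfer encoding for HTTP/1.1 responses. */
    ngx_http_clear_content_length(r);
    ngx_http_clear_accept_ranges(r);
    ngx_http_clear_etag(r);

    /* Standard pattern for adding a response header. */
    h = ngx_list_push(&r->headers_out.headers);
    if (h == NULL) {
        return NGX_ERROR;
    }

    h->hash = 1;
    ngx_str_set(&h->key, "X-My-Filter");    /* hypothetical header name */
    ngx_str_set(&h->value, "on");

    return ngx_http_next_header_filter(r);
}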
Hi Maxim,
Thanks for your reply. Your guidance helped me thoroughly understand the role
of calling ngx_http_next_body_filter(r, NULL) in the gzip module. The buffers
can now be reused, but I still have one issue that confuses me.
I got curl: (18) transfer closed with outstanding read data remaining.
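For reference, my understanding of the gzip-style pattern being discussed, as
a sketch (the ctx fields and the module name are assumptions): when no free
buffer is left, the filter passes NULL downstream so later filters can drain
the busy chain, and ngx_chain_update_chains() then returns fully sent buffers
to the free list:

if (ctx->free == NULL && ctx->busy != NULL) {

    /* Give downstream filters a chance to send what they hold. */
    if (ngx_http_next_body_filter(r, NULL) == NGX_ERROR) {
        return NGX_ERROR;
    }

    /* Buffers that were fully sent move from ctx->busy back to
       ctx->free, where they can be reused for new output. */
    ngx_chain_update_chains(r->pool, &ctx->free, &ctx->busy, &ctx->out,
                            (ngx_buf_tag_t) &ngx_http_my_filter_module);
}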
Hi,
I am writing my own filter module based on the gzip filter module. My filter
module first inserts a long text (200 to 1024 KB, depending on the situation)
at the beginning of the original response, and then does some other
manipulations to the original response. The pre-configured number of buffers
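A minimal body-filter sketch of that insertion, assuming a per-request
context that holds the text to prepend and a flag recording that it was sent
(the ctx layout and all names are hypothetical): the new buffer is simply
linked ahead of the incoming chain on the first call:

static ngx_int_t
ngx_http_my_body_filter(ngx_http_request_t *r, ngx_chain_t *in)
{
    ngx_buf_t           *b;
    ngx_chain_t         *cl;
    ngx_http_my_ctx_t   *ctx;

    ctx = ngx_http_get_module_ctx(r, ngx_http_my_filter_module);

    if (ctx == NULL || ctx->prefix_sent || in == NULL) {
        return ngx_http_next_body_filter(r, in);
    }

    /* Copy the text to insert into a buffer from the request pool. */
    b = ngx_create_temp_buf(r->pool, ctx->prefix.len);
    if (b == NULL) {
        return NGX_ERROR;
    }

    b->last = ngx_cpymem(b->pos, ctx->prefix.data, ctx->prefix.len);

    cl = ngx_alloc_chain_link(r->pool);
    if (cl == NULL) {
        return NGX_ERROR;
    }

    /* Link the prefix buffer ahead of the upstream chain. */
    cl->buf = b;
    cl->next = in;

    ctx->prefix_sent = 1;

    return ngx_http_next_body_filter(r, cl);
}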