On 2014-11-12 07:45, Roberto De Ioris wrote:
> I tried hard, but i was not able to reproduce it (ubuntu trusty 64bit
> default perl).
Well... I've switched to trusty too (and ran apt-get dist-upgrade, just
in case) to check whether this is a perl issue, and I can still
reproduce it.
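For reference, the test app is just a plain streaming (delayed-response)
PSGI responder, roughly along these lines (illustrative only - the exact
psgi-streamer.pl is not reproduced here):

```perl
# Minimal PSGI streaming app of the kind used in the tests above.
my $app = sub {
    my ($env) = @_;
    return sub {                    # delayed/streaming response
        my ($responder) = @_;
        my $writer = $responder->(
            [200, ['Content-Type' => 'text/plain']]);
        # $writer->write("data");   # growth shows up even without writing
        $writer->close;
    };
};
```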
Results (without writing to stream):
After 1st request:
{address space usage: 66813952 bytes/63MB} {rss usage: 8884224
bytes/8MB} [pid: 13449|app: 0|req: 1/1] 127.0.0.1 () {24 vars in 247
bytes}
After 100000 requests:
{address space usage: 76546048 bytes/73MB} {rss usage: 18571264
bytes/17MB} [pid: 13449|app: 0|req: 100001/100001] 127.0.0.1 () {24 vars
in 251 bytes}
And another 100000 requests:
{address space usage: 86142976 bytes/82MB} {rss usage: 28270592
bytes/26MB} [pid: 13449|app: 0|req: 200001/200001] 127.0.0.1 () {24 vars
in 251 bytes}
As you can see, it is definitely growing.
valgrind (run as: valgrind --log-file=0vg --leak-check=full
--show-leak-kinds=all --) reveals this after 100000 requests:
....
==13832== 2,411,280 bytes in 591 blocks are still reachable in loss
record 2,311 of 2,313
==13832== at 0x4C2AB80: malloc (in
/usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==13832== by 0x67AE864: Perl_safesysmalloc (in
/usr/lib/libperl.so.5.18.2)
==13832== by 0x67D4DCD: ??? (in /usr/lib/libperl.so.5.18.2)
==13832== by 0x67D57C4: Perl_sv_newmortal (in
/usr/lib/libperl.so.5.18.2)
==13832== by 0x48137E: XS_stream (psgi_loader.c:86)
==13832== by 0x67D3865: Perl_pp_entersub (in
/usr/lib/libperl.so.5.18.2)
==13832== by 0x67CBE85: Perl_runops_standard (in
/usr/lib/libperl.so.5.18.2)
==13832== by 0x675D48F: Perl_call_sv (in /usr/lib/libperl.so.5.18.2)
==13832== by 0x489150: uwsgi_perl_call_stream (psgi_plugin.c:169)
==13832== by 0x48CFA5: uwsgi_perl_request (psgi_plugin.c:576)
==13832== by 0x41B801: wsgi_req_recv (utils.c:1427)
==13832== by 0x462C73: simple_loop_run (loop.c:144)
==13832==
==13832== 2,415,360 bytes in 592 blocks are still reachable in loss
record 2,312 of 2,313
==13832== at 0x4C2AB80: malloc (in
/usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==13832== by 0x67AE864: Perl_safesysmalloc (in
/usr/lib/libperl.so.5.18.2)
==13832== by 0x67D4DCD: ??? (in /usr/lib/libperl.so.5.18.2)
==13832== by 0x67DC720: Perl_newSV_type (in
/usr/lib/libperl.so.5.18.2)
==13832== by 0x67DC73D: Perl_newRV_noinc (in
/usr/lib/libperl.so.5.18.2)
==13832== by 0x481397: XS_stream (psgi_loader.c:86)
==13832== by 0x67D3865: Perl_pp_entersub (in
/usr/lib/libperl.so.5.18.2)
==13832== by 0x67CBE85: Perl_runops_standard (in
/usr/lib/libperl.so.5.18.2)
==13832== by 0x675D48F: Perl_call_sv (in /usr/lib/libperl.so.5.18.2)
==13832== by 0x489150: uwsgi_perl_call_stream (psgi_plugin.c:169)
==13832== by 0x48CFA5: uwsgi_perl_request (psgi_plugin.c:576)
==13832== by 0x41B801: wsgi_req_recv (utils.c:1427)
==13832==
==13832== 4,798,080 bytes in 1,176 blocks are still reachable in loss
record 2,313 of 2,313
==13832== at 0x4C2AB80: malloc (in
/usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==13832== by 0x67AE864: Perl_safesysmalloc (in
/usr/lib/libperl.so.5.18.2)
==13832== by 0x67D5216: Perl_more_bodies (in
/usr/lib/libperl.so.5.18.2)
==13832== by 0x67DC3C6: Perl_sv_upgrade (in
/usr/lib/libperl.so.5.18.2)
==13832== by 0x67E5E7F: Perl_sv_bless (in /usr/lib/libperl.so.5.18.2)
==13832== by 0x4813B3: XS_stream (psgi_loader.c:86)
==13832== by 0x67D3865: Perl_pp_entersub (in
/usr/lib/libperl.so.5.18.2)
==13832== by 0x67CBE85: Perl_runops_standard (in
/usr/lib/libperl.so.5.18.2)
==13832== by 0x675D48F: Perl_call_sv (in /usr/lib/libperl.so.5.18.2)
==13832== by 0x489150: uwsgi_perl_call_stream (psgi_plugin.c:169)
==13832== by 0x48CFA5: uwsgi_perl_request (psgi_plugin.c:576)
==13832== by 0x41B801: wsgi_req_recv (utils.c:1427)
Without streaming everything is fine, as expected. Just in case, I've
tried 2.0.1 and 2.0.5 - similar results.
It does not look like a problem in perl itself, since it happens with
both 5.16.1 (CentOS) and 5.18.2 (trusty).
The only thing in common between those two systems (CentOS 7 & Trusty)
is that both run under Proxmox/OpenVZ, but I doubt that this has (or
could have) any impact, especially after looking at the valgrind
results.
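For what it's worth, all three stacks bottom out in mortal/blessed SVs
created inside XS_stream. In embedded Perl, temporaries made with
sv_newmortal()/sv_2mortal() are only released when the enclosing
SAVETMPS scope is closed, so a per-request call path that never reaches
FREETMPS/LEAVE accumulates exactly this kind of "still reachable"
memory. The usual perlcall bracketing looks like this (a sketch only -
names and structure here are illustrative, not the actual uWSGI code,
and it needs perl.h from a Perl build to compile):

```c
/* Sketch: requires a Perl build environment (EXTERN.h/perl.h). */
#include <EXTERN.h>
#include <perl.h>

/* hypothetical per-request call into a Perl coderef */
static void call_stream_cb(PerlInterpreter *my_perl, SV *cb)
{
    ENTER;                           /* open a new Perl scope            */
    SAVETMPS;                        /* mark the mortal (tmps) stack     */

    call_sv(cb, G_DISCARD|G_NOARGS); /* mortals created in here...       */

    FREETMPS;                        /* ...are freed at this point       */
    LEAVE;                           /* close the scope                  */
}
```

If the FREETMPS/LEAVE pair is missing (or skipped on some path), each
request leaves its mortals on the tmps stack until interpreter shutdown,
which would match the steady RSS growth above. That's only a guess from
the stack traces, though, not a confirmed diagnosis.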
Additional info:
################# uWSGI configuration #################
pcre = True
kernel = Linux
malloc = libc
execinfo = False
ifaddrs = True
ssl = True
zlib = True
locking = pthread_mutex
plugin_dir = .
timer = timerfd
yaml = embedded
json = False
filemonitor = inotify
routing = True
debug = False
capabilities = False
xml = libxml2
event = epoll
############## end of uWSGI configuration #############
*** Starting uWSGI 2.0.8 (64bit) on [Thu Nov 13 06:57:27 2014] ***
compiled with version: 4.8.2 on 13 November 2014 06:54:03
os: Linux-2.6.32-33-pve #1 SMP Fri Sep 26 08:02:30 CEST 2014
nodename: u1404
machine: x86_64
clock source: unix
pcre jit disabled
detected number of CPU cores: 2
current working directory: /home/aldem/src/perl
detected binary path: /home/aldem/src/uwsgi/uwsgi-2.0.8/uwsgi
*** WARNING: you are running uWSGI without its master process manager
***
your processes number limit is 127542
your memory page size is 4096 bytes
detected max file descriptor number: 1024
lock engine: pthread robust mutexes
thunder lock: disabled (you can enable it with --thunder-lock)
uwsgi socket 0 bound to TCP address :9090 fd 3
initialized Perl 5.18.2 main interpreter at 0x1682b30
your server socket listen backlog is limited to 100 connections
your mercy for graceful operations on workers is 60 seconds
mapped 72768 bytes (71 KB) for 1 cores
*** Operational MODE: single process ***
Plack::Util is not installed, using "do" instead of "load_psgi"
2014-11-13 06:57:27.0.240637 [14603] Worker started
PSGI app 0 (psgi-streamer.pl) loaded in 0 seconds at 0x18583e8
(interpreter 0x1682b30)
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI worker 1 (and the only) (pid: 14603, cores: 1)
Regards,
Alexander.
_______________________________________________
uWSGI mailing list
[email protected]
http://lists.unbit.it/cgi-bin/mailman/listinfo/uwsgi