Stuart Henderson <s...@spacehopper.org> wrote:

> On 2021/02/27 11:50, Theo de Raadt wrote:
> > To see the problem, it is better to look at "UVM amap" in "vmstat -m":
> > 
> >       UVM amap 32835  1690K   2636K 78644K 26812908    0  16,32,64,128,256,512,1024,4096,8192
> > 
> >                ^^^^ this number is way too big, it should be 500 to 2000 
> > ish.
> 
> Some things I'm running use a bunch more.
> 
> UVM amap 125046  8548K   8898K 78644K 147170477    0  16,32,64,128,256,512,1024,2048,4096,8192,16384,32768,65536,524288
> 
> I wouldn't be surprised if there's some kind of leak on this system, but
> most of these do go away after closing things (it's not Iris in this
> case). After closing firefox (with a lot of tabs), chrome, mysqld, and a
> couple of Java and Perl things:
> 
> UVM amap  7201   411K   8898K 78644K 147254716    0  16,32,64,128,256,512,1024,2048,4096,8192,16384,32768,65536,524288
> 
> mutt and some other things that I didn't close use quite a few too.
> 
> (I have some work software using perl PDF::API2 with large files which
> hits amap *really* hard; stefan@ did a uvm commit in 2016 that helped a
> lot with stopping that from killing the kernel)

The amap INUSE count is the sum of all userland sub-address-space mappings
(because we are so aggressive about doing address space randomization for
all objects).

Since any malloc larger than a page (and any mmap) will be naturally
guarded, each one gets its own amap.  If you leak, this count will grow
uncomfortably.  Even if you don't leak, it still grows until the process
releases the full range of the amap, or exits.

Having chosen many years ago to do aggressive address space
randomization, we kind of accept the kernel datastructure cost.

Right now we are not chasing how to handle issues with natural growth;
first we go after the leaks.  In the X server, in particular.
