[EMAIL PROTECTED] wrote:
>              total       used       free     shared    buffers     cached
> Mem:        257660     253556       4104      33052      81252     149412
> -/+ buffers/cache:      22892     234768
> Swap:       530104       9612     520492
>
> I have been observing the memory usage for the last few weeks, and this
> happens consistently. Over the matter of days, buffers and cache grow to
> fill and exceed the available memory (excepting a tiny amount).
This is actually what it is supposed to be doing. In this snapshot, you
are "committing" about 23 MB of physical memory and 9 MB of
swap for real processes. Linux uses any "leftovers" as I/O cache and
buffers, improving performance. Roughly, disk I/O is cached until
physical RAM is approximately filled. This memory is freed readily when
applications need it.
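To see the same accounting yourself, here's a small sketch that computes the "-/+ buffers/cache" used figure the way free does, straight from /proc/meminfo (field names assume a reasonably current /proc layout; values are in kB):

```shell
# Read the relevant /proc/meminfo fields (all reported in kB).
total=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
free_kb=$(awk '/^MemFree:/ {print $2}' /proc/meminfo)
buffers=$(awk '/^Buffers:/ {print $2}' /proc/meminfo)
cached=$(awk '/^Cached:/  {print $2}' /proc/meminfo)

# Memory actually committed to the kernel and applications, i.e. what
# free shows on its "-/+ buffers/cache" line as "used".
used=$(( total - free_kb - buffers - cached ))
echo "committed (minus buffers/cache): ${used} kB of ${total} kB"
```

In the snapshot above, that arithmetic is what turns 253556 kB "used" into only 22892 kB of real commitment.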
Try this: in a situation like the above, start GIMP and create a new, large
RGB image. GIMP will suddenly need lots of memory, and the system will
allocate that application memory from the cache. Here's an example on a
128 MB box running kernel 2.4.2:
Before GIMP:
             total       used       free     shared    buffers     cached
Mem:        127352     125880       1472          0       1092      81400
-/+ buffers/cache:      43388      83964
Swap:       136512          0     136512
After loading a big (2k x 2k) RGB image:
             total       used       free     shared    buffers     cached
Mem:        127352     125892       1460          0        892      44008
-/+ buffers/cache:      80992      46360
Swap:       136512          0     136512
And closing GIMP back down (and freeing all of those pages):
             total       used       free     shared    buffers     cached
Mem:        127352      59400      67952          0        668      19368
-/+ buffers/cache:      39364      87988
Swap:       136512          0     136512
Side issue: there's actually _less_ memory committed afterwards (39 MB
versus 43 MB before) because the dentry and inode caches also shrank
under the memory pressure (this is kernel 2.4; the equivalent caches in
kernel 2.2 are rather restricted). See /proc/slabinfo for details...
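If you want to watch those slab caches directly, something like this works (the exact cache names vary by kernel version -- e.g. dentry_cache on 2.4, dentry on later kernels -- and /proc/slabinfo is often root-only, so this falls back gracefully):

```shell
# Pull the dentry and inode cache lines out of /proc/slabinfo.
# Matches both old (dentry_cache) and newer (dentry) naming.
slabs=$(grep -E '^(dentry|inode)' /proc/slabinfo 2>/dev/null \
        || echo "slabinfo not readable here (try as root)")
echo "$slabs"
```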
This is also why so much memory seems to be "used" after slocate indexes
your disk contents, under kernel 2.4. The memory looks as though
it's used by applications, but is actually held by the dentry and inode
caches, and is readily freed. You don't notice this in kernel 2.2 since
these slab caches are limited.
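You can reproduce the slocate effect in miniature: walking a directory tree populates the dentry and inode caches, so free memory drops even though no process grew. A hedged demo (the path is just an example; the drop may be small if the tree is already cached):

```shell
# Snapshot free memory, crawl a directory tree the way an indexer
# would, then snapshot again. The difference is largely dentry/inode
# cache growth, not application memory.
before=$(awk '/^MemFree:/ {print $2}' /proc/meminfo)
find /usr/share -type d >/dev/null 2>&1
after=$(awk '/^MemFree:/ {print $2}' /proc/meminfo)
echo "MemFree before: ${before} kB, after: ${after} kB"
```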
One of the balancing issues in kernel VM is how much priority to give
application memory over cached pages -- or, coarsely stated, how far do you
let an application eat into the I/O cache before you decide to start swapping?
If you swap aggressively instead of freeing cached pages, you "hit the wall"
and induce memory pressure and bad performance too early. However, if you
sacrifice the cache entirely before swapping, performance may suffer
because you may need to read files in again that otherwise
would have still been sitting in the cache. So swapping the "right pages" out is
also really important. You don't want to swap stuff out that you'll need
to swap right back in again a few cycles later.
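One rough way to watch this trade-off in action is the kernel's cumulative swap counters: swap-ins climbing right on the heels of swap-outs suggests pages being evicted and faulted straight back. (These /proc/vmstat field names are from 2.6+ kernels, newer than the kernels discussed here; on 2.4 you'd read the swap columns of vmstat instead.)

```shell
# Cumulative count of pages swapped in and out since boot.
swpin=$(awk '/^pswpin / {print $2}' /proc/vmstat)
swpout=$(awk '/^pswpout / {print $2}' /proc/vmstat)
echo "pages swapped in: ${swpin:-?}, out: ${swpout:-?}"
```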
The new page aging and multiqueue VM in kernel 2.4 does a pretty good job
under a wide variety of loads. Kernel 2.2.19(pre) does wonders for the 2.2
series, which for some time now has had pretty poor VM performance
in some circumstances (IMHO).
Sorry if that's a bit long-winded, but I hope it sheds some light on your
question.
Craig Kulesa
[EMAIL PROTECTED]
http://loke.as.arizona.edu/~ckulesa/
_______________________________________________
Redhat-list mailing list
[EMAIL PROTECTED]
https://listman.redhat.com/mailman/listinfo/redhat-list