Hi, I was monitoring Squid today and I noticed that the Process Data Segment Size was increasing; once it passed 2 GB, cachemgr started to display negative values:
Resource usage for squid:
        UP Time:        25403.457 seconds
        CPU Time:       20538.541 seconds
        CPU Usage:      80.85%
        CPU Usage, 5 minute avg:        97.66%
        CPU Usage, 60 minute avg:       96.60%
        Process Data Segment Size via sbrk(): -2037196 KB
        Maximum Resident Size: 0 KB
        Page faults with physical i/o: 0
Memory usage for squid via mallinfo():
        Total space in arena:  -2037196 KB
        Ordinary blocks:       -2037695 KB     98 blks
        Small blocks:               0 KB      0 blks
        Holding blocks:         17176 KB      2 blks
        Free Small blocks:          0 KB
        Free Ordinary blocks:     498 KB
        Total in use:          -2020519 KB 100%
        Total free:               498 KB 0%
        Total size:            -2020020 KB
Memory accounted for:
        Total accounted:       1991653 KB
        memPoolAlloc calls: 338930824
        memPoolFree calls:  322167199

Why does cachemgr display these negative values?

I switched to the cachemgr "mem" page to see which memory pool has the
biggest amount: mem_node has 75% impact. Is this normal, and what is the
mem_node pool?

On Sun, 9 Jan 2005 10:57:07 +0200, Houssam Melhem <[EMAIL PROTECTED]> wrote:
> On Sat, 8 Jan 2005 23:28:19 +0100 (CET), Henrik Nordstrom
> <[EMAIL PROTECTED]> wrote:
> > On Sat, 8 Jan 2005, Houssam Melhem wrote:
> >
> > > BTW:
> > > i have 8GB of RAM and
> >
> > What CPU type?
>
> dual Xeon processors at 3.8GHz
>
> > Is Squid compiled as 32-bit or 64-bit?
>
> 32-bit
>
> > How large is the Squid process?
>
> How to get this info, from top or from cachemgr?
> cachemgr says now:
> Process Data Segment Size via sbrk(): 536056 KB
>
> > My guess is that you are running into various 2GB barrier problems.
> > Computers are still somewhat troublematic in dealing with values larger
> > than 2GB.
>
> How to deal with that?
> Is it better to use diskd, as it forks other processes to deal with I/O,
> rather than using aufs, which makes threads in the same Squid process?
>
> > Regards
> > Henrik
