Package: openafs
Version: 1.4.0-4

Hi,

We noticed a problem with openafs 1.4.  Basically, when you have a large
cache size and leave afs.conf at the defaults, afsd hangs during startup.

For example, if /var/cache/openafs is on its own partition (say a 100 GB
partition) and /etc/openafs/cacheinfo contains
"/afs:/var/cache/openafs:80000000", then when openafs-client starts, afs's
cache-trimming process consumes 100% CPU and loops endlessly (and /afs is
unusable).
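
For reference, the cacheinfo fields are mount-point:cache-directory:cache-size,
with the size counted in 1 KB blocks, so that line requests

    80,000,000 blocks * 1 KB/block ~= 80 GB

of cache on the 100 GB partition.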

The problem is that internally the cache size is stored in an afs_int32
(counting 1 KB blocks).  Use that value in 32-bit arithmetic and it quickly
overflows.

For example, in src/afs/afs.h:

extern afs_int32 afs_cacheBlocks;       /*1K blocks in cache */
#define CM_DCACHECOUNTFREEPCT   95      /* max pct of chunks in use */
#define afs_CacheIsTooFull() \
    (afs_blocksUsed - afs_blocksDiscarded > \
        (CM_DCACHECOUNTFREEPCT*afs_cacheBlocks)/100 || \
     afs_freeDCCount - afs_discardDCCount < \
        ((100-CM_DCACHECOUNTFREEPCT)*afs_cacheFiles)/100)


(95*afs_cacheBlocks) overflows a 32-bit integer: with the 80,000,000-block
cache above, 95 * 80,000,000 = 7,600,000,000, well past INT32_MAX
(2,147,483,647), so the threshold wraps to a negative value,
afs_CacheIsTooFull() reports the cache as too full on every check, and the
computation becomes meaningless when dealing with large caches.
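
To make that concrete, here is a minimal standalone C sketch (not OpenAFS
source; afs_int32 is stood in by int32_t, and the block count is taken from
the cacheinfo example above) showing the 32-bit threshold wrapping negative,
next to one possible 64-bit rework:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Standalone sketch: afs_int32 is a 32-bit signed integer in the
 * OpenAFS headers, stood in here by int32_t. */
typedef int32_t afs_int32;

#define CM_DCACHECOUNTFREEPCT   95      /* max pct of blocks in use */

int main(void)
{
    /* 80,000,000 1K blocks (~80 GB), as in the cacheinfo line above. */
    afs_int32 afs_cacheBlocks = 80000000;

    /* What the macro effectively computes: 95 * 80,000,000 = 7,600,000,000,
     * far above INT32_MAX.  The kernel code multiplies directly in 32 bits;
     * the cast here just shows the wrapped result without relying on
     * signed-overflow behaviour. */
    afs_int32 broken = (afs_int32)((int64_t)CM_DCACHECOUNTFREEPCT *
                                   afs_cacheBlocks) / 100;

    /* One possible rework (an illustration, not the official patch):
     * do the multiplication in 64 bits, then divide. */
    afs_int32 fixed = (afs_int32)(((int64_t)CM_DCACHECOUNTFREEPCT *
                                   afs_cacheBlocks) / 100);

    printf("32-bit threshold: %" PRId32 " blocks\n", broken); /* negative */
    printf("64-bit threshold: %" PRId32 " blocks\n", fixed);  /* 76000000 */
    return 0;
}

An alternative with much the same effect would be to divide afs_cacheBlocks
by 100 before multiplying, which stays within 32 bits at the cost of a
little rounding.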

