On Sunday 24 August 2003 07:07 pm, Jason Dixon wrote:
> Sorry, I'm joining this thread way after the fact.  The only thing I'll
> mention is that I *have* seen certain applications zero out a very large
> filesize in preparation for filling up that space with a series of
> chunks.  BitTorrent is the *perfect* example of that.  Say you start to
> download a 500M ISO image.  It breaks it into chunks so it can perform
> parallel downloads from multiple clients.  Even though the total
> download at any one time may only be a fraction of that size, the file
> is reserved at its maximum size.  I don't know how it does it, but it
> does.  :)

Dunno if that's his problem, but it's quite common.  They are called
"sparse files".  The application fopen()s the file, then fseek()s (or
fsetpos()s) out to where the end of the file should be and writes a
byte there (or just ftruncate()s it to the right length).  This is done
a lot for buffers that need to work fast, like DBMS space, ring
buffers, etc.  Note that the disk blocks are only allocated as the
space is actually written to, so this is not as wasteful as you would
think.  A quick sketch of the trick is below.

The same thing happens with RAM.  You can malloc() much more RAM than
you have (even virtual RAM), but it doesn't actually allocate memory
until you write to it (second sketch below).  This can have
"unfortunate effects" if you actually use more memory than you have,
but it's generally not a problem for a correctly-spec'd system.  But
your program can run out of memory at write time instead of malloc()
time for this reason.

RAM allocation is even more complicated, because the OS takes unused
memory and uses it for buffers and cache, and gives it back as needed.
So you have RAM that is free, RAM that is allocated, and RAM that is
"borrowed" to be returned as needed.
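Something like this is all it takes (just a quick sketch; the filename
and the 500M size are made up for the example):

    /* Create a 500MB file that takes almost no disk space until
     * the blocks are actually written to. */
    #include <stdio.h>

    int main(void)
    {
        const long size = 500L * 1024 * 1024;   /* pretend 500M ISO */
        FILE *fp = fopen("image.iso", "wb");
        if (fp == NULL) {
            perror("fopen");
            return 1;
        }

        /* Seek one byte shy of the full size and write a single
         * byte.  The file's length becomes 500MB, but the blocks
         * that were skipped over are never allocated. */
        if (fseek(fp, size - 1, SEEK_SET) != 0) {
            perror("fseek");
            return 1;
        }
        fputc(0, fp);

        fclose(fp);
        return 0;
    }

Run that and compare "ls -l image.iso" (the logical size, 500M) with
"du -h image.iso" (blocks actually allocated); on ext2/ext3 the du
number will be a few KB.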
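And here's the RAM side of it (again just a sketch; the 1GB figure
assumes a box with less than 1GB of RAM+swap, so adjust to taste):

    /* malloc() a buffer bigger than physical RAM.  The malloc()
     * itself normally succeeds; pages are only really claimed
     * when they are touched. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        size_t huge = (size_t)1024 * 1024 * 1024;   /* 1GB */
        char *buf = malloc(huge);
        if (buf == NULL) {
            printf("malloc() itself failed\n");
            return 1;
        }
        printf("malloc() of 1GB succeeded -- nothing committed yet\n");

        /* Touching the pages is what actually allocates memory;
         * on an overcommitted box, *this* is where you run out,
         * not at the malloc() above. */
        memset(buf, 0, huge);
        printf("...and now it's really allocated\n");

        free(buf);
        return 0;
    }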
--
DDDD   David Kramer         [EMAIL PROTECTED]       http://thekramers.net
DK KD
DKK D  The avalanche has already begun.
DK KD  It is too late for the pebbles to vote.
DDDD                - Kosh, Babylon 5