Jens wrote:

> Just out of curiosity, what exactly did you do?
I'll have to dig up the configs; I'll send a reply when I find them.

> That was an advantage, true. Nowadays you can mount using UUIDs or
> disk labels, which also works fine.

Yeah, that's true.

> OK, I do this with quotas mainly. I haven't yet come across a scenario
> where these were not sufficient any more. (Except maybe for /tmp and
> /var which have their own partitions.)

Well, this is for the file system itself rather than for the users on the
system, e.g.:

- allocate a thinly provisioned 1TB volume to the file system
- format the volume (space taken on the array: ~20MB)
- write 100GB of data to the volume (space taken on the array: 100GB)
- delete that 100GB of data from the volume (space taken on the array: still 100GB)

The LVM restriction is mainly to compensate for the file system's inherent
inefficiency in not re-using existing blocks that were freed before
allocating new blocks (sometimes it does, but it doesn't do a perfect job
of it).

It's a fairly new technology and not many storage systems implement it yet,
but it's a wonderful way to grow on demand. At my last company I was able
to provision 400% more storage to the servers than I actually had capacity
for. As we got closer to the limit of the installed storage system we added
more space and re-balanced the array for maximum performance: no downtime,
no impact.

I plan to bring in an evaluation storage system next month at my company
that implements the next generation of thin provisioning, which will allow
the storage array to automatically reclaim space from the file system if it
is zeroed out. It may take a year or two for the software vendors to catch
up, but when they do I'll be ready for even more storage efficiency! I love
technology.

nate
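P.S. For anyone unfamiliar with the accounting here, the steps above can be
sketched as a toy simulation. This is purely illustrative Python, not any
vendor's actual API: the `ThinVolume` class, the per-block granularity, and
the zero-detect reclaim behavior are all made up to show why deleted data
stays allocated on the array while zeroed data can be reclaimed by a
next-generation array.

```python
class ThinVolume:
    """Toy model of a thinly provisioned volume (illustrative only).

    The array allocates backing blocks only when they are first written,
    never sees file-system deletes, and (next-generation behavior)
    reclaims blocks that are overwritten with zeros.
    """

    def __init__(self, logical_blocks):
        self.logical_blocks = logical_blocks  # size the host sees
        self.backing = {}                     # block -> data actually stored on the array

    def write(self, block, data):
        if data is None:
            # Next-gen thin provisioning: a zeroed block is reclaimed.
            self.backing.pop(block, None)
        else:
            self.backing[block] = data

    def delete(self, block):
        # A file-system delete only updates file-system metadata; the
        # array never sees it, so the backing block stays allocated.
        pass

    def array_usage(self):
        return len(self.backing)


# Walk through the example from the mail (1 block standing in for 1GB):
vol = ThinVolume(logical_blocks=1000)      # "1TB" volume, ~nothing allocated yet
for b in range(100):
    vol.write(b, "data")                   # write "100GB"
print(vol.array_usage())                   # 100 -- array holds 100 blocks

for b in range(100):
    vol.delete(b)                          # delete the data in the file system
print(vol.array_usage())                   # 100 -- still allocated, no reclaim

for b in range(100):
    vol.write(b, None)                     # zero it out instead
print(vol.array_usage())                   # 0 -- next-gen array reclaimed the space
```

The middle step is the whole problem: the array's view and the file
system's view diverge as soon as files are deleted, which is why zero-out
(or, later, TRIM/UNMAP-style hints) is needed to close the gap.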