I'd like to publicly thank all those who are contributing to this
thread -- the discussion is very informative.
I suggested initially creating the imagefile with
# dd if=/dev/arandom bs=1m of=/mnt/imagefile count=...
Several people have commented on this from the perspective of
cryptographic security (not leaking where data has & hasn't been
written). However, I actually had a rather different goal in mind:
I'm thinking of squeezing 5-10% more space out of a given-size disk
by tuning the underlying filesystem parameters to 'newfs':
(a) Since the underlying filesystem will only hold a single huge
'imagefile', it only needs one inode (or maybe a handful, to allow
for directories), so I can specify something like 'newfs -i 1048576'
or even 'newfs -i 1073741824'.
(b) If I pre-allocate the imagefile with dd from /dev/arandom, all its
blocks will actually be allocated, so the file won't grow thereafter,
and hence no further block allocations will be needed.  I can therefore
(I think) reclaim the default 5% free-space reserve via 'newfs -m 0'.
In contrast, an initially-zeroed imagefile would be sparse, with most
blocks not actually allocated, so I'd need the free-space reserve to
keep imagefile block allocation reasonably fast & vaguely contiguous
on disk as the encrypted filesystem is used.
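The sparse-vs-preallocated difference is easy to demonstrate.  A quick
sketch (using GNU dd syntax with 'bs=1M', and /dev/urandom as a portable
stand-in for OpenBSD's /dev/arandom; the filenames are made up):

```shell
# A "sparse" image: seek past the end without writing any data.
# Apparent size is 4 MiB, but almost no blocks are allocated yet,
# so every later write forces a fresh block allocation.
dd if=/dev/zero of=sparse.img bs=1M seek=4 count=0 2>/dev/null

# A pre-allocated image: every block is actually written up front,
# so the filesystem has already done all its allocation work.
dd if=/dev/urandom of=full.img bs=1M count=4 2>/dev/null

# Compare apparent size (ls -l) with blocks actually allocated (du):
ls -l sparse.img full.img
du -k sparse.img full.img
```

Both files report the same apparent size, but 'du' shows the sparse one
occupying (almost) no disk blocks, which is exactly the case where the
free-space reserve still matters.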
Browsing newfs(8), '-g very_big_number -h small_number' also look useful.
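Putting those flags together, the invocation I have in mind would look
something like this (a sketch only, not tested on a real partition; the
device name is hypothetical, and the flag meanings are per newfs(8)):

```shell
# -i 1048576       : one inode per 1 MiB of data space (we need ~1 inode)
# -m 0             : no free-space reserve (imagefile is pre-allocated)
# -g 1073741824    : expected average file size = one huge file
# -h 1             : expected average files per directory = 1
newfs -i 1048576 -m 0 -g 1073741824 -h 1 /dev/rsvnd0a
```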
Perhaps I'm being overly aggressive in my disk-space optimization...
but I've been using computers for 30+ years, and every disk I've ever
used has reached an equilibrium of "over-full", so an easy 5-10% is
tempting...
--
-- "Jonathan Thornburg [remove -animal to reply]" <[EMAIL PROTECTED]>
t <= 31.Aug.2008: School of Mathematics, U of Southampton, England
t > 1.Sep.2008: Dept of Astronomy, Indiana University, Bloomington, USA
"Washing one's hands of the conflict between the powerful and the
powerless means to side with the powerful, not to be neutral."
-- quote by Freire / poster by Oxfam