On Sun, Feb 25, 2001 at 02:08:38PM -0500, Stan Brown wrote:
> Let's just cut to the chase on this.
>
> I need to be able to create, and work with large files (> 2G) under
> Debian Linux. Secondly, I need the most stable system for doing this,
> as it will be a production machine.
You need many things from 'unstable' (though perhaps most of them are
now in 'testing', I haven't looked). The biggies:

  a 2.4.x kernel
  glibc 2.1
  lots of little things like the current fileutils, etc.

Remember, 'unstable' doesn't mean "it crashes like Windows" -- it means
that it is constantly changing. It's probably fine for a production
machine, though you'll have to keep up on security updates yourself.
(And be careful with 'apt-get upgrade', since some days things may be
broken. :))

> I have no particular bias as to what filesystem type I use for this.

It doesn't matter: the limit isn't related to the filesystem (despite
what some people keep saying), but to the kernel API and glibc. ext2
has supported huge files on Alpha forever, because a 'long' is 64 bits
on Alpha, so glibc and the kernel handle it without any special APIs.

> How do I go about setting up a machine to do this?

See above. It works fine on 'unstable', and probably works on 'testing'
if you upgrade the kernel to 2.4.

> I have already set up a test machine to test this on. I installed
> stable, and upgraded to testing. It's a pretty minimal install so
> far.
>
> What is the best path to achieve this?

You'll also have to compile your code so that it knows it has more than
31 bits for file offsets. (Search Google for "Large File Summit" and
you'll find the sorts of options for dealing with this, either through
new function calls or through compiler switches to make it
'transparent'.)

-- 
CueCat decoder .signature by Larry Wall:
#!/usr/bin/perl -n
printf "Serial: %s Type: %s Code: %s\n",
map { tr/a-zA-Z0-9+-/ -_/; $_ = unpack 'u', chr(32 + length()*3/4) . $_;
s/\0+$//; $_ ^= "C" x length; } /\.([^.]+)/g;