On Friday 22 April 2005 16:32, Chris Mason wrote:
> If I pack every 64k (uncompressed), the checkout-tree time goes down to
> 3m14s. That's a very big difference considering how stupid my code is.
> .git was only 20% smaller with 64k chunks. I should be able to do
> better... I'll do one more run.
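The gain from 64k chunks comes from deflating many small objects through one shared compression window instead of compressing each object alone. A minimal sketch of that packing idea (illustrative Python, not Chris's actual code; the `pack_chunk` helper and sample objects are invented for the demonstration):

```python
import zlib

def pack_chunk(objects, chunk_size=64 * 1024):
    # Concatenate small objects until roughly chunk_size bytes have
    # accumulated, then deflate the whole chunk at once. Similar
    # objects share the deflate window, so the result is smaller
    # than compressing each tiny object individually.
    chunks, buf = [], b""
    for obj in objects:
        buf += obj
        if len(buf) >= chunk_size:
            chunks.append(zlib.compress(buf))
            buf = b""
    if buf:
        chunks.append(zlib.compress(buf))
    return chunks

# 200 similar tiny "source files" stand in for loose git objects.
objects = [b"static int foo_%d(void) { return %d; }" % (i, i)
           for i in range(200)]
packed = sum(len(c) for c in pack_chunk(objects))
individual = sum(len(zlib.compress(o)) for o in objects)
```

With redundant inputs like these, `packed` comes out well below `individual`, which is the effect behind the smaller `.git`.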
On Thu, 21 Apr 2005, Chris Mason wrote:
>
> We can sort the files before reading them in, but even if we order things
> perfectly, we're spreading the io out too much across the drive.
No we don't.
It's easy to just copy the repository in a way where this just isn't true:
you sort the ob
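The layout idea Linus is describing, reading a tree in one sorted sweep rather than seeking object by object, can be sketched as follows (hypothetical helper for illustration, not git code):

```python
import os

def read_tree_sorted(root):
    # Collect every file path under root first, then read in sorted
    # path order: directory-adjacent files are visited together in
    # one ordered pass instead of random-access seeks per object.
    paths = []
    for dirpath, _, names in os.walk(root):
        for name in names:
            paths.append(os.path.join(dirpath, name))
    paths.sort()
    contents = []
    for p in paths:
        with open(p, "rb") as f:
            contents.append((p, f.read()))
    return contents
```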
Chris Mason <[EMAIL PROTECTED]> writes:
> Shrug, we shouldn't need help from the kernel for something like this.
> git as a database hits worst case scenarios for almost every FS.
Not sure.
> 1) subdirectories with lots of files
Correct. But git doesn't search dirs so it's not that bad.
>
Linus Torvalds <[EMAIL PROTECTED]> writes:
> And dammit, if I'm the original author and likely biggest power-user, and
> _I_ can't be bothered to use special filesystems, then who can? Nobody.
If someone is motivated enough, and if the task is quite trivial (as it
seems to be), someone may try it.
On Thursday 21 April 2005 18:47, Linus Torvalds wrote:
> On Thu, 21 Apr 2005, Chris Mason wrote:
> > Shrug, we shouldn't need help from the kernel for something like this.
> > git as a database hits worst case scenarios for almost every FS.
[ ... ]
We somewhat agree on most of this, I snipped ou
On Thu, 21 Apr 2005, Chris Mason wrote:
>
> Shrug, we shouldn't need help from the kernel for something like this.
> git as a database hits worst case scenarios for almost every FS.
I really disagree.
> We've got:
>
> 1) subdirectories with lots of files
> 2) wasted space for tiny files
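Point 2 is plain block-size rounding: a loose object occupies whole filesystem blocks, so a tiny file wastes almost a full block. Illustrative arithmetic, assuming 4 KiB blocks (the helper name is invented here):

```python
def wasted_bytes(file_size, block_size=4096):
    # A file occupies whole blocks; the unused tail of its last
    # block is wasted space on disk.
    if file_size == 0:
        return 0
    blocks = -(-file_size // block_size)  # ceiling division
    return blocks * block_size - file_size

# A 200-byte loose object on a 4 KiB-block filesystem wastes
# 4096 - 200 = 3896 bytes, roughly 95% of its allocation.
```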
On Thursday 21 April 2005 15:28, Krzysztof Halasa wrote:
> Linus Torvalds <[EMAIL PROTECTED]> writes:
> > Wrong. You most definitely _can_ lose: you end up having to optimize for
> > one particular filesystem blocking size, and you'll lose on any other
> > filesystem. And you'll lose on the special
On Thu, 21 Apr 2005, Krzysztof Halasa wrote:
>
> If someone needs better on-disk ratio, (s)he can go with 1 KB filesystem
> or something like that, without all the added complexity of packing.
I really think the argument that "you can use filesystem feature XYZ" is
bogus.
I know that I'm not
Linus Torvalds <[EMAIL PROTECTED]> writes:
> Wrong. You most definitely _can_ lose: you end up having to optimize for
> one particular filesystem blocking size, and you'll lose on any other
> filesystem. And you'll lose on the special filesystem of "network
> traffic", which is byte-granular.
If
On Thursday 21 April 2005 11:41, Linus Torvalds wrote:
> On Thu, 21 Apr 2005, Chris Mason wrote:
> > There have been a few threads on making git more space efficient, and
> > eventually someone mentions tiny files and space fragmentation. Now that
> > git object names are decoupled from their comp
On Thu, 21 Apr 2005, Chris Mason wrote:
>
> There have been a few threads on making git more space efficient, and
> eventually someone mentions tiny files and space fragmentation. Now that git
> object names are decoupled from their compression, it's easier to consider
> a variety of compr
Hello,
There have been a few threads on making git more space efficient, and
eventually someone mentions tiny files and space fragmentation. Now that git
object names are decoupled from their compression, it's easier to consider
a variety of compression algorithms. I whipped up a really sil
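The decoupling being referred to is that the object name is computed over the uncompressed content, so the on-disk codec can vary without renaming anything. A hedged sketch of that scheme (header format modeled on git's loose-object header; the two codecs are chosen purely for illustration):

```python
import bz2
import hashlib
import zlib

def object_name(kind, payload):
    # The name hashes a git-style "type size\0" header plus the
    # *uncompressed* payload; the compressed bytes never enter the
    # hash, so changing the storage codec changes no object name.
    header = b"%s %d\x00" % (kind, len(payload))
    return hashlib.sha1(header + payload).hexdigest()

payload = b"hello git\n"
name = object_name(b"blob", payload)
stored_zlib = zlib.compress(payload)  # one possible on-disk form
stored_bz2 = bz2.compress(payload)    # another codec, same name
```

Either stored form decompresses back to the same payload, and `name` is unaffected by the choice.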