> From: David Lang
> well, as others noted, the problem is actually caused by doing the diffs, and
> that is something that is a very common thing to do with source code.
To some degree, my attitude comes from When I Was A Boy, when you got
16k for both your bytecode and your data, so you never
On Wed, 28 May 2014, Junio C Hamano wrote:
David Lang writes:
On Wed, 28 May 2014, Dale R. Worley wrote:
It seems that much of Git was coded under the assumption that any file
could always be held entirely in RAM. Who made that mistake? Are
people so out of touch with reality?
Git was designed to track source code, there are warts that show up
in the implementation when you use individual files >4GB
On Wed, 28 May 2014, Dale R. Worley wrote:
From: David Lang
Git was designed to track source code, there are warts that show up
in the implementation when you use individual files >4GB
I'd expect that if you want to deal with files over 100k, you should
assume that it doesn't all fit in memory.
David Lang writes:
> On Wed, 28 May 2014, Dale R. Worley wrote:
>
>> It seems that much of Git was coded under the assumption that any file
>> could always be held entirely in RAM. Who made that mistake? Are
>> people so out of touch with reality?
>
> Git was designed to track source code, there are warts that show up
> in the implementation when you use individual files >4GB
> From: David Lang
> Git was designed to track source code, there are warts that show up
> in the implementation when you use individual files >4GB
I'd expect that if you want to deal with files over 100k, you should
assume that it doesn't all fit in memory.
Dale
On Wed, 28 May 2014, Dale R. Worley wrote:
From: Duy Nguyen
I don't know how many commands are hit by this. If you have time and
gdb, please put a break point in die_builtin() function and send
backtraces for those that fail. You could speed up the process by
creating a smaller file and set
> From: Junio C Hamano
> You need to have enough memory (virtual is fine if you have enough
> time) to do fsck. Some part of index-pack could be refactored into
> a common helper function that could be called from fsck, but I think
> it would be a lot of work.
How much memory is "enough"? And
> From: Duy Nguyen
> I don't know how many commands are hit by this. If you have time and
> gdb, please put a break point in die_builtin() function and send
> backtraces for those that fail. You could speed up the process by
> creating a smaller file and set the environment variable
> GIT_ALLOC_L
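Duy's suggestion can also be run non-interactively; a minimal sketch, assuming gdb is installed and the git binary was built with debug symbols (the command line is illustrative, run it from inside the affected repository):

```shell
# Stop in die_builtin() and print a backtrace when git fsck dies.
gdb -batch \
    -ex 'break die_builtin' \
    -ex 'run' \
    -ex 'backtrace' \
    --args git fsck --full --strict
```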
Duy Nguyen writes:
>> $ git fsck --full --strict
>> notice: HEAD points to an unborn branch (master)
>> Checking object directories: 100% (256/256), done.
>> fatal: Out of memory, malloc failed (tried to allocate 21474836481 bytes)
>
> Back trace for this one
> ...
> Not easy to f
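For scale, the failed allocation in the fsck output above is exactly one byte over 20 GiB; the arithmetic is easy to check:

```shell
# Allocation size taken from the fsck error message above.
n=21474836481
gib=$((1 << 30))                  # bytes per GiB
echo $((n / gib)) $((n % gib))    # prints: 20 1
```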
On 27.05.2014 18:47, Dale R. Worley wrote:
> Even doing a 'git reset' does not put the repository in a state where
> 'git fsck' will complete:
You also have to remove the offending commit from the reflog.
The following snippet creates an offending commit; big_file is 2GB, which
is too large for
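The snippet itself was truncated by the archive; a minimal sketch of the workflow, assuming a POSIX shell and git on PATH (a small file stands in for the 2GB big_file, and the identity settings are illustrative):

```shell
set -e
git init -q demo && cd demo
git -c user.name=t -c user.email=t@example.com \
    commit -q --allow-empty -m 'base commit'
# stand-in for the 2GB big_file from the message above
dd if=/dev/zero of=big_file bs=1024 count=1 2>/dev/null
git add big_file
git -c user.name=t -c user.email=t@example.com \
    commit -q -m 'offending commit'
# 'git reset' alone is not enough: the reflog still pins the commit
git reset -q --hard HEAD~1
git reflog expire --expire=now --all
git gc -q --prune=now
git fsck --full   # completes once the reflog entry is gone
```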
On Tue, May 27, 2014 at 11:47 PM, Dale R. Worley wrote:
> I've discovered a problem using Git. It's not clear to me what the
> "correct" behavior should be, but it seems to me that Git is failing
> in an undesirable way.
>
> The problem arises when trying to handle a very large file. For
> example:
I've discovered a problem using Git. It's not clear to me what the
"correct" behavior should be, but it seems to me that Git is failing
in an undesirable way.
The problem arises when trying to handle a very large file. For
example:
$ git --version
git version 1.8.3.1
$ mkdir $$
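The transcript is cut off here in the archive. Judging from the rest of the thread, a hypothetical reproduction (sizes and names illustrative, not Dale's original commands) would look something like:

```shell
# Hypothetical reproduction; needs several GB of disk and only
# fails on a machine without enough (virtual) memory.
mkdir repro && cd repro
git init -q
dd if=/dev/zero of=big_file bs=1M count=5120   # ~5 GiB, illustrative
git add big_file
git commit -m 'add big_file'
git fsck --full --strict   # this is where "fatal: Out of memory,
                           # malloc failed" shows up in the thread
```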