On Wed, Apr 3, 2019 at 6:36 PM Jeff King wrote:
> I suspect we could do even better by storing and reusing not just the
> original blob between diffs, but the intermediate diff state (i.e., the
> hashes produced by xdl_prepare(), which should be usable between
> multiple diffs). That's quite a bit
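The idea above can be illustrated with a toy sketch (Python here, not git's actual C xdiff code; all names are made up for illustration): hash each blob's lines once, cache the hash arrays, and reuse them when the same blob takes part in several diffs.

```python
from collections import Counter

# Toy model: per-line hashing stands in for the preprocessing that
# xdl_prepare() does before a diff.  If the same parent blob is
# diffed against several children, the cached hash array is reused
# instead of being recomputed.  Names are illustrative, not git's.

def prepare(blob: str) -> list[int]:
    """Hash each line; a stand-in for xdiff's record preparation."""
    return [hash(line) for line in blob.splitlines()]

class DiffCache:
    def __init__(self):
        self._prepared = {}   # blob id -> cached line-hash array

    def prepared(self, blob_id, blob_text):
        if blob_id not in self._prepared:
            self._prepared[blob_id] = prepare(blob_text)
        return self._prepared[blob_id]

    def common_lines(self, a_id, a_text, b_id, b_text):
        """Crude 'diff': count matching line hashes (multiset overlap)."""
        ca = Counter(self.prepared(a_id, a_text))
        cb = Counter(self.prepared(b_id, b_text))
        return sum((ca & cb).values())

cache = DiffCache()
parent = "a\nb\nc\n"
# Diffing the same parent against two children hashes it only once.
print(cache.common_lines("parent", parent, "child1", "a\nb\nx\n"))  # 2
print(cache.common_lines("parent", parent, "child2", "a\ny\nc\n"))  # 2
```

In the real code the saving would come from skipping xdl_prepare()'s hashing and classification pass for the blob that is shared between consecutive diffs.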
On Wed, Apr 03, 2019 at 04:32:30PM +0700, Duy Nguyen wrote:
> That might explain why I could not see significant gain when blaming
> linux.git's MAINTAINERS file (0.5s was shaved out of 13s) even though
> the number of objects read was cut by half (8424 vs 15083).
I did a few timings, too, and ma
When a parent blob already has chunks queued up for blaming, dropping
the blob at the end of one blame step will cause it to get reloaded
right away, doubling the amount of I/O and unpacking when processing a
linear history.
Keeping such parent blobs in memory seems like a reasonable optimization.
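A minimal sketch of the optimization described above (Python pseudocode, not git's builtin/blame.c; blob loading and the blame machinery are mocked, and all names are hypothetical): when a blame step passes chunks to a parent, keep that parent's blob attached instead of freeing it, since the very next step over a linear history will need it again.

```python
# Toy model of one blame pass over a linear history.  Reading a blob
# stands in for the expensive object lookup + unpacking.  Keeping the
# parent's blob when chunks were passed to it lets the next step reuse
# it instead of reloading, halving the reads on a linear history.

reads = 0

def read_blob(commit):
    """Stand-in for object lookup and unpacking."""
    global reads
    reads += 1
    return f"contents of {commit}"

class Origin:
    def __init__(self, commit):
        self.commit = commit
        self.blob = None          # file contents, loaded on demand
        self.suspects = []        # blame chunks queued against this origin

def blame_step(origin, parent, keep_parent_blob=True):
    if origin.blob is None:
        origin.blob = read_blob(origin.commit)
    if parent.blob is None:
        parent.blob = read_blob(parent.commit)
    # ... diff origin.blob against parent.blob, pass blame chunks ...
    parent.suspects.append("chunk")
    origin.blob = None            # done with this origin; drop its blob
    if not keep_parent_blob:
        parent.blob = None        # old behaviour: reloaded next step

def blame(history, keep):
    global reads
    reads = 0
    origins = [Origin(c) for c in history]
    for cur, par in zip(origins, origins[1:]):
        blame_step(cur, par, keep_parent_blob=keep)
    return reads

linear = [f"commit{i}" for i in range(10)]
print(blame(linear, keep=False))  # 18 reads: every parent reloaded
print(blame(linear, keep=True))   # 10 reads: parent blob carried over
```

The 18-versus-10 ratio mirrors the "doubling the amount of I/O" claim: without retention, each of the 9 steps loads two blobs; with retention, only the first step loads two.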
Hi David,
On Sat, 28 May 2016, David Kastrup wrote:
> > The short version of your answer is that you will leave this patch in
> > its current form and address none of my concerns because you moved on,
> > correct? If so, that's totally okay, it just needs to be spelled out.
>
> Yes, that's it.
Johannes Schindelin writes:
> On Fri, 27 May 2016, David Kastrup wrote:
>
>> pressure particularly when the history contains lots of merges from
>> long-diverged branches. In practice, this optimization appears to
>> behave quite benignly,
>
> Why not just stop here?
Because there is a caveat.
Hi David,
it is good practice to Cc: the original author of the code in question, in
this case Junio. I guess he sees this anyway, but that is really just an
assumption.
On Fri, 27 May 2016, David Kastrup wrote:
> When a parent blob already has chunks queued up for blaming, dropping
> the blob a