On Sun, 27 Sep 2020, Rong Chen wrote:
> Hi Nicolas,
>
> Thanks for the feedback; the error still remains with gcc 10.2.0:
I've created the simplest test case possible. You won't believe it.
Test case:
$ cat test.c
unsigned int test(unsigned int x, unsigned long long y)
{
y /= 0x2000
On Wed, 5 Dec 2007, Harvey Harrison wrote:
>
> > git repack -a -d --depth=250 --window=250
> >
>
> Since I have the whole gcc repo locally I'll give this a shot overnight
> just to see what can be done at the extreme end of things.
Don't forget to add -f as well.
Nicolas
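Putting those two messages together, the full invocation under discussion would presumably be something like the following (a sketch; the --depth and --window values are just the ones quoted above):

    git repack -a -d -f --depth=250 --window=250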
On Thu, 6 Dec 2007, Jeff King wrote:
> On Thu, Dec 06, 2007 at 01:47:54AM -0500, Jon Smirl wrote:
>
> > The key to converting repositories of this size is RAM. 4GB minimum,
> > more would be better. git-repack is not multi-threaded. There were a
> > few attempts at making it multi-threaded but no
On Thu, 6 Dec 2007, Theodore Tso wrote:
> Linus later pointed out that what we *really* should do at some
> point is to change repack -f to potentially retry to find a better
> delta, but to reuse the existing delta if it was no worse. That
> automatically does the right thing in the case whe
On Thu, 6 Dec 2007, Jeff King wrote:
> On Thu, Dec 06, 2007 at 09:18:39AM -0500, Nicolas Pitre wrote:
>
> > > The downside is that the threading partitions the object space, so the
> > > resulting size is not necessarily as small (but I don't know that
> >
On Thu, 6 Dec 2007, Jon Smirl wrote:
> On 12/6/07, Linus Torvalds <[EMAIL PROTECTED]> wrote:
> >
> >
> > On Thu, 6 Dec 2007, Jeff King wrote:
> > >
> > > What is really disappointing is that we saved only about 20% of the
> > > time. I didn't sit around watching the stages, but my guess is that we
On Thu, 6 Dec 2007, Jon Smirl wrote:
> On 12/6/07, Nicolas Pitre <[EMAIL PROTECTED]> wrote:
> > > When I last looked at the code, the problem was in evenly dividing
> > > the work. I was using a four core machine and most of the time one
> > > core wo
On Thu, 6 Dec 2007, Jon Smirl wrote:
> On 12/6/07, Nicolas Pitre <[EMAIL PROTECTED]> wrote:
> > On Thu, 6 Dec 2007, Jon Smirl wrote:
> >
> > > On 12/6/07, Nicolas Pitre <[EMAIL PROTECTED]> wrote:
> > > > > When I last looked at the code, the p
On Thu, 6 Dec 2007, Jon Smirl wrote:
> I have a 4.8GB git process with 4GB of physical memory. Everything
> started slowing down a lot when the process got that big. Does git
> really need 4.8GB to repack? I could only keep 3.4GB resident. Luckily
> this happened at 95% completion. With 8GB of memor
On Fri, 7 Dec 2007, Jon Smirl wrote:
> On 12/7/07, Linus Torvalds <[EMAIL PROTECTED]> wrote:
> >
> >
> > On Thu, 6 Dec 2007, Jon Smirl wrote:
> > > >
> > > > time git blame -C gcc/regclass.c > /dev/null
> > >
> > > [EMAIL PROTECTED]:/video/gcc$ time git blame -C gcc/regclass.c > /dev/null
On Mon, 10 Dec 2007, Gabriel Paubert wrote:
> On Fri, Dec 07, 2007 at 04:47:19PM -0800, Harvey Harrison wrote:
> > Some interesting stats from the highly packed gcc repo. The long chain
> > lengths very quickly tail off. Over 60% of the objects have a chain
> > length of 20 or less. If anyone w
On Tue, 11 Dec 2007, Jon Smirl wrote:
> I added the gcc people to the CC, it's their repository. Maybe they
> can help us sort this out.
Unless there is a Git expert amongst the gcc crowd, I somehow doubt it.
And gcc people with an interest in Git internals are probably already on
the Git maili
On Tue, 11 Dec 2007, Jon Smirl wrote:
> Switching to the Google perftools malloc
> http://goog-perftools.sourceforge.net/
>
> 10% 30 828M
> 20% 15 831M
> 30% 10 834M
> 40% 50 1014M
> 50% 80 1086M
> 60% 80 1500M
> 70% 200 1.53G
> 80% 200 1.85G
> 90% 260 1.87G
> 95% 520 1.97G
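For reference, the usual way to try the Google perftools allocator on a single run is to preload tcmalloc; the library path below is only an illustration and differs between distributions:

    LD_PRELOAD=/usr/lib/libtcmalloc.so git repack -a -d -f --depth=250 --window=250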
On Tue, 11 Dec 2007, Nicolas Pitre wrote:
> And yet, this is still missing the actual issue. The issue being that
> the 2.1GB pack as a _source_ doesn't cause as much memory to be
> allocated even if the _result_ pack ends up being the same.
>
> I was able to repack
On Tue, 11 Dec 2007, Nicolas Pitre wrote:
> OK, here's something else for you to try:
>
> core.deltabasecachelimit=0
> pack.threads=2
> pack.deltacachesize=1
>
> With that I'm able to repack the small gcc pack on my machine with 1GB
> of ram
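Spelled out as commands, those settings amount to roughly this (the values are exactly the ones quoted; the repack options are the ones used elsewhere in the thread):

    git config core.deltabasecachelimit 0
    git config pack.threads 2
    git config pack.deltacachesize 1
    git repack -a -f --window=250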
On Tue, 11 Dec 2007, Linus Torvalds wrote:
> That said, I suspect there are a few things fighting you:
>
> - threading is hard. I haven't looked a lot at the changes Nico did to do
>   a threaded object packer, but what I've seen does not convince me it is
>   correct. The "trg_entry" access
On Tue, 11 Dec 2007, David Miller wrote:
> From: Nicolas Pitre <[EMAIL PROTECTED]>
> Date: Tue, 11 Dec 2007 12:21:11 -0500 (EST)
>
> > BUT. The point is that repacking the gcc repo using "git repack -a -f
> > --window=250" has a radically different memory
On Tue, 11 Dec 2007, Jon Smirl wrote:
> This makes sense. Those runs that blew up to 4.5GB were a combination
> of this effect and fragmentation in the glibc allocator.
I disagree. This is insane.
> Google allocator appears to be much better at controlling fragmentation.
Indeed. And if fragment
On Tue, 11 Dec 2007, Jon Smirl wrote:
> On 12/11/07, Nicolas Pitre <[EMAIL PROTECTED]> wrote:
> > On Tue, 11 Dec 2007, Nicolas Pitre wrote:
> >
> > > OK, here's something else for you to try:
> > >
> > > core.deltabasecachelimit=0
> >
On Wed, 12 Dec 2007, Nicolas Pitre wrote:
> Add memory fragmentation to that and you have a clogged system.
>
> Solution:
>
> pack.deltacachesize=1
> pack.windowmemory=16M
>
> Limiting the window memory to 16MB will automatically shrink the window
>
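In command form the same advice reads roughly as follows (16m relies on git's usual k/m/g size suffixes):

    git config pack.deltacachesize 1
    git config pack.windowmemory 16m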
On Wed, 12 Dec 2007, Nicolas Pitre wrote:
> I did modify the progress display to show accounted memory that was
> allocated vs memory that was freed but still not released to the system.
> At least that gives you an idea of memory allocation and fragmentation
> with glibc
On Fri, 14 Dec 2007, Paolo Bonzini wrote:
> > Hmmm... it is even documented in git-gc(1)... and git-index-pack(1) of
> > all things.
>
> I found that the .keep file is not transmitted over the network (at least I
> tried with git+ssh:// and http:// protocols), however.
That is a local policy.
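For context, a .keep file is nothing more than a marker dropped next to a pack in the local object store, which is why it never travels over the wire; creating one by hand is simply (the pack name here is a placeholder):

    touch .git/objects/pack/pack-<sha1>.keep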