On 14 November 2006 19:40, Joe Buck wrote:
> On Tue, Nov 14, 2006 at 07:15:19PM -0000, Dave Korn wrote:
>> Geert's followup explained this seeming anomaly: he means that the crude
>> high-level granularity of "make -j" is enough to keep all cpus busy at
>> 100%, and I'm fairly persuaded by the arguments that, at the moment,
>> that's sufficient.
On Tue, Nov 14, 2006 at 07:15:19PM -0000, Dave Korn wrote:
> Geert's followup explained this seeming anomaly: he means that the crude
> high-level granularity of "make -j" is enough to keep all cpus busy at 100%,
> and I'm fairly persuaded by the arguments that, at the moment, that's
> sufficient
"Dave Korn" <[EMAIL PROTECTED]> writes:
> > It's irrelevant to the main discussion here, but in fact there is a
> > fair amount of possible threading in the linker proper, quite apart
> > from LTO. The linker spends a lot of time reading large files, and
> > the I/O wait can be parallelized.
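As a rough sketch of the kind of linker I/O parallelism described above
(hypothetical code, not taken from GNU ld; the file names and the fixed
thread count are invented), each input file gets its own reader thread so
the I/O waits overlap instead of occurring one after another:

  #include <pthread.h>
  #include <stdio.h>
  #include <stdlib.h>

  struct input { const char *path; char *buf; long len; };

  /* Read one whole file into memory; runs on its own thread.  */
  static void *read_file (void *arg)
  {
    struct input *in = arg;
    FILE *f = fopen (in->path, "rb");
    if (!f) { in->len = -1; return NULL; }
    fseek (f, 0, SEEK_END);
    in->len = ftell (f);
    rewind (f);
    in->buf = malloc (in->len > 0 ? in->len : 1);
    if (in->buf)
      in->len = (long) fread (in->buf, 1, in->len, f);
    fclose (f);
    return NULL;
  }

  int main (void)
  {
    /* Hypothetical inputs; a linker would take these from argv.  */
    struct input inputs[3] =
      { { "a.o", NULL, 0 }, { "b.o", NULL, 0 }, { "c.o", NULL, 0 } };
    pthread_t tid[3];
    int i;

    for (i = 0; i < 3; i++)
      pthread_create (&tid[i], NULL, read_file, &inputs[i]);
    for (i = 0; i < 3; i++)
      {
        pthread_join (tid[i], NULL);
        printf ("%s: %ld bytes\n", inputs[i].path, inputs[i].len);
      }
    return 0;
  }

Built with gcc -pthread, the reads proceed concurrently, so total wall
time approaches that of the largest file rather than the sum of all three.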
On 14 November 2006 15:38, Robert Dewar wrote:
> Geert Bosch wrote:
>
>> Given that CPU usage is at 100% now for most jobs, such as
>> bootstrapping GCC, there is not much room for any improvement
>> through threading.
>
> Geert, I find this a bit incomprehensible, the whole point
> of threading is to increase CPU availability by using multiple cores.
On 14 November 2006 18:30, Ian Lance Taylor wrote:
> "Dave Korn" <[EMAIL PROTECTED]> writes:
>
>>> The main place where threading may make sense, especially
>>> with LTO, is the linker. This is a longer lived task, and
>>> is the last step of compilation, where no other parallel
>>> processes are active. Moreover, linking tends to be I/O
>>> intensive
"Dave Korn" <[EMAIL PROTECTED]> writes:
> > The main place where threading may make sense, especially
> > with LTO, is the linker. This is a longer lived task, and
> > is the last step of compilation, where no other parallel
> > processes are active. Moreover, linking tends to be I/O
> > intensive
On Nov 14, 2006, at 12:49, Bill Wendling wrote:
I'll mention a case where compilation was wickedly slow even
when using -j#. At The MathWorks, the system could take >45 minutes
to compile. (This was partially due to the fact that the files were
located on an NFS-mounted drive. But also because C[...]
On Nov 10, 2006, at 9:08 PM, Geert Bosch wrote:
Most people aren't waiting for compilation of single files.
If they do, it is because a single compilation unit requires
parsing/compilation of too many unchanging files, in which case
the primary concern is avoiding redoing useless compilation.
[...]
Geert Bosch wrote:
Given that CPU usage is at 100% now for most jobs, such as
bootstrapping GCC, there is not much room for any improvement
through threading.
Geert, I find this a bit incomprehensible, the whole point
of threading is to increase CPU availability by using
multiple cores.
Even [...]
On Nov 13, 2006, at 21:27, Dave Korn wrote:
To be fair, Mike was talking about multi-core SMP, not threading
on a single
cpu, so given that CPU usage is at 100% now for most jobs, there is
an Nx100%
speedup to gain from using 1 thread on each of N cores.
I'm mostly building GCC on multip[...]
On 14 November 2006 01:51, Geert Bosch wrote:
> On Nov 11, 2006, at 03:21, Mike Stump wrote:
>> The cost of my assembler is around 1.0% (ppc) to 1.4% (x86)
>> overhead as measured with -pipe -O2 on expr.c. If it was
>> converted, what type of speedup would you expect?
>
> Given that CPU usage is at 100% now for most jobs, such as
> bootstrapping GCC, there is not much room for any improvement
> through threading.
On Nov 11, 2006, at 03:21, Mike Stump wrote:
The cost of my assembler is around 1.0% (ppc) to 1.4% (x86)
overhead as measured with -pipe -O2 on expr.c. If it was
converted, what type of speedup would you expect?
Given that CPU usage is at 100% now for most jobs, such as
bootstrapping GCC, there is not much room for any improvement
through threading.
[EMAIL PROTECTED] wrote:
Each of the functions in a C/C++ program is dependent on
the global environment, but each is independent of each other.
Separate threads could process the tree/RTL for each function
independently, with the results merged on completion. This
may interact adversely with some global optimizations [...]
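A minimal model of that per-function scheme (the function names are
invented, and the "optimization" is a stand-in for real tree/RTL work;
this illustrates the idea, it is not GCC code): each thread touches only
its own job, and the main thread merges the results in source order.

  #include <pthread.h>
  #include <stdio.h>

  struct fn_job { const char *name; int insns_in; int insns_out; };

  /* Stand-in for real per-function work.  Each thread touches only
     its own job, so no locking is needed.  */
  static void *optimize_fn (void *arg)
  {
    struct fn_job *job = arg;
    job->insns_out = job->insns_in * 3 / 4;
    return NULL;
  }

  int main (void)
  {
    struct fn_job jobs[3] =
      { { "main", 40, 0 }, { "foo", 120, 0 }, { "bar", 75, 0 } };
    pthread_t tid[3];
    int i;

    for (i = 0; i < 3; i++)
      pthread_create (&tid[i], NULL, optimize_fn, &jobs[i]);

    /* "Merge on completion": join in source order, so the output is
       deterministic no matter which thread finished first.  */
    for (i = 0; i < 3; i++)
      {
        pthread_join (tid[i], NULL);
        printf ("%s: %d -> %d insns\n",
                jobs[i].name, jobs[i].insns_in, jobs[i].insns_out);
      }
    return 0;
  }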
Paul Brook wrote:
> >For other optimisations I'm not convinced there's an easy win compared
> >with make -j. You have to make sure those passes don't have any global
> >state, and as other people have pointed out garbage collection gets messy.
> >The compile server project did something similar
Paul Brook wrote:
For other optimisations I'm not convinced there's an easy win compared with
make -j. You have to make sure those passes don't have any global state, and
as other people have pointed out garbage collection gets messy. The compile
server project did something similar, and that [...]
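A tiny illustration of the global-state problem Paul raises (all names
here are invented for the example): the first pass keeps its bookkeeping
in a file-scope variable and would race if two threads ran it at once;
the second carries the same state in an explicit context, which is the
shape a threaded pass would need.

  #include <stdio.h>

  /* Not thread-safe: the pass keeps its bookkeeping in hidden
     global state, so two threads running it would race.  */
  static int n_folded;

  static void fold_pass_global (int n_exprs)
  {
    n_folded += n_exprs;
  }

  /* Thread-safe shape: the same state moved into an explicit,
     per-invocation context.  */
  struct pass_ctx { int n_folded; };

  static void fold_pass (struct pass_ctx *ctx, int n_exprs)
  {
    ctx->n_folded += n_exprs;   /* touches only the caller's context */
  }

  int main (void)
  {
    struct pass_ctx a = { 0 }, b = { 0 };

    fold_pass_global (17);   /* fine single-threaded, racy otherwise  */
    fold_pass (&a, 10);      /* safe even from concurrent threads:    */
    fold_pass (&b, 7);       /* one context per function being built  */
    printf ("%d %d %d\n", n_folded, a.n_folded, b.n_folded);
    return 0;
  }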
Ross Ridge wrote:
>Umm... those 80 processors that Intel is talking about are more like the
>8 coprocessors in the Cell CPU.
Michael Eager wrote:
>No, the Cell is an asymmetrical (vintage 2000) architecture.
The Cell CPU as a whole is asymmetrical, but I'm only comparing the
design to the 8 identical [...]
> Each of the functions in a C/C++ program is dependent on
> the global environment, but each is independent of each other.
> Separate threads could process the tree/RTL for each function
> independently, with the results merged on completion. This
> may interact adversely with some global optimizations [...]
Geert Bosch wrote:
Most of my compilations (on Linux, at least) use close
to 100% of CPU. Adding more overhead for threading and
communication/synchronization can only hurt.
On a single-processor system, adding overhead for multi-
threading does reduce performance. On a multi-processor
system [...]
Mike Stump wrote:
Thoughts?
Parallelizing GCC is an interesting problem for a couple
reasons: First, the problem is inherently sequential.
Second, GCC expects that each step in the process happens
in order, one after the other.
Most invocations of GCC are part of a "cluster" of similar
invocations [...]
Ross Ridge wrote:
Mike Stump writes:
We're going to have to think seriously about threading the compiler. Intel
predicts 80 cores in the near future (5 years). [...] To use this many
cores for a single compile, we have to find ways to split the work. The
best way, of course, is to have make -j80 do that for us [...]
> > whole-program optimisation and SMP machines have been around for a
> > fair while now, so I'm guessing not.
>
> I don't know of anything that is particularly hard about it, but, if
> you know of bits that are hard, or have pointer to such, I'd be
> interested in it.
You imply you're considering [...]
On Sat, Nov 11, 2006 at 04:16:19PM +, Paul Brook wrote:
> I don't know how much of the memory allocated is global readonly data (ie.
> suitable for sharing between threads). I wouldn't be surprised if it's a
> relatively small fraction.
I don't have numbers on global readonly, but in typical [...]
> Let's just say, the CPU is doomed.
So you're building consensus for something that is doomed?
> > Seriously thought I don't really understand what sort of response
> > you're expecting.
>
> Just consensus building.
To build a consensus you have to have something for people to agree or
disagree with.
On Nov 10, 2006, at 9:08 PM, Geert Bosch wrote:
I'd guess we win more by writing object files directly to disk like
virtually every other compiler on the planet.
The cost of my assembler is around 1.0% (ppc) to 1.4% (x86) overhead
as measured with -pipe -O2 on expr.c. If it was converted, what type
of speedup would you expect?
On Fri, 10 Nov 2006, Mike Stump wrote:
On Nov 10, 2006, at 12:46 PM, H. J. Lu wrote:
Will using C++ help or hurt compiler parallelism? Does it really matter?
I'm not an expert, but, in the simple world I want, I want it to not matter
in the least. For the people writing most code in the compiler [...]
On Nov 10, 2006, at 5:43 PM, Paul Brook wrote:
Can you make it run on my graphics card too?
:-) You know all the power on a bleeding edge system is in the GPU
now. People are already starting to migrate data processing for
their applications to it. Don't bet against it. In fact, we hide [...]
On Sat, 2006-11-11 at 00:08 -0500, Geert Bosch wrote:
> Most of my compilations (on Linux, at least) use close
> to 100% of CPU. Adding more overhead for threading and
> communication/synchronization can only hurt.
In my daily work, I take processes that run 100% and make them use 100%
in less time.
On 2006-11-11, at 06:08, Geert Bosch wrote:
Just compiling
int main() { puts ("Hello, world!"); return 0; }
takes 342 system calls on my Linux box, most of them
related to creating processes, repeated dynamic linking,
and other initialization stuff, and reading and writing
temporary files for [...]
On Nov 10, 2006, at 9:08 PM, Geert Bosch wrote:
The common case is that people just don't use the -j feature
of make because
1) they don't know about it
2) their IDE doesn't know about it
3) they got burned by bad Makefiles
4) it's just too much typing
Don't forget:
5) running 4 GCC[...]
Most people aren't waiting for compilation of single files.
If they do, it is because a single compilation unit requires
parsing/compilation of too many unchanging files, in which case
the primary concern is avoiding redoing useless compilation.
The common case is that people just don't use the -j feature of
make [...]
On 11/10/06, Mike Stump <[EMAIL PROTECTED]> wrote:
On Nov 10, 2006, at 12:46 PM, H. J. Lu wrote:
> Will using C++ help or hurt compiler parallelism? Does it really matter?
I'm not an expert, but, in the simple world I want, I want it to not
matter in the least. For the people writing most code in the
compiler, I want clear simple rules for the [...]
> The competition is already starting to make progress in this area.
>
> We don't want to spend time in locks or spinning and we don't want to
> litter our code with such things, so, if we form areas that are fairly
> well isolated and independent and then have a manager manage the
> compilation process [...]
Mike Stump writes:
>We're going to have to think seriously about threading the compiler. Intel
>predicts 80 cores in the near future (5 years). [...] To use this many
>cores for a single compile, we have to find ways to split the work. The
>best way, of course, is to have make -j80 do that for us, t[...]
On Nov 10, 2006, at 2:19 PM, Kevin Handy wrote:
What will the multi-core compiler design do to the old processors
(extreme slowness?)
Roughly speaking, I want it to add around 1000 extra instructions per
function compiled, in other words, nothing. The compile speed will
be what the compiler [...]
On Fri, 2006-11-10 at 22:49 +0100, Marcin Dalecki wrote:
> > I don't think it can possibly hurt as long as people follow normal C++
> > coding rules.
>
> Contrary to C, there is no single general coding style for C++. In
> fact, for a project of such a scale this may indeed be the most
> significant [...]
Mike Stump wrote:
...
Thoughts?
Raw thoughts:
1. Threading isn't going to help for I/O bound portions.
2. The OS should already be doing some of the work of threading.
Some 'parts' of the compiler should already be using CPUs: 'make',
the front-end (gcc) command, the language compiler, t[...]
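Point 2 is already visible in the existing driver: with -pipe, the
compiler proper and the assembler run concurrently, connected by a pipe.
A stripped-down model of that process-level overlap (the two stages here
are stand-ins, echo and cat, not cc1 and as):

  #include <sys/wait.h>
  #include <unistd.h>

  int main (void)
  {
    int fd[2];

    if (pipe (fd) != 0)
      return 1;

    if (fork () == 0)
      {
        /* First stage ("compiler"): write to the pipe.  */
        dup2 (fd[1], 1);
        close (fd[0]); close (fd[1]);
        execlp ("echo", "echo", "stand-in assembler input", (char *) NULL);
        _exit (127);
      }
    if (fork () == 0)
      {
        /* Second stage ("assembler"): read from the pipe.  */
        dup2 (fd[0], 0);
        close (fd[0]); close (fd[1]);
        execlp ("cat", "cat", (char *) NULL);
        _exit (127);
      }
    close (fd[0]); close (fd[1]);
    wait (NULL);
    wait (NULL);   /* both stages ran at the same time */
    return 0;
  }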
On Fri, Nov 10, 2006 at 01:33:42PM -0800, Sohail Somani wrote:
> I don't think it can possibly hurt as long as people follow normal C++
> coding rules.
>
> The main issue is not really language choice though. The main issues
> would likely be defining data to be isolated enough to be use[...]
On 2006-11-10, at 22:33, Sohail Somani wrote:
On Fri, 2006-11-10 at 12:46 -0800, H. J. Lu wrote:
On Fri, Nov 10, 2006 at 12:38:07PM -0800, Mike Stump wrote:
How many hunks do we need, well, today I want 8 for 4.2 and 16 for
mainline, each release, just 2x more. I'm assuming nice, equal
sized hunks. For larger variations in hunk size, I'd need even more
hunks.
On 2006-11-10, at 21:46, H. J. Lu wrote:
On Fri, Nov 10, 2006 at 12:38:07PM -0800, Mike Stump wrote:
How many hunks do we need, well, today I want 8 for 4.2 and 16 for
mainline, each release, just 2x more. I'm assuming nice, equal sized
hunks. For larger variations in hunk size, I'd need even more hunks.
On Fri, 2006-11-10 at 13:31 -0800, Mike Stump wrote:
> On Nov 10, 2006, at 12:46 PM, H. J. Lu wrote:
> > Will using C++ help or hurt compiler parallelism? Does it really matter?
>
> I'm not an expert, but, in the simple world I want, I want it to not
> matter in the least. For the people writing most code in the
> compiler, I want clear simple rules for the [...]
On Fri, 2006-11-10 at 12:46 -0800, H. J. Lu wrote:
> On Fri, Nov 10, 2006 at 12:38:07PM -0800, Mike Stump wrote:
> > How many hunks do we need, well, today I want 8 for 4.2 and 16 for
> > mainline, each release, just 2x more. I'm assuming nice, equal sized
> > hunks. For larger variations in hunk size, I'd need even more hunks.
On Nov 10, 2006, at 12:46 PM, H. J. Lu wrote:
Will using C++ help or hurt compiler parallelism? Does it really matter?
I'm not an expert, but, in the simple world I want, I want it to not
matter in the least. For the people writing most code in the
compiler, I want clear simple rules for the [...]
On Fri, Nov 10, 2006 at 12:38:07PM -0800, Mike Stump wrote:
> How many hunks do we need, well, today I want 8 for 4.2 and 16 for
> mainline, each release, just 2x more. I'm assuming nice, equal sized
> hunks. For larger variations in hunk size, I'd need even more hunks.
>
> Or, so that is ju[...]
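One way to see why larger size variation demands more hunks (a toy
scheduling experiment with made-up cost numbers, not anything measured
from the GCC build): hand the same total amount of work to 4 workers,
first as a few uneven hunks, then as many smaller ones, giving each hunk
to the currently least-loaded worker.

  #include <stdio.h>

  #define NWORKERS 4

  /* Greedy assignment: each hunk goes to the least-loaded worker.
     Returns the finish time, i.e. the busiest worker's total.  */
  static int makespan (const int *hunk, int n)
  {
    int load[NWORKERS] = { 0 };
    int i, j, min, max;

    for (i = 0; i < n; i++)
      {
        min = 0;
        for (j = 1; j < NWORKERS; j++)
          if (load[j] < load[min])
            min = j;
        load[min] += hunk[i];
      }
    max = load[0];
    for (j = 1; j < NWORKERS; j++)
      if (load[j] > max)
        max = load[j];
    return max;
  }

  int main (void)
  {
    /* 160 units of work in both cases, so with 4 workers the ideal
       finish time is 40.  */
    static const int coarse[8] = { 70, 40, 20, 10, 8, 6, 4, 2 };
    static const int fine[18] =   /* same work, no piece above 10 */
      { 10, 10, 10, 10, 10, 10, 10, 10, 10, 10,
        10, 10, 10, 10, 8, 6, 4, 2 };

    printf ("8 uneven hunks:   finish at %d (ideal 40)\n",
            makespan (coarse, 8));
    printf ("18 smaller hunks: finish at %d (ideal 40)\n",
            makespan (fine, 18));
    return 0;
  }

With the coarse split, the single 70-unit hunk dominates and three
workers go idle; with the finer split, the same work finishes at the
ideal time.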