Regarding the GCC Binaries and Build status pages

2010-08-11 Thread Dennis Clarke

Dear GCC Team :

This is just a friendly letter. There probably will not be another GCC
update from the Sunfreeware site (which is still showing 3.4.6) for a
long time now that Oracle has pulled the funding. The same sad state of
affairs affects the OpenSolaris project as a whole.

I expect that the Blastwave site will release formal 4.5.1 packages to
the world sometime in the next week, and there should be at least some
mention on this page, given that we have released packages for GCC 4.x
(4.0.1, 4.3.4, etc.) for four years now:

http://gcc.gnu.org/install/binaries.html

Also, we have some excellent GCC 4.5.1 test results on Solaris 8 SPARC as
well as i386 (32-bit):

http://gcc.gnu.org/ml/gcc-testresults/2010-08/msg01024.html

http://gcc.gnu.org/ml/gcc-testresults/2010-08/msg01023.html

We have a team of people who perform GCC builds and tests on the Solaris
production releases, and we are happy to release these packages in SVR4
package format to Solaris users. It would be nice if there were some
mention on the "Binaries" page that we have been providing this service.

A reference to the test-suite results is always nice as well. We have been
building 4.5.1 since the pre-release and it seems to be an excellent
compiler. In fact, we are moving our entire internal build process over
to GCC 4.5.1 and saying goodbye to Sun Studio now that Oracle has stepped
in and taken over Sun.

If you have any questions or need Solaris-based test accounts, please feel
free to ask.

-- 
Dennis Clarke  2010 OpenSolaris Governance Board Member
dcla...@opensolaris.ca  <- Email related to the open source Solaris
dcla...@blastwave.org   <- Email related to open source for Solaris




Re: Question about tree-switch-conversion.c

2010-08-11 Thread Richard Guenther
On Tue, Aug 10, 2010 at 6:48 PM, Ian Bolton  wrote:
> I am in the process of fixing PR44328
> (http://gcc.gnu.org/bugzilla/show_bug.cgi?id=44328)
>
> The problem is that gen_inbound_check in tree-switch-conversion.c subtracts
> info.range_min from info.index_expr, which can cause the MIN and MAX values
> for info.index_expr to become invalid.
>
> For example:
>
>
> typedef enum {
>  FIRST = 0,
>  SECOND,
>  THIRD,
>  FOURTH
> } ExampleEnum;
>
> int dummy (const ExampleEnum e)
> {
>  int mode = 0;
>  switch (e)
>  {
>    case SECOND: mode = 20; break;
>    case THIRD: mode = 30; break;
>    case FOURTH: mode = 40; break;
>  }
>  return mode;
> }
>
>
> tree-switch-conversion would like to create a lookup table for this, so
> that SECOND maps to entry 0, THIRD maps to entry 1 and FOURTH maps to
> entry 2.  It achieves this by subtracting SECOND from index_expr.  The
> problem is that after the subtraction, the type of the result can have a
> value outside the range 0-3.
>
> Later, when tree-vrp.c sees the inbound check as being <= 2 with a possible
> range for the type as 0-3, it converts the <=2 into a != 3, which is
> totally wrong.  If e==FIRST, then we can end up looking for entry 255 in
> the lookup table!
>
> I think the solution is to update the type of the result of the subtraction
> to show that it is no longer in the range 0-3, but I have had trouble
> implementing this.  The attached patch (based off 4.5 branch) shows my
> current approach, but I ran into LTO issues:
>
> lto1: internal compiler error: in lto_get_pickled_tree, at lto-streamer-in.c
>
> I am guessing this is because the debug info for the type does not match
> the new range I have set for it.
>
> Is there a *right* way to update the range such that LTO doesn't get
> unhappy?  (Maybe a cast with fold_convert_loc would be right?)

The fix is to always use a standard unsigned integer type for the new index
value.  lang_hooks.types.type_for_mode (TYPE_MODE (old-idx-type), 1)
would give you one.

Richard.


Re: Regarding the GCC Binaries and Build status pages

2010-08-11 Thread Richard Guenther
On Wed, Aug 11, 2010 at 9:32 AM, Dennis Clarke  wrote:
>
> Dear GCC Team :
>
> This is just a friendly letter. There probably will not be another GCC
> update from the Sunfreeware site (which is still showing 3.4.6) for a
> long time now that Oracle has pulled the funding. The same sad state of
> affairs affects the OpenSolaris project as a whole.
>
> I expect that the Blastwave site will release formal 4.5.1 packages to
> the world sometime in the next week, and there should be at least some
> mention on this page, given that we have released packages for GCC 4.x
> (4.0.1, 4.3.4, etc.) for four years now:
>
>    http://gcc.gnu.org/install/binaries.html
>
> Also, we have some excellent GCC 4.5.1 test results on Solaris 8 SPARC as
> well as i386 (32-bit):
>
>    http://gcc.gnu.org/ml/gcc-testresults/2010-08/msg01024.html
>
>    http://gcc.gnu.org/ml/gcc-testresults/2010-08/msg01023.html
>
> We have a team of people who perform GCC builds and tests on the Solaris
> production releases, and we are happy to release these packages in SVR4
> package format to Solaris users. It would be nice if there were some
> mention on the "Binaries" page that we have been providing this service.

If you provide a patch for the binaries page I am sure that Gerald will
have a look.

Thanks,
Richard.


Patch PR c++/45200

2010-08-11 Thread Dodji Seketeli
Hello,

In the example accompanying the patch below we consider that the
types

 forward_as_lref<typename seq::seq_type>

at the line marked with //#0 and

 forward_as_lref<typename remove_reference<Seq>::type::seq_type>

at the line marked with //#1 should compare equal. And I believe that
is correct[1].

It follows that during the instantiation of apply<reverse_view&>,
lookup_class_template looks up an instantiation for
forward_as_lref<typename remove_reference<reverse_view&>::type::seq_type>
and returns forward_as_lref<typename seq::seq_type>. Then the
tsubst'ing of forward_as_lref<typename seq::seq_type> in the
context of template<typename Seq> struct apply fails because it
tries to reuse the typedef seq that was defined in the
incompatible context template<typename Seq> struct
apply1.

This case seems to argue for stripping typedefs from the
TYPE_CONTEXT of TYPENAME_TYPEs when they are passed as template
arguments. I refrained from doing it before because I thought
that incompatible_dependent_types_p would be enough to handle all
comparisons involving dependent typedefs.

[1]: This is based on the resolution of PR c++/43800 at
http://gcc.gnu.org/ml/gcc-patches/2010-04/msg01241.html

Tested on x86_64-unknown-linux-gnu against trunk and the 4.5
branch. OK to commit to both branches?

commit d9a0f93c1fda1d97cda5747003de84edd3812bda
Author: Dodji Seketeli 
Date:   Mon Aug 9 23:12:39 2010 +0200

Fix PR c++/45200

gcc/cp/ChangeLog:
PR c++/45200
* tree.c (strip_typedefs): Strip typedefs from the context of
TYPENAME_TYPEs.

gcc/testsuite/ChangeLog:
PR c++/45200
* g++.dg/template/typedef34.C: New test.

diff --git a/gcc/cp/tree.c b/gcc/cp/tree.c
index 450b9e8..6b2aab0 100644
--- a/gcc/cp/tree.c
+++ b/gcc/cp/tree.c
@@ -1046,6 +1046,11 @@ strip_typedefs (tree t)
TYPE_RAISES_EXCEPTIONS (t));
   }
   break;
+case TYPENAME_TYPE:
+  result = make_typename_type (strip_typedefs (TYPE_CONTEXT (t)),
+  TYPENAME_TYPE_FULLNAME (t),
+  typename_type, tf_none);
+  break;
 default:
   break;
 }
diff --git a/gcc/cp/typeck.c b/gcc/cp/typeck.c
index 484d299..a506053 100644
--- a/gcc/cp/typeck.c
+++ b/gcc/cp/typeck.c
@@ -1212,7 +1212,7 @@ incompatible_dependent_types_p (tree t1, tree t2)
 
   if (!t1_typedef_variant_p || !t2_typedef_variant_p)
 /* Either T1 or T2 is not a typedef so we cannot compare the
-   the template parms of the typedefs of T1 and T2.
+   template parms of the typedefs of T1 and T2.
At this point, if the main variant type of T1 and T2 are equal
it means the two types can't be incompatible, from the perspective
of this function.  */
diff --git a/gcc/testsuite/g++.dg/template/typedef34.C 
b/gcc/testsuite/g++.dg/template/typedef34.C
new file mode 100644
index 000..9bb4460
--- /dev/null
+++ b/gcc/testsuite/g++.dg/template/typedef34.C
@@ -0,0 +1,37 @@
+// Origin PR c++/45200
+// { dg-do compile }
+
+template<typename T>
+struct remove_reference
+{
+  typedef T type;
+};
+
+template<typename T>
+struct forward_as_lref
+{
+};
+
+template<typename Seq>
+struct apply1
+{
+  typedef typename remove_reference<Seq>::type seq;
+  typedef forward_as_lref<typename seq::seq_type> type; //#0
+};
+
+template<typename Seq>
+struct apply
+{
+  typedef forward_as_lref<typename remove_reference<Seq>::type::seq_type>
+type; //#1
+};
+
+struct reverse_view
+{
+  typedef int seq_type;
+};
+
+int
+main()
+{
+  apply<reverse_view&>::type a2;
+}

-- 
Dodji


gcc.dg/graphite/interchange-9.c and small memory target

2010-08-11 Thread Jie Zhang

Hi Sebastian,

I recently ran into an issue when testing 
gcc.dg/graphite/interchange-9.c on an ARM bare-metal board which has only 
4MB of memory.


Apparently, with

#define N 
#define M 

"int A[N*M]" in main is too large to fit on the stack.

There are several ways to solve this issue:

1. Make this test a compile test instead of a run test.

2. Define both M and N to 111. I checked, and the test is still valid,
i.e. it still tests what is intended.

3. Use the STACK_SIZE macro to calculate M and N. But I don't know how to
do that. And I'm not sure the test would still be valid if we ended up
with very small M and N.


Which way do you like most?


Regards,
--
Jie Zhang
CodeSourcery


Re: gcc.dg/graphite/interchange-9.c and small memory target

2010-08-11 Thread Sebastian Pop
On Wed, Aug 11, 2010 at 10:29, Jie Zhang  wrote:
> Hi Sebastian,
>
> I recently ran into an issue when testing
> gcc.dg/graphite/interchange-9.c on an ARM bare-metal board which has only
> 4MB of memory.
>
> Apparently, with
>
> #define N 
> #define M 
>
> "int A[N*M]" in main is too large to fit on the stack.
>
> There are several ways to solve this issue:
>
> 1. Make this test a compile test instead of a run test.
>
> 2. Define both M and N to 111. I checked, and the test is still valid,
> i.e. it still tests what is intended.
>
> 3. Use the STACK_SIZE macro to calculate M and N. But I don't know how to
> do that. And I'm not sure the test would still be valid if we ended up
> with very small M and N.
>
> Which way do you like most?

I would say, let's go for solution 2.

I don't like the first solution because you want to also validate
that the transform is correct.  As for solution 3, I don't know
how to do that either.

I will keep these limitations in mind for future testcases.

Thanks,
Sebastian


Re: gcc.dg/graphite/interchange-9.c and small memory target

2010-08-11 Thread Jie Zhang

On 08/11/2010 11:47 PM, Sebastian Pop wrote:

> On Wed, Aug 11, 2010 at 10:29, Jie Zhang  wrote:
>> Hi Sebastian,
>>
>> I recently ran into an issue when testing
>> gcc.dg/graphite/interchange-9.c on an ARM bare-metal board which has only
>> 4MB of memory.
>>
>> Apparently, with
>>
>> #define N 
>> #define M 
>>
>> "int A[N*M]" in main is too large to fit on the stack.
>>
>> There are several ways to solve this issue:
>>
>> 1. Make this test a compile test instead of a run test.
>>
>> 2. Define both M and N to 111. I checked, and the test is still valid,
>> i.e. it still tests what is intended.
>>
>> 3. Use the STACK_SIZE macro to calculate M and N. But I don't know how to
>> do that. And I'm not sure the test would still be valid if we ended up
>> with very small M and N.
>>
>> Which way do you like most?
>
> I would say, let's go for solution 2.
>
> I don't like the first solution because you want to also validate
> that the transform is correct.  As for solution 3, I don't know
> how to do that either.
>
> I will keep these limitations in mind for future testcases.


Thanks. I will submit a patch for solution 2.

--
Jie Zhang
CodeSourcery


Re: food for optimizer developers

2010-08-11 Thread Ralf W. Grosse-Kunstleve
Hi Tim,

> Do you mean you are adding an additional level of functions and hoping
> for efficient in-lining?

Note that my questions arise in the context of automatic code generation:
  http://cci.lbl.gov/fable
Please compare e.g. the original LAPACK code with the generated C++ code
to see why the C++ code is done the way it is.

A goal more important than speed is that the auto-generated C++ code
is similar to the original Fortran code and not inflated/obfuscated by
constructs meant to cater to optimizers (which change over time anyway).

My original posting shows that gfortran and g++ don't do as good
a job as ifort in generating efficient machine code. Note that the
loss going from gfortran to g++ isn't as bad as going from ifort to
gfortran. This gives me hope that the gcc developers could work over
time towards bringing the performance of the g++-generated code
closer to the original ifort performance.

I think speed will be the major argument against using the C++ code
generated by the automatic converter. If the generated C++ code could
somehow be made to run nearly as fast as the original Fortran compiled
with ifort, there would no longer be any good reason to keep developing
in Fortran, or to bother with the complexities of mixing languages.

Ralf


Re: food for optimizer developers

2010-08-11 Thread Richard Henderson
On 08/11/2010 10:59 AM, Ralf W. Grosse-Kunstleve wrote:
> My original posting shows that gfortran and g++ don't do as good
> a job as ifort in generating efficient machine code. Note that the
> loss going from gfortran to g++ isn't as bad as going from ifort to
> gfortran. This gives me hope that the gcc developers could work over
> time towards bringing the performance of the g++-generated code
> closer to the original ifort performance.

While of course there's room for g++ to improve, I think it's more
likely that gfortran can improve to meet ifort.

The biggest issue, from the compiler writer's perspective, is that
the Fortran language provides more information to the optimizers 
than the C++ language can.  A Really Good compiler will probably
always be able to do better with Fortran than C++.

> I think speed will be the major argument against using the C++ code
> generated by the automatic converter.

How about using an automatic converter to arrange for C++ code to
call into the generated Fortran code instead?  Create nice classes
and wrappers and such, but in the end arrange for the Fortran code
to be called to do the real work.


r~


Re: food for optimizer developers

2010-08-11 Thread Vladimir Makarov

 On 08/10/2010 09:51 PM, Ralf W. Grosse-Kunstleve wrote:

> I wrote a Fortran to C++ conversion program that I used to convert selected
> LAPACK sources. Comparing runtimes with different compilers I get:
>
>                      absolute  relative
>   ifort 11.1.072     1.790s    1.00
>   gfortran 4.4.4     2.470s    1.38
>   g++ 4.4.4          2.922s    1.63

To get a full picture, it would be nice to see icc times too.

> This is under Fedora 13, 64-bit, 12-core Opteron 2.2GHz.
>
> All files needed to easily reproduce the results are here:
>
>    http://cci.lbl.gov/lapack_fem/
>
> See the README file or the example commands below.
>
> Questions:
>
> - Is there a way to make the g++ version as fast as ifort?



I think it is more important (and harder) to make gfortran closer to ifort.

I cannot speak about your fragment of LAPACK.  But about 15 years ago I 
worked on manual LAPACK optimization for an Alpha processor.  As I 
remember, LAPACK is quite a memory-bound benchmark.  The hottest spot was 
matrix multiplication, which is used in many places in LAPACK.  The matrix 
multiplication in LAPACK is already moderately optimized by using a 
temporary variable, which makes it 1.5 times faster (if the cache is not 
big enough to hold the matrices) than the naive algorithm.  But proper 
loop optimizations (mostly tiling) could improve it by more than 4 times.


So I guess and hope the graphite project will finally improve LAPACK by 
implementing tiling.


After solving the memory-bound problem, loop vectorization is another 
important optimization which could improve LAPACK.  Unfortunately, GCC 
vectorizes fewer loops than ifort (about 2 times fewer when I last 
checked).  I did not analyze the reason for this.


After solving the vectorization problem, another important lower-level loop 
optimization is modulo scheduling (even if modern x86/x86_64 processors 
are out-of-order), because OOO processors can only look through a few 
branches.  And as I remember, the Intel compiler does apply modulo 
scheduling frequently.  GCC's modulo scheduling is quite constrained.


Those are my thoughts, but I might be wrong because I have no time to 
confirm my speculations.  If you really want to help the GCC developers, 
you could do a comparative analysis of the code generated by ifort and 
gfortran and find which optimizations GCC misses.  GCC has few resources, 
and the developers who could solve the problems are very busy.  Intel's 
optimizing-compiler team (besides researchers) is much bigger than the 
whole GCC community.  Taking this into account, and that they have much 
more info about their processors, I don't think gfortran will generate 
better or equal code for floating-point benchmarks in the near future.




Re: food for optimizer developers

2010-08-11 Thread Ralf W. Grosse-Kunstleve
Hi Richard,

> How about using an automatic converter to arrange for C++ code to
> call into the generated Fortran code instead?  Create nice classes
> and wrappers and such, but in the end arrange for the Fortran code
> to be called to do the real work.

I found it very labor-intensive to maintain a mixed Fortran/C++ build
system. I'd rather take the speed hit than deal with the constant
trickle of problems arising from non-existent or incompatible Fortran
compilers.

We distribute a pretty large system in source form to users (biologists)
who sometimes don't even know what a command-line prompt is.
If installation doesn't work out of the box, a large fraction
of our users simply give up.

Ralf


The Linux binutils 2.20.51.0.11 is released

2010-08-11 Thread H.J. Lu
This is the beta release of binutils 2.20.51.0.11 for Linux, which is
based on binutils 2010 0810 in CVS on sourceware.org plus various
changes. It is purely for Linux.

All relevant patches in the patches/ directory have been applied to the
source tree. You can take a look at patches/README to see what has been
applied and in what order.

Starting from the 2.20.51.0.4 release, no diffs against the previous
release will be provided.

You can enable both gold and bfd ld with --enable-gold=both.  Gold will
be installed as ld.gold and bfd ld will be installed as ld.bfd.  By
default, ld.bfd will be installed as ld.  You can use the configure
option, --enable-gold=both/gold to choose gold as the default linker,
ld.  IA-32 and x86_64 binary tarballs are configured with
--enable-gold=both/ld --enable-plugins --enable-threads.

Starting from the 2.18.50.0.4 release, the x86 assembler no longer
accepts

fnstsw %eax

fnstsw stores 16 bits into %ax and leaves the upper 16 bits of %eax unchanged.
Please use

fnstsw %ax

Starting from the 2.17.50.0.4 release, the default output section LMA
(load memory address) has changed for allocatable sections from being
equal to VMA (virtual memory address), to keeping the difference between
LMA and VMA the same as the previous output section in the same region.

For

.data.init_task : { *(.data.init_task) }

LMA of .data.init_task section is equal to its VMA with the old linker.
With the new linker, it depends on the previous output section. You
can use

.data.init_task : AT (ADDR(.data.init_task)) { *(.data.init_task) }

to ensure that LMA of .data.init_task section is always equal to its
VMA. The linker script in the older 2.6 x86-64 kernel depends on the
old behavior.  You can add AT (ADDR(section)) to force LMA of
.data.init_task section equal to its VMA. It will work with both old
and new linkers. The x86-64 kernel linker script in kernel 2.6.13 and
above is OK.

The new x86_64 assembler no longer accepts

monitor %eax,%ecx,%edx

You should use

monitor %rax,%ecx,%edx

or
monitor

which works with both old and new x86_64 assemblers. They should
generate the same opcode.

The new i386/x86_64 assemblers no longer accept instructions for moving
between a segment register and a 32bit memory location, i.e.,

movl (%eax),%ds
movl %ds,(%eax)

To generate instructions for moving between a segment register and a
16bit memory location without the 16bit operand size prefix, 0x66,

mov (%eax),%ds
mov %ds,(%eax)

should be used. It will work with both new and old assemblers. The
assembler starting from 2.16.90.0.1 will also support

movw (%eax),%ds
movw %ds,(%eax)

without the 0x66 prefix. Patches for 2.4 and 2.6 Linux kernels are
available at

http://www.kernel.org/pub/linux/devel/binutils/linux-2.4-seg-4.patch
http://www.kernel.org/pub/linux/devel/binutils/linux-2.6-seg-5.patch

The ia64 assembler is now defaulted to tune for Itanium 2 processors.
To build a kernel for Itanium 1 processors, you will need to add

ifeq ($(CONFIG_ITANIUM),y)
CFLAGS += -Wa,-mtune=itanium1
AFLAGS += -Wa,-mtune=itanium1
endif

to arch/ia64/Makefile in your kernel source tree.

Please report any bugs related to binutils 2.20.51.0.11 to
hjl.to...@gmail.com

and

http://www.sourceware.org/bugzilla/

Changes from binutils 2.20.51.0.10:

1. Update from binutils 2010 0810.
2. Properly support compressed debug sections in all binutils programs.
Add --compress-debug-sections/--decompress-debug-sections to objcopy.
PR 11819.
3. Fix linker crash on undefined symbol errors with DWARF.  PR 11817.
4. Don't generate .got.plt section if it isn't needed.  PR 11812.
5. Support garbage collection against STT_GNU_IFUNC symbols.  PR 11791.
6. Don't generate multi-byte nops for i686.  PR 6957.
7. Fix strip on binaries generated by gold.  PR 11866.
8. Fix .quad directive with 32bit hosts.  PR 11867.
9. Fix x86 assembler with Intel syntax regression.  PR 11806.
10. Add ud1 to x86 assembler.
11. Avoid assembler crash on ".data" label.  PR 11841.
12. Avoid assembler crash on malformed macro.  PR 11834.
13. Improve linker version script handling.  PR 11887.
14. Remove 64K section compatibility patch.
15. Improve gold.
16. Improve VMS support.
17. Improve Windows SEH support.
18. Improve alpha support.
19. Improve arm support.
20. Improve mips support.
21. Improve ppc support.
22. Improve rx support.
23. Improve v850 support.

Changes from binutils 2.20.51.0.9:

1. Update from binutils 2010 0707.
2. Support AVX Programming Reference (June, 2010)
3. Add --compress-debug-sections support to assembler, disassembler and
readelf.
4. Fix a linker crash due to an uninitialized field in a structure.
5. Fix an x86 assembler crash with Intel syntax and invalid GOTPCREL.
PR 11732.
6. Enable SOM support for any hosts.
7. Improve gold.
8. Improve VMS support.
9. Improve arm support.
10. Improve mips support.
11. Improve RX support.
12. Impro

Fw: Debugging plugins with gdb

2010-08-11 Thread Jeff Saremi
Sending this to "gcc" since I got no help from sending it to "gcc-help"

--- On Sun, 8/8/10, Jeff Saremi  wrote:

> From: Jeff Saremi 
> Subject: Debugging plugins with gdb
> To: gcc-h...@gcc.gnu.org
> Date: Sunday, August 8, 2010, 9:52 AM
> I'd like to step into my plugin to
> see if I can debug it. My plugin currently causes gcc to
> crash.
> I tried the "break execute_x" command and
> "add-symbol-file myplugin.so", but neither of them works. The
> first one complains that "function not defined" and the 2nd
> one says that "the address where .so has been loaded is
> missing".
> Trying to run with the arguments did not help either. After gcc
> returns, it has unloaded the shared libraries.
> I'd appreciate it if anyone would share his experience in
> successfully debugging a plugin. thanks
> jeff
> 



Re: Fw: Debugging plugins with gdb

2010-08-11 Thread Andrew Pinski
On Wed, Aug 11, 2010 at 3:52 PM, Jeff Saremi  wrote:
> Sending this to "gcc" since I got no help from sending it to "gcc-help"

Are you trying to debug gcc or cc1/cc1plus?  If the former try running
with -v and seeing that cc1/cc1plus is involved and then debug
cc1/cc1plus instead.

Thanks,
Andrew Pinski


Re: Fw: Debugging plugins with gdb

2010-08-11 Thread Daniel Jacobowitz
On Wed, Aug 11, 2010 at 03:52:17PM -0700, Jeff Saremi wrote:
> > Trying to use "break execute_x" command in
> > "add-symbol-file myplugin.so" but neither of them work. The
> > first one complains that "function not defined"

Did it ask you if you want to make the breakpoint pending?  If it did,
say yes.  If it didn't, try a newer version of GDB.

-- 
Daniel Jacobowitz
CodeSourcery


Re: Fw: Debugging plugins with gdb

2010-08-11 Thread Jeff Saremi
Daniel/Andrew
thanks so much. I was using gdb version 7.1, so it understood deferred 
breakpoints, but as long as I started gdb on something like ~/bin/gcc it 
never stopped in my function.
As soon as I switched to running gdb on cc1, it worked!
Now I can work on debugging the seg-fault I'm causing.



Re: food for optimizer developers

2010-08-11 Thread Ralf W. Grosse-Kunstleve
Hi Vladimir,

Thanks for the feedback! Very interesting.


> Intel optimization compiler team (besides researchers) is much bigger than
> whole GCC community.

That's a surprise to me. I have to say that the GCC community has done
amazing work, as you came within a factor of 1.4 (gfortran) and 1.6 (g++
compiling converted code) of ifort performance, which is close enough for
our purposes, and I think those of many people. To add to this, icpc vs.
g++ is a tie overall, with g++ even having a slight advantage. Really
great work!

Ralf