Re: C Compiler benchmark: gcc 4.6.3 vs. Intel v11 and others

2012-01-20 Thread Richard Guenther
On Fri, Jan 20, 2012 at 3:43 AM, willus.com  wrote:
> On 1/19/2012 6:29 AM, Richard Guenther wrote:
>>
>> On Thu, Jan 19, 2012 at 3:27 PM, willus.com  wrote:
>>>
>>> On 1/19/2012 2:59 AM, Richard Guenther wrote:

 On Thu, Jan 19, 2012 at 7:37 AM, Marc Glisse
  wrote:
>
> On Wed, 18 Jan 2012, willus.com wrote:
>
>> For those who might be interested, I've recently benchmarked gcc 4.6.3
>> (and 3.4.2) vs. Intel v11 and Microsoft (in Windows 7) here:
>>
>> http://willus.com/ccomp_benchmark2.shtml
>
>
> http://en.wikipedia.org/wiki/Microsoft_Windows_SDK#64-bit_development
>
> For the math functions, this is normally more a libc feature, so you might
> get very different results on different OSes. Then again, by using
> -ffast-math, you allow the math functions to return any random value, so I
> can think of ways to make it even faster ;-)

 Also for math functions you can simply substitute the Intel compiler's
 routines for the Microsoft ones GCC uses by linking against libimf.
 You can also make use of their vectorized variants from GCC by specifying
 -mveclibabi=svml and linking against libimf (the GCC autovectorizer will
 then use the routines from the Intel compiler math library).  That makes
 a huge difference for code using functions from math.h.

 Richard.

> --
> Marc Glisse
>>>
>>> Thank you both for the tips.  Are you certain that with the flags I used
>>> Intel doesn't completely in-line the math.h functions at the compile
>>> stage?
>>
>> Yes.  Intel merely comes with its own (optimized) math library while GCC
>> has to rely on the operating system one.
>>
> Wouldn't it be possible to in-line the standard C math functions in math.h,
> though, if the correct compiler flags were set?  I realize this could be a
> big task and would potentially have a lot of dependencies on the CPU flag
> settings, but is it at least conceivable?   Or is it highly undesirable for
> some reason?  (I almost don't see how Intel could be so fast on some of
> those functions without in-lining.)

The functions are too large for inlining, and the function call overhead is
never the problem.  The Intel routines are faster because they are highly
optimized, which neither the Windows math library nor glibc's is.

> Alternately, is anybody developing an open-source, x86-based or x64-based
> fast math library (just for standard C math functions--I don't need
> vector/array/BLAS/etc.) that auto-detects and takes advantage of modern CPU
> capabilities?

AMD libm is open source AFAIK and has optimizations for several AMD CPUs.
Other than that, don't hold your breath - developing math library routines is
not an easy task.

Richard.


Re: Dealing with compilers that pretend to be GCC

2012-01-20 Thread Peter Rosin
Dave Korn skrev 2012-01-20 01:15:

*snip*

>That could be tricky because I guess you won't be able to use
> libtool at configure time.

*snip*

It's possible to use libtool at configure time, but you need to invoke
LT_OUTPUT before you do so.  Or is there a reason for that not to work
in this case?

Cheers,
Peter


const_int vs. const_double on 64-bit vs. 32-bit build platform

2012-01-20 Thread Georg-Johann Lay
Hi, in avr.md there is

(define_insn "map_bitsqi"
  [(set (match_operand:QI 0 "register_operand"              "=d")
        (unspec:QI [(match_operand:SI 1 "const_int_operand" "n")
                    (match_operand:QI 2 "register_operand"  "r")]
                   UNSPEC_MAP_BITS))]
  ""
  {
    return avr_out_map_bits (insn, operands, NULL);
  })

(define_insn "map_bitshi"
  [(set (match_operand:HI 0 "register_operand"                 "=&r")
        (unspec:HI [(match_operand:DI 1 "const_double_operand" "n")
                    (match_operand:HI 2 "register_operand"     "r")]
                   UNSPEC_MAP_BITS))]
  ""
  {
    return avr_out_map_bits (insn, operands, NULL);
  })


i.e. depending on the mode of operand 1, it is a const_int_operand or a
const_double_operand.

Now it depends on the build platform whether a specific value like
0x1234567812345678 is CONST_INT or CONST_DOUBLE: on a 32-bit build platform it
is CONST_DOUBLE because it does not fit in 32 bits, and on a 64-bit build
platform it is CONST_INT.

To add to the complication, the "n" constraint has to be decomposed into
several constraints. What rtx code should the constraint mention? const_int or
const_double? Or both?

Some constraint letters even require const_int and others require const_double,
so it is awkward to factor out the build-platform dependency.

Some targets set need_64bit_hwint, which should fix the issue because then all
values in question would be CONST_INT.

Is this a right use case for need_64bit_hwint?
Would it have other side effects like ABI changes, e.g. to the preprocessor?

Thanks for hints on this topic.

Johann


How to define a built-in 24-bit type?

2012-01-20 Thread Georg-Johann Lay
Hi.

avr-gcc implements the 24-bit scalar integer types __int24 and __uint24 in
avr.c:TARGET_INIT_BUILTINS like so:

  tree int24_type  = make_signed_type (GET_MODE_BITSIZE (PSImode));
  tree uint24_type = make_unsigned_type (GET_MODE_BITSIZE (PSImode));

  (*lang_hooks.types.register_builtin_type) (int24_type, "__int24");
  (*lang_hooks.types.register_builtin_type) (uint24_type, "__uint24");

PSImode is defined in avr-modes.c:

FRACTIONAL_INT_MODE (PSI, 24, 3);

Is this the right definition of a built-in type?

The question arises because __int24 breaks the compiler; see PR51527.

So the question is whether something is missing or broken in the definition
above, or whether it's actually a flaw in the front end.

For __int128 there is much more code sprinkled over the compiler sources,
so maybe it's not that easy to introduce a new built-in type entirely in the
back end?

Thanks.


Re: Interface Method Table

2012-01-20 Thread Ian Lance Taylor
Matt Davis  writes:

> For a Go program being compiled in gcc, from the middle end, is there a way
> to figure out which routines make up the interface method table?  I could
> check the mangled name of the method table, but is there another way to
> deduce what methods compose it from the middle end?

The type of the table is a struct.  The second element of the struct is
a pointer to a table of pointers.  The pointers in that table are the
methods.  Each one is cast to void*, but if you undo that cast you
should find the actual function pointer.

Ian


Re: Dealing with compilers that pretend to be GCC

2012-01-20 Thread Dave Korn
On 20/01/2012 11:19, Peter Rosin wrote:
> Dave Korn skrev 2012-01-20 01:15:
> 
> *snip*
> 
>>That could be tricky because I guess you won't be able to use
>> libtool at configure time.
> 
> *snip*
> 
> It's possible to use libtool at configure time, but you need to invoke
> LT_OUTPUT before you do so.  Or is there a reason for that not to work
> in this case?

  Not as far as I know, I just wasn't aware that you could generate the output
script early.

cheers,
  DaveK


Re: Access to source code from an analyser

2012-01-20 Thread Manuel López-Ibáñez
> On Thu, 2012-01-19 at 14:06 +0100, Alberto Lozano Alelu wrote:
>> Hello.
>>
>> Thanks for your fast response.
>>
>> With expand_location I get struct expanded_location which has these fields:
>> type = struct {
>> const char *file;
>> int line;
>> int column;
>> unsigned char sysp;
>> }
>>
>> But it hasn't source line text.
>>
>> I know how to use the expand_location function, but I would like to
>> get the source text from a source location. I would like a function such
>> as:
>>
>> char *v_line_text = a_function(file, line);
>>
>> or
>>
>> expanded_location v_location=expand_location(SOURCE_LOCATION(my_element));
>> char * v_line_text = a_function(v_location);
>>
>> v_line_text is a source code line.
>>
>> I need to have source text
>
> I'm interested in hearing if there's a more "official" way of doing this
> from the GCC experts, but can't you just read the file from disk and
> split it on newline characters? (probably with some caching)

Since nobody can be bothered to answer you: No, there is not.

It would be nice if there were, because it could have many
applications, but as far as I know, no one is working on it. See:
http://gcc.gnu.org/wiki/Better_Diagnostics A.3

However, to be honest, even if you implement your own source-location
manager in your own plugin code, I don't think it will be very precise
because the internal representation of GCC C/C++ FE is not very close
to the actual code, and the locations are sometimes quite imprecise.
Nonetheless, I think having a source-location manager could help to
point out these issues.

If you wished to contribute a source-location manager to GCC, I think
it would be accepted, eventually. In fact, it should be possible to
reuse code from CPP. However, having precise locations is a much
harder problem to fix. As an alternative, you may wish to take a look
at Clang, which is more focused on supporting diverse clients and IDEs
and provides both a source-location manager module and very precise
location info. It also tracks the location of many more things than
GCC, which is another problem that is quite likely very hard to fix.

Cheers,

Manuel.


gfortran 4.6 incompatible with previous?

2012-01-20 Thread Sewell, Granville


I know that gfortran 4.3 was not compatible with earlier versions (can't mix
object code), but now a user is telling me that 4.6 is not compatible with
4.3. Is that true?

Thanks,

Granville Sewell
Mathematics dept.
UTEP


Re: gfortran 4.6 incompatible with previous?

2012-01-20 Thread Tobias Burnus

Sewell, Granville wrote:

I know that gfortran 4.3 was not compatible with earlier versions (can't mix
object code), but now a user is telling me that 4.6 is not compatible with
4.3. Is that true?


The library .so version number is the same from GCC 4.3 to 4.7, and no
symbol has been deleted from libgfortran since 4.3 - though new symbols have
been added, so a program built with GCC 4.6 might not work with a GCC 4.3
libgfortran.


Thus, a newer libgfortran should work with older programs.

I am also not aware of any ABI issue. The only issues I am aware of
involve REAL(16) and selected_real_kind: namely, mixing a libgfortran
built on a system where libquadmath was available with a compiler that
does not support libquadmath. That might lead to issues when a
configure script tests for the available kind numbers in a certain way
(as HDF5 did).



Note: The gcc@gcc.gnu.org mailing list is about the development of GCC. 
Such questions are better suited for the gcc-h...@gcc.gnu.org mailing 
list, though they might also be suited for the fort...@gcc.gnu.org list.


Tobias


Re: Access to source code from an analyser

2012-01-20 Thread Tom Tromey
> "Manuel" == Manuel López-Ibáñez  writes:

Manuel> However, to be honest, even if you implement your own source-location
Manuel> manager in your own plugin code, I don't think it will be very precise
Manuel> because the internal representation of GCC C/C++ FE is not very close
Manuel> to the actual code, and the locations are sometimes quite imprecise.

Please file bugs if you run across these.

Tom


Re: Dealing with compilers that pretend to be GCC

2012-01-20 Thread Ludovic Courtès
Hi Vincent,

Vincent Lefevre  skribis:

> For ICC, one can test __ICC. For instance, here's what we have in mpfr.h
> (for the use of __builtin_constant_p and __extension__ ({ ... })):
>
> #if defined (__GNUC__) && !defined(__ICC) && !defined(__cplusplus)

Yeah, but it’s a shame that those compilers define __GNUC__ without
supporting 100% of the GNU C extensions.  With this approach, you would
also need to add !defined for Clang, PGI, and probably others.

Thanks,
Ludo’.



Re: Getting rid of duplicate .debug_ranges

2012-01-20 Thread Cary Coutant
>> Is there a way to detect that basic blocks have the same range even
>> though they have different block numbers? Or am I not looking/thinking
>> about this issue correctly?

I may be oversimplifying this, but it seems that
gen_inlined_subroutine_die generates a DW_AT_ranges list, then calls
decls_for_scope, which results in a call to gen_lexical_block_die,
which generates another DW_AT_ranges list, but in both cases,
BLOCK_FRAGMENT_CHAIN(stmt) points to the same list of block fragments.
I'd think you could just keep a single-entry cache in
add_high_low_attributes that remembers the last value of
BLOCK_FRAGMENT_CHAIN(stmt) and the pointer returned from add_ranges
(stmt) for that chain. If you get a match, just generate a
DW_AT_ranges entry using the range list already generated.

-cary


Re: Dealing with compilers that pretend to be GCC

2012-01-20 Thread Cary Coutant
> Yeah, but it’s a shame that those compilers define __GNUC__ without
> supporting 100% of the GNU C extensions.  With this approach, you would
> also need to add !defined for Clang, PGI, and probably others.

Having worked on the other side for a while -- for a vendor whose
compiler supported many but not all of GCC's extensions -- I claim
that the problem is with the many examples of code out there that
blindly test for __GNUC__ instead of testing for individual
extensions. From the other vendor's point of view, it's nearly useless
to support any of the GCC extensions if you don't also define
__GNUC__, because most code out there will simply test for that macro.
By defining the macro even if you don't support, for example, nested
functions, you can still compile 99% of the code that uses the
extensions.

-cary


Re: Dealing with compilers that pretend to be GCC

2012-01-20 Thread James Dennett
On Fri, Jan 20, 2012 at 2:41 PM, Cary Coutant  wrote:
>> Yeah, but it’s a shame that those compilers define __GNUC__ without
>> supporting 100% of the GNU C extensions.  With this approach, you would
>> also need to add !defined for Clang, PGI, and probably others.
>
> Having worked on the other side for a while -- for a vendor whose
> compiler supported many but not all of GCC's extensions -- I claim
> that the problem is with the many examples of code out there that
> blindly test for __GNUC__ instead of testing for individual
> extensions. From the other vendor's point of view, it's nearly useless
> to support any of the GCC extensions if you don't also define
> __GNUC__, because most code out there will simply test for that macro.
> By defining the macro even if you don't support, for example, nested
> functions, you can still compile 99% of the code that uses the
> extensions.

If there were a defined way to test for extensions from within C (or
C++), then this problem would be much reduced. Clang has something of
a framework to query support for different features, and I drafted a
proposal for something similar that would work across different
compilers (with the intention of tracking C++11 features as they roll
out), but that proposal went nowhere (I was too late for it to be
useful for C++11 in any case).

-- James


gcc-4.6-20120120 is now available

2012-01-20 Thread gccadmin
Snapshot gcc-4.6-20120120 is now available on
  ftp://gcc.gnu.org/pub/gcc/snapshots/4.6-20120120/
and on various mirrors, see http://gcc.gnu.org/mirrors.html for details.

This snapshot has been generated from the GCC 4.6 SVN branch
with the following options: svn://gcc.gnu.org/svn/gcc/branches/gcc-4_6-branch 
revision 183357

You'll find:

 gcc-4.6-20120120.tar.bz2 Complete GCC

  MD5=f7ca5d9f7a07216577f81318b7cf56ef
  SHA1=ad5f72678b3ee52822482879e18f859ed7c98314

Diffs from 4.6-20120113 are available in the diffs/ subdirectory.

When a particular snapshot is ready for public consumption the LATEST-4.6
link is updated and a message is sent to the gcc list.  Please do not use
a snapshot before it has been announced that way.


Re: Dealing with compilers that pretend to be GCC

2012-01-20 Thread Jonathan Wakely
2012/1/20 Ludovic Courtès:
>
> Yeah, but it’s a shame that those compilers define __GNUC__ without
> supporting 100% of the GNU C extensions.  With this approach, you would
> also need to add !defined for Clang, PGI, and probably others.

May I politely suggest that this is the wrong place to complain about
other compilers pretending to be GCC :)

If GCC added a __REALLY_GNUC__ macro the other compilers would define
it, for exactly the same reasons they define __GNUC__


Re: Dealing with compilers that pretend to be GCC

2012-01-20 Thread Ludovic Courtès
Hi,

James Dennett  skribis:

> If there were a defined way to test for extensions from within C (or
> C++), then this problem would be much reduced. Clang has something of
> a framework to query support for different features, and I drafted a
> proposal for something similar that would work across different
> compilers (with the intention of tracking C++11 features as they roll
> out), but that proposal went nowhere (I was too late for it to be
> useful for C++11 in any case).

It would still be useful, though, and a net improvement over the
catch-all __GNUC__.

Ludo’.


Re: Dealing with compilers that pretend to be GCC

2012-01-20 Thread Ludovic Courtès
Hi,

Cary Coutant  skribis:

>> Yeah, but it’s a shame that those compilers define __GNUC__ without
>> supporting 100% of the GNU C extensions.  With this approach, you would
>> also need to add !defined for Clang, PGI, and probably others.
>
> Having worked on the other side for a while -- for a vendor whose
> compiler supported many but not all of GCC's extensions -- I claim
> that the problem is with the many examples of code out there that
> blindly test for __GNUC__ instead of testing for individual
> extensions. From the other vendor's point of view, it's nearly useless
> to support any of the GCC extensions if you don't also define
> __GNUC__, because most code out there will simply test for that macro.
> By defining the macro even if you don't support, for example, nested
> functions, you can still compile 99% of the code that uses the
> extensions.

Thanks, I see.

I think the problem is that __GNUC__ is (ab)used to refer to the GNU C
language (any version), whereas it was originally meant to refer to the
compiler implementation.

Maybe CPP assertions could be revived to test for single language
features?

Ludo’.


Re: Dealing with compilers that pretend to be GCC

2012-01-20 Thread Dave Korn
On 20/01/2012 23:28, Jonathan Wakely wrote:
> 2012/1/20 Ludovic Courtès:
>> Yeah, but it’s a shame that those compilers define __GNUC__ without
>> supporting 100% of the GNU C extensions.  With this approach, you would
>> also need to add !defined for Clang, PGI, and probably others.
> 
> May I politely suggest that this is the wrong place to complain about
> other compilers pretending to be GCC :)
> 
> If GCC added a __REALLY_GNUC__ macro, the other compilers would define
> it, for exactly the same reasons they define __GNUC__.

  I do agree with the proposition that if you pretend to be GCC, but don't do
it completely and well enough, that's a bug that should be fixed in the
compiler pretending to be GCC.

  OTOH the entire point of autotools is that any toolchain (even GCC itself)
sometimes has bugs or unimplemented features, and you just can't argue with
the principle that the definitive test is always going to be "try and use the
feature and verify if it worked or not".  Therefore autoconf tests should not
just test __GNUC__, unless the only thing they're trying to be a test for is
whether __GUNC__ is defined or not.

cheers,
  DaveK



Re: Dealing with compilers that pretend to be GCC

2012-01-20 Thread Vincent Lefevre
On 2012-01-20 23:28:07 +, Jonathan Wakely wrote:
> May I politely suggest that this is the wrong place to complain about
> other compilers pretending to be GCC :)

I think that's the fault of GCC, which should have defined a macro
for each extension.

-- 
Vincent Lefèvre  - Web: 
100% accessible validated (X)HTML - Blog: 
Work: CR INRIA - computer arithmetic / AriC project (LIP, ENS-Lyon)


Re: Dealing with compilers that pretend to be GCC

2012-01-20 Thread Jonathan Wakely
On 21 January 2012 00:32, Vincent Lefevre wrote:
> On 2012-01-20 23:28:07 +, Jonathan Wakely wrote:
>> May I politely suggest that this is the wrong place to complain about
>> other compilers pretending to be GCC :)
>
> I think that's the fault of GCC, which should have defined a macro
> for each extension.

And what about the fact other compilers haven't defined such a macro
for each extension they implement, whether it comes from GCC or not,
is that GCC's fault too?