RE: ftp.net mirror of gcc.gnu.org is broken

2010-12-01 Thread Vu Tong Minh
Hi,

I fixed the symbolic link. It's http://mirror-fpt-telecom.fpt.net/gcc

Thanks for your notice.

VTMinh

From: Gerald Pfeifer [ger...@pfeifer.com]
Sent: Monday, November 29, 2010 2:12 AM
To: Minh Vu Tong
Cc: gcc@gcc.gnu.org
Subject: ftp.net mirror of gcc.gnu.org is broken

Looking at

  http://gcc.gnu.org/mirrors.html

I noticed that the following entry

  Viet Nam, HoChiMinh: http://mirror-fpt-telecom.fpt.net/gcc/,
  thanks to Minh Vu Tong (mirror at fpt dot net)

is not working any more.  It seems that either a symbolic link from
http://mirror-fpt-telecom.fpt.net/gcc/ to
http://mirror-fpt-telecom.fpt.net/sourceware/gcc or my changing
that link on the GCC side would fix this.

Which way do you suggest?

Gerald


Re: gcc-4.5-20101125: minor bug & test results

2010-12-01 Thread Jonathan Wakely
On 1 December 2010 02:45, Russell Whitaker wrote:
>
> Minor build bug: The cpp sanity check fails because it is looking for cpp in
> /lib instead of /usr/bin

Like http://gcc.gnu.org/bugzilla/show_bug.cgi?id=40950 ?
In that case it was because there was no C++ compiler installed when
building gcc.


questions about cv-qualifier for function.

2010-12-01 Thread zhang qingshan
Hi, team

Question 1:

std(N3000=09-0190) 8.3.5/6 says that:

A cv-qualifier-seq shall only be part of the function type for a
non-static member function, the function type to which a pointer to
member refers, or the top-level function type of a function typedef
declaration.

const void func();

can be compiled successfully with GCC 4.5.0. However, I am confused:

the const here does not belong to any of the three cases the standard
allows. Why is it well-formed?

Question 2:

std(N3000=09-0190) 8.3.2/1 says that:

Cv-qualified references are ill-formed except when the cv-qualifiers
are introduced through the use of a typedef (7.1.3) or of a template
type argument (14.4), in which case the cv-qualifiers are ignored.

int m = 3;
const int &t = m;

can be compiled successfully with GCC 4.5.0. But the standard says that
cv-qualified references cannot be introduced except through
the use of a typedef or a template type argument. Why is it
allowed in this case?

Thanks.


Re: Partial hookization / PR46738 (Was: Re: RFA: partially hookize *_TYPE_SIZE)

2010-12-01 Thread Eric Botcazou
> I don't understand what makes you say this.  Partial conversions have
> a much lower chance of breaking things when the macro is set up in
> complicated ways, and you also trivially avoid performance regressions at
> non-converted call sites.

But what's the point in doing this during stage3?  This will give no benefits 
to the users and may introduce new bugs; we already have enough of them and 
we should concentrate on fixing them.  IMO this is stage1 material only.

-- 
Eric Botcazou


Re: questions about cv-qualifier for function.

2010-12-01 Thread Jonathan Wakely
On 1 December 2010 10:28, zhang qingshan wrote:
> Hi, team

This sort of question should be sent to the mailing list gcc-h...@gcc.gnu.org,
not to the mailing list gcc@gcc.gnu.org, which is for the
development of gcc itself.  Please take any future questions about using
gcc to gcc-help.  Thanks.

> Question 1:
>
> std(N3000=09-0190) 8.3.5/6 says that:

N3000 is not a standard, it's only a draft, and N3126 is the latest draft.

> A cv-qualifier-seq shall only be part of the function type for a
> non-static member function, the function type to which a pointer to
> member refers, or the top-level function type of a function typedef
> declaration.
>
> const void func();
>
> can be compiled successfully with GCC 4.5.0. However, I am confused:
>
> the const here does not belong to any of the three cases the standard
> allows. Why is it well-formed?

The const is not part of the function type, it's part of the return
type, and is ignored.


> Question 2:
>
> std(N3000=09-0190) 8.3.2/1 says that:
>
> Cv-qualified references are ill-formed except when the cv-qualifiers
> are introduced through the use of a typedef (7.1.3) or of a template
> type argument (14.4), in which case the cv-qualifiers are ignored.
>
> int m = 3;
> const int &t = m;
>
> can be compiled successfully with GCC 4.5.0. But the standard says that
> cv-qualified references cannot be introduced except through
> the use of a typedef or a template type argument. Why is it
> allowed in this case?

That is not a const-qualified reference, it's a reference to const int.

Please take any further questions to the gcc-help list or a C++ forum.

Thanks,

Jonathan


Re: microblaze ASM_OUTPUT_IDENT Was: RFA: hookize BOOL_TYPE_SIZE and ADA_LONG_TYPE_SIZE

2010-12-01 Thread Joern Rennecke

Quoting Laurent GUERBY :


On Wed, 2010-12-01 at 00:32 -0500, Joern Rennecke wrote:

bootstrapped & regtested on x86_64-pc-linux-gnu
cross-tested on x86_64-pc-linux-gnu for the following targets:
ppc-darwin
alpha-linux-gnu hppa-linux-gnu mips-elf sh-elf arc-elf ia64-elf sparc-elf
arm-eabi iq2000-elf mn10300-elf spu-elf avr-elf lm32-elf moxie-elf v850-elf
m32c-elf pdp11-aout cris-elf m32r-elf picochip-elf xstormy16-elf
crx-elf m68hc11-elf ppc-elf xtensa-elf fr30-elf m68k-elf rx-elf
mcore-elf
s390-linux-gnu h8300-elf mep-elf score-elf

bootstrapped on i686-pc-linux-gnu
cross-tested on i686-pc-linux-gnu for targets:
bfin-elf frv-elf mmix-knuth-mmixware vax-linux-gnu

Ada testing for microblaze is currently not possible because of PR46738.


This one looks like a missing include (again): the macro
ASM_OUTPUT_IDENT is defined in microblaze.h and calls
microblaze_asm_output_ident from microblaze.c, which is declared not in
microblaze.h but in microblaze-protos.h, which for some reason
ends up not included from ada/gcc-interface/trans.c.

What are the include rules in GCC? Why doesn't a macro in one .h
include what's needed to implement it? Is this a microblaze issue
or a trans.c issue?


In general, if you use any tm.h macro, then besides tm.h you also have to
include tm_p.h; and if the macro involves a type from tree.h / rtl.h, you
have to include tree.h / rtl.h between including tm.h and tm_p.h.
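As a sketch, that rule corresponds to an include order like the following at the top of a source file using such macros (illustrative only; the exact set of headers a given file needs varies):

```cpp
#include "config.h"     /* always first in GCC sources */
#include "system.h"
#include "coretypes.h"
#include "tm.h"         /* target macro definitions */
#include "tree.h"       /* types that the macros' expansions may use */
#include "tm_p.h"       /* prototypes for target functions used in the macros */
```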

But frontend maintainers often get this wrong, and there are also often
ways in which the type, value range, constness or non-constness of
macro definitions by particular targets cause warnings in frontends,
which make bootstraps fail.  The interfaces are not defined
rigorously enough to avoid such failures without first testing every possible
target with every frontend.

These are just two of the reasons why it would be better for our users if
frontends did not use target macros, and reducing the number of target
macros in frontends by replacing them with hooks also generally reduces
this instability.
Once all target macros are eliminated from a frontend file, the way to
avoid regressions is to eliminate all direct and indirect includes of
tm.h.  And once we have achieved that for all frontend files, we can poison
the include guards of tm.h and its friends (i.e. TM_H,
HARD_REG_SET_H, etc.) for frontends.
Target macros in the rtl optimizers are a lesser concern because they get
better test coverage than the less frequently built frontends, like Ada.


Re: microblaze ASM_OUTPUT_IDENT Was: RFA: hookize BOOL_TYPE_SIZE and ADA_LONG_TYPE_SIZE

2010-12-01 Thread Joseph S. Myers
On Wed, 1 Dec 2010, Joern Rennecke wrote:

> Once all target macros are eliminated from a frontend file, the way to
> avoid regressions is to eliminate all direct and indirect includes of
> tm.h.  And once we have achieved that for all frontend files, we can poison
> the include guards of tm.h and its friends (i.e. TM_H,
> HARD_REG_SET_H, etc.) for frontends.

Note that sometimes the poisoning we have of certain macros/headers for 
front ends has accidentally broken some targets because of includes in 
target-specific front-end files (c_target_objs etc.) - sh-symbianelf, for 
example.  Your work on making each target architecture build cleanly with 
current GCC trunk has been very useful; it would be good to extend it to 
every target OS for every architecture (especially those with their own .c 
files, but there might be other cases of OS-specific breakage as well) and 
to have an easy way to test that patches don't break the build for any 
target (a standard list of all supported target variants that enable 
different sets of .c or .h files, for example - or at least an autobuilder 
reporting patches that break any target).  (i686-interix3 is another target 
with a broken build that has shown up when I've done tests for lots of 
targets, though I think that breakage is unrelated to any poisoning changes.)

-- 
Joseph S. Myers
jos...@codesourcery.com


Re: ppc: const data not in RO section

2010-12-01 Thread Nathan Froyd
On Tue, Nov 30, 2010 at 08:04:06PM +0100, Joakim Tjernlund wrote:
> Why is not
>   const char cstr[] = "mystr";
>   const int myint = 3;
> added to a read only section?
> Especially since
>   const int myarr[]={1,2,3};
> is placed in .rodata.
> 
> hmm, -G 0 does place these in .rodata but why do I have to specify that?

It would help if you specified the target and the compiler version that
you used.

The compiler I have (~4.5) places myint and mystr in .sdata; since
they're so small, GCC thinks that placing myint and mystr in .sdata is
beneficial.  Why do you think -G 0 should be the default?

It does seem kind of odd that "mystr" is placed in .sdata, since
rs6000_elf_in_small_data_p indicates that string constants shouldn't be
in .sdata.  You could investigate and submit a patch or file a bug.

-Nathan


Re: ppc: const data not in RO section

2010-12-01 Thread Joakim Tjernlund
Nathan Froyd  wrote on 2010/12/01 18:33:23:
>
> On Tue, Nov 30, 2010 at 08:04:06PM +0100, Joakim Tjernlund wrote:
> > Why is not
> >   const char cstr[] = "mystr";
> >   const int myint = 3;
> > added to a read only section?
> > Especially since
> >   const int myarr[]={1,2,3};
> > is placed in .rodata.
> >
> > hmm, -G 0 does place these in .rodata but why do I have to specify that?
>
> It would help if you specified the target and the compiler version that
> you used.

ppc32(e300c2), gcc 4.4.5

>
> The compiler I have (~4.5) places myint and mystr in .sdata; since
> they're so small, GCC thinks that placing myint and mystr in .sdata is
> beneficial.  Why do you think -G 0 should be the default?

I don't, I just noticed that -G 0 changed this into .rodata

>
> It does seem kind of odd that "mystr" is placed in .sdata, since
> rs6000_elf_in_small_data_p indicates that string constants shouldn't be
> in .sdata.  You could investigate and submit a patch or file a bug.

I am just surprised that gcc doesn't place read-only data into a read-only
section by default. As it is now, there is no protection against actually
modifying small const data.

 Jocke



Update LTO plugin interface

2010-12-01 Thread H.J. Lu
Hi,

Here is a proposal to update LTO plugin interface.  Any comments?

Thanks.

-- 
H.J.
---
Goal:  We should preserve the same linker command line order as if
there were no IR.
Problem:
a. LTO may generate extra symbol references which aren't in the IR.
b. This was worked around with the -pass-through hack, but that hack
doesn't preserve the link command line order.

Proposal:
a. Remove the -pass-through hack in GCC.
b. The compiler plugin controls what the linker uses to generate the
final executable:
        i. The linker command line order should be the same, with or
        without LTO.
c. Add a cmdline bit field to

        struct ld_plugin_input_file
        {
          const char *name;
          int fd;
          off_t offset;
          off_t filesize;
          void *handle;
          unsigned int cmdline : 1;
        };

It is used by the linker to tell the plugin that the input file comes
from the linker command line.
d. Two-stage linking:
        i. Stage 1: normal symbol resolution with the plugin.
        ii. Stage 2:
                1) Call the "all symbols read" handler to get the
                final linker inputs.
                2) Discard all previous inputs.
                3) Generate the final executable with the inputs from
                the plugin.
e. Compiler plugin:
        i. For a file which comes from the linker command line and is
        not claimed by the plugin, save it on the linker pass-through
        list in the same order as it comes in.
        ii. For the first file claimed by the plugin, remember the
        last pass-through linker input.
        iii. The "all symbols read" handler adds input files to the
        linker in this order:
                1) All linker input files on the pass-through list up
                to the first file claimed by the plugin.
                2) All linker input files generated by the plugin.
                3) The rest of the linker input files on the
                pass-through list.
f. Limitation:
        i. All files claimed by the plugin are grouped together.  Any
        archives between files claimed by the plugin are placed after
        all linker input files generated by the plugin when passed to
        the linker.


Re: Update LTO plugin interface

2010-12-01 Thread Basile Starynkevitch
On Wed, 1 Dec 2010 10:18:58 -0800
"H.J. Lu"  wrote:

> Here is a proposal to update LTO plugin interface.  

How should we parse the above sentence?

Is it about an interface to plugin inside binutils to support LTO?

Is it about an interface for GCC plugins to help them be more LTO friendly?

Is it about an interface inside the GOLD linker to dlopen the LTO plugin 
provided with GCC sources?

Cheers.

-- 
Basile STARYNKEVITCH http://starynkevitch.net/Basile/
email: basilestarynkevitchnet mobile: +33 6 8501 2359
8, rue de la Faiencerie, 92340 Bourg La Reine, France
*** opinions {are only mine, sont seulement les miennes} ***


Re: Update LTO plugin interface

2010-12-01 Thread H.J. Lu
On Wed, Dec 1, 2010 at 10:28 AM, Basile Starynkevitch
 wrote:
> On Wed, 1 Dec 2010 10:18:58 -0800
> "H.J. Lu"  wrote:
>
>> Here is a proposal to update LTO plugin interface.
>
> How should we parse the above sentence?
>
> Is it about an interface to plugin inside binutils to support LTO?
>
> Is it about an interface for GCC plugins to help them be more LTO friendly?
>
> Is it about an interface inside the GOLD linker to dlopen the LTO plugin 
> provided with GCC sources?
>

It is about external linker plugin API as specified at

http://gcc.gnu.org/wiki/whopr/driver

-- 
H.J.


concurrence.h compiler error on head

2010-12-01 Thread Joel Sherrill

Hi,

Compiling C++ on the head targeting
arm-rtems4.11, I get this error.  It doesn't
occur on m68k-rtems4.11.  I don't know about
the other targets yet.

Any suggestions on what is causing this
and how to resolve it?


libtool: compile:  /users/joel/test-gcc/b-gcc1-arm/./gcc/xgcc 
-shared-libgcc -B/users/joel/test-gcc/b-gcc1-arm/./gcc -nostdinc++ 
-L/users/joel/test-gcc/b-gcc1-arm/arm-rtems4.11/libstdc++-v3/src 
-L/users/joel/test-gcc/b-gcc1-arm/arm-rtems4.11/libstdc++-v3/src/.libs 
-nostdinc -B/users/joel/test-gcc/b-gcc1-arm/arm-rtems4.11/newlib/ 
-isystem 
/users/joel/test-gcc/b-gcc1-arm/arm-rtems4.11/newlib/targ-include 
-isystem /users/joel/test-gcc/gcc-svn/newlib/libc/include 
-B/users/joel/test-gcc/install-svn/arm-rtems4.11/bin/ 
-B/users/joel/test-gcc/install-svn/arm-rtems4.11/lib/ -isystem 
/users/joel/test-gcc/install-svn/arm-rtems4.11/include -isystem 
/users/joel/test-gcc/install-svn/arm-rtems4.11/sys-include 
-I/users/joel/test-gcc/b-gcc1-arm/arm-rtems4.11/libstdc++-v3/include/arm-rtems4.11 
-I/users/joel/test-gcc/b-gcc1-arm/arm-rtems4.11/libstdc++-v3/include 
-I/users/joel/test-gcc/gcc-svn/libstdc++-v3/libsupc++ 
-fno-implicit-templates -Wall -Wextra -Wwrite-strings -Wcast-qual 
-fdiagnostics-show-location=once -ffunction-sections -fdata-sections -g 
-O2 -c /users/joel/test-gcc/gcc-svn/libstdc++-v3/src/pool_allocator.cc 
-o pool_allocator.o
In file included from 
/users/joel/test-gcc/b-gcc1-arm/arm-rtems4.11/libstdc++-v3/include/arm-rtems4.11/bits/gthr.h:162:0,
 from 
/users/joel/test-gcc/b-gcc1-arm/arm-rtems4.11/libstdc++-v3/include/ext/atomicity.h:34,
 from 
/users/joel/test-gcc/b-gcc1-arm/arm-rtems4.11/libstdc++-v3/include/ext/pool_allocator.h:50,
 from 
/users/joel/test-gcc/gcc-svn/libstdc++-v3/src/pool_allocator.cc:31:
/users/joel/test-gcc/b-gcc1-arm/arm-rtems4.11/libstdc++-v3/include/ext/concurrence.h: 
In destructor '__gnu_cxx::__scoped_lock::~__scoped_lock()':
/users/joel/test-gcc/b-gcc1-arm/arm-rtems4.11/libstdc++-v3/include/ext/concurrence.h:313:5: 
error: __gnu_cxx::__scoped_lock::~__scoped_lock() causes a section type 
conflict
In file included from 
/users/joel/test-gcc/b-gcc1-arm/arm-rtems4.11/libstdc++-v3/include/ext/pool_allocator.h:51:0,
 from 
/users/joel/test-gcc/gcc-svn/libstdc++-v3/src/pool_allocator.cc:31:
/users/joel/test-gcc/b-gcc1-arm/arm-rtems4.11/libstdc++-v3/include/ext/concurrence.h:313:5: 
error: __gnu_cxx::__scoped_lock::~__scoped_lock() causes a section type 
conflict



--
Joel Sherrill, Ph.D.             Director of Research & Development
joel.sherr...@oarcorp.com        On-Line Applications Research
Ask me about RTEMS: a free RTOS  Huntsville AL 35805
                                 Support Available (256) 722-9985




Re: Update LTO plugin interface

2010-12-01 Thread Ian Lance Taylor
"H.J. Lu"  writes:

>   b. Compiler plugin controls what linker uses to generate the final 
> executable:
>   i. The linker command line order should be the same, with or 
> without LTO.
>   c. Add a cmdline bit field to
>   struct ld_plugin_input_file
>   {
>  const char *name;
>  int fd;
>  off_t offset;
>  off_t filesize;
>  void *handle;
>  unsigned int cmdline : 1;
>   };

Just make it an int.  But I don't see why this is needed.  The plugin
already knows the files that it passed to add_input_file and
add_input_library.  Why does it need the linker to report back where the
file came from?  Why doesn't the plugin just keep track?

Ian


Re: Update LTO plugin interface

2010-12-01 Thread H.J. Lu
On Wed, Dec 1, 2010 at 10:54 AM, Ian Lance Taylor  wrote:
> "H.J. Lu"  writes:
>
>>       b. Compiler plugin controls what linker uses to generate the final 
>> executable:
>>               i. The linker command line order should be the same, with or 
>> without LTO.
>>       c. Add a cmdline bit field to
>>       struct ld_plugin_input_file
>>       {
>>          const char *name;
>>          int fd;
>>          off_t offset;
>>          off_t filesize;
>>          void *handle;
>>          unsigned int cmdline : 1;
>>       };
>
> Just make it an int.  But I don't see why this is needed.  The plugin
> already knows the files that it passed to add_input_file and
> add_input_library.  Why does it need to linker to report back where the
> file came from?  Why doesn't the plugin just keep track?
>

It is used to keep the same linker command line order. With LTO, the
linker should use

crtX.o *trans*.o -lbar -lgcc -lc ... crtX.o

instead of

crtX.o -lbar -lgcc -lc ... crtX.o  *trans*.o

to generate the final executable.  The two orders may generate different
executables.

-- 
H.J.


Re: Update LTO plugin interface

2010-12-01 Thread Jan Hubicka
> On Wed, Dec 1, 2010 at 10:54 AM, Ian Lance Taylor  wrote:
> > "H.J. Lu"  writes:
> >
> >>       b. Compiler plugin controls what linker uses to generate the final 
> >> executable:
> >>               i. The linker command line order should be the same, with or 
> >> without LTO.
> >>       c. Add a cmdline bit field to
> >>       struct ld_plugin_input_file
> >>       {
> >>          const char *name;
> >>          int fd;
> >>          off_t offset;
> >>          off_t filesize;
> >>          void *handle;
> >>          unsigned int cmdline : 1;
> >>       };
> >
> > Just make it an int.  But I don't see why this is needed.  The plugin
> > already knows the files that it passed to add_input_file and
> > add_input_library.  Why does it need to linker to report back where the
> > file came from?  Why doesn't the plugin just keep track?
> >
> 
> It is used to keep the same linker command line order. With LTO,
> linker should use
> 
> crtX.o *trans*.o -lbar -lgcc -lc ... crtX.o
> 
> instead of
> 
> crtX.o -lbar -lgcc -lc ... crtX.o  *trans*.o
> 
> to generate final executable.  2 orders may generate different
> executables.

Hmm, and when I have something like

crtX.o non-lto1.o lto1.o non-lto2.o lto2.o  crtX.o

and the linker plugin produces ltrans0.o combining both lto1.o and lto2.o, how
will we deal with non-lto2.o?

If we get into extending the linker plugin interface, it would be great if we
would do something about COMDAT.  We now have RESOLVED and RESOLVED_IRONLY,
while the problem is that all non-hidden COMDAT symbols get RESOLVED, which
pretty much fixes them in the output library.

I would propose adding RESOLVED_IRDYNAMIC for cases where the symbol was
resolved IRONLY except that it is externally visible to the dynamic linker.
We can then allow the compiler to optimize this symbol out (the same way as
IRONLY) if it knows it may or may not be exported - i.e. from the COMDAT
flag or via -fwhole-program.

Honza
> 
> -- 
> H.J.


Re: Update LTO plugin interface

2010-12-01 Thread H.J. Lu
2010/12/1 Jan Hubicka :
>> On Wed, Dec 1, 2010 at 10:54 AM, Ian Lance Taylor  wrote:
>> > "H.J. Lu"  writes:
>> >
>> >>       b. Compiler plugin controls what linker uses to generate the final 
>> >> executable:
>> >>               i. The linker command line order should be the same, with 
>> >> or without LTO.
>> >>       c. Add a cmdline bit field to
>> >>       struct ld_plugin_input_file
>> >>       {
>> >>          const char *name;
>> >>          int fd;
>> >>          off_t offset;
>> >>          off_t filesize;
>> >>          void *handle;
>> >>          unsigned int cmdline : 1;
>> >>       };
>> >
>> > Just make it an int.  But I don't see why this is needed.  The plugin
>> > already knows the files that it passed to add_input_file and
>> > add_input_library.  Why does it need to linker to report back where the
>> > file came from?  Why doesn't the plugin just keep track?
>> >
>>
>> It is used to keep the same linker command line order. With LTO,
>> linker should use
>>
>> crtX.o *trans*.o -lbar -lgcc -lc ... crtX.o
>>
>> instead of
>>
>> crtX.o -lbar -lgcc -lc ... crtX.o  *trans*.o
>>
>> to generate final executable.  2 orders may generate different
>> executables.
>
> Hmm and when I have something like
>
> ctrX.o non-lto1.o lto1.o non-lto2.o lto2.o  crtX.o
> and then linker plugin produce ltrans0.o combining both lto1.o and lto2.o, ho
> we will deal with non-lto2.o?
>

My current implementation groups all LTO files together, so the linker will see

crtX.o non-lto1.o ltrans0.o non-lto2.o  crtX.o

-- 
H.J.


Re: Update LTO plugin interface

2010-12-01 Thread Ian Lance Taylor
"H.J. Lu"  writes:

> On Wed, Dec 1, 2010 at 10:54 AM, Ian Lance Taylor  wrote:
>> "H.J. Lu"  writes:
>>
>>>       b. Compiler plugin controls what linker uses to generate the final 
>>> executable:
>>>               i. The linker command line order should be the same, with or 
>>> without LTO.
>>>       c. Add a cmdline bit field to
>>>       struct ld_plugin_input_file
>>>       {
>>>          const char *name;
>>>          int fd;
>>>          off_t offset;
>>>          off_t filesize;
>>>          void *handle;
>>>          unsigned int cmdline : 1;
>>>       };
>>
>> Just make it an int.  But I don't see why this is needed.  The plugin
>> already knows the files that it passed to add_input_file and
>> add_input_library.  Why does it need to linker to report back where the
>> file came from?  Why doesn't the plugin just keep track?
>>
>
> It is used to keep the same linker command line order. With LTO,
> linker should use
>
> crtX.o *trans*.o -lbar -lgcc -lc ... crtX.o
>
> instead of
>
> crtX.o -lbar -lgcc -lc ... crtX.o  *trans*.o
>
> to generate final executable.  2 orders may generate different
> executables.

I'm sorry, I'm missing something.  What does adding that bit have to do
with keeping the same linker command line order?

Is your concern that when the plugin adds a new input file to the link,
that new input file does not cause additional objects to be pulled out
of archives later in the link?  At least in gold, what matters for that
is when the plugin calls the add_input_file or add_input_library
callback.  In gold it would be fairly difficult to have that work any
other way.

Ian


Re: IRA undoing sched1

2010-12-01 Thread Paul Koning

On Nov 29, 2010, at 9:51 PM, Vladimir Makarov wrote:

> On 11/29/2010 08:52 PM, Paul Koning wrote:
>> I'm doing some experiments to get to know GCC better, and something is 
>> puzzling me.
>> 
>> I have defined an md file with DFA and costs describing the fact that loads 
>> take a while (as do stores). Also, there is no memory to memory move, only 
>> memory to/from register.
>> 
>> Test program is basically a=b; c=d; e=f; g=h;
>> 
>> Sched1, as expected, turns this into four loads followed by four stores, 
>> exploiting the pipeline.
>> 
>> Then IRA kicks in.  It shuffles the insns back into load/store, load/store 
>> pairs, essentially the source code order.  It looks like it's doing that to 
>> reduce the number of registers used.  Fair enough, but this makes the code 
>> less efficient.  I don't see a way to tell IRA not to do this.
>> 
> Most probably that happens because of ira.c::update_equiv_regs.   This 
> function was inherited from the old register allocator.  The major goal of 
> the function is to find equivalent memory/constants/invariants for pseudos 
> which can be used by reload pass.  Pseudo equivalence also affects live range 
> splitting decision in IRA.
> 
> Update_equiv_regs can also move insns initiating pseudo equivalences close to 
> the pseudo usage.  You could try to prevent this and to see what happens.  
> IMO preventing such insn moving will do more harm on performance on SPEC 
> benchmarks for x86/x86-64 processors.
>> As it happens, there's a secondary reload involved: the loads are into one 
>> set of registers but the stores from another, so a register to register move 
>> is added in by reload.  Does that explain the behavior?  I tried changing 
>> the cover_classes, but that doesn't make a difference.
>> 
> It is hard to say without the dump file.  If everything is correctly defined, 
> it should not happen.
> 

I extended the test code a little, and fed it to a mips64el-elf targeted gcc.  
It showed the same pattern in one of the two functions but not the other.  The 
test code is test8.c (attached).

What I see in the assembly output (test8.s, also attached) is that foo() has a 
load then store then load then store pattern, which contradicts what sched1 
constructed and doesn't take advantage of the pipeline.  However, bar() does 
use the pipeline.  I don't know what's different between these two.

Do you want some dump file (which ones)?  Or you could just reproduce this with 
the current gcc, it's a standard target build.  The compile was -O2 
-mtune=mips64r2 -mabi=n32.

paul


test8.c
Description: Binary data


test8.s
Description: Binary data


Re: Update LTO plugin interface

2010-12-01 Thread H.J. Lu
On Wed, Dec 1, 2010 at 11:12 AM, Ian Lance Taylor  wrote:
> "H.J. Lu"  writes:
>
>> On Wed, Dec 1, 2010 at 10:54 AM, Ian Lance Taylor  wrote:
>>> "H.J. Lu"  writes:
>>>
       b. Compiler plugin controls what linker uses to generate the final 
 executable:
               i. The linker command line order should be the same, with or 
 without LTO.
       c. Add a cmdline bit field to
       struct ld_plugin_input_file
       {
          const char *name;
          int fd;
          off_t offset;
          off_t filesize;
          void *handle;
          unsigned int cmdline : 1;
       };
>>>
>>> Just make it an int.  But I don't see why this is needed.  The plugin
>>> already knows the files that it passed to add_input_file and
>>> add_input_library.  Why does it need to linker to report back where the
>>> file came from?  Why doesn't the plugin just keep track?
>>>
>>
>> It is used to keep the same linker command line order. With LTO,
>> linker should use
>>
>> crtX.o *trans*.o -lbar -lgcc -lc ... crtX.o
>>
>> instead of
>>
>> crtX.o -lbar -lgcc -lc ... crtX.o  *trans*.o
>>
>> to generate final executable.  2 orders may generate different
>> executables.
>
> I'm sorry, I'm missing something.  What does adding that bit have to do
> with keeping the same linker command line order?

We don't want to pass all unclaimed files handed to the plugin back to the
linker.  On Linux,

[...@gnu-6 gcc-lto]$ cat /usr/lib/libc.so
/* GNU ld script
   Use the shared library, but some functions are only in
   the static library, so try that secondarily.  */
OUTPUT_FORMAT(elf32-i386)
GROUP ( /lib/libc.so.6 /usr/lib/libc_nonshared.a  AS_NEEDED (
/lib/ld-linux.so.2 ) )
[...@gnu-6 gcc-lto]$

The linker should use /usr/lib/libc.so, not /lib/libc.so.6,
/usr/lib/libc_nonshared.a, and /lib/ld-linux.so.2, for the final link.
With the new cmdline field, the plugin can pass back to the linker only
those unclaimed files that came from the linker command line.

> Is your concern that when the plugin adds a new input file to the link,
> that new input file does not cause additional objects to be pulled out
> of archives later in the link?  At least in gold, what matters for that
> is when the plugin calls the add_input_file or add_input_library
> callback.  In gold it would be fairly difficult to have that work any
> other way.
>

Please try the testcase in

http://sourceware.org/bugzilla/show_bug.cgi?id=12248#c5

with gold.

-- 
H.J.


Re: ppc: const data not in RO section

2010-12-01 Thread Joakim Tjernlund
Nathan Froyd  wrote on 2010/12/01 18:33:23:
>
> On Tue, Nov 30, 2010 at 08:04:06PM +0100, Joakim Tjernlund wrote:
> > Why is not
> >   const char cstr[] = "mystr";
> >   const int myint = 3;
> > added to a read only section?
> > Especially since
> >   const int myarr[]={1,2,3};
> > is placed in .rodata.
> >
> > hmm, -G 0 does place these in .rodata but why do I have to specify that?
>
> It would help if you specified the target and the compiler version that
> you used.
>
> The compiler I have (~4.5) places myint and mystr in .sdata; since
> they're so small, GCC thinks that placing myint and mystr in .sdata is
> beneficial.  Why do you think -G 0 should be the default?
>
> It does seem kind of odd that "mystr" is placed in .sdata, since
> rs6000_elf_in_small_data_p indicates that string constants shouldn't be
> in .sdata.  You could investigate and submit a patch or file a bug.

This I find a bit strange:
 gcc  tst.c -Os  -fpic  -msdata=sysv
 tst.c:1: error: -fpic and -msdata=sysv are incompatible

Yet it seems like -msdata=sysv is the default, as gcc generates
.sdata for variables.
Perhaps -fpic/-fPIC should imply -mno-sdata?

 Jocke



Re: concurrence.h compiler error on head

2010-12-01 Thread John Tytgat
In message <4cf69533.10...@oarcorp.com>
  Joel Sherrill  wrote:

> Hi,
> 
> Compiling C++ on the head targeting
> arm-rtems4.11, I get this error.  It doesn't
> occur on m68k-rtems4.11.  I don't know about
> the other targets yet.
> 
> Any suggestions on what is causing this
> and how to resolve it?
> 
> [...]
> In destructor '__gnu_cxx::__scoped_lock::~__scoped_lock()':
> /users/joel/test-gcc/b-gcc1-arm/arm-rtems4.11/libstdc++-v3/include/ext/concurrence.h:313:5:
>  
> error: __gnu_cxx::__scoped_lock::~__scoped_lock() causes a section type 
> conflict
> In file included from 
> /users/joel/test-gcc/b-gcc1-arm/arm-rtems4.11/libstdc++-v3/include/ext/pool_allocator.h:51:0,
>   from 
> /users/joel/test-gcc/gcc-svn/libstdc++-v3/src/pool_allocator.cc:31:
> /users/joel/test-gcc/b-gcc1-arm/arm-rtems4.11/libstdc++-v3/include/ext/concurrence.h:313:5:
>  
> error: __gnu_cxx::__scoped_lock::~__scoped_lock() causes a section type 
> conflict

This is http://gcc.gnu.org/bugzilla/show_bug.cgi?id=46667 and caused
by Jan Hubicka's change r167085.

John.
-- 
John Tytgat, in his comfy chair at home
john.tyt...@aaug.net


Re: concurrence.h compiler error on head

2010-12-01 Thread Joel Sherrill

On 12/01/2010 02:16 PM, John Tytgat wrote:

In message<4cf69533.10...@oarcorp.com>
   Joel Sherrill  wrote:


Hi,

Compiling C++ on the head targeting
arm-rtems4.11, I get this error.  It doesn't
occur on m68k-rtems4.11.  I don't know about
the other targets yet.

Any suggestions on what is causing this
and how to resolve it?

[...]
In destructor '__gnu_cxx::__scoped_lock::~__scoped_lock()':
/users/joel/test-gcc/b-gcc1-arm/arm-rtems4.11/libstdc++-v3/include/ext/concurrence.h:313:5:
error: __gnu_cxx::__scoped_lock::~__scoped_lock() causes a section type
conflict
In file included from
/users/joel/test-gcc/b-gcc1-arm/arm-rtems4.11/libstdc++-v3/include/ext/pool_allocator.h:51:0,
   from
/users/joel/test-gcc/gcc-svn/libstdc++-v3/src/pool_allocator.cc:31:
/users/joel/test-gcc/b-gcc1-arm/arm-rtems4.11/libstdc++-v3/include/ext/concurrence.h:313:5:
error: __gnu_cxx::__scoped_lock::~__scoped_lock() causes a section type
conflict

This is http://gcc.gnu.org/bugzilla/show_bug.cgi?id=46667 and caused
by Jan Hubicka's change r167085.


Thanks.  I am now on the cc list for this one.

John.



--
Joel Sherrill, Ph.D.             Director of Research & Development
joel.sherr...@oarcorp.com        On-Line Applications Research
Ask me about RTEMS: a free RTOS  Huntsville AL 35805
Support Available                (256) 722-9985




Re: Update LTO plugin interface

2010-12-01 Thread Ian Lance Taylor
"H.J. Lu"  writes:

> We don't want to put all unclaimed files passed to plugin back to linker.
> On Linux,
>
> [...@gnu-6 gcc-lto]$ cat /usr/lib/libc.so
> /* GNU ld script
>Use the shared library, but some functions are only in
>the static library, so try that secondarily.  */
> OUTPUT_FORMAT(elf32-i386)
> GROUP ( /lib/libc.so.6 /usr/lib/libc_nonshared.a  AS_NEEDED (
> /lib/ld-linux.so.2 ) )
> [...@gnu-6 gcc-lto]$
>
> Linker should use /usr/lib/libc.so, not /lib/libc.so.6,
> /usr/lib/libc_nonshared.a,
> /lib/ld-linux.so.2,  for final linker.  With the new cmdline field,
> plugin can only pass
> those unclaimed files from linker command line back to linker for the
> final link.

Thanks, at least now I understand what the new field means: it is true
for a file explicitly named on the command line, false for a file named
in a linker script.

Are you planning to have the plugin claim all files, even linker
scripts, and then pass only the command line files back to the linker?

Ian


Re: Update LTO plugin interface

2010-12-01 Thread H.J. Lu
On Wed, Dec 1, 2010 at 12:37 PM, Ian Lance Taylor  wrote:
> "H.J. Lu"  writes:
>
>> We don't want to put all unclaimed files passed to plugin back to linker.
>> On Linux,
>>
>> [...@gnu-6 gcc-lto]$ cat /usr/lib/libc.so
>> /* GNU ld script
>>    Use the shared library, but some functions are only in
>>    the static library, so try that secondarily.  */
>> OUTPUT_FORMAT(elf32-i386)
>> GROUP ( /lib/libc.so.6 /usr/lib/libc_nonshared.a  AS_NEEDED (
>> /lib/ld-linux.so.2 ) )
>> [...@gnu-6 gcc-lto]$
>>
>> Linker should use /usr/lib/libc.so, not /lib/libc.so.6,
>> /usr/lib/libc_nonshared.a,
>> /lib/ld-linux.so.2,  for final linker.  With the new cmdline field,
>> plugin can only pass
>> those unclaimed files from linker command line back to linker for the
>> final link.
>
> Thanks, at least now I understand what the new field means: it is true
> for a file explicitly named on the command line, false for a file named
> in a linker script.
>
> Are you planning to have the plugin claim all files, even linker
> scripts, and then pass only the command line files back to the linker?
>

The plugin will keep the same claim strategy.  For those files that aren't
claimed by the plugin, the plugin will save and pass them back to the linker
only if they are specified at the command line.


-- 
H.J.


Re: Update LTO plugin interface

2010-12-01 Thread Ian Lance Taylor
"H.J. Lu"  writes:

> On Wed, Dec 1, 2010 at 12:37 PM, Ian Lance Taylor  wrote:
>
>> Are you planning to have the plugin claim all files, even linker
>> scripts, and then pass only the command line files back to the linker?
>>
>
> Plugin will keep the same claim strategy.  For those aren't claimed by
> plugin, plugin will save and pass them back to linker only if they are
> specified at command line.

Just to be clear, that does not make sense as written.  If the plugin
does not claim a file, it should not then pass it back to the linker.

In fact, if the plugin claims all files, then as far as I can see your
new ld_plugin_input_file field is not required.  And if the plugin does
not claim all files, I don't see how this can work.

Ian


Re: Update LTO plugin interface

2010-12-01 Thread H.J. Lu
On Wed, Dec 1, 2010 at 12:55 PM, Ian Lance Taylor  wrote:
> "H.J. Lu"  writes:
>
>> On Wed, Dec 1, 2010 at 12:37 PM, Ian Lance Taylor  wrote:
>>
>>> Are you planning to have the plugin claim all files, even linker
>>> scripts, and then pass only the command line files back to the linker?
>>>
>>
>> Plugin will keep the same claim strategy.  For those aren't claimed by
>> plugin, plugin will save and pass them back to linker only if they are
>> specified at command line.
>
> Just to be clear, that does not make sense as written.  If the plugin
> does not claim a file, it should not then pass it back to the linker.

API has

typedef
enum ld_plugin_status
(*ld_plugin_claim_file_handler) (
  const struct ld_plugin_input_file *file, int *claimed);

For linker script, archive, DSO and object file without IR,
*claimed will return 0 and plugin will save and pass it back to
linker later in  if it is specified at command line.

> In fact, if the plugin claims all files, then as far as I can see your
> new ld_plugin_input_file field is not required.  And if the plugin does
> not claim all files, I don't see how this can work.

Stage 2 linker should:

1. Discard all previous inputs.
2. Generate the final executable with inputs from plugin, which include
linker script, archive, DSO and object file without IR specified at
command line as well as trans files from LTO.

My implementation is available on hjl/lto branch at

http://git.kernel.org/?p=devel/binutils/hjl/x86.git;a=summary
http://git.kernel.org/?p=devel/gcc/hjl/x86.git;a=summary


-- 
H.J.


Re: Update LTO plugin interface

2010-12-01 Thread Ian Lance Taylor
"H.J. Lu"  writes:

> On Wed, Dec 1, 2010 at 12:55 PM, Ian Lance Taylor  wrote:
>> "H.J. Lu"  writes:
>>
>>> On Wed, Dec 1, 2010 at 12:37 PM, Ian Lance Taylor  wrote:
>>>
 Are you planning to have the plugin claim all files, even linker
 scripts, and then pass only the command line files back to the linker?

>>>
>>> Plugin will keep the same claim strategy.  For those aren't claimed by
>>> plugin, plugin will save and pass them back to linker only if they are
>>> specified at command line.
>>
>> Just to be clear, that does not make sense as written.  If the plugin
>> does not claim a file, it should not then pass it back to the linker.
>
> API has
>
> typedef
> enum ld_plugin_status
> (*ld_plugin_claim_file_handler) (
>   const struct ld_plugin_input_file *file, int *claimed);
>
> For linker script, archive, DSO and object file without IR,
> *claimed will return 0 and plugin will save and pass it back to
> linker later in  if it is specified at command line.

I don't understand what you wrote, so I am going to write what I think
happens.

The claim_file handler is an interface provided by the plugin itself.
The plugin will register it via LDPT_REGISTER_CLAIM_FILE_HOOK.  The
linker proper will call it for each input file.

In the case of the LTO plugin, this is the static function
claim_file_handler in lto-plugin.c.

If the plugin registers a claim_file handler, and, when the linker calls
it, it returns with *claimed == 0, then the linker will process the file
as it normally does.  Since the file will already have been processed,
it does not make sense for the plugin to then pass it back to the
linker.  The effect would be similar to listing the file twice on the
command line.


>> In fact, if the plugin claims all files, then as far as I can see your
>> new ld_plugin_input_file field is not required.  And if the plugin does
>> not claim all files, I don't see how this can work.
>
> Stage 2 linker should:
>
> 1. Discard all previous inputs.

How is this step done?


> My implementation is available on hjl/lto branch at

Thanks, but I don't see any changes to gold there, so I don't see what
you have done to change the plugin interface.

Ian


Re: Update LTO plugin interface

2010-12-01 Thread Richard Guenther
On Wed, Dec 1, 2010 at 10:28 PM, Ian Lance Taylor  wrote:
> "H.J. Lu"  writes:
>
>> On Wed, Dec 1, 2010 at 12:55 PM, Ian Lance Taylor  wrote:
>>> "H.J. Lu"  writes:
>>>
 On Wed, Dec 1, 2010 at 12:37 PM, Ian Lance Taylor  wrote:

> Are you planning to have the plugin claim all files, even linker
> scripts, and then pass only the command line files back to the linker?
>

 Plugin will keep the same claim strategy.  For those aren't claimed by
 plugin, plugin will save and pass them back to linker only if they are
 specified at command line.
>>>
>>> Just to be clear, that does not make sense as written.  If the plugin
>>> does not claim a file, it should not then pass it back to the linker.
>>
>> API has
>>
>> typedef
>> enum ld_plugin_status
>> (*ld_plugin_claim_file_handler) (
>>   const struct ld_plugin_input_file *file, int *claimed);
>>
>> For linker script, archive, DSO and object file without IR,
>> *claimed will return 0 and plugin will save and pass it back to
>> linker later in  if it is specified at command line.
>
> I don't understand what you wrote, so I am going to write what I think
> happens.
>
> The claim_file handler is an interface provided by the plugin itself.
> The plugin will register it via LDPT_REGISTER_CLAIM_FILE_HOOK.  The
> linker proper will call it for each input file.
>
> In the case of the LTO plugin, this is the static function
> claim_file_handler in lto-plugin.c.
>
> If the plugin registers a claim_file handler, and, when the linker calls
> it, it returns with *claimed == 0, then the linker will process the file
> as it normally does.  Since the file will already have been processed,
> it does not make sense for the plugin to then pass it back to the
> linker.  The effect would be similar to listing the file twice on the
> command line.

The basic problem is that if lto-plugin claims a file and provides a symtab
to the linker the link-time optimization might change that, including
adding new undefined symbols (think of libcalls).  The linker needs
to re-process even not-claimed static archives (such as libgcc) to
resolve those new undefs.  We hack around this by adding another
-lgcc at the end of the command-line, but that does change linker
resolution as the link order does matter.

Basically we need to trigger a complete re-link with the claimed
object files substituted for the link-time optimized ones.

Richard.

>
>>> In fact, if the plugin claims all files, then as far as I can see your
>>> new ld_plugin_input_file field is not required.  And if the plugin does
>>> not claim all files, I don't see how this can work.
>>
>> Stage 2 linker should:
>>
>> 1. Discard all previous inputs.
>
> How is this step done?
>
>
>> My implementation is available on hjl/lto branch at
>
> Thanks, but I don't see any changes to gold there, so I don't see what
> you have done to change the plugin interface.
>
> Ian
>


Re: Update LTO plugin interface

2010-12-01 Thread H.J. Lu
On Wed, Dec 1, 2010 at 1:28 PM, Ian Lance Taylor  wrote:
> "H.J. Lu"  writes:
>
>> On Wed, Dec 1, 2010 at 12:55 PM, Ian Lance Taylor  wrote:
>>> "H.J. Lu"  writes:
>>>
 On Wed, Dec 1, 2010 at 12:37 PM, Ian Lance Taylor  wrote:

> Are you planning to have the plugin claim all files, even linker
> scripts, and then pass only the command line files back to the linker?
>

 Plugin will keep the same claim strategy.  For those aren't claimed by
 plugin, plugin will save and pass them back to linker only if they are
 specified at command line.
>>>
>>> Just to be clear, that does not make sense as written.  If the plugin
>>> does not claim a file, it should not then pass it back to the linker.
>>
>> API has
>>
>> typedef
>> enum ld_plugin_status
>> (*ld_plugin_claim_file_handler) (
>>   const struct ld_plugin_input_file *file, int *claimed);
>>
>> For linker script, archive, DSO and object file without IR,
>> *claimed will return 0 and plugin will save and pass it back to
>> linker later in  if it is specified at command line.
>
> I don't understand what you wrote, so I am going to write what I think
> happens.
>
> The claim_file handler is an interface provided by the plugin itself.
> The plugin will register it via LDPT_REGISTER_CLAIM_FILE_HOOK.  The
> linker proper will call it for each input file.
>
> In the case of the LTO plugin, this is the static function
> claim_file_handler in lto-plugin.c.
>
> If the plugin registers a claim_file handler, and, when the linker calls
> it, it returns with *claimed == 0, then the linker will process the file
> as it normally does.  Since the file will already have been processed,
> it does not make sense for the plugin to then pass it back to the
> linker.  The effect would be similar to listing the file twice on the
> command line.

That is what "Discard all previous inputs" in stage 2 linking is for.

>
>>> In fact, if the plugin claims all files, then as far as I can see your
>>> new ld_plugin_input_file field is not required.  And if the plugin does
>>> not claim all files, I don't see how this can work.
>>
>> Stage 2 linker should:
>>
>> 1. Discard all previous inputs.
>
> How is this step done?

For the GNU linker, I mark all sections in a bfd file that
will be sent back from the plugin with SEC_EXCLUDE.  I also
free and recreate the output hash table.

>
>> My implementation is available on hjl/lto branch at
>
> Thanks, but I don't see any changes to gold there, so I don't see what
> you have done to change the plugin interface.
>

My changes should be visible now.


-- 
H.J.


Re: Update LTO plugin interface

2010-12-01 Thread H.J. Lu
On Wed, Dec 1, 2010 at 1:33 PM, Richard Guenther
 wrote:
> On Wed, Dec 1, 2010 at 10:28 PM, Ian Lance Taylor  wrote:
>> "H.J. Lu"  writes:
>>
>>> On Wed, Dec 1, 2010 at 12:55 PM, Ian Lance Taylor  wrote:
 "H.J. Lu"  writes:

> On Wed, Dec 1, 2010 at 12:37 PM, Ian Lance Taylor  wrote:
>
>> Are you planning to have the plugin claim all files, even linker
>> scripts, and then pass only the command line files back to the linker?
>>
>
> Plugin will keep the same claim strategy.  For those aren't claimed by
> plugin, plugin will save and pass them back to linker only if they are
> specified at command line.

 Just to be clear, that does not make sense as written.  If the plugin
 does not claim a file, it should not then pass it back to the linker.
>>>
>>> API has
>>>
>>> typedef
>>> enum ld_plugin_status
>>> (*ld_plugin_claim_file_handler) (
>>>   const struct ld_plugin_input_file *file, int *claimed);
>>>
>>> For linker script, archive, DSO and object file without IR,
>>> *claimed will return 0 and plugin will save and pass it back to
>>> linker later in  if it is specified at command line.
>>
>> I don't understand what you wrote, so I am going to write what I think
>> happens.
>>
>> The claim_file handler is an interface provided by the plugin itself.
>> The plugin will register it via LDPT_REGISTER_CLAIM_FILE_HOOK.  The
>> linker proper will call it for each input file.
>>
>> In the case of the LTO plugin, this is the static function
>> claim_file_handler in lto-plugin.c.
>>
>> If the plugin registers a claim_file handler, and, when the linker calls
>> it, it returns with *claimed == 0, then the linker will process the file
>> as it normally does.  Since the file will already have been processed,
>> it does not make sense for the plugin to then pass it back to the
>> linker.  The effect would be similar to listing the file twice on the
>> command line.
>
> The basic problem is that if lto-plugin claims a file and provides a symtab
> to the linker the link-time optimization might change that, including
> adding new undefined symbols (think of libcalls).  The linker needs
> to re-process even not-claimed static archives (such as libgcc) to
> resolve those new undefs.  We hack around this by adding another
> -lgcc at the end of the command-line, but that does change linker
> resolution as the link order does matter.
>
> Basically we need to trigger a complete re-link with the claimed
> object files substituted for the link-time optimized ones.
>

That is what my implementation does.


-- 
H.J.


Re: gccgo branch and darwin

2010-12-01 Thread Ian Lance Taylor
Arnaud Lacombe  writes:

> On Fri, Nov 19, 2010 at 11:58 PM, Arnaud Lacombe  wrote:
>> sysinfo.go:2874:13: error: use of undefined type 'func___sighandler_t'
>>
> gen-sysinfo.go has :
>
> // type ___sighandler_t func*(int32)

That is not valid Go, so this looks like a bug in godump.c.  What does
the type __sighandler_t look like in the .h file?

On GNU/Linux it looks like this in the .h file:

typedef void (*__sighandler_t) (int);

and I get this in gen-sysinfo.go:

type ___sighandler_t func(int32)

Thanks.


> I have not been able to find much
> info on function pointer in the specs, but I certainly did not look at
> the right place.

In Go a function type is written simply as

func(PARAMETER-TYPES) RESULT-TYPES

This is the type which in C would be called a pointer to function type.
Go does not have any type which corresponds to the C function type, as
opposed to the C pointer to function type.

Ian


Re: Update LTO plugin interface

2010-12-01 Thread Ian Lance Taylor
"H.J. Lu"  writes:

> On Wed, Dec 1, 2010 at 1:28 PM, Ian Lance Taylor  wrote:
>> "H.J. Lu"  writes:
>>> For linker script, archive, DSO and object file without IR,
>>> *claimed will return 0 and plugin will save and pass it back to
>>> linker later in  if it is specified at command line.
>>
>> I don't understand what you wrote, so I am going to write what I think
>> happens.
>>
>> The claim_file handler is an interface provided by the plugin itself.
>> The plugin will register it via LDPT_REGISTER_CLAIM_FILE_HOOK.  The
>> linker proper will call it for each input file.
>>
>> In the case of the LTO plugin, this is the static function
>> claim_file_handler in lto-plugin.c.
>>
>> If the plugin registers a claim_file handler, and, when the linker calls
>> it, it returns with *claimed == 0, then the linker will process the file
>> as it normally does.  Since the file will already have been processed,
>> it does not make sense for the plugin to then pass it back to the
>> linker.  The effect would be similar to listing the file twice on the
>> command line.
>
> That is what "Discard all previous inputs" in stage 2 linking is for.

But what does that mean?  Are you saying that the linker interface to
the plugin should change to work that way?  If we do that, then we
should change other aspects of the plugin interface as well.  It could
probably become quite a bit simpler.


The only reason we would ever need to do a complete relink is if the LTO
plugin can introduce arbitrary new symbol references.  Is that ever
possible?  If it is, we need to rethink the whole approach.  If the LTO
plugin can introduce arbitrary new symbol references, that means that the
LTO plugin can cause arbitrary objects to be pulled in from archives.
And that means that if we only run the plugin once, we are losing
possible optimizations, because the plugin will never see those new objects.


My suspicion is that the LTO plugin can only introduce a small bounded
set of new symbol references, namely those which we assume can be
satisfied from -lc or -lgcc.  Is that true?

Ian


Re: Update LTO plugin interface

2010-12-01 Thread Cary Coutant
> If we get into extending the linker plugin interface, it would be great
> if we would do something about COMDAT.  We now have RESOLVED and
> RESOLVED_IRONLY, while the problem is that all non-hidden COMDAT symbols
> get RESOLVED, which pretty much fixes them in the output library.
>
> I would propose adding RESOLVED_IRDYNAMIC for cases where the symbol was
> resolved IRONLY except that it is externally visible to the dynamic
> linker.  We can then allow the compiler to optimize this symbol out (same
> way as IRONLY) if it knows it may or may not be exported - i.e. from the
> COMDAT flag or via -fwhole-program.

(This is off the main topic...)

Actually, we have PREVAILING_DEF and PREVAILING_DEF_IRONLY, plus
RESOLVED_IR, RESOLVED_EXEC, and RESOLVED_DYN. If the symbol was
resolved elsewhere, we don't have any way to say whether it was IRONLY
or not, and that's a problem for common symbols, because there really
is no prevailing def -- the linker just allocates the space itself.
Currently, gold picks one of the common symbols and calls it the
prevailing def, but the one it picks might not actually be the largest
one. I'd prefer to add something like COMMON and COMMON_IRONLY as
possible resolutions.

I'm not sure if you're talking about that, or about real COMDAT
groups. As far as gold is concerned, it picks one COMDAT group and
throws the rest of them away, but for the one it picks, you'll get
either PREVAILING_DEF or PREVAILING_DEF_IRONLY. That should tell the
compiler what it needs to know.

I'm also not sure what you mean by "resolved IRONLY except that it is
externally visible to the dynamic linker." If we're building a shared
library, and the symbol is exported, it's not going to be IRONLY, and
I don't see how it would be valid to optimize it out. If we're
building an executable with --export-dynamic, same thing.

-cary


RE: ftp.net mirror of gcc.gnu.org is broken

2010-12-01 Thread Gerald Pfeifer
On Wed, 1 Dec 2010, Vu Tong Minh wrote:
> I fixed the symbolic link. It's http://mirror-fpt-telecom.fpt.net/gcc

Excellent, thanks a lot!

Gerald


Re: Update LTO plugin interface

2010-12-01 Thread Cary Coutant
>> That is what "Discard all previous inputs" in stage 2 linking is for.
>
> But what does that mean?  Are you saying that the linker interface to
> the plugin should change to work that way?  If we do that, then we
> should change other aspects of the plugin interface as well.  It could
> probably become quite a bit simpler.
>
> The only reason we would ever need to do a complete relink is if the LTO
> plugin can introduce arbitrary new symbol references.  Is that ever
> possible?  If it is, we need to rethink the whole approach.  If the LTO
> plugin can introduce arbitrary new symbol references, that means that
> LTO plugin can cause arbitrary objects to be pulled in from archives.
> And that means that if we only run the plugin once, we are losing
> possible optimizations, because the plugin will never see those new objects.
>
> My suspicion is that the LTO plugin can only introduce a small bounded
> set of new symbol references, namely those which we assume can be
> satisfied from -lc or -lgcc. That's not exactly

Exactly. The plugin API was designed for this model -- if you want to
start the link all over again, you may as well stick with the collect2
approach and enhance it to deal with archives of IR files.

The plugin API, as implemented in gold (not sure about gnu ld), does
maintain the original order of input files as far as symbol binding is
concerned. When IR files are claimed, the plugin provides the list of
symbols defined and referenced, and the linker builds the symbol table
as if those files were linked in at that particular spot in the
command line. When the compiler provides real definitions of those
symbols later, the real definitions simply replace the "placeholders"
that were left in the linker's symbol table. The only aspect of link
order that isn't maintained is the physical order of the sections in
memory.

As Ian noted, if the compiler introduces new references that weren't
there before, the new references must be from a limited set of
libcalls that the backend can introduce, and those should all be
resolved with an extra pass through -lc or -lgcc. That's not exactly
pretty, but I don't see how it destroys the notion of link order --
the only way those new symbols could have been resolved differently is
if a user library interposed definitions for the libcall, and those
certainly can't be what the compiler intended to bind to. In PR 12248,
I think it's questionable to claim that the compiler-introduced call
to __udivdi3 should not resolve to the version in libgcc. Sure, I
understand it's useful for library developers while debugging and
testing, but an ordinary user certainly can't count on his own
definition of that routine to get called -- the compiler might
generate the division inline, or call a different specialized version.
All of these routines are outside the user's namespace, and we should
be able to optimize without regard for what the user's libraries might
contain.

An improvement could be for the claim file handler to determine what
libcalls might be introduced and add them to the list of referenced
symbols so that the linker can bring in the definitions in the
original pass through the input files -- any that end up not being
referenced can be garbage collected. Alternatively, we could do a
whole-archive link of the library that contains the libcalls, again
discarding unreferenced routines via garbage collection. Neither of
these require a change to the API.

-cary


Re: operator new[] overflow (PR 19351)

2010-12-01 Thread Chris Lattner

On Nov 30, 2010, at 3:12 PM, Joe Buck wrote:

> On Tue, Nov 30, 2010 at 01:49:23PM -0800, Gabriel Dos Reis wrote:
>> The existing GCC behaviour is a bit more perverse than the
>> C malloc() case as in
>> 
>>   new T[n]
>> 
>> there is no multiplication that could be credited to careless programmer.
>> The multiplication is introduced by GCC.
> 
> ... which suggests strongly that GCC should fix it.  Too bad the ABI is
> frozen; if the internal ABI kept the two values (the size of the type, and
> the number of values) separate and passed two arguments to the allocation
> function, it would be easy to do the right thing (through bad_alloc if the
> multiplication overflows).

You don't need any ABI changes to support this.  For example, clang compiles:

int *foo(long X) {
  return new int[X];
}

into:

__Z3fool:   ## @_Z3fool
Leh_func_begin0:
## BB#0:                                ## %entry
	movl	$4, %ecx
	movq	%rdi, %rax
	mulq	%rcx
	testq	%rdx, %rdx
	movq	$-1, %rdi
	cmoveq	%rax, %rdi
	jmp	__Znam

On overflow it just forces the size passed in to operator new to -1ULL, which 
throws bad_alloc.

-Chris


Re: [pph] New branch for incremental C++ parsing

2010-12-01 Thread Benjamin Kosnik

Hi Diego! Sorry to have missed this talk at the GCC Summit, this work
looks interesting. 

> We have created a new branch for the incremental parsing work
> that Lawrence and I described at the last GCC Summit
> (http://gcc.gnu.org/wiki/summit2010?action=AttachFile&do=get&target=IncrementalCompiler.pdf).
> 
> To get the branch:
> 
> $ svn co svn+ssh://gcc.gnu.org/svn/gcc/branches/pph

I've been trying to use this, and have some basic usage questions for
you.

From your description in the email above, in particular:

" The code currently implements a token cache on disk.  This is
currently enabled with -fpth (for Pre-Tokenized Headers).  Each
file in a translation unit gets its own .pth image. When a file
is found unchanged wrt the .pth image, its tokens are
instantiated out of the image instead of the text stream.

This saves on average ~15% of compilation time on C++.  PTH
images are factored, so a change in one file does not require
rebuilding the complete PTH image for the whole TU.  Additionally,
each PTH file is segmented into token hunks, each of which can be
validated and applied separately.  This allows reusing the same
PTH file in different translation units."

However, in use I am having problems with this. 

For instance, take the following two files:

1.cc
#include <iostream>
#include <vector>

int main()
{
  using namespace std;
  
  vector<unsigned int> v;
  
  for(unsigned int i = 0; i<100; ++i)
v.push_back(i);

  cout << v[10] << endl;

  return 0;
}


2.cc
#include <iostream>
#include <string>

int main()
{
  using namespace std;
  
  string s("100 count vector");

  cout << s << endl;

  return 0;
}

To compile the first, I check out and build the branch. I use this like
so:

mkdir tmp; cd tmp
g++ -fpth ../1.cc

This seems fine. I end up with an a.out executable, and 89
separate .pth files. The pth files are named like:

_usr_include_ctype_h.pth

or 

_mnt_share_bin_H_x86_64_gcc_pph_branch_20101201_binlib_gcc_x86_64_unknown_linux_gnu_4_6_0_include_c___4_6_0_x86_64_unknown_linux_gnu_bits_os_defines_h.pth

This seems as expected.

Now, I should be able to compile again, exact same compile line, and
use the cache. Like:

%g++ -fpth ../1.cc

In file included from ../test_multi_1.cc:33554432:0:
/usr/include/locale.h:179:2: error: #endif without #if
In file included from ../test_multi_1.cc:33554432:0:
/usr/include/sched.h:98:2: error: #endif without #if
In file included from ../test_multi_1.cc:33554432:0:
/usr/include/pthread.h:1119:2: error: #endif without #if
In file included from ../test_multi_1.cc:33554432:0:
/usr/include/unistd.h:938:2: error: #endif without #if
In file included from ../test_multi_1.cc:33554432:0:
/usr/include/wctype.h:7:2: error: #endif without #if
In file included from ../test_multi_1.cc:33554432:0:
/usr/include/wctype.h:285:2: error: #endif without #if

This seems to be an error. This is supposed to work,  correct? 

Then, assuming this worked, then (re-using 1's images)

%g++ -fpth ../2.cc

Should in theory re-use 1's images and generate any new images that are
necessary (according to the initial email. I understand results may
vary at the moment.)

This does not work atm, but gets errors similar to the #endif without
#if errors above, but for a different file than locale.h. 

From discussing this with you via email, it looks like there are two
options for -fpth, one with uses a timestamp (-fpth) and one that uses
an md5 hash (-fpth-md5). 

-fpth  // use timestamp
-fpth-md5 // use md5 of file

Please note the -fpth-md5 does not do anything on the branch at the
moment.  (I.e. using it means no .pth files are generated.)

Anyway. This is from my initial use and probing. Is it worth filing
these bugs in bugzilla for the pph branch, or is this branch kind of
dead while you work on the thing-after-pph-branch? 

I'd like to start documenting this project/branch on the GCC wiki. At
least the command options in gcc/c-family/c.opt, and have usage
examples. You'd mentioned that this may use the incremental linker
page, but as PPH/PTH is but one part of this I'm hoping to convince you
to use a new page, say PrettyCachedHeader or PPHPTH or FECaching or
something. Thoughts?

-benjamin


Re: Update LTO plugin interface

2010-12-01 Thread H.J. Lu
On Wed, Dec 1, 2010 at 3:06 PM, Cary Coutant  wrote:
>>> That is what "Discard all previous inputs" in stage 2 linking is for.
>>
>> But what does that mean?  Are you saying that the linker interface to
>> the plugin should change to work that way?  If we do that, then we
>> should change other aspects of the plugin interface as well.  It could
>> probably become quite a bit simpler.
>>
>> The only reason we would ever need to do a complete relink is if the LTO
>> plugin can introduce arbitrary new symbol references.  Is that ever
>> possible?  If it is, we need to rethink the whole approach.  If the LTO
>> plugin can introduce arbitrary new symbol references, that means that
>> LTO plugin can cause arbitrary objects to be pulled in from archives.
>> And that means that if we only run the plugin once, we are losing
>> possible optimizations, because the plugin will never see those new objects.
>>
>> My suspicion is that the LTO plugin can only introduce a small bounded
>> set of new symbol references, namely those which we assume can be
>> satisfied from -lc or -lgcc.  Is that true?
>
> Exactly. The plugin API was designed for this model -- if you want to
> start the link all over again, you may as well stick with the collect2
> approach and enhance it to deal with archives of IR files.

Some compilers duplicate the whole linker symbol resolution in their
"collect2" program to get it right.

> The plugin API, as implemented in gold (not sure about gnu ld), does
> maintain the original order of input files as far as symbol binding is
> concerned. When IR files are claimed, the plugin provides the list of
> symbols defined and referenced, and the linker builds the symbol table
> as if those files were linked in at that particular spot in the
> command line. When the compiler provides real definitions of those
> symbols later, the real definitions simply replace the "placeholders"
> that were left in the linker's symbol table. The only aspect of link
> order that isn't maintained is the physical order of the sections in
> memory.

That is exactly the problem my proposal tries to address.

> As Ian noted, if the compiler introduces new references that weren't
> there before, the new references must be from a limited set of
> libcalls that the backend can introduce, and those should all be
> resolved with an extra pass through -lc or -lgcc. That's not exactly
> pretty, but I don't see how it destroys the notion of link order --
> the only way those new symbols could have been resolved differently is
> if a user library interposed definitions for the libcall, and those
> certainly can't be what the compiler intended to bind to. In PR 12248,
> I think it's questionable to claim that the compiler-introduced call
> to __udivdi3 should not resolve to the version in libgcc. Sure, I
> understand it's useful for library developers while debugging and
> testing, but an ordinary user certainly can't count on his own
> definition of that routine to get called -- the compiler might
> generate the division inline, or call a different specialized version.
> All of these routines are outside the user's namespace, and we should
> be able to optimize without regard for what the user's libraries might
> contain.
>

__udivdi3 is just an example.  It can also happen to memcpy, or
any library calls generated by GCC. I am enclosing a testcase for memcpy.


-- 
H.J.


bug-2.tar.bz2
Description: BZip2 compressed data


Re: Update LTO plugin interface

2010-12-01 Thread Cary Coutant
>> The only aspect of link
>> order that isn't maintained is the physical order of the sections in
>> memory.
>
> That is exactly the problem my proposal tries to address.

Really? That's not at all what PR 12248 is about. The physical order
of the sections (meaning the order of contributions within each output
section) -- in the absence of any linker scripts -- should be
irrelevant. With linker scripts, or any other form of layout control,
the link order is decoupled from the layout anyway.

> __udivdi3 is just an example.  It can also happen to memcpy, or
> any library calls generated by GCC. I am enclosing a testcase for memcpy.

Regardless, if the compiler backend introduces a call to a runtime
support routine, it's expecting to bind to a specific routine in its
runtime support library. Anything else is unsupported. For gcc or
libgcc hackers, if you *really* need the interposed routine, it's
simple enough to link the .o instead of the .a, or use
--whole-archive.

Think about it -- any failure to bind to an interposed copy of memcpy
(or any other library call generated by gcc) is indistinguishable from
the compiler choosing to generate the code inline.

-cary


Re: [pph] New branch for incremental C++ parsing

2010-12-01 Thread Lawrence Crowl
On 12/1/10, Benjamin Kosnik  wrote:
> Hi Diego! Sorry to have missed this talk at the GCC Summit, this work
> looks interesting.
>
>> We have created a new branch for the incremental parsing work
>> that Lawrence and I described at the last GCC Summit
>> (http://gcc.gnu.org/wiki/summit2010?action=AttachFile&do=get&target=IncrementalCompiler.pdf).
>>
>> To get the branch:
>>
>> $ svn co svn+ssh://gcc.gnu.org/svn/gcc/branches/pph
>
> I've been trying to use this, and have some basic usage questions for
> you.
>
> From your description in the email above, in particular:
>
> " The code currently implements a token cache on disk.  This is
> currently enabled with -fpth (for Pre-Tokenized Headers).  Each
> file in a translation unit gets its own .pth image. When a file
> is found unchanged wrt the .pth image, its tokens are
> instantiated out of the image instead of the text stream.
>
> This saves on average ~15% of compilation time on C++.  PTH
> images are factored, so a change in one file does require
> building the complete PTH image for the whole TU.

What this is supposed to mean is that changing one header does _not_
necessarily invalidate other PTH files, most of which may be used
as-is in the remainder of the TU.

> Additionally,
> each PTH file is segmented into token hunks, each of which can be
> validated and applied separately.  This allows reusing the same
> PTH file in different translation units."
>
> However, in use I am having problems with this.
>
> For instance, take the following two files:
>
> 1.cc
> #include <vector>
> #include <iostream>
>
> int main()
> {
>   using namespace std;
>
>   vector<unsigned int> v;
>
>   for(unsigned int i = 0; i<100; ++i)
> v.push_back(i);
>
>   cout << v[10] << endl;
>
>   return 0;
> }
>
>
> 2.cc
> #include <string>
> #include <iostream>
>
> int main()
> {
>   using namespace std;
>
>   string s("100 count vector");
>
>   cout << s << endl;
>
>   return 0;
> }
>
> To compile the first, I check out and build the branch. I use this like
> so:
>
> mkdir tmp; cd tmp
> g++ -fpth ../1.cc
>
> This seems fine. I end up with an a.out executable, and 89
> separate .pth files. The pth files are named like:
>
> _usr_include_ctype_h.pth
>
> or
>
> _mnt_share_bin_H_x86_64_gcc_pph_branch_20101201_binlib_gcc_x86_64_unknown_linux_gnu_4_6_0_include_c___4_6_0_x86_64_unknown_linux_gnu_bits_os_defines_h.pth
>
> This seems as expected.
>
> Now, I should be able to compile again, exact same compile line, and
> use the cache. Like:
>
> %g++ -fpth ../1.cc
>
> In file included from ../test_multi_1.cc:33554432:0:
> /usr/include/locale.h:179:2: error: #endif without #if
> In file included from ../test_multi_1.cc:33554432:0:
> /usr/include/sched.h:98:2: error: #endif without #if
> In file included from ../test_multi_1.cc:33554432:0:
> /usr/include/pthread.h:1119:2: error: #endif without #if
> In file included from ../test_multi_1.cc:33554432:0:
> /usr/include/unistd.h:938:2: error: #endif without #if
> In file included from ../test_multi_1.cc:33554432:0:
> /usr/include/wctype.h:7:2: error: #endif without #if
> In file included from ../test_multi_1.cc:33554432:0:
> /usr/include/wctype.h:285:2: error: #endif without #if
>
> This seems to be an error. This is supposed to work,  correct?

There were some merge problems when moving from 4.4 to trunk.
I get slightly different, but also failing, results.  Eventually,
it is supposed to work.

>
> Then, assuming this worked, then (re-using 1's images)
>
> %g++ -fpth ../2.cc
>
> Should in theory re-use 1's images and generate any new images that are
> necessary (according to the initial email. I understand results may
> vary at the moment.)
>
> This does not work atm, but gets errors similar to the #endif without
> #if errors above, but for a different file than locale.h.
>
> From discussing this with you via email, it looks like there are two
> options for -fpth, one with uses a timestamp (-fpth) and one that uses
> an md5 hash (-fpth-md5).
>
> -fpth  // use timestamp
> -fpth-md5 // use md5 of file
>
> Please note the -fpth-md5 does not do anything on the branch at the
> moment. (I.e. using it means no .pth files are generated.)
>
> Anyway. This is from my initial use and probing. Is it worth filing
> these bugs in bugzilla for the pph branch, or is this branch kind of
> dead while you work on the thing-after-pph-branch?
>
> I'd like to start documenting this project/branch on the GCC wiki. At
> least the command options in gcc/c-family/c.opt, and have usage
> examples. You'd mentioned that this may use the incremental linker
> page, but as PPH/PTH is but one part of this I'm hoping to convince you
> to use a new page, say PrettyCachedHeader or PPHPTH or FECaching or
> something. Thoughts?

Diego's on vacation (or holiday) right now, so it might be a while
before he answers.

-- 
Lawrence Crowl


Re: Update LTO plugin interface

2010-12-01 Thread Jan Hubicka
> I'm also not sure what you mean by "resolved IRONLY except that it is
> externally visible to the dynamic linker." If we're building a shared
> library, and the symbol is exported, it's not going to be IRONLY, and
> I don't see how it would be valid to optimize it out. If we're

Well, the typical COMDAT symbols (ignoring side cases) need to be put into
binary/library only if they are actually used, as all the other DSOs will define
them too if they are used there.
So it is valid to optimize out COMDAT after you optimized out all its uses. This
commonly happens at linktime.

Honza

> building an executable with --export-dynamic, same thing.
> 
> -cary


Re: Update LTO plugin interface

2010-12-01 Thread Ian Lance Taylor
"H.J. Lu"  writes:

> __udivdi3 is just an example.  It can also happen to memcpy, or
> any library calls generated by GCC. I am enclosing a testcase for memcpy.

I believe we can solve that specific problem much more efficiently than
requiring a complete link of all the input files.  We currently solve it
using the -pass-through option which is passed to the linker plugin.
Are there any cases for which using -pass-through=-lc
-pass-through=-lgcc would not be a complete solution?

Ian


Re: Update LTO plugin interface

2010-12-01 Thread Cary Coutant
>> I'm also not sure what you mean by "resolved IRONLY except that it is
>> externally visible to the dynamic linker." If we're building a shared
>> library, and the symbol is exported, it's not going to be IRONLY, and
>> I don't see how it would be valid to optimize it out. If we're
>
> Well, the typical COMDAT symbols (ignoring side cases) needs to be put into
> binary/library only if they are actually used, as all the other DSOs will 
> define
> them too if they are used there.
> So it is valid to optimize out COMDAT after you optimized out all its uses. 
> This
> commonly happens at linktime.

Ahh, OK. I was worried about those side cases where sometimes a pure
reference is emitted. From a linker point of view, that's something
that theoretically could happen, although it may be the case that we
don't actually have to support it. If we had a resolution like
PREVAILING_DEF_IRONLY_BUT_EXPORTED (preferably something shorter than
that), I think that would give the compiler the information it needs.
Is that pretty much what your RESOLVED_IRDYNAMIC was intended to mean?

Another thing that I don't remember offhand whether I got right or not
in gold is that if a COMDAT group is defined in IR and non-IR files,
we want to choose one of the IR files as the instance to keep. I'll
have to check.

-cary


Re: Update LTO plugin interface

2010-12-01 Thread Jan Hubicka
> >> I'm also not sure what you mean by "resolved IRONLY except that it is
> >> externally visible to the dynamic linker." If we're building a shared
> >> library, and the symbol is exported, it's not going to be IRONLY, and
> >> I don't see how it would be valid to optimize it out. If we're
> >
> > Well, the typical COMDAT symbols (ignoring side cases) needs to be put into
> > binary/library only if they are actually used, as all the other DSOs will 
> > define
> > them too if they are used there.
> > So it is valid to optimize out COMDAT after you optimized out all its uses. 
> > This
> > commonly happens at linktime.
> 
> Ahh, OK. I was worried about those side cases where sometimes a pure
> reference is emitted. From a linker point of view, that's something
> that theoretically could happen, although it may be the case that we
> don't actually have to support it. If we had a resolution like
> PREVAILING_DEF_IRONLY_BUT_EXPORTED (preferably something shorter than
> that), I think that would give the compiler the information it needs.
> Is that pretty much what your RESOLVED_IRDYNAMIC was intended to mean?

Ah, yes.  My first attempt at a name was the same as yours ;)
GCC knows what COMDATs have to be output even if unused.

> 
> Another thing that I don't remember offhand whether I got right or not
> in gold is that if a COMDAT group is defined in IR and non-IR files,
> we want to choose one of the IR files as the instance to keep. I'll
> have to check.

On the GCC side I am trying to keep those grouped.  So as soon as you ask for
one symbol from the group, you will get all of them, even if you then
decide to resolve the others from another group (which indeed should not happen).

Also from a QOI point of view, it is better if the linker chooses the variant
with IR definitions over the variant without, when given multiple variants.

Honza
> 
> -cary


Re: Update LTO plugin interface

2010-12-01 Thread H.J. Lu
On Wed, Dec 1, 2010 at 4:48 PM, Ian Lance Taylor  wrote:
> "H.J. Lu"  writes:
>
>> __udivdi3 is just an example.  It can also happen to memcpy, or
>> any library calls generated by GCC. I am enclosing a testcase for memcpy.
>
> I believe we can solve that specific problem much more efficiently than
> requiring a complete link of all the input files.  We currently solve it
> using the -pass-through option which is passed to the linker plugin.
> Are there any cases for which using -pass-through=-lc
> -pass-through=-lgcc would not be a complete solution?
>

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=46760


-- 
H.J.


Re: Update LTO plugin interface

2010-12-01 Thread Ian Lance Taylor
"H.J. Lu"  writes:

> On Wed, Dec 1, 2010 at 4:48 PM, Ian Lance Taylor  wrote:
>> "H.J. Lu"  writes:
>>
>>> __udivdi3 is just an example.  It can also happen to memcpy, or
>>> any library calls generated by GCC. I am enclosing a testcase for memcpy.
>>
>> I believe we can solve that specific problem much more efficiently than
>> requiring a complete link of all the input files.  We currently solve it
>> using the -pass-through option which is passed to the linker plugin.
>> Are there any cases for which using -pass-through=-lc
>> -pass-through=-lgcc would not be a complete solution?
>>
>
> http://gcc.gnu.org/bugzilla/show_bug.cgi?id=46760

Sigh, OK, so I'll just add -pass-through=-lgcov when appropriate.

Or I'll rephrase: are there any cases for which using -pass-through for
the set of libraries that the gcc driver automatically adds to the end
of the link line would not be a complete solution?

I'm not saying -pass-through is the solution we should be using.  It's
clearly a bit of a hack.  However, I am asking the question seriously,
because if we have to do a complete relink, then let's do a complete
relink, but if we don't have to do one, let's definitely not.

Ian


RFD: inline hooks

2010-12-01 Thread Joern Rennecke

For the rtl passes, architecture target macros are not much of an issue
with regards to executable code modularity: since the rtl passes are
deeply interwoven with the insn-*.c files, we might as well compile one
specialized copy of the rtl passes for each target architecture.

Another argument against leaving the macros are their often ill-defined
interface types and the call-by-name semantics that make all the identifiers
in scope at the call site a potential interface.

We could avoid the latter problems without sacrificing the speed that we
get from target-specific code by replacing the target macro with an
inline hook.  E.g. consider HARD_REGNO_MODE_OK.  We could have $tm_file
define TARGET_HARD_REGNO_MODE_OK as a static inline function, or #define it
as the name of a static inline function somewhere else in $tm_file.
The function's address will be in TARGET_INITIALIZER, and thus type
checking on the function definition will be done.

But a file that includes tm.h will be able to use the function
TARGET_HARD_REGNO_MODE_OK directly, which can then be inlined, thus
giving type safety without performance penalty.


Re: Update LTO plugin interface

2010-12-01 Thread H.J. Lu
On Wed, Dec 1, 2010 at 5:53 PM, Ian Lance Taylor  wrote:
> "H.J. Lu"  writes:
>
>> On Wed, Dec 1, 2010 at 4:48 PM, Ian Lance Taylor  wrote:
>>> "H.J. Lu"  writes:
>>>
>>>> __udivdi3 is just an example.  It can also happen to memcpy, or
>>>> any library calls generated by GCC. I am enclosing a testcase for memcpy.
>>>
>>> I believe we can solve that specific problem much more efficiently than
>>> requiring a complete link of all the input files.  We currently solve it
>>> using the -pass-through option which is passed to the linker plugin.
>>> Are there any cases for which using -pass-through=-lc
>>> -pass-through=-lgcc would not be a complete solution?
>>>
>>
>> http://gcc.gnu.org/bugzilla/show_bug.cgi?id=46760
>
> Sigh, OK, so I'll just add -pass-through=-lgcov when appropriate.
>
> Or I'll rephrase: are there any cases for which using -pass-through for
> the set of libraries that the gcc driver automatically adds to the end
> of the link line would not be a complete solution?
>
> I'm not saying -pass-through is the solution we should be using.  It's
> clearly a bit of a hack.  However, I am asking the question seriously,
> because if we have to do a complete relink, then let's do a complete
> relink, but if we don't have to do one, let's definitely not.

Nothing can't be solved with a hack :-).


-- 
H.J.


Re: RFD: inline hooks

2010-12-01 Thread Ian Lance Taylor
Joern Rennecke  writes:

> For the rtl passes, architecture target macros are not much of an issue
> with regards to executable code modularity: since the rtl passes are
> deeply interwoven with the insn-*.c files, we might as well compile one
> specialized copy of the rtl passes for each target architecture.
>
> Another argument against leaving the macros are their often ill-defined
> interface types and the call-by-name semantics that make all the identifiers
> in scope at the call site a potential interface.
>
> We could avoid the latter problems without sacrificing the speed that we
> get from target-specific code by replacing the target macro with an
> inline hook.  E.g. consider HARD_REGNO_MODE_OK.  We could have $tm_file
> define TARGET_HARD_REGNO_MODE_OK as a static inline function, or #define it
> as the name of a static inline function somewhere else in $tm_file.
> The function's address will be in TARGET_INITIALIZER, and thus type
> checking on the function definition will be done.
>
> But a file that includes tm.h will be able to use the function
> TARGET_HARD_REGNO_MODE_OK directly, which can then be inlined, thus
> giving type safety without performance penalty.

I think that would be a plausible implementation technique which a
backend could choose to use.  I think it should definitely be a macro
which refers to a reasonable name.  We would then want to have
defaults.h, or some such header file, do something like

#ifndef TARGET_HARD_REGNO_MODE_OK
#define TARGET_HARD_REGNO_MODE_OK(REGNO, MODE) \
  targetm.hard_regno_mode_ok ((REGNO), (MODE))
#endif

Ian


Re: RFD: inline hooks

2010-12-01 Thread Joseph S. Myers
I think we want to move *away* from inline functions in headers and 
towards link-time inlining, in the interests of modularity: if one 
component of GCC cannot see the internals of another component at compile 
time, it cannot use them but must use the actual interface of that 
component, but inline functions often require internals to be visible.  
tm.h is a case in point of exposing internals, as it exports a great many 
macros that are really part of the internals of a particular back end and 
so should only be visible in that back end (including the generated 
insn-*.c files) and not in the RTL passes, so tempting people to put e.g. 
TARGET_64BIT conditionals outside of config/ (TARGET_64BIT should be 
private to the individual back ends).

See  for example 
on the direction of moving away from inline functions in headers.

-- 
Joseph S. Myers
jos...@codesourcery.com


Re: operator new[] overflow (PR 19351)

2010-12-01 Thread Gabriel Dos Reis
On Wed, Dec 1, 2010 at 5:36 PM, Chris Lattner  wrote:
>
> On Nov 30, 2010, at 3:12 PM, Joe Buck wrote:
>
>> On Tue, Nov 30, 2010 at 01:49:23PM -0800, Gabriel Dos Reis wrote:
>>> The existing GCC behaviour is a bit more perverse than the
>>> C malloc() case as in
>>>
>>>       new T[n]
>>>
>>> there is no multiplication that could be credited to careless programmer.
>>> The multiplication is introduced by GCC.
>>
>> ... which suggests strongly that GCC should fix it.  Too bad the ABI is
>> frozen; if the internal ABI kept the two values (the size of the type, and
>> the number of values) separate and passed two arguments to the allocation
>> function, it would be easy to do the right thing (through bad_alloc if the
>> multiplication overflows).
>
> You don't need any ABI changes to support this.  For example, clang compiles:
>
> int *foo(long X) {
>  return new int[X];
> }
>
> into:
>
> __Z3fool:                               ## @_Z3fool
> Leh_func_begin0:
> ## BB#0:                                ## %entry
>        movl    $4, %ecx
>        movq    %rdi, %rax
>        mulq    %rcx
>        testq   %rdx, %rdx
>        movq    $-1, %rdi
>        cmoveq  %rax, %rdi
>        jmp     __Znam
>
> On overflow it just forces the size passed in to operator new to -1ULL, which 
> throws bad_alloc.

This is a very good point.  At a minimum, GCC should generate similar code,
if not improve on it.

-- Gaby


Re: Update LTO plugin interface

2010-12-01 Thread Dave Korn
On 01/12/2010 23:06, Cary Coutant wrote:

>> My suspicion is that the LTO plugin can only introduce a small bounded
>> set of new symbol references, namely those which we assume can be
>> satisified from -lc or -lgcc.  Is that true?
> 
> Exactly. 

  Potentially also gcov, ssp, mudflap?

> The plugin API, as implemented in gold (not sure about gnu ld), does
> maintain the original order of input files as far as symbol binding is
> concerned. When IR files are claimed, the plugin provides the list of
> symbols defined and referenced, and the linker builds the symbol table
> as if those files were linked in at that particular spot in the
> command line. When the compiler provides real definitions of those
> symbols later, the real definitions simply replace the "placeholders"
> that were left in the linker's symbol table. 

  We just ran into a new problem with that:

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=45375#c12

  (Brief summary: no symbol type info is conveyed in the LTO symtab, and in
the ELF linker, _bfd_elf_merge_symbol errors out when the real symbol is TLS
and the LTO symtab one didn't have the same type, and it does so too early for
the plugin api's multiple_definition linker callback hook to do anything about
it.)

> As Ian noted, if the compiler introduces new references that weren't
> there before, the new references must be from a limited set of
> libcalls that the backend can introduce, and those should all be
> resolved with an extra pass through -lc or -lgcc. That's not exactly
> pretty, but I don't see how it destroys the notion of link order --
> the only way those new symbols could have been resolved differently is
> if a user library interposed definitions for the libcall, and those
> certainly can't be what the compiler intended to bind to. In PR 12248,
> I think it's questionable to claim that the compiler-introduced call
> to __udivdi3 should not resolve to the version in libgcc. Sure, I
> understand it's useful for library developers while debugging and
> testing, but an ordinary user certainly can't count on his own
> definition of that routine to get called -- the compiler might
> generate the division inline, or call a different specialized version.
> All of these routines are outside the user's namespace, and we should
> be able to optimize without regard for what the user's libraries might
> contain.

  I tend to follow this theory.  I think that the current approach should be
sufficient, but I think we probably need to arrange a few more pass-throughs:
for (some/all of?) the libraries I mentioned above, and HJ pointed out another
relevant issue: we should pass-through the crt endfiles as well, because the
new object files introduced by LTO might pull in new dynamic references from
libc et. al., and apparently you mustn't do that after crtn.o has been linked.

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=42690#c27

  I was going to test something like the attached.


cheers,
  DaveK

Index: gcc/gcc.c
===
--- gcc/gcc.c	(revision 167334)
+++ gcc/gcc.c	(working copy)
@@ -264,7 +264,9 @@ static const char *print_asm_header_spec_function
 static const char *compare_debug_dump_opt_spec_function (int, const char **);
 static const char *compare_debug_self_opt_spec_function (int, const char **);
 static const char *compare_debug_auxbase_opt_spec_function (int, const char **);
+static const char *gen_pass_through_spec (int, const char **, bool, bool);
 static const char *pass_through_libs_spec_func (int, const char **);
+static const char *pass_through_objs_spec_func (int, const char **);
 
 /* The Specs Language
 
@@ -637,7 +639,9 @@ proper position among the other output files.  */
 -plugin %(linker_plugin_file) \
 -plugin-opt=%(lto_wrapper) \
 -plugin-opt=-fresolution=%u.res \
+%{fprofile-arcs|fprofile-generate*|coverage:-plugin-opt=-pass-through=-lgcov} \
 %{!nostdlib:%{!nodefaultlibs:%:pass-through-libs(%(link_gcc_c_sequence))}} \
+%{!A:%{!nostdlib:%{!nostartfiles:%:pass-through-objs(%E)}}}
 } \
 %{flto*:%

Re: Update LTO plugin interface

2010-12-01 Thread Dave Korn
On 02/12/2010 04:28, Dave Korn wrote:

>   I was going to test something like the attached.

  Oops, there was a typo in that:

> @@ -637,7 +639,9 @@ proper position among the other output files.  */
>  -plugin %(linker_plugin_file) \
>  -plugin-opt=%(lto_wrapper) \
>  -plugin-opt=-fresolution=%u.res \
> +
> %{fprofile-arcs|fprofile-generate*|coverage:-plugin-opt=-pass-through=-lgcov} 
> \
>  
> %{!nostdlib:%{!nodefaultlibs:%:pass-through-libs(%(link_gcc_c_sequence))}} \
> +%{!A:%{!nostdlib:%{!nostartfiles:%:pass-through-objs(%E)}}}
>  } \
>  %{flto*:%  %{flto*} %l " LINK_PIE_SPEC \


  The second added line in that hunk is missing a continuation char.  JFTR, I
won't post a respin of the patch until it's been bootstrapped and tested a bit.

cheers,
  DaveK


Re: Update LTO plugin interface

2010-12-01 Thread Dave Korn
On 02/12/2010 00:12, Cary Coutant wrote:

> Think about it -- any failure to bind to an interposed copy of memcpy
> (or any other library call generated by gcc) is indistinguishable from
> the compiler choosing to generate the code inline.

  Indeed, replacing library functions is a tricky business in the presence of
optimisations:

> $ cat main.c
> #include <stdio.h>
> 
> int main (int argc, const char **argv)
> {
>   printf ("hello world\n");
>   return 0;
> }
> 
> $ cat myprintf.c
> #include <stdlib.h>
> 
> int printf (const char *fmt, ...)
> {
>   abort ();
> }
> 
> $ gcc -O3 main.c myprintf.c -o test1
> 
> $ ./test1.exe
> hello world
> 
> $ cat main2.c
> #include <stdio.h>
> 
> int main (int argc, const char **argv)
> {
>   printf ("<%s>", "hello world\n");
>   return 0;
> }
> 
> $ gcc -O3 main2.c myprintf.c -o test2
> 
> $ ./test2.exe
> Aborted (core dumped)
> 
> $

  I think the answer to this is that you have to use -fno-builtin if you want
to interpose a library function, whether or not LTO is in use.

cheers,
  DaveK



Re: operator new[] overflow (PR 19351)

2010-12-01 Thread Florian Weimer
* Chris Lattner:

> On overflow it just forces the size passed in to operator new to
> -1ULL, which throws bad_alloc.

This is also what my patch tries to implement.