Re: how to "parse" gcc -v output

2010-04-05 Thread Dave Korn
On 04/04/2010 20:08, Joseph S. Myers wrote:

> I think extracting compiler/linker *internal commands* and trying to 
> process or adapt them is inherently fragile and liable to break whenever 
> new compiler/linker options (internal or otherwise) are added.  If 
> possible the aim should be to work out user-friendly interfaces for direct 
> GCC users and have libtool use the same interfaces while expecting how 
> they are implemented to change over time.  Interfaces by which GCC does 
> things (e.g. link a shared library for the multilib implied by the given 
> options) seem safer than interfaces where it gives information (if you ask 
> it for directories and lists of libraries, you might then find that 
> interface inadequate for handling per-library choice of static or shared 
> libraries, for example).  

  Essentially, libtool needs to know about gcc's specs, and what they do to a
command-line.  ISTM that using "-###" and the appropriate language-dependent
driver should do most things that libtool needs, but maybe we should add an
option to the driver that turns it into a command-line driven arbitrary specs
processor of some kind.  Ralf, might that help the situation, if you could
pass arbitrary command-lines to the driver and have it report back the results
of spec processing in some controlled and parseable fashion?

cheers,
  DaveK


Processing global static (or const) variables

2010-04-05 Thread Ehren Metcalfe
Hello,

I'm trying to develop a dead code finder using gcc and mozilla's
treehydra but I've hit a wall processing certain initializations of
global variables.

In order to mark a function declaration whenever its address is held
in a file scope variable/table/structure I use code like this:

-

static tree find_funcs_callback(tree *tp, int *walk_subtrees, void *data) {
  tree t = *tp;

  if (TREE_CODE(t) == FUNCTION_DECL) {
// dump function
  }

  return NULL_TREE;
}

static void find_funcs(tree decl) {
  walk_tree(&decl, find_funcs_callback, NULL, NULL);
}

// elsewhere
struct varpool_node *vnode;
FOR_EACH_STATIC_VARIABLE(vnode)
  find_funcs(DECL_INITIAL(vnode->decl));

-

Unfortunately this doesn't work for code like this:

-

int foo() {
  return 0;
}

typedef struct {
  int (*p) ();
} Table;

const /* or static, or const static */ Table t[] = {
  { foo }
};

-

If I remove the qualifiers from my table the initialization is
detected. Is this a bug, or is there some other way of recovering the
FUNCTION_DECL? It doesn't need to be modular; I just have to find a
way to dump the function.

Thanks,

Ehren


Re: Processing global static (or const) variables

2010-04-05 Thread Richard Guenther
On Mon, Apr 5, 2010 at 3:50 PM, Ehren Metcalfe  wrote:
> Hello,
>
> I'm trying to develop a dead code finder using gcc and mozilla's
> treehydra but I've hit a wall processing certain initializations of
> global variables.
>
> In order to mark a function declaration whenever its address is held
> in a file scope variable/table/structure I use code like this:
>
> -
>
> static tree find_funcs_callback(tree *tp, int *walk_subtrees, void *data) {
>  tree t = *tp;
>
>  if (TREE_CODE(t) == FUNCTION_DECL) {
>    // dump function
>  }
>
>  return NULL_TREE;
> }
>
> static void find_funcs(tree decl) {
>  walk_tree(&decl, find_funcs_callback, NULL, NULL);
> }
>
> // elsewhere
> struct varpool_node *vnode;
> FOR_EACH_STATIC_VARIABLE(vnode)
>  find_funcs(DECL_INITIAL(vnode->decl));
>
> -
>
> Unfortunately this doesn't work for code like this:
>
> -
>
> int foo() {
>  return 0;
> }
>
> typedef struct {
>  int (*p) ();
> } Table;
>
> const /* or static, or const static */ Table t[] = {
>  { foo }
> };
>
> -
>
> If I remove the qualifiers from my table the initialization is
> detected. Is this a bug or is there some other way of recovering the
> FUNCTION_DECL? It doesn't need to be modular, I just have to find a
> way to dump the function.

At which point during the compilation does it not work?  I suppose
at the point where the qualified variants are already optimized away.

Richard.

> Thanks,
>
> Ehren
>


Vanilla cross compiling and libstdc++v3

2010-04-05 Thread Kofi Doku Atuah
Hello, and a pleasant good day to everyone. With no further ado:

The process of building a simple, plain vanilla cross compiler for
arch-fmt-no_os is really overcomplicated. To build, for example, a
GCC cross compiler for an i586-elf target, the build process requires
you to have a libc for the target, and then from there, the build
process uses the features in your vanilla target's libc to decide how
to configure libstdc++v3.

However, anyone building a vanilla cross compiler either doesn't yet
*have* a standard lib for his kernel project, or isn't yet interested
in building an os-specific toolchain for arch-fmt-his_os. Therefore
the assumption that there would be a standard library, or libc, or
even that the person *wants* a libstdc++ with his vanilla build is
incorrect.

Normally, a hobbyist kernel-developer, or a person working on a kernel
of any sort for that matter, would begin by building a vanilla target,
which uses no libs whatsoever. Therefore the triplet lacks the 'os'
part. That is: "I'm building a cross compiler which does not target
any particular OS.".

The vanilla target is used to compile the kernel itself. A kernel
project usually provides its *own* stdlibs, in subdirectories of its
tree; therefore there is no need for GCC to require me to build
libstdc++ for a vanilla build.

Later on, the kernel developer would usually, after implementing
syscalls, and now having actually written the system libraries for his
kernel, compile a target for arch-fmt-my_os. In *this* build, s/he
will have a libc which is either native, or a layer over his/her own
native system API. Using this libc and the relevant headers, etc, s/he
would now cross compile the arch-fmt-my_os target. NOW the developer
is interested in having libstdc++ built along with the cross compiler
since there is a need for the userspace libstdc++.

What I'm trying to say therefore is that you folks are doing a great
job, and I love your compiler, but could you please keep things simple
where they should be? There's no *need* for a libstdc++ on a vanilla
or 'bare metal' build. And the assumption that a libc, or any other
system lib even exists for a bare metal target is flawed. The cross
compiler should only depend on, or assume the existence of system libs
when it is being built with a *FULL* triplet where a target *OS* is
specified.

Currently, kernel developers need to build newlib, and create stub
blank functions for functions that *do not* exist in their kernels
just so that the unnecessary libstdc++v3 will build without problems
(REF: http://wiki.osdev.org/GCC_Cross-Compiler); yet neither newlib,
nor any other library is needed for a vanilla target. A vanilla target
should produce free-standing binaries.

Please really consider this, since there is a whole community
(http://osdev.org) whose lives would be significantly eased if you
simplified the build process for a vanilla target.

Also, it may be worth noting that the idea of assuming that everyone
will use either libc, or else just use newlib, is flawed, especially
for vanilla targets. So again, making a 'special case' for newlib is
really not a very graceful solution to the problem.

--Please consider this seriously.


Re: Vanilla cross compiling and libstdc++v3

2010-04-05 Thread Nathan Froyd
On Mon, Apr 05, 2010 at 10:29:07AM -0430, Kofi Doku Atuah wrote:
> The process of building a simply, plain vanilla cross compiler for
> arch-fmt-no_os is really probably overdone. To build, for example, a
> GCC cross compiler for an i586-elf target, the build process requires
> you to have a libc for the target, and then from there, the build
> process uses the features in your vanilla target's libc to decide how
> to configure libstdc++v3.
> 
> However, anyone building a vanilla cross compiler either doesn't yet
> *have* a standard lib for his kernel project as yet, or isn't yet
> interested in building an os-specific toolchain for arch-fmt-his_os as
> yet. Therefore the assumption that there would be a standard library,
> or libc, or even that the person even *wants* a libstdc++ with his
> vanilla build is incorrect.

Have you tried configuring with --enable-languages=c?  Doing so should
ensure that libstdc++ is not configured for your target.
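
In case it helps, a sketch of such a configuration (the target, prefix, and directory layout are illustrative, not taken from this thread):

```shell
# C-only cross compiler for a bare-metal target; libstdc++-v3 is
# never configured because C++ is not in --enable-languages.
mkdir -p build-gcc && cd build-gcc
../gcc/configure --target=i586-elf --prefix=/opt/cross \
    --enable-languages=c --without-headers --with-newlib
make all-gcc all-target-libgcc
make install-gcc install-target-libgcc
```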

-Nathan


Re: Vanilla cross compiling and libstdc++v3

2010-04-05 Thread Dave Korn
On 05/04/2010 15:59, Kofi Doku Atuah wrote:
> Hello, and a pleasant good day to everyone. With no further ado:

  :)  Actually, that's a fair amount of ado simply to say:

> There's no *need* for a libstdc++ on a vanilla or 'bare metal' build.

  This is my idea of "no further ado":

/path/to/gcc/configure --disable-libstdc++-v3

cheers,
  DaveK



RFC: c++ diagnostics

2010-04-05 Thread Benjamin Kosnik

Hello all! 

I've put up a short diagnostics comparison between gcc, icc, and
clang. It is my plan to update this with major revisions to individual
compilers. 

Included are most of the outstanding bugzilla requests with the
"diagnostic" keyword. However, I am looking for help! Please send me
code samples that frustrate, obfuscate, and annoy. 

In particular, I am looking for template instantiation issues such as
c++/41884, but hopefully something in a deliciously small snippet. No
doubt other C++ hackers have particular annoyances.

I'm also looking for guidance on how to rank the priority of these
issues. Is there some way to tell what the biggest annoyance is?

http://people.redhat.com/bkoz/diagnostics/diagnostics.html

best,
benjamin


Re: Vanilla cross compiling and libstdc++v3

2010-04-05 Thread Dave Korn
On 05/04/2010 16:10, Nathan Froyd wrote:

> 
> Have you tried configuring with --enable-languages=c?  Doing so should
> ensure that libstdc++ is not configured for your target.

  I've found it possible to build a c++ compiler with no libstdc++ (as per
other post).  Bare-metal COFF target, no libc; I configure using
"--without-headers --without-libs --with-newlib --disable-libc
--disable-libssp --with-gnu-as --with-gnu-ld --disable-sjlj-exceptions
--disable-libstdc++-v3 --enable-languages=c,c++".

  I haven't verified how useful the thing actually is in practice with no
libstdc++ and no libsupc++.

cheers,
  DaveK


VTA/debugging vs reload-v2

2010-04-05 Thread Jeff Law


So as I mentioned in the meeting last week, I've largely been ignoring 
VTA (and more generally all debugging) issues with the reload work I'm 
doing.  It's time to rectify that situation.


For this phase of the work (range splitting) we only need to consider a 
few straightforward transformations to the RTL and how they impact the 
debugging information.


The goal is to take a pseudo P and break its live range down into P1, 
P2, ... Pn where each of the sub-ranges are local to a region (right now 
a region is a straight line hunk of code with no join nodes -- not quite 
an extended basic block, but close).   Outside these regions P lives in 
memory.  Within each region the new pseudos P1, P2, ... Pn may be 
allocated to different hard registers.


We accomplish this by emitting a load from memory into a new pseudo 
before the first use of P in a region and a store from the new pseudo 
back to memory after the last assignment to P within the region, then we 
rename all references from P to P'.  It's marginally more complex, but I 
think for this discussion the other complexities can be ignored.  After 
all regions have been processed, P is gone from the insn stream.
Obviously P can be found in memory or in P1, P2, ... Pn, depending on
precisely where we are in the code when the value of P is requested.


I'm not terribly familiar with how dwarf2 represents variable ranges; I
tend to think of this as P living in memory, except during the
subregions where it's in P1, P2, ... Pn.  The sub-range pseudos P1, P2,
... Pn all point back to P via ORIGINAL_REGNO and all have the same
reg_equiv_memory_loc.


So, without having looked closely at dwarf2out.c (it hurts my head every 
time I try), is it likely we're going to need to be emitting new 
debug_insns to describe how to correctly find P in the different contexts?


Thanks,
Jeff

Re: RFC: c++ diagnostics

2010-04-05 Thread Chris Lattner

On Apr 5, 2010, at 8:20 AM, Benjamin Kosnik wrote:

> 
> Hello all! 
> 
> I've put up a short diagnostics comparison between gcc, icc, and
> clang. It is my plan to update this with major revisions to individual
> compilers. 
> 
> Included are most of the outstanding bugzilla requests with the
> "diagnostic" keyword. However, I am looking for help! Please send me
> code samples that frustrate, obfuscate, and annoy. 
> 
> In particular, I am looking for template instantiation issues such as
> c++/41884, but hopefully something in a deliciously small snippet. No
> doubt other C++ hackers have particular annoyances.
> 
> I'm also looking for guidance on how to rank the priority of these
> issues. Is there some way to tell what the biggest annoyance is?
> 
> http://people.redhat.com/bkoz/diagnostics/diagnostics.html

This is a great resource Benjamin, thanks for putting it together!

Some random thoughts if you ever regenerate this:

1) the caret diagnostics would be easier to understand in the output if 
formatted with a  or  tag.  

2) The clang invocations don't need -fcaret-diagnostics -fshow-source-location 
-fdiagnostics-fixit-info because they are the default.

3) It's best to not pass -fdiagnostics-print-source-range-info unless you're 
looking for machine interpretable output.  This flag adds things like 
{3:29-3:32} which are useful to a machine, but otherwise just clutter the 
output up.

4) It looks like ICC defaults to a number of coding standards types of checks, 
e.g. "access control not specified".  I don't know if it is possible to turn 
those off, but they seem to just be noise for the purposes of this comparison.

5) There are a couple cases of GCC rejecting valid code (e.g. 19377), or
for which there may be some debate (19538); it might be worth pointing
this out. *shrug*

6) My understanding was that GCC's complex extension in C++ mode is supposed to
work like C99 _Complex.  If so, 41895 looks like a GCC bug.  I don't know if
C++0x affects this though.

7) There are some clang bugs here.  Access control is not yet enabled by
default (affects 20397, 39728), and there are a variety of other bugs
(affecting 14283, 38612).  I filed Clang PR#6782/6783 to track these.

Thanks again for putting this together,

-Chris




Re: RFC: c++ diagnostics

2010-04-05 Thread Manuel López-Ibáñez
On 5 April 2010 17:20, Benjamin Kosnik  wrote:
>
> Hello all!
>
> I've put up a short diagnostics comparison between gcc, icc, and
> clang. It is my plan to update this with major revisions to individual
> compilers.

Awesome!

How to contribute? patches against the html? I see there are some
examples without output. Also, it would be nicer if the page linked to
each PR in bugzilla.

On the one hand, it is strange that you use
-fdiagnostics-show-location=once without -fmessage-length=, because
the former has no effect without the latter. On the other hand,
-fdiagnostics-show-option will give output more similar to that of
clang.

> Included are most of the outstanding bugzilla requests with the
> "diagnostic" keyword. However, I am looking for help! Please send me
> code samples that frustrate, obfuscate, and annoy.

I guess most of the ones listed here:

http://clang.llvm.org/diagnostics.html

Also, posting this to kde-de...@kde.org may get some further feedback.

> I'm also looking for guidance on how to rank the priority of these
> issues. Is there some way to tell what the biggest annoyance is?

Let people vote in GCC bugzilla? Hum, perhaps not...

Cheers,

Manuel.


Re: RFC: c++ diagnostics

2010-04-05 Thread Benjamin Kosnik

> How to contribute? patches against the html? I see there are some
> examples without output. Also, it would be nicer if the page linked to
> each PR in bugzilla.

Well, the html is auto-generated so that isn't really the way to go.
Should I just check in the tests + xml into some gcc repository? There
is a README in the enclosing directory in the original URL that tries to
explain the way the sources are generated.

Certainly I can add a notes section to the HTML output and put in links
to specific bug reports.

> On the one hand, it is strange that you use
> -fdiagnostics-show-location=once  without -fmessage-length= because
> that doesn't have any effect. On the other hand,
> -fdiagnostics-show-option will give an output more similar to the one
> of clang.

Thanks for pointing this out. 
 
> > Included are most of the outstanding bugzilla requests with the
> > "diagnostic" keyword. However, I am looking for help! Please send me
> > code samples that frustrate, obfuscate, and annoy.
> 
> I guess most of the ones listed here:
> 
> http://clang.llvm.org/diagnostics.html
> 
> Also, posting this to kde-de...@kde.org may get some further feedback.

Oh, good idea. 
 
> > I'm also looking for guidance on how to rank the priority of these
> > issues. Is there some way to tell what the biggest annoyance is?
> 
> Let people vote in GCC bugzilla? Hum, perhaps not...

That's an idea, to be able to vote up diagnostics things in bugzilla.
The Red Hat bugzilla lets me do that for os bugs.

One of the other areas I really want to explore is the more verbose
diagnostics that deal with missed optimizations, looping, or
vectorization. I see a lot more options for complex diagnostic/analysis
passes in the current round of compilers and would like to figure out
some way of tracking this for GCC. Any ideas? Even the basics would be
helpful.

-benjamin


Re: RFC: c++ diagnostics

2010-04-05 Thread Benjamin Kosnik

> 2) The clang invocations don't need -fcaret-diagnostics
> -fshow-source-location -fdiagnostics-fixit-info because they are the
> default.
> 
> 3) It's best to not pass -fdiagnostics-print-source-range-info unless
> you're looking for machine interpretable output.  This flag adds
> things like {3:29-3:32} which are useful to a machine, but otherwise
> just clutter the output up.
> 
> 4) It looks like ICC defaults to a number of coding standards types
> of checks, e.g. "access control not specified".  I don't know if it
> is possible to turn those off, but they seem to just be noise for the
> purposes of this comparison.

I'll look into this. Some of this is over-zealousness on my part and
not wanting to miss a diagnostics flag: wherever I could make the
diagnostics more verbose, I tried not to miss an opportunity to twist a
knob.

The actual flags I compiled the data with are in this script and may
vary from the total listing at the top of the webpage.

 http://people.redhat.com/bkoz/diagnostics/make-diagnostic-output.sh

> 5) There are a couple cases of GCC rejecting valid code (e.g. 19377),
> or which there may be some debate about (19538) it might be worth
> pointing this out. *shrug*

One of the goals was to measure the output when the input is
truncated or obviously flawed (missing semicolons are very
common!). Certainly, if you can think of other (obvious) ways in which
refactoring or haste produce general classes of errors, I'm very
interested in that particular type of pathology.

The valid code issues I can flag in the existing bug reports.

thanks,
benjamin


Re: RFC: c++ diagnostics

2010-04-05 Thread Chris Lattner

On Apr 5, 2010, at 12:51 PM, Benjamin Kosnik wrote:

>> 
>> 5) There are a couple cases of GCC rejecting valid code (e.g. 19377),
>> or which there may be some debate about (19538) it might be worth
>> pointing this out. *shrug*
> 
> One of the goals was to measure the output when the input is
> truncated, or obviously flawed (no semicolons is very
> common!). Certainly if you can think of other (obvious) issues where
> refactoring or haste make general classes of errors I'm very interested
> in this particular type of pathology.

Absolutely, this is one of the reasons I'm particularly interested in common
errors like missing semicolons, . vs ->, use of things like <::foo>,
explaining overload set failures, etc.

> The valid code issues I can flag in the existing bug reports.

Ok, thanks again.

-Chris


Re: VTA/debugging vs reload-v2

2010-04-05 Thread Alexandre Oliva
On Apr  5, 2010, Jeff Law  wrote:

> We accomplish this by emitting a load from memory into a new pseudo
> before the first use of P in a region and a store from the new pseudo
> back to memory after the last assignment to P within the region, then
> we rename all references from P to P'.  It's marginally more complex,
> but I think for this discussion the other complexities can be ignored.
> After all regions have been processed, P is gone from the insn stream.
> Obviously P can be found in memory, P1, P2, ... Pn depending on
> precisely where we are in the code when the value is P is requested.

I can think of 3 points that you might have to be concerned about:

1. Don't pay attention to debug insns when computing the live ranges.
You don't want to take debug insns into account when making decisions
about transformations to executable code.

2. When renaming references from P to P' in a region, do take debug
insns in the region into account, renaming references in debug insns as
you would in any other insn.

3. If any debug insns ended up outside any of the regions determined
without taking debug insns into account, you may have to patch things up
so that they don't remain as dangling pointers to P.  From your
description above, it appears to me that replacing such remaining
references to P in debug insns with the memory slot assigned to it would
be the right thing to do.

This should be all, and it's very much in line with “The Zen of VTA”:
disregard debug insns when deciding what to do, transform debug insns
just as you would regular insns, and patch up any debug insns left out
of the decisions you made.

-- 
Alexandre Oliva, freedom fighter    http://FSFLA.org/~lxoliva/
You must be the change you wish to see in the world. -- Gandhi
Be Free! -- http://FSFLA.org/   FSF Latin America board member
Free Software Evangelist  Red Hat Brazil Compiler Engineer


Re: VTA/debugging vs reload-v2

2010-04-05 Thread Jeff Law

On 04/05/10 14:32, Alexandre Oliva wrote:

> On Apr  5, 2010, Jeff Law  wrote:
>
>> We accomplish this by emitting a load from memory into a new pseudo
>> before the first use of P in a region and a store from the new pseudo
>> back to memory after the last assignment to P within the region, then
>> we rename all references from P to P'.  It's marginally more complex,
>> but I think for this discussion the other complexities can be ignored.
>> After all regions have been processed, P is gone from the insn stream.
>> Obviously P can be found in memory, P1, P2, ... Pn depending on
>> precisely where we are in the code when the value of P is requested.
>
> I can think of 3 points that you might have to be concerned about:
>
> 1. Don't pay attention to debug insns when computing the live ranges.
> You don't want to take debug insns into account when making decisions
> about transformations to executable code.

Right.  I already figured this one out the hard way a while back.


> 2. When renaming references from P to P' in a region, do take debug
> insns in the region into account, renaming references in debug insns as
> you would in any other insn.
OK.  So presumably the 2nd argument in a VAR_LOCATION can be any RTL
expression?  Meaning I have to parse it looking for things that need
changing?  Right?



> 3. If any debug insns ended up outside any of the regions determined
> without taking debug insns into account, you may have to patch things up
> so that they don't remain as dangling pointers to P.  From your
> description above, it appears to me that replacing such remaining
> references to P in debug insns with the memory slot assigned to it would
> be the right thing to do.
Makes sense.  Though I'm not terribly familiar with how this could
happen, replacing P with its memory location seems to be the right thing
to do.  I guess a single pass through the entire function's RTL looking
for dangling references in debug insns is in order.  Or I might be able
to get away with changing regno_reg_rtx to point to the appropriate
memref...  hmmm.


Everything you noted seems to be designed to keep the existing
debug_insns updated -- but under what circumstances are debug_insns
created?  That ought to give me a clue about whether or not I'm going
to need to create new ones.




> This should be all, and it's very much in line with “The Zen of VTA”:
> disregard debug insns when deciding what to do, transform debug insns
> just as you would regular insns, and patch up any debug insns left out
> of the decisions you made.

FWIW, I don't see any references to debug_insn or var_location in
gcc/doc/*.texi.   Somehow I think this must be unintentional.

Jeff




Problem on handling fall-through edge in bb-reorder

2010-04-05 Thread Amker.Cheng
Hi All:
  I read the code in the bb-reorder pass.  Normally it's fine to take
the most probable basic block as the fall-through (downward) bb.
Unfortunately, the processor I'm working on is a little different:
it has no pipeline stall when branches are taken, but does introduce a
stall when they are not taken.
take an example code like:
--
statement 0;
if likely(condition)
statement 1;
else
statement 2;

return;

gcc may generate :
---
  statement 0;
  if !(condition) branch to label x;
  statement 1;
  return;
label x:
  statement 2;
  return;

This is less efficient on my processor.  I am wondering whether it is
possible to modify the code in bb-reorder to make gcc take the less
probable basic block as the fall-through bb?
Any tips?  Thanks in advance.

-- 
Best Regards.


Re: Processing global static (or const) variables

2010-04-05 Thread Ehren Metcalfe
(Apologies to Richard for sending this twice -- I forgot to cc the list)

> At which point during the compilation does it not work?  I suppose
> at the point where the qualified variants are already optimized away.

I've had some difficulty walking the DECL_INITIAL from within a
separate pass but I've added this code to the execute function of
pass_ipa_function_and_variable_visibility which should be about as
close to pass_build_cgraph_edges as I can get. Also the
record_references callback in cgraphbuild.c exhibits the same
behavior.

I get the same results with 4.3.4 and a recent checkout.

Is there a way to disable the optimizing away of qualified variants?
This seems to be a bug, especially with regard to
record_references_in_initializer and record_references in
cgraphbuild.c.

On Mon, Apr 5, 2010 at 10:20 AM, Richard Guenther
 wrote:
> On Mon, Apr 5, 2010 at 3:50 PM, Ehren Metcalfe  wrote:
>> Hello,
>>
>> I'm trying to develop a dead code finder using gcc and mozilla's
>> treehydra but I've hit a wall processing certain initializations of
>> global variables.
>>
>> In order to mark a function declaration whenever its address is held
>> in a file scope variable/table/structure I use code like this:
>>
>> -
>>
>> static tree find_funcs_callback(tree *tp, int *walk_subtrees, void *data) {
>>  tree t = *tp;
>>
>>  if (TREE_CODE(t) == FUNCTION_DECL) {
>>    // dump function
>>  }
>>
>>  return NULL_TREE;
>> }
>>
>> static void find_funcs(tree decl) {
>>  walk_tree(&decl, find_funcs_callback, NULL, NULL);
>> }
>>
>> // elsewhere
>> struct varpool_node *vnode;
>> FOR_EACH_STATIC_VARIABLE(vnode)
>>  find_funcs(DECL_INITIAL(vnode->decl));
>>
>> -
>>
>> Unfortunately this doesn't work for code like this:
>>
>> -
>>
>> int foo() {
>>  return 0;
>> }
>>
>> typedef struct {
>>  int (*p) ();
>> } Table;
>>
>> const /* or static, or const static */ Table t[] = {
>>  { foo }
>> };
>>
>> -
>>
>> If I remove the qualifiers from my table the initialization is
>> detected. Is this a bug or is there some other way of recovering the
>> FUNCTION_DECL? It doesn't need to be modular, I just have to find a
>> way to dump the function.
>
> At which point during the compilation does it not work?  I suppose
> at the point where the qualified variants are already optimized away.
>
> Richard.
>
>> Thanks,
>>
>> Ehren
>>
>


Re: VTA/debugging vs reload-v2

2010-04-05 Thread Jakub Jelinek
On Mon, Apr 05, 2010 at 05:18:35PM -0600, Jeff Law wrote:
>> 2. When renaming references from P to P' in a region, do take debug
>> insns in the region into account, renaming references in debug insns as
>> you would in any other insn.
>>
> OK.  So presumably the 2nd argument in a VAR_LOCATION can be any rtl  
> expression?  Meaning I have to parse it looking for things that need  
> changing?Right?

Yes, it can be arbitrary valid RTL (validate_change/verify_changes allow
any changes to DEBUG_INSNs).  The problematic stuff is mainly when some RTL
with non-VOIDmode (REG, MEM etc.) needs to be replaced with a VOIDmode
constant - in that case simplify_replace_{,fn_}rtx needs to be used to
change the invalid RTL into valid.  We don't want say
(zero_extend:DI (const_int 6)) or (subreg:QI (const_int 12345678) 4) etc.
staying around in the DEBUG_INSNs.  But I guess for reload2 you'll be
changing just REGs and MEMs to other REGs and MEMs - in that case
just a replacement through say for_each_rtx is possible too.

Jakub