Re: [tsan] Instrument atomics

2012-11-24 Thread Jakub Jelinek
On Fri, Nov 23, 2012 at 08:10:39PM +0400, Dmitry Vyukov wrote:
> I've just committed a patch to llvm with failure_memory_order (r168518).

Shouldn't it be something like this instead?

--- tsan_interface_atomic.cc.jj 2012-11-23 17:20:49.0 +0100
+++ tsan_interface_atomic.cc2012-11-23 19:06:05.917147568 +0100
@@ -199,15 +199,17 @@ static T AtomicFetchXor(ThreadState *thr
 template
 static bool AtomicCAS(ThreadState *thr, uptr pc,
 volatile T *a, T *c, T v, morder mo, morder fmo) {
-  (void)fmo;
   if (IsReleaseOrder(mo))
 Release(thr, pc, (uptr)a);
   T cc = *c;
   T pr = __sync_val_compare_and_swap(a, cc, v);
-  if (IsAcquireOrder(mo))
-Acquire(thr, pc, (uptr)a);
-  if (pr == cc)
+  if (pr == cc) {
+if (IsAcquireOrder(mo))
+  Acquire(thr, pc, (uptr)a);
 return true;
+  }
+  if (IsAcquireOrder(fmo))
+Acquire(thr, pc, (uptr)a);
   *c = pr;
   return false;
 }

The failure model is for when the CAS fails, the success model for when it
succeeds.  Or perhaps the IsReleaseOrder/Release lines should also be after
the __sync_val_compare_and_swap (that doesn't matter, right? one thing is
the actual accesses, another thing is tsan events) and also depend on the
corresponding model?
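For reference, the success/failure split described here has the same shape as C++11's two-order compare-exchange: the third order argument applies only when the exchange succeeds, the fourth only when it fails. A minimal user-level sketch (illustrative only, not tsan code):

```cpp
#include <atomic>
#include <cassert>

// Two-order CAS: memory_order_acq_rel is used on success,
// memory_order_acquire on failure (C++11 [atomics.types.operations]).
// On failure, 'expected' is updated to the value actually observed.
inline bool try_update(std::atomic<int> &a, int &expected, int desired) {
  return a.compare_exchange_strong(expected, desired,
                                   std::memory_order_acq_rel,   // success order
                                   std::memory_order_acquire);  // failure order
}
```

This mirrors the patch above: the acquire event for the success order fires only on the `pr == cc` path, and the failure order governs the other path.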

Jakub


Re: [tsan] Instrument atomics

2012-11-24 Thread Dmitry Vyukov
On Sat, Nov 24, 2012 at 12:11 PM, Jakub Jelinek  wrote:
> On Fri, Nov 23, 2012 at 08:10:39PM +0400, Dmitry Vyukov wrote:
>> I've just committed a patch to llvm with failure_memory_order (r168518).
>
> Shouldn't it be something like this instead?
>
> --- tsan_interface_atomic.cc.jj 2012-11-23 17:20:49.0 +0100
> +++ tsan_interface_atomic.cc2012-11-23 19:06:05.917147568 +0100
> @@ -199,15 +199,17 @@ static T AtomicFetchXor(ThreadState *thr
>  template
>  static bool AtomicCAS(ThreadState *thr, uptr pc,
>  volatile T *a, T *c, T v, morder mo, morder fmo) {
> -  (void)fmo;
>if (IsReleaseOrder(mo))
>  Release(thr, pc, (uptr)a);
>T cc = *c;
>T pr = __sync_val_compare_and_swap(a, cc, v);
> -  if (IsAcquireOrder(mo))
> -Acquire(thr, pc, (uptr)a);
> -  if (pr == cc)
> +  if (pr == cc) {
> +if (IsAcquireOrder(mo))
> +  Acquire(thr, pc, (uptr)a);
>  return true;
> +  }
> +  if (IsAcquireOrder(fmo))
> +Acquire(thr, pc, (uptr)a);
>*c = pr;
>return false;
>  }
>
> The failure model is for when the CAS fails, the success model for when it
> succeeds.  Or perhaps the IsReleaseOrder/Release lines should also be after
> the __sync_val_compare_and_swap (that doesn't matter, right? one thing is
> the actual accesses, another thing is tsan events) and also depend on the
> corresponding model?


You are right.
But llvm does not pass failure_mo yet, so I can't actually use it right now.
Moreover, the logic before I added fail_mo was incorrect as well.
Precise modeling of the C++11 MM is impossible anyway. We've found that
what we have now is enough for all the real programs we verify (millions
of LOC). ThreadSanitizer is not intended for verification of tricky
synchronization algorithms (it can't be done this way anyway). So the
plan is to have a modestly precise model (at least support basic
acquire-release sync) and then improve it as we see real-world
failures.
However, it's great that gcc will pass it, because such changes that
involve 2 compilers and the runtime are a pain.

I must admit that my model of work is based on the fact that I was
able to continuously improve and redeploy the llvm-based tsan as needed,
outside of any release cycles.  If we are talking about shipping tsan
as part of gcc 4.8, then perhaps we need to revisit that, and prepare
some kind of list of things that must be done before the release (there
is still a ton of work on the runtime side).


Re: [tsan] Instrument atomics

2012-11-24 Thread Dmitry Vyukov
On Fri, Nov 23, 2012 at 9:07 PM, Xinliang David Li  wrote:
> On Fri, Nov 23, 2012 at 8:39 AM, Jakub Jelinek  wrote:
>> On Fri, Nov 23, 2012 at 08:10:39PM +0400, Dmitry Vyukov wrote:
>>> > This patch attempts to instrument __atomic_* and __sync_* builtins.
>>> > Unfortunately for none of the builtins there is 1:1 mapping to the tsan
>>> > replacements, tsan uses weirdo memory model encoding (instead of values
>>> > from 0 to 5 apparently 1 << 0 to 1 << 5, as if one could have more than
>>> > one memory model at the same time), so even for the easy cases GCC
>>> > has to transform them.
>>>
>>>
>>> gcc must be using old version of the library. I've switched to ABI
>>> constants some time ago.
>>
>> Ah, it was just me looking at llvm compiler-rt tsan checkout from a few days
>> ago.  Guess I'll need to update the patch.  So, it now expects 0 for relaxed
>> up to 5 for sequentially consistent?
>>
>>> > More importantly, there is no way to pass through
>>> > __ATOMIC_HLE_ACQUIRE/__ATOMIC_HLE_RELEASE.  Right now the patch just gives
>>> > up and doesn't replace anything if e.g.
>>> > __atomic_store_n (p, 0, __ATOMIC_HLE_RELEASE | __ATOMIC_RELEASE);
>>> > is seen, perhaps one could ignore the hle bits which are just an
>>> > optimization, but there is no way to find out in the generic code
>>> > whether the extra bits are just an optimization or change the behavior
>>> > of the builtin.
>>> >
>>>
>>> Do you mean hardware lock elision? oh, boy
>>
>> Yes.
>>
>>> It's not just "an atomic". It should be legal to downgrade them to plain
>>> atomic ops (however, I am not sure what memory order they must have... is
>>> it possible to have HLE_ACQUIRE before seq_cst atomic rmw?). And I think
>>> that's what we need to do.
>>
>> Perhaps if there wasn't the compatibility hack or what is that 100500
>> comparison, or if it could be tweaked, we could pass through the HLE bits
>> too and let the library decide what to do with it.
>>
>> If HLE bits are set, the low order bits (model & 65535) contains the
>> normal memory model, i.e. 0 (relaxed) through 5 (seq_cst), and either 65536
>> (hle acquire) or 131072 (hle release) is ored with that.
>>
>>> Well, it's a dirty implementation that relies on x86 memory model (and no
>>> compiler optimizations, well, apparently there are data races :)).
>>> I think eventually I will just replace them with mutexes.
>>
>> I don't see why mutexes would be better than just plain __atomic_* builtins.
>> With mutexes there wouldn't be any atomicity...
>>
>>> I've just committed a patch to llvm with failure_memory_order (r168518).
>>
>> Ok, I'll adjust the patch to pass both memory models through then.
>>
>>> Yeah, I think it's better to transform them to a more standard ones (llvm
>>> also has own weird atomic ops and own memory orders).
>>
>> Ok, no change on the GCC side then needed for that beyond what I posted.
>>
>>> > Oh, and there are no 16-byte atomics provided by libtsan, the library
>>> > would need to be built with -mcx16 on x86_64 for that and have the
>>> > additional entry points for unsigned __int128.  The patch doesn't touch
>>> > the 16-byte atomics, leaves them as is.
>>>
>>> I think that's fine for now.
>>
>> Perhaps.  Note that such atomics are used also e.g. for #pragma omp atomic
>> on long double etc.
>>
>>> That's what llvm does as well. But it inserts a fast path before
>>> __cxa_guard_acquire -- acquire-load of the lock word. Doesn't gcc do the
>>> same?
>>> tsan intercepts __cxa_guard functions.
>>
>> Yes, except it isn't __atomic_load_*, but plain memory read.
>>   _3 = MEM[(char *)&_ZGVZ3foovE1a];
>>   if (_3 == 0)
>> goto ;
>>   else
>> goto ;
>>
>>   :
>>   fast path, whatever;
>>
>>   :
>>   _5 = __cxa_guard_acquire (&_ZGVZ3foovE1a);
>>   ...
>>
>> So, right now tsan would just instrument it as __tsan_read1 from
>> &_ZGVZ3foovE1a rather than any atomic load.
>>
>>> Well, yes, the compiler module must pass to the runtime all memory
>>> accesses, whatever form they have in compiler internal representation.
>>> Yes, I think I need to provide range memory access functions in runtime. I
>>> already have this issue for Go language, there are a lot of range accesses
>>> due to arrays and slices.
>>> I will add them next week.
>>
>> Ok.  A slight problem then is that where the tsan pass sits right now, there
>> is no easy way to find out if the builtin call will be expanded inline or
>> not, so (similar for asan), if we instrument them in the pass, it might be
>> instrumented twice at runtime if the builtin is expanded as a library call
>> (once the added instrumentation for the builtin, once in the intercepted
>> library call).  That isn't wrong, just might need slightly more resources
>> than if we ensured we only instrument the builtin if it isn't expanded
>> inline.
>>
>
> Should inlining of those functions be disabled as if -fno-builtins is 
> specified?

Yes, it sounds reasonable. Performance characteristics under tsan
differ significantly, so most likely we

Re: Ping / update: Re: RFA: hookize ADJUST_INSN_LENGTH

2012-11-24 Thread Richard Sandiford
[responding because you kept me on cc:]

Joern Rennecke  writes:
> This uses the same interface as my previous patch:
> http://gcc.gnu.org/ml/gcc-patches/2012-11/msg00473.html ,
> but I refined the algorithm for the get_insn_variants
> mechanism to work properly with the reworked ARC port -
> http://gcc.gnu.org/ml/gcc-patches/2012-11/msg01891.html -
> the only user so far, and added some documentation.
>
> Bootstrapped in r193731 on i686-pc-linux-gnu .

In http://gcc.gnu.org/ml/gcc-patches/2012-11/msg00029.html
the conversation was roughly:

   > Here's a very general (and rather complicated) interface,
   I wasn't after something as complicated as that.  I just
   wanted to model alignment.

and it looks like this patch implements the very general and rather
complicated interface. :-)  Obviously you're completely free to do that,
but it means that any comments from me are going to be same (and as
unhelpful and unproductive) as last time.  So I still think I should
bow out of this one.

In case that seems unfair, here's a summary of my concerns:

1) As Richard B says, having "locked lengths" with the comment "care must
   be taken to avoid cycles" doesn't sound like good design.  So the
   question was: without this, why would the length be going up and down
   "arbitrarily", even though we're trying to reach a fixed point?
   And the good answer seemed to be: because the backend wants to grow
   some instructions in order to give a particular (mis)alignment to
   later ones.  There's no inherent cycle problem with that, because
   although the length of an instruction can decrease when the padding
   it previously contained is no longer needed, the instruction addresses
   themselves still only move in one direction.  We don't need a locked
   length for this case.

   That kind of alignment optimisation sounds like a useful thing to have,
   but it should be modelled explicitly.  The new patch does model it
   explicitly but still keeps the "locked length: to avoid cycles" thing
   as well.

2) Another reason the lengths could go up and down "arbitrarily" is because
   the backend wants to perform if-conversion, predication, call-it-what-
   you-will on the fly during shorten_branches, and can therefore delete
   or restore instructions relative to previous iterations.  I'm still
   not convinced that's something we should support.

3) As mentioned previously, I don't think ADJUST_LENGTH should be
   aware of ADJUST_LENGTH calls for other instructions.  The way that
   the ARC port "simulates" instructions between the insn passed in the
   previous and current calls doesn't seem right.

4) The patch provides a complex interface and allows for complex decisions
   to be made, but the results of those decisions aren't stored in the rtl.
   That's not a problem for ARC because ARC keeps an on-the-side FSM
   to track these things.  I don't think we should encourage or require
   that though.

FWIW, one interface that would deal with the alignment side of things is:

unsigned HOST_WIDE_INT preferred_insn_alignment (rtx insn);

which returns an instruction's preferred alignment, and which could
(if useful) be called for SEQUENCEs and/or the insns inside them.

rtx longer_insn_form (rtx insn, unsigned HOST_WIDE_INT current_length,
  unsigned HOST_WIDE_INT target_length);

which returns a pattern that would turn INSN from being CURRENT_LENGTH
to TARGET_LENGTH bytes in length, or null if no such pattern exists.

shorten_branches would only use these hooks when optimising the insns
for speed rather than size (the posted patch doesn't seem to account
for that).  It could record the original pattern internally so that
it can revert to that pattern if the extra padding is no longer needed.
longer_insn_form could be used for label alignment as well as
instruction alignment.

There's the potential for a third hook to return the possible target
lengths for a given instruction, a bit like the one in the patch,
so that alignment could be distributed over several instructions.
I think that's an unnecessary complication at this stage though.
(I don't think the original ARC submission supported it.)

But like I say, that's all just for the record.

Richard


Committed: handle negative numbers in gcc-gdb-test.exp

2012-11-24 Thread Hans-Peter Nilsson
Without this, I got weird ERRORs and those Tcl backtraces you come
to hate, instead of the expected FAILs.  Committed as obvious after
running a failing guality test and observing the intended change.
(Well ok, *adding an explanatory comment* is apparently not obvious,
but I'll take that liberty...)

testsuite:
* lib/gcc-gdb-test.exp (gdb-test): Pass -- as first argument
to send_log.

Index: gcc/testsuite/lib/gcc-gdb-test.exp
===
--- gcc/testsuite/lib/gcc-gdb-test.exp  (revision 192677)
+++ gcc/testsuite/lib/gcc-gdb-test.exp  (working copy)
@@ -74,7 +74,9 @@ proc gdb-test { args } {
if { $first == $second } {
pass "$testname"
} else {
-   send_log "$first != $second\n"
+   # We need the -- to disambiguate $first from an option,
+   # as it may be negative.
+   send_log -- "$first != $second\n"
fail "$testname"
}
remote_close target

brgds, H-P


Re: [Patch,testsuite] ad PR52641: More fixes for not-so-common targets

2012-11-24 Thread Hans-Peter Nilsson
On Fri, 23 Nov 2012, Georg-Johann Lay wrote:
> Here are some more fixes for 16-bit int and similar.

>   * gcc.c-torture/execute/20120919-1.x: New file (int32plus).

No, you should be able to use dg-directives in the main file these days.
(The .x files have been obsolete for a few years now, IIRC.)

brgds, H-P


Re: [PATCH] Allocate extra 16 bytes for -fsanitize=address

2012-11-24 Thread Hans-Peter Nilsson
On Fri, 23 Nov 2012, H.J. Lu wrote:
> On Fri, Nov 23, 2012 at 9:38 AM, Jakub Jelinek  wrote:
> > On Fri, Nov 23, 2012 at 09:23:37AM -0800, H.J. Lu wrote:
> >> This patch allocates extra 16 bytes for -fsanitize=address so that
> >> asan won't report read beyond memory buffer. It is used by
> >> bootstrap-asan.  OK to install?
> >
> > As valgrind warns about that too, I'd say we should do that unconditionally,
> > the additional 16-bytes just aren't a big deal.
>
> This isn't sufficient for valgrind since valgrind will also report
> reading uninitialized data,

Only if that uninitialized data is actually used in a conditional.

> which requires additional memset
> on extra 16 bytes.

(For plain operations, valgrind just operates on, typically
passing along, the undefinedness.)

Maybe that's what you meant, in which case my comment just
clarifies what I saw as an ambiguity in your statement.

brgds, H-P


Re: [tsan] Instrument atomics

2012-11-24 Thread Dmitry Vyukov
On Fri, Nov 23, 2012 at 8:39 PM, Jakub Jelinek  wrote:
> On Fri, Nov 23, 2012 at 08:10:39PM +0400, Dmitry Vyukov wrote:
>> > This patch attempts to instrument __atomic_* and __sync_* builtins.
>> > Unfortunately for none of the builtins there is 1:1 mapping to the tsan
>> > replacements, tsan uses weirdo memory model encoding (instead of values
>> > from 0 to 5 apparently 1 << 0 to 1 << 5, as if one could have more than
>> > one memory model at the same time), so even for the easy cases GCC
>> > has to transform them.
>>
>>
>> gcc must be using old version of the library. I've switched to ABI
>> constants some time ago.
>
> Ah, it was just me looking at llvm compiler-rt tsan checkout from a few days
> ago.  Guess I'll need to update the patch.  So, it now expects 0 for relaxed
> up to 5 for sequentially consistent?

Yes.


>> > More importantly, there is no way to pass through
>> > __ATOMIC_HLE_ACQUIRE/__ATOMIC_HLE_RELEASE.  Right now the patch just gives
>> > up and doesn't replace anything if e.g.
>> > __atomic_store_n (p, 0, __ATOMIC_HLE_RELEASE | __ATOMIC_RELEASE);
>> > is seen, perhaps one could ignore the hle bits which are just an
>> > optimization, but there is no way to find out in the generic code
>> > whether the extra bits are just an optimization or change the behavior
>> > of the builtin.
>> >
>>
>> Do you mean hardware lock elision? oh, boy
>
> Yes.
>
>> It's not just "an atomic". It should be legal to downgrade them to plain
>> atomic ops (however, I am not sure what memory order they must have... is
>> it possible to have HLE_ACQUIRE before seq_cst atomic rmw?). And I think
>> that's what we need to do.
>
> Perhaps if there wasn't the compatibility hack or what is that 100500
> comparison, or if it could be tweaked, we could pass through the HLE bits
> too and let the library decide what to do with it.

100500 was part of a tricky multi-step migration to the new scheme,
because we can't deploy the compiler and the runtime atomically.
I think we just need to drop HLE bits. I don't know what the runtime
can possibly do with them. And I don't want one more tricky multi-step
migration.
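Dropping the hint bits, given the encoding Jakub quotes above (model & 65535 is the plain memory order, 65536 is HLE acquire, 131072 is HLE release), would be a one-line mask. A sketch with our own constant names, not tsan's:

```cpp
#include <cassert>

// Bit layout quoted in the thread: low 16 bits carry the C++11 order
// (0 = relaxed ... 5 = seq_cst); the HLE hints are ored in above them.
constexpr int kModelMask  = 0xffff;   // 65535
constexpr int kHleAcquire = 1 << 16;  // 65536, __ATOMIC_HLE_ACQUIRE
constexpr int kHleRelease = 1 << 17;  // 131072, __ATOMIC_HLE_RELEASE

// Discard the elision hints and keep only the plain memory order.
inline int drop_hle_bits(int model) { return model & kModelMask; }
```

Since the HLE bits are only an optimization hint for the hardware, masking them off downgrades the operation to a plain atomic of the same order, which matches the "just drop them" plan.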



>
> If HLE bits are set, the low order bits (model & 65535) contains the
> normal memory model, i.e. 0 (relaxed) through 5 (seq_cst), and either 65536
> (hle acquire) or 131072 (hle release) is ored with that.
>
>> Well, it's a dirty implementation that relies on x86 memory model (and no
>> compiler optimizations, well, apparently there are data races :)).
>> I think eventually I will just replace them with mutexes.
>
> I don't see why mutexes would be better than just plain __atomic_* builtins.
> With mutexes there wouldn't be any atomicity...

For a race detector, any atomic operation is a heavy operation, and it
is not atomic anyway.
Currently I do:

update vector clock
do the atomic operation
update vector clock

where 'update vector clock' is
lock container mutex
find atomic descriptor
lock atomic descriptor
unlock container mutex
update clock (O(N))
unlock atomic descriptor

it's much wiser to combine the two vector clock updates and do the
atomic operation itself under the atomic descriptor mutex.
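The combined scheme described above can be sketched as follows. All names and structures here are illustrative stand-ins for the tsan runtime, and the clock merges themselves are elided:

```cpp
#include <cassert>
#include <mutex>
#include <unordered_map>
#include <vector>

// Per-atomic descriptor: its own mutex plus an O(N) vector clock.
struct AtomicDesc {
  std::mutex mu;
  std::vector<unsigned> clock;
};

// Container mapping atomic addresses to descriptors, guarded by its
// own mutex (held only briefly during lookup).
class AtomicRegistry {
  std::mutex container_mu_;
  std::unordered_map<void *, AtomicDesc> descs_;
 public:
  AtomicDesc &find(void *addr) {
    std::lock_guard<std::mutex> l(container_mu_);
    return descs_[addr];  // default-constructs on first use
  }
};

// One descriptor lock instead of two full lock/update/unlock rounds:
// acquire-merge, the operation itself, and release-merge all happen
// under the same critical section.
template <typename T, typename Op>
T atomic_op_combined(AtomicRegistry &reg, T *addr, Op op) {
  AtomicDesc &d = reg.find(addr);
  std::lock_guard<std::mutex> l(d.mu);
  // acquire: merge descriptor clock into thread clock (elided)
  T result = op(addr);
  // release: merge thread clock into descriptor clock (elided)
  return result;
}
```

Doing the operation under the descriptor mutex also means the operation itself no longer needs to be a hardware atomic, which is why mutexes rather than `__atomic_*` builtins suffice here.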



>> I've just committed a patch to llvm with failure_memory_order (r168518).
>
> Ok, I'll adjust the patch to pass both memory models through then.
>
>> Yeah, I think it's better to transform them to a more standard ones (llvm
>> also has own weird atomic ops and own memory orders).
>
> Ok, no change on the GCC side then needed for that beyond what I posted.
>
>> > Oh, and there are no 16-byte atomics provided by libtsan, the library
>> > would need to be built with -mcx16 on x86_64 for that and have the
>> > additional entry points for unsigned __int128.  The patch doesn't touch
>> > the 16-byte atomics, leaves them as is.
>>
>> I think that's fine for now.
>
> Perhaps.  Note that such atomics are used also e.g. for #pragma omp atomic
> on long double etc.
>
>> That's what llvm does as well. But it inserts a fast path before
>> __cxa_guard_acquire -- acquire-load of the lock word. Doesn't gcc do the
>> same?
>> tsan intercepts __cxa_guard functions.
>
> Yes, except it isn't __atomic_load_*, but plain memory read.
>   _3 = MEM[(char *)&_ZGVZ3foovE1a];
>   if (_3 == 0)
> goto ;
>   else
> goto ;
>
>   :
>   fast path, whatever;
>
>   :
>   _5 = __cxa_guard_acquire (&_ZGVZ3foovE1a);
>   ...
>
> So, right now tsan would just instrument it as __tsan_read1 from
> &_ZGVZ3foovE1a rather than any atomic load.


Looks like a bug. That needs to be load-acquire with proper compiler
and hardware memory ordering.
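The required shape of the fast path can be sketched like this, with the `__cxa_guard_*` calls modeled by a plain mutex; this is an illustration of the acquire requirement, not the Itanium C++ ABI implementation, and the initializer value is arbitrary:

```cpp
#include <atomic>
#include <cassert>
#include <mutex>

static std::atomic<char> guard{0};
static std::mutex guard_mu;
static int object;

int &lazy_object() {
  // The inline fast-path check must be an acquire load; a plain read
  // (as in the gimple above) can race and miss the initializer's stores.
  if (guard.load(std::memory_order_acquire) == 0) {
    std::lock_guard<std::mutex> l(guard_mu);        // ~ __cxa_guard_acquire
    if (guard.load(std::memory_order_relaxed) == 0) {
      object = 42;                                  // run the initializer
      guard.store(1, std::memory_order_release);    // ~ __cxa_guard_release
    }
  }
  return object;
}
```

The release store publishing the guard pairs with the acquire load on the fast path, so a thread that sees the guard set also sees the fully initialized object.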




>> Well, yes, the compiler module must pass to the runtime all memory
>> accesses, whatever form they have in compiler internal representation.
>> Yes, I think I need to provide range memory access functions in runtime. I
>> already have this issue for Go language, there are a lot of range accesses
>> due to arrays and slices.
>> I will add them next week.
>
> Ok.  A slight problem then is that where the tsan pass sits ri

Re: Ping / update: Re: RFA: hookize ADJUST_INSN_LENGTH

2012-11-24 Thread Joern Rennecke

Quoting Richard Sandiford :


[responding because you kept me on cc:]

Joern Rennecke  writes:

This uses the same interface as my previous patch:
http://gcc.gnu.org/ml/gcc-patches/2012-11/msg00473.html ,
but I refined the algorithm for the get_insn_variants
mechanism to work properly with the reworked ARC port -
http://gcc.gnu.org/ml/gcc-patches/2012-11/msg01891.html -
the only user so far, and added some documentation.

Bootstrapped in r193731 on i686-pc-linux-gnu .


In http://gcc.gnu.org/ml/gcc-patches/2012-11/msg00029.html
the conversation was roughly:

   > Here's a very general (and rather complicated) interface,
   I wasn't after something as complicated as that.  I just
   wanted to model alignment.

and it looks like this patch implements the very general and rather
complicated interface. :-)


For the port writer, the complexities exposed are proportional to the
complexities of the port that (s)he wants to describe.


 Obviously you're completely free to do that,
but it means that any comments from me are going to be same (and as
unhelpful and unproductive) as last time.  So I still think I should
bow out of this one.

In case that seems unfair, here's a summary of my concerns:

1) As Richard B says, having "locked lengths" with the comment "care must
   be taken to avoid cycles" doesn't sound like good design.  So the
   question was: without this, why would the length be going up and down
   "arbitrarily", even though we're trying to reach a fixed point?
   And the good answer seemed to be: because the backend wants to grow
   some instructions in order to give a particular (mis)alignment to
   later ones.  There's no inherent cycle problem with that, because
   although the length of an instruction can decrease when the padding
   it previously contained is no longer needed, the instruction addresses
   themselves still only move in one direction.  We don't need a locked
   length for this case.


I've seen cycles between different alignment requirements for different
instructions and branch lengths that were borderline.
The new approach of stabilizing on the instruction variants that are available
should give saner convergence, but I was afraid there might be more complex
cycles that remain.  Well, I'll try to remove the lock_length logic and see
what happens.  But obviously I can't test with all possible code that could
be compiled - I just do a test build of an elf32 arc-linux-uclibc toolchain
including libraries and linux kernels.  It could happen that that succeeds
but we find a few months later that there is something else that causes
cycles.


2) Another reason the lengths could go up and down "arbitrarily" is because
   the backend wants to perform if-conversion, predication, call-it-what-
   you-will on the fly during shorten_branches, and can therefore delete
   or restore instructions relative to previous iterations.  I'm still
   not convinced that's something we should support.

3) As mentioned previously, I don't think ADJUST_LENGTH should be
   aware of ADJUST_LENGTH calls for other instructions.  The way that
   the ARC port "simulates" instructions between the insn passed in the
   previous and current calls doesn't seem right.


I have removed these aspects of the ARC port, as you can see in:
http://gcc.gnu.org/ml/gcc-patches/2012-11/msg01891.html

The new approach is to do an arc-specific if-conversion based on the ccfsm
action, and run this pass at a few strategic places, so that branch_shortening
sees these effects in the rtl.


4) The patch provides a complex interface and allows for complex decisions
   to be made, but the results of those decisions aren't stored in the rtl.
   That's not a problem for ARC because ARC keeps an on-the-side FSM
   to track these things.  I don't think we should encourage or require
   that though.


As stated above, the arc port no longer uses the FSM during branch shortening.
The interface needs to be complex because the target instruction selection
has to take complex circumstances into account.


FWIW, one interface that would deal with the alignment side of things is:

unsigned HOST_WIDE_INT preferred_insn_alignment (rtx insn);

which returns an instruction's preferred alignment, and which could
(if useful) be called for SEQUENCEs and/or the insns inside them.



No, that doesn't make sense, because alignment cost differentials depend
on instruction length.
As length changes during branch shortening, you'd get different alignment
requirements, and thus infinite cycles.


rtx longer_insn_form (rtx insn, unsigned HOST_WIDE_INT current_length,
  unsigned HOST_WIDE_INT target_length);

which returns a pattern that would turn INSN from being CURRENT_LENGTH
to TARGET_LENGTH bytes in length, or null if no such pattern exists.


And again you would ignore the cost and alignment interactions.


shorten_branches would only use these hooks when optimising the insns
for speed rather than size (the posted patch doesn't see

Re: Ping / update: Re: RFA: hookize ADJUST_INSN_LENGTH

2012-11-24 Thread Joern Rennecke

Quoting Joern Rennecke :


Quoting Richard Sandiford :



1) As Richard B says, having "locked lengths" with the comment "care must
  be taken to avoid cycles" doesn't sound like good design.  So the
  question was: without this, why would the length be going up and down
  "arbitrarily", even though we're trying to reach a fixed point?
  And the good answer seemed to be: because the backend wants to grow
  some instructions in order to give a particular (mis)alignment to
  later ones.  There's no inherent cycle problem with that, because
  although the length of an instruction can decrease when the padding
  it previously contained is no longer needed, the instruction addresses
  themselves still only move in one direction.  We don't need a locked
  length for this case.


I've seen cycles between different alignment requirements for different
instructions and branch lengths that were borderline.
The new approach of stabilizing on the instruction variants that are available
should give saner convergence, but I was afraid there might be more complex
cycles that remain.  Well, I'll try to remove the lock_length logic and see
what happens.  But obviously I can't test with all possible code that could
be compiled - I just do a test build of an elf32 arc-linux-uclibc toolchain
including libraries and linux kernels.  It could happen that that succeeds
but we find a few months later that there is something else that causes
cycles.


Indeed, it seems to work OK so far without the lock_length mechanism.

Currently bootstrapping in r193777 on i686-pc-linux-gnu; it has
already built stage2 final.o.
2012-11-24  Joern Rennecke  

* doc/tm.texi.in (@hook TARGET_ADJUST_INSN_LENGTH): Add.
(@hook TARGET_INSN_LENGTH_PARAMETERS): Add.
* doc/tm.texi: Regenerate.
* final.c (get_attr_length_1): Assert HAVE_ATTR_length.
Use targetm.adjust_insn_length instead of ADJUST_INSN_LENGTH.
(shorten_branches_context_t): New typedef.
(adjust_length): New function.
(shorten_branches): Use adjust_length instead of ADJUST_INSN_LENGTH.
Try to satisfy alignment by using variable length instructions.
* target.def (adjust_insn_length, insn_length_parameters): New hooks.
* target.h (insn_length_variant_t, insn_length_parameters_t): New types.
* targhooks.c (default_adjust_insn_length): New function.
* targhooks.h (default_adjust_insn_length): Declare.
* genattrtab.c (make_length_attrs): Generate an insn_current_length
function that is also valid for prima facie invariant length
instructions.

Index: doc/tm.texi
===
--- doc/tm.texi (revision 2691)
+++ doc/tm.texi (working copy)
@@ -11365,3 +11365,15 @@ @deftypefn {Target Hook} {unsigned HOST_
 @deftypevr {Target Hook} {unsigned char} TARGET_ATOMIC_TEST_AND_SET_TRUEVAL
 This value should be set if the result written by @code{atomic_test_and_set} 
is not exactly 1, i.e. the @code{bool} @code{true}.
 @end deftypevr
+
+@deftypefn {Target Hook} int TARGET_ADJUST_INSN_LENGTH (rtx @var{insn}, int 
@var{length}, bool @var{in_delay_sequence})
+Return an adjusted length for @var{insn}.  @var{length} is the value that has 
been calculated using the @code{length} instruction attribute.  
@var{in_delay_sequence} indicates whether @var{insn} forms part of a delay 
sequence.  The 
default implementation uses @code{ADJUST_INSN_LENGTH}, if defined.
+@end deftypefn
+
+@deftypefn {Target Hook} void TARGET_INSN_LENGTH_PARAMETERS 
(insn_length_parameters_t *@var{insn_length_parameters})
+Fill in values used for branch shortening.  The type  
@code{insn_length_parameters_t} is defined in @file{target-def.h}.  The main 
feature is the @code{get_variants} function. @smallexample
+int (*get_variants) (rtx insn, int length, bool sequence_p, bool target_p,
+insn_length_variant_t *variants)
+@end smallexample
+ For instructions where the ordinary monotonic branch shortening is 
insufficient to describe the alternatives, e.g. because there is alignment 
involved, get_variants can provide two or more variants for the instruction.  
The return value is the number of variants filled in, which must never exceed 
the number filled in by @code{insn_length_parameters} in the @var{max_variants} 
field.  The set of variants for any given instruction filled in should not vary 
during branch shortening, but rather unavailable variants should be flagged 
with a @samp{false} @var{enabled} field.  This allows @code{shorten_branches} 
to keep track of variants that have ever been disabled in a previous iteration 
and keep them disabled, so as to avoid infinite looping inside 
@code{shorten_branches}.  The @var{length} parameter provides the length 
calculated previously from attributes.  @var{sequence_p} indicates if the 
instruction presented is inside a @code{SEQUENCE}.  Note, if you make 
@code{get_variants} provide variants for an entire @code{SEQUE

[patch, fortran] Fix PR 55314, rejects-valid regression

2012-11-24 Thread Thomas Koenig

Hello world,

the attached patch fixes a 4.6/4.7/4.8 rejects-valid regression.
OK for trunk?

Thomas

2012-11-24  Thomas Koenig  

PR fortran/55314
* resolve.c (resolve_allocate_deallocate):  Compare all
subscripts when deciding if to reject a (de)allocate
statement.

2012-11-24  Thomas Koenig  

PR fortran/55314
* gfortran.dg/allocate_error_4.f90:  New test.
! { dg-do compile }
! PR fortran/55314 - the second allocate statement was rejected.

program main
  implicit none
  integer :: max_nb
  type comm_mask
integer(4), pointer :: mask(:)
  end type comm_mask
  type (comm_mask), allocatable, save :: encode(:,:)
  max_nb=2
  allocate( encode(1:1,1:max_nb))
  allocate( encode(1,1)%mask(1),encode(1,2)%mask(1))
  deallocate( encode(1,1)%mask,encode(1,2)%mask)
  allocate( encode(1,1)%mask(1),encode(1,1)%mask(1))  ! { dg-error "also appears at" }
end program main
Index: resolve.c
===
--- resolve.c	(Revision 192894)
+++ resolve.c	(Arbeitskopie)
@@ -7618,12 +7618,18 @@ resolve_allocate_deallocate (gfc_code *code, const
 
 		  if (pr->next && qr->next)
 			{
+			  int i;
 			  gfc_array_ref *par = &(pr->u.ar);
 			  gfc_array_ref *qar = &(qr->u.ar);
-			  if ((par->start[0] != NULL || qar->start[0] != NULL)
-			  && gfc_dep_compare_expr (par->start[0],
-		   qar->start[0]) != 0)
-			break;
+
+			  for (i=0; i<par->dimen; i++)
+			{
+			  if ((par->start[i] != NULL
+   || qar->start[i] != NULL)
+  && gfc_dep_compare_expr (par->start[i],
+			   qar->start[i]) != 0)
+goto break_label;
+			}
 			}
 		}
 		  else
@@ -7635,6 +7641,8 @@ resolve_allocate_deallocate (gfc_code *code, const
 		  pr = pr->next;
 		  qr = qr->next;
 		}
+	break_label:
+	  ;
 	}
 	}
 }


Re: [patch, fortran] Fix PR 55314, rejects-valid regression

2012-11-24 Thread Janus Weil
Hi Thomas,

> the attached patch fixes a 4.6/4.7/4.8 rejects-valid regression.
> OK for trunk?

looks good to me. Thanks for the patch. Ok for trunk and also for the
4.6/4.7 branches ...

Cheers,
Janus




> 2012-11-24  Thomas Koenig  
>
> PR fortran/55314
> * resolve.c (resolve_allocate_deallocate):  Compare all
> subscripts when deciding whether to reject a (de)allocate
> statement.
>
> 2012-11-24  Thomas Koenig  
>
> PR fortran/55314
> * gfortran.dg/allocate_error_4.f90:  New test.


[patch] Fix x32 multiarch name (x86_64-linux-gnux32)

2012-11-24 Thread Matthias Klose
For x86_64-linux-gnux32 I used 'x32-linux-gnu' as the multiarch name, which is
wrong, should be 'x86_64-linux-gnux32' instead (see [1]). Pointed out by Daniel
Schepler.

Committed as obvious.

  Matthias

[1] http://bugs.debian.org/667037

* gcc/config/i386/t-linux64 (MULTILIB_OSDIRNAMES): Use
x86_64-linux-gnux32 as multiarch name for x32.

Index: gcc/config/i386/t-linux64
===
--- gcc/config/i386/t-linux64   (Revision 193778)
+++ gcc/config/i386/t-linux64   (Arbeitskopie)
@@ -36,4 +36,4 @@
 MULTILIB_DIRNAMES   = $(patsubst m%, %, $(subst /, ,$(MULTILIB_OPTIONS)))
 MULTILIB_OSDIRNAMES = m64=../lib64$(call if_multiarch,:x86_64-linux-gnu)
 MULTILIB_OSDIRNAMES+= m32=$(if $(wildcard $(shell echo
$(SYSTEM_HEADER_DIR))/../../usr/lib32),../lib32,../lib)$(call
if_multiarch,:i386-linux-gnu)
-MULTILIB_OSDIRNAMES+= mx32=../libx32$(call if_multiarch,:x32-linux-gnu)
+MULTILIB_OSDIRNAMES+= mx32=../libx32$(call if_multiarch,:x86_64-linux-gnux32)


Re: [C++ Patch] PR 55446

2012-11-24 Thread Jason Merrill

OK.

Jason


PR 55438: Incorrect use of BIGGEST_ALIGNMENT

2012-11-24 Thread Richard Sandiford
In http://gcc.gnu.org/ml/gcc-patches/2012-11/msg00257.html I said:

  get_best_mode has various checks to decide what counts as an acceptable
  bitfield mode.  It actually has two copies of them, with slightly different
  alignment checks:

MIN (unit, BIGGEST_ALIGNMENT) > align

  vs.

unit <= MIN (align, BIGGEST_ALIGNMENT)

  The second looks more correct, since we can't necessarily guarantee
  larger alignments than BIGGEST_ALIGNMENT in all cases.

PR 55438 shows why I was wrong.  BIGGEST_ALIGNMENT is not (as I thought)
the biggest supported alignment, but the:

  Biggest alignment that any data type can require on this machine, in
  bits.  Note that this is not the biggest alignment that is supported,
  just the biggest alignment that, when violated, may cause a fault.

So it's perfectly possible for BIGGEST_ALIGNMENT to be 8 on 32-bit machines.

As things stand, my change causes get_best_mode to reject anything larger
than a byte for that combination, which causes infinite recursion between
store_fixed_bit_field and store_split_bit_field.

There are two problems.  First:

  if (!bitregion_end_)
{
  /* We can assume that any aligned chunk of ALIGN_ bits that overlaps
 the bitfield is mapped and won't trap.  */
  bitregion_end_ = bitpos + bitsize + align_ - 1;
  bitregion_end_ -= bitregion_end_ % align_ + 1;
}

is too conservative if the alignment is capped to BIGGEST_ALIGNMENT.
We should be able to guarantee that word-aligned memory is mapped
in at least word-sized chunks.  (If you think now is the time to
add a MIN_PAGE_SIZE macro, perhaps with MAX (BITS_PER_WORD,
BIGGEST_ALIGNMENT) as its default, I can do that instead.)
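
For reference, the rounding in that snippet can be modelled outside GCC; this
is a toy Python sketch of the arithmetic only (function name is mine, not GCC's):

```python
def bitregion_end(bitpos, bitsize, units):
    """Model of the stor-layout.c rounding: return the last bit position
    of the smallest UNITS-aligned region covering the bitfield."""
    end = bitpos + bitsize + units - 1
    end -= end % units + 1
    return end

# A byte-sized field at bit 30 straddles two 32-bit words,
# so the assumed-mapped region extends to bit 63:
print(bitregion_end(30, 8, 32))  # 63
```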

Also, in cases like these, the supposedly conservative:

 && GET_MODE_BITSIZE (mode) <= align

doesn't preserve the cap in the original:

MIN (unit, BIGGEST_ALIGNMENT) > align

Fixed by using GET_MODE_ALIGNMENT instead.
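
To see concretely how the two checks diverge when align exceeds
BIGGEST_ALIGNMENT, here is a toy Python model of just the predicates
(illustration only, not GCC code):

```python
BIGGEST_ALIGNMENT = 8  # bits; a target where no type can *require* more

def old_mode_ok(unit, align):
    # original get_best_mode form: reject when MIN (unit, BIGGEST_ALIGNMENT) > align
    return not (min(unit, BIGGEST_ALIGNMENT) > align)

def new_mode_ok(unit, align):
    # rewritten form: accept when unit <= MIN (align, BIGGEST_ALIGNMENT)
    return unit <= min(align, BIGGEST_ALIGNMENT)

# A word-aligned 32-bit access: the old check allows a 32-bit mode,
# the rewritten check only ever passes byte-sized modes.
print(old_mode_ok(32, 32), new_mode_ok(32, 32))  # True False
```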

Tested on x86_64-linux-gnu, powerpc64-linux-gnu and cris-elf.
I checked that the output for a set of gcc .ii files didn't
change for the first two.  OK to install?

Richard


gcc/
PR middle-end/55438
* stor-layout.c (bit_field_mode_iterator::bit_field_mode_iterator):
Don't limit ALIGN_.  Assume that memory is mapped in chunks of at
least word size, regardless of BIGGEST_ALIGNMENT.
(get_best_mode): Use GET_MODE_ALIGNMENT.

Index: gcc/stor-layout.c
===
--- gcc/stor-layout.c   2012-11-24 10:24:46.0 +
+++ gcc/stor-layout.c   2012-11-24 10:33:19.011617853 +
@@ -2643,15 +2643,17 @@ fixup_unsigned_type (tree type)
   unsigned int align, bool volatilep)
 : mode_ (GET_CLASS_NARROWEST_MODE (MODE_INT)), bitsize_ (bitsize),
   bitpos_ (bitpos), bitregion_start_ (bitregion_start),
-  bitregion_end_ (bitregion_end), align_ (MIN (align, BIGGEST_ALIGNMENT)),
+  bitregion_end_ (bitregion_end), align_ (align),
   volatilep_ (volatilep), count_ (0)
 {
   if (!bitregion_end_)
 {
-  /* We can assume that any aligned chunk of ALIGN_ bits that overlaps
+  /* We can assume that any aligned chunk of UNITS bits that overlaps
 the bitfield is mapped and won't trap.  */
-  bitregion_end_ = bitpos + bitsize + align_ - 1;
-  bitregion_end_ -= bitregion_end_ % align_ + 1;
+  unsigned HOST_WIDE_INT units = MIN (align, MAX (BIGGEST_ALIGNMENT,
+ BITS_PER_WORD));
+  bitregion_end_ = bitpos + bitsize + units - 1;
+  bitregion_end_ -= bitregion_end_ % units + 1;
 }
 }
 
@@ -2808,7 +2810,7 @@ get_best_mode (int bitsize, int bitpos,
causes store_bit_field to keep a 128-bit memory reference,
so that the final bitfield reference still has a MEM_EXPR
and MEM_OFFSET.  */
-&& GET_MODE_BITSIZE (mode) <= align
+&& GET_MODE_ALIGNMENT (mode) <= align
 && (largest_mode == VOIDmode
 || GET_MODE_SIZE (mode) <= GET_MODE_SIZE (largest_mode)))
 {


Re: [www-docs] Add note to gcc-4.8/changes.html that DWARF4 is now the default.

2012-11-24 Thread Gerald Pfeifer
On Wed, 21 Nov 2012, Mark Wielaard wrote:
> As mentioned in some bug reports it should be documented that DWARF4 is
> now the default for 4.8 when -g is used (and that one might need a newer
> version of debugger/profiling/tracing tools to use it). So I added the
> following:

I applied the following patch on top of yours, which adds
<code>...</code> markup for command-line options and a line
break.

Gerald

Index: changes.html
===
RCS file: /cvs/gcc/wwwdocs/htdocs/gcc-4.8/changes.html,v
retrieving revision 1.65
diff -u -3 -p -r1.65 changes.html
--- changes.html	23 Nov 2012 13:19:03 -0000	1.65
+++ changes.html	24 Nov 2012 19:43:40 -0000
@@ -54,13 +54,16 @@ by this change.
 
   
 DWARF4 is now the default when generating DWARF debug information.
-  When -g is used on a platform that uses DWARF debugging information,
-  GCC will now default to -gdwarf-4 -fno-debug-types-section.
+  When <code>-g</code> is used on a platform that uses DWARF debugging
+  information, GCC will now default to
+  <code>-gdwarf-4 -fno-debug-types-section</code>.
   GDB 7.5, Valgrind 3.8.0 and elfutils 0.154 debug information consumers
   support DWARF4 by default. Before GCC 4.8 the default version used
-  was DWARF2. To make GCC 4.8 generate an older DWARF version use -g
-  together with -gdwarf-2 or -gdwarf-3. The default for Darwin and
-  VxWorks is still -gdwarf-2 -gstrict-dwarf.
+  was DWARF2. To make GCC 4.8 generate an older DWARF version use
+  <code>-g</code> together with <code>-gdwarf-2</code> or
+  <code>-gdwarf-3</code>.
+  The default for Darwin and VxWorks is still
+  <code>-gdwarf-2 -gstrict-dwarf</code>.
 
A new general optimization level, <code>-Og</code>, has been
   introduced.  It addresses the need for fast compilation and a


libgo patch committed: Fix handling of Unix domain @ addresses

2012-11-24 Thread Ian Lance Taylor
This patch to libgo fixes the handling of Unix domain @ addresses, a
GNU/Linux feature.  The library was computing the length incorrectly.  I
actually fixed this in the master library in January 2011, but somehow
failed to propagate the fix into the gccgo library.  Bootstrapped and
ran Go testsuite on x86_64-unknown-linux-gnu.  Committed to mainline and
4.7 branch.

Ian

diff -r 8ddfd13a6bc8 libgo/go/syscall/socket.go
--- a/libgo/go/syscall/socket.go	Tue Nov 20 22:38:30 2012 -0800
+++ b/libgo/go/syscall/socket.go	Sat Nov 24 12:19:05 2012 -0800
@@ -87,12 +87,16 @@
 	for i := 0; i < n; i++ {
 		sa.raw.Path[i] = int8(name[i])
 	}
+	// length is family (uint16), name, NUL.
+	sl := 2 + Socklen_t(n) + 1
 	if sa.raw.Path[0] == '@' {
 		sa.raw.Path[0] = 0
+		// Don't count trailing NUL for abstract address.
+		sl--
 	}
 
 	// length is family (uint16), name, NUL.
-	return (*RawSockaddrAny)(unsafe.Pointer(&sa.raw)), 2 + Socklen_t(n) + 1, nil
+	return (*RawSockaddrAny)(unsafe.Pointer(&sa.raw)), sl, nil
 }
 
 func anyToSockaddr(rsa *RawSockaddrAny) (Sockaddr, error) {
diff -r 8ddfd13a6bc8 libgo/go/syscall/socket_linux.go
--- a/libgo/go/syscall/socket_linux.go	Tue Nov 20 22:38:30 2012 -0800
+++ b/libgo/go/syscall/socket_linux.go	Sat Nov 24 12:19:05 2012 -0800
@@ -103,7 +103,7 @@
 	// to be uninterpreted fixed-size binary blobs--but
 	// everyone uses this convention.
 	n := 0
-	for n < len(sa.Path)-3 && sa.Path[n] != 0 {
+	for n < len(sa.Path) && sa.Path[n] != 0 {
 		n++
 	}
 
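The length rule the patch implements can be restated compactly; this is a
hedged Python restatement of the computation only (the Go code above is
authoritative):

```python
def unix_sockaddr_len(name: bytes) -> int:
    # length = family field (uint16) + name + trailing NUL ...
    sl = 2 + len(name) + 1
    if name.startswith(b"@"):
        # ... except abstract (Linux "@"-prefixed) addresses,
        # whose length does not count a trailing NUL.
        sl -= 1
    return sl

print(unix_sockaddr_len(b"/tmp/sock"))  # 12
print(unix_sockaddr_len(b"@abstract"))  # 11
```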


[PATCH] Remove unused DELAY_SLOTS_FOR_EPILOGUE target macro

2012-11-24 Thread Steven Bosscher
Hello,

The DELAY_SLOTS_FOR_EPILOGUE target macro, and the related
ELIGIBLE_FOR_EPILOGUE_DELAY macro, are unused. The attached patch
removes them.

OK for trunk?

Ciao!
Steven


dbr_sched_janitor.diff
Description: Binary data


[RFA 1/8] validate_failures.py: add TestResult ordinal

2012-11-24 Thread Doug Evans
Hi.
This set of 8 patches makes some minor tweaks to validate_failures.py,
and provides some new functionality.
They are relatively independent, but the patches are intended to be applied in 
order.

This first patch adds an ordinal to TestResult so that the sorting
keeps the tests in order of appearance.

Ok to check in?

2012-11-24  Doug Evans  

* testsuite-management/validate_failures.py: Record ordinal with 
TestResult.

--- validate_failures.py=   2012-11-19 11:47:29.997632548 -0800
+++ validate_failures.py2012-11-24 13:19:22.183836303 -0800
@@ -91,9 +91,12 @@ class TestResult(object):
 state: One of UNRESOLVED, XPASS or FAIL.
 name: File name for the test.
 description: String describing the test (flags used, dejagnu message, etc)
+ordinal: Monotonically increasing integer.
+ It is used to keep results for one .exp file sorted
+ by the order the tests were run.
   """
 
-  def __init__(self, summary_line):
+  def __init__(self, summary_line, ordinal=-1):
 try:
   self.attrs = ''
   if '|' in summary_line:
@@ -109,6 +112,7 @@ class TestResult(object):
   self.attrs = self.attrs.strip()
   self.state = self.state.strip()
   self.description = self.description.strip()
+  self.ordinal = ordinal
 except ValueError:
   Error('Cannot parse summary line "%s"' % summary_line)
 
@@ -117,7 +121,8 @@ class TestResult(object):
 self.state, summary_line, self))
 
   def __lt__(self, other):
-return self.name < other.name
+return (self.name < other.name or
+(self.name == other.name and self.ordinal < other.ordinal))
 
   def __hash__(self):
 return hash(self.state) ^ hash(self.name) ^ hash(self.description)
@@ -196,10 +201,14 @@ def IsInterestingResult(line):
 def ParseSummary(sum_fname):
   """Create a set of TestResult instances from the given summary file."""
   result_set = set()
+  # ordinal is used when sorting the results so that tests within each
+  # .exp file are kept sorted.
+  ordinal=0
   sum_file = open(sum_fname)
   for line in sum_file:
 if IsInterestingResult(line):
-  result = TestResult(line)
+  result = TestResult(line, ordinal)
+  ordinal += 1
   if result.HasExpired():
 # Tests that have expired are not added to the set of expected
 # results. If they are still present in the set of actual results,

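The ordering the patch implements is equivalent to comparing (name, ordinal)
tuples; a minimal Python 3 sketch (simplified, not the real TestResult class):

```python
class TestResult:
    def __init__(self, name, ordinal=-1):
        self.name = name
        self.ordinal = ordinal

    def __lt__(self, other):
        # sort by name first, then by order of appearance in the .sum file
        return (self.name, self.ordinal) < (other.name, other.ordinal)

results = [TestResult("b.exp", 2), TestResult("a.exp", 1), TestResult("b.exp", 0)]
print([(r.name, r.ordinal) for r in sorted(results)])
# [('a.exp', 1), ('b.exp', 0), ('b.exp', 2)]
```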

[RFA 2/8] validate_failures.py: use target_alias

2012-11-24 Thread Doug Evans
Hi.
This second patch uses "target_alias" instead of "target" in the help text,
and makes some minor whitespace changes.

Ok to check in?

2012-11-24  Doug Evans  

* testsuite-management/validate_failures.py: Use <target_alias> instead
of <target>.  Minor whitespace changes.

--- validate_failures.py=   2012-11-19 11:47:29.997632548 -0800
+++ validate_failures.py2012-11-24 13:22:25.805598185 -0800
@@ -217,8 +217,7 @@ def GetManifest(manifest_name):
   Each entry in the manifest file should have the format understood
   by the TestResult constructor.
 
-  If no manifest file exists for this target, it returns an empty
-  set.
+  If no manifest file exists for this target, it returns an empty set.
   """
   if os.path.exists(manifest_name):
 return ParseSummary(manifest_name)
@@ -409,7 +408,7 @@ def Main(argv):
   parser.add_option('--manifest', action='store', type='string',
 dest='manifest', default=None,
 help='Name of the manifest file to use (default = '
-'taken from contrib/testsuite-managment/<target>.xfail)')
+'taken from contrib/testsuite-managment/<target_alias>.xfail)')
   parser.add_option('--produce_manifest', action='store_true',
 dest='produce_manifest', default=False,
 help='Produce the manifest for the current '
@@ -436,6 +435,7 @@ def Main(argv):
   else:
 return 1
 
+
 if __name__ == '__main__':
   retval = Main(sys.argv)
   sys.exit(retval)


[RFA 3/8] validate_failures.py: pass options.results for clean build case

2012-11-24 Thread Doug Evans
Hi.
This third patch passes options.results to GetSumFiles when fetching the results
for the clean build.
It is useful in my use cases, but I'm not sure it's useful for upstream.
[An alternative is to add another option to specify the results file(s) for the 
clean
build, but I'm being conservative and not adding an option if I don't have to.]

Ok to check in?

2012-11-24  Doug Evans  

* testsuite-management/validate_failures.py (CompareBuilds): Pass 
options.results
to GetSumFiles for clean build.

--- validate_failures.py=   2012-11-19 11:47:29.997632548 -0800
+++ validate_failures.py2012-11-24 13:26:07.727726123 -0800
@@ -373,7 +373,7 @@ def CompareBuilds(options):
   sum_files = GetSumFiles(options.results, options.build_dir)
   actual = GetResults(sum_files)
 
-  clean_sum_files = GetSumFiles(None, options.clean_build)
+  clean_sum_files = GetSumFiles(options.results, options.clean_build)
   clean = GetResults(clean_sum_files)
 
   return PerformComparison(clean, actual, options.ignore_missing_failures)


[RFA 4/8] validate_failures.py: rename --manifest to --manifest_path

2012-11-24 Thread Doug Evans
Hi.
This fourth patch renames option --manifest to --manifest_path.
I have a later patch that adds a --manifest_name option, making "--manifest"
too confusing/ambiguous.

Ok to check in?
[I still have to update invocations that use --manifest,
but that's a separate patch.]

2012-11-24  Doug Evans  

* testsuite-management/validate_failures.py: Rename option --manifest to
--manifest_path.  Rename local variables manifest_name to manifest_path.

--- validate_failures.py=   2012-11-19 11:47:29.997632548 -0800
+++ validate_failures.py2012-11-24 13:31:29.77080 -0800
@@ -211,7 +211,7 @@ def ParseSummary(sum_fname):
   return result_set
 
 
-def GetManifest(manifest_name):
+def GetManifest(manifest_path):
   """Build a set of expected failures from the manifest file.
 
   Each entry in the manifest file should have the format understood
@@ -220,8 +220,8 @@ def GetManifest(manifest_name):
   If no manifest file exists for this target, it returns an empty
   set.
   """
-  if os.path.exists(manifest_name):
-return ParseSummary(manifest_name)
+  if os.path.exists(manifest_path):
+return ParseManifest(manifest_path)
   else:
 return set()
 
@@ -326,14 +326,14 @@ def CheckExpectedResults(options):
 (srcdir, target, valid_build) = GetBuildData(options)
 if not valid_build:
   return False
-manifest_name = _MANIFEST_PATH_PATTERN % (srcdir, target)
+manifest_path = _MANIFEST_PATH_PATTERN % (srcdir, target)
   else:
-manifest_name = options.manifest
-if not os.path.exists(manifest_name):
-  Error('Manifest file %s does not exist.' % manifest_name)
+manifest_path = options.manifest
+if not os.path.exists(manifest_path):
+  Error('Manifest file %s does not exist.' % manifest_path)
 
-  print 'Manifest: %s' % manifest_name
-  manifest = GetManifest(manifest_name)
+  print 'Manifest: %s' % manifest_path
+  manifest = GetManifest(manifest_path)
   sum_files = GetSumFiles(options.results, options.build_dir)
   actual = GetResults(sum_files)
 
@@ -349,14 +349,14 @@ def ProduceManifest(options):
   if not valid_build:
 return False
 
-  manifest_name = _MANIFEST_PATH_PATTERN % (srcdir, target)
-  if os.path.exists(manifest_name) and not options.force:
+  manifest_path = _MANIFEST_PATH_PATTERN % (srcdir, target)
+  if os.path.exists(manifest_path) and not options.force:
 Error('Manifest file %s already exists.\nUse --force to overwrite.' %
-  manifest_name)
+  manifest_path)
 
   sum_files = GetSumFiles(options.results, options.build_dir)
   actual = GetResults(sum_files)
-  manifest_file = open(manifest_name, 'w')
+  manifest_file = open(manifest_path, 'w')
   for result in sorted(actual):
 print result
 manifest_file.write('%s\n' % result)
@@ -406,8 +406,8 @@ def Main(argv):
 'that the expected failure has been fixed, or '
 'it did not run, or it may simply be flaky '
 '(default = False)')
-  parser.add_option('--manifest', action='store', type='string',
-dest='manifest', default=None,
+  parser.add_option('--manifest_path', action='store', type='string',
+dest='manifest_path', default=None,
 help='Name of the manifest file to use (default = '
'taken from contrib/testsuite-managment/<target_alias>.xfail)')
   parser.add_option('--produce_manifest', action='store_true',


[RFA 5/8] validate_failures.py: make options a global

2012-11-24 Thread Doug Evans
Hi.
This fifth patch makes options a global variable.
As validate_failures.py becomes more complex, passing it around everywhere
becomes cumbersome with no real gain.

Ok to check in?

2012-11-24  Doug Evans  

* testsuite-management/validate_failures.py: Make options a global 
variable.

--- validate_failures.py.manifest-path  2012-11-24 13:32:47.131550661 -0800
+++ validate_failures.py2012-11-24 13:40:47.016143990 -0800
@@ -60,6 +60,9 @@ _VALID_TEST_RESULTS = [ 'FAIL', 'UNRESOL
 # target triple used during the build.
 _MANIFEST_PATH_PATTERN = '%s/contrib/testsuite-management/%s.xfail'
 
+# The options passed to the program.
+options = None
+
 def Error(msg):
   print >>sys.stderr, '\nerror: %s' % msg
   sys.exit(1)
@@ -273,7 +276,7 @@ def CompareResults(manifest, actual):
   return actual_vs_manifest, manifest_vs_actual
 
 
-def GetBuildData(options):
+def GetBuildData():
   target = GetMakefileValue('%s/Makefile' % options.build_dir, 'target_alias=')
   srcdir = GetMakefileValue('%s/Makefile' % options.build_dir, 'srcdir =')
   if not ValidBuildDirectory(options.build_dir, target):
@@ -321,9 +324,9 @@ def PerformComparison(expected, actual, 
   return tests_ok
 
 
-def CheckExpectedResults(options):
+def CheckExpectedResults():
   if not options.manifest:
-(srcdir, target, valid_build) = GetBuildData(options)
+(srcdir, target, valid_build) = GetBuildData()
 if not valid_build:
   return False
 manifest_path = _MANIFEST_PATH_PATTERN % (srcdir, target)
@@ -344,8 +347,8 @@ def CheckExpectedResults(options):
   return PerformComparison(manifest, actual, options.ignore_missing_failures)
 
 
-def ProduceManifest(options):
-  (srcdir, target, valid_build) = GetBuildData(options)
+def ProduceManifest():
+  (srcdir, target, valid_build) = GetBuildData()
   if not valid_build:
 return False
 
@@ -365,8 +368,8 @@ def ProduceManifest(options):
   return True
 
 
-def CompareBuilds(options):
-  (srcdir, target, valid_build) = GetBuildData(options)
+def CompareBuilds():
+  (srcdir, target, valid_build) = GetBuildData()
   if not valid_build:
 return False
 
@@ -422,14 +425,15 @@ def Main(argv):
 '.sum files collected from the build directory).')
   parser.add_option('--verbosity', action='store', dest='verbosity',
 type='int', default=0, help='Verbosity level (default = 
0)')
+  global options
   (options, _) = parser.parse_args(argv[1:])
 
   if options.produce_manifest:
-retval = ProduceManifest(options)
+retval = ProduceManifest()
   elif options.clean_build:
-retval = CompareBuilds(options)
+retval = CompareBuilds()
   else:
-retval = CheckExpectedResults(options)
+retval = CheckExpectedResults()
 
   if retval:
 return 0


[RFA 6/8] validate_failures.py: remove pass/fail from GetBuildData

2012-11-24 Thread Doug Evans
Hi.
This sixth patch simplifies calls to GetBuildData.
It never returns false and always terminates the process with an error message
(which is fine by me).

Ok to check in?

2012-11-24  Doug Evans  

* testsuite-management/validate_failures.py: Remove pass/fail indicator 
from
result of GetBuildData.

--- validate_failures.py.options2012-11-24 13:40:57.616245431 -0800
+++ validate_failures.py2012-11-24 13:44:05.118039626 -0800
@@ -284,7 +284,7 @@ def GetBuildData():
   options.build_dir)
   print 'Source directory: %s' % srcdir
   print 'Build target: %s' % target
-  return srcdir, target, True
+  return srcdir, target
 
 
 def PrintSummary(msg, summary):
@@ -326,9 +326,7 @@ def PerformComparison(expected, actual, 
 
 def CheckExpectedResults():
   if not options.manifest:
-(srcdir, target, valid_build) = GetBuildData()
-if not valid_build:
-  return False
+(srcdir, target) = GetBuildData()
 manifest_path = _MANIFEST_PATH_PATTERN % (srcdir, target)
   else:
 manifest_path = options.manifest
@@ -348,10 +346,7 @@ def CheckExpectedResults():
 
 
 def ProduceManifest():
-  (srcdir, target, valid_build) = GetBuildData()
-  if not valid_build:
-return False
-
+  (srcdir, target) = GetBuildData()
   manifest_path = _MANIFEST_PATH_PATTERN % (srcdir, target)
   if os.path.exists(manifest_path) and not options.force:
 Error('Manifest file %s already exists.\nUse --force to overwrite.' %
@@ -369,9 +364,7 @@ def ProduceManifest():
 
 
 def CompareBuilds():
-  (srcdir, target, valid_build) = GetBuildData()
-  if not valid_build:
-return False
+  (srcdir, target) = GetBuildData()
 
   sum_files = GetSumFiles(options.results, options.build_dir)
   actual = GetResults(sum_files)


[RFA 7/8] validate_failures.py: New options --manifest_subdir, --manifest_name

2012-11-24 Thread Doug Evans
Hi.
This seventh patch adds new options --manifest_subdir, --manifest_name.
Useful when using validate_failures.py with a different tool, instead of gcc.

Ok to check in?

2012-11-24  Doug Evans  

* testsuite-management/validate_failures.py: New options 
--manifest_subdir,
--manifest_name.

--- validate_failures.py.with-first-6-patches   2012-11-24 13:47:44.410137689 
-0800
+++ validate_failures.py2012-11-24 13:53:40.253541475 -0800
@@ -37,7 +37,7 @@ executed it will:
 1- Determine the target built: TARGET
 2- Determine the source directory: SRCDIR
 3- Look for a failure manifest file in
-   <srcdir>/contrib/testsuite-management/<target>.xfail
+   <srcdir>/<manifest_subdir>/<manifest_name>.xfail
 4- Collect all the .sum files from the build tree.
 5- Produce a report stating:
a- Failures expected in the manifest but not present in the build.
@@ -55,10 +55,15 @@ import sys
 # Handled test results.
 _VALID_TEST_RESULTS = [ 'FAIL', 'UNRESOLVED', 'XPASS', 'ERROR' ]
 
-# Pattern for naming manifest files.  The first argument should be
-# the toplevel GCC source directory.  The second argument is the
-# target triple used during the build.
-_MANIFEST_PATH_PATTERN = '%s/contrib/testsuite-management/%s.xfail'
+# Default subdirectory of srcdir in which to find the manifest file.
+_MANIFEST_DEFAULT_SUBDIR = 'contrib/testsuite-management'
+
+# Pattern for naming manifest files.
+# The first argument should be the toplevel GCC(/GNU tool) source directory.
+# The second argument is the manifest subdir.
+# The third argument is the manifest target, which defaults to the target
+# triplet used during the build.
+_MANIFEST_PATH_PATTERN = '%s/%s/%s.xfail'
 
 # The options passed to the program.
 options = None
@@ -284,6 +289,22 @@ def CompareResults(manifest, actual):
   return actual_vs_manifest, manifest_vs_actual
 
 
+def GetManifestPath(srcdir, target, user_provided_must_exist):
+  """Return the full path to the manifest file."""
+  manifest_path = None
+  if options.manifest_path:
+manifest_path = options.manifest_path
+  elif options.manifest_name:
+manifest_path = _MANIFEST_PATH_PATTERN % (
+  srcdir, options.manifest_subdir, options.manifest_name)
+  if manifest_path:
+if user_provided_must_exist and not os.path.exists(manifest_path):
+  Error('Manifest does not exist: %s' % manifest_path)
+return manifest_path
+  else:
+return _MANIFEST_PATH_PATTERN % (srcdir, options.manifest_subdir, target)
+
+
 def GetBuildData():
   target = GetMakefileValue('%s/Makefile' % options.build_dir, 'target_alias=')
   srcdir = GetMakefileValue('%s/Makefile' % options.build_dir, 'srcdir =')
@@ -333,14 +354,8 @@ def PerformComparison(expected, actual, 
 
 
 def CheckExpectedResults():
-  if not options.manifest:
-(srcdir, target) = GetBuildData()
-manifest_path = _MANIFEST_PATH_PATTERN % (srcdir, target)
-  else:
-manifest_path = options.manifest
-if not os.path.exists(manifest_path):
-  Error('Manifest file %s does not exist.' % manifest_path)
-
+  (srcdir, target) = GetBuildData()
+  manifest_path = GetManifestPath(srcdir, target, True)
   print 'Manifest: %s' % manifest_path
   manifest = GetManifest(manifest_path)
   sum_files = GetSumFiles(options.results, options.build_dir)
@@ -355,7 +370,8 @@ def CheckExpectedResults():
 
 def ProduceManifest():
   (srcdir, target) = GetBuildData()
-  manifest_path = _MANIFEST_PATH_PATTERN % (srcdir, target)
+  manifest_path = GetManifestPath(srcdir, target, False)
+  print 'Manifest: %s' % manifest_path
   if os.path.exists(manifest_path) and not options.force:
 Error('Manifest file %s already exists.\nUse --force to overwrite.' %
   manifest_path)
@@ -418,6 +434,16 @@ def Main(argv):
 dest='produce_manifest', default=False,
 help='Produce the manifest for the current '
 'build (default = False)')
+  parser.add_option('--manifest_subdir', action='store', type='string',
+dest='manifest_subdir',
+default=_MANIFEST_DEFAULT_SUBDIR,
+help=('Subdirectory of srcdir to find manifest file in\n'
+  '(default = %s)' % _MANIFEST_DEFAULT_SUBDIR))
+  parser.add_option('--manifest_name', action='store', type='string',
+dest='manifest_name',
+default=None,
+help=('Name of manifest file, sans .xfail suffix'
+  ' (default = target_alias)'))
   parser.add_option('--results', action='store', type='string',
 dest='results', default=None, help='Space-separated list '
 'of .sum files with the testing results to check. The '

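Put together, the lookup the new options introduce is: an explicit
--manifest_path wins, otherwise the path is built from srcdir, the subdir, and
either --manifest_name or the target triplet. A rough Python sketch of just
the path construction (names simplified from the script):

```python
_MANIFEST_PATH_PATTERN = '%s/%s/%s.xfail'
_MANIFEST_DEFAULT_SUBDIR = 'contrib/testsuite-management'

def manifest_path(srcdir, target, manifest_subdir=_MANIFEST_DEFAULT_SUBDIR,
                  manifest_name=None):
    # --manifest_name overrides the target triplet; --manifest_subdir
    # overrides the default GCC location.
    name = manifest_name if manifest_name else target
    return _MANIFEST_PATH_PATTERN % (srcdir, manifest_subdir, name)

print(manifest_path('/src/gcc', 'x86_64-linux-gnu'))
# /src/gcc/contrib/testsuite-management/x86_64-linux-gnu.xfail
```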

[RFA 8/8] validate_failures.py: New directives @include, @remove.

2012-11-24 Thread Doug Evans
Hi.
This eighth patch adds new directives to manifests: @include, @remove.
These are useful when needing to handle a multi-dimensional matrix of test
configurations, and not wanting to maintain the otherwise necessary duplication
across all of the axes.
[@remove should be used sparingly, but I've found it useful.]

Ok to check in?

2012-11-24  Doug Evans  

* testsuite-management/validate_failures.py: Add support for @include, 
@remove
directives in manifest files.

--- validate_failures.py.manifest-subdir-name   2012-11-24 13:54:03.263761546 
-0800
+++ validate_failures.py2012-11-24 14:23:09.840487566 -0800
@@ -44,6 +44,14 @@ executed it will:
b- Failures in the build not expected in the manifest.
 6- If all the build failures are expected in the manifest, it exits
with exit code 0.  Otherwise, it exits with error code 1.
+
+Manifest files contain expected DejaGNU results that are otherwise
+treated as failures.
+They may also contain additional text:
+
+# This is a comment.  - self explanatory
+@include file - the file is a path relative to the includer
+@remove result text   - result text is removed from the expected set
 """
 
 import datetime
@@ -192,11 +200,13 @@ def ValidBuildDirectory(builddir, target
   return True
 
 
+def IsComment(line):
+  """Return True if line is a comment."""
+  return line.startswith('#')
+
+
 def IsInterestingResult(line):
-  """Return True if the given line is one of the summary lines we care 
about."""
-  line = line.strip()
-  if line.startswith('#'):
-return False
+  """Return True if line is one of the summary lines we care about."""
   if '|' in line:
 (_, line) = line.split('|', 1)
   line = line.strip()
@@ -206,6 +216,58 @@ def IsInterestingResult(line):
   return False
 
 
+def IsInclude(line):
+  """Return True if line is an include of another file."""
+  return line.startswith("@include ")
+
+
+def GetIncludeFile(line, includer):
+  """Extract the name of the include file from line."""
+  includer_dir = os.path.dirname(includer)
+  include_file = line[len("@include "):]
+  return os.path.join(includer_dir, include_file.strip())
+
+
+def IsNegativeResult(line):
+  """Return True if line should be removed from the expected results."""
+  return line.startswith("@remove ")
+
+
+def GetNegativeResult(line):
+  """Extract the name of the negative result from line."""
+  line = line[len("@remove "):]
+  return line.strip()
+
+
+def ParseManifestWorker(result_set, manifest_path):
+  """Read manifest_path, adding the contents to result_set."""
+  if options.verbosity >= 1:
+print 'Parsing manifest file %s.' % manifest_path
+  manifest_file = open(manifest_path)
+  for line in manifest_file:
+line = line.strip()
+if line == "":
+  pass
+elif IsComment(line):
+  pass
+elif IsNegativeResult(line):
+  result_set.remove(TestResult(GetNegativeResult(line)))
+elif IsInclude(line):
+  ParseManifestWorker(result_set, GetIncludeFile(line, manifest_path))
+elif IsInterestingResult(line):
+  result_set.add(TestResult(line))
+else:
+  Error('Unrecognized line in manifest file: %s' % line)
+  manifest_file.close()
+
+
+def ParseManifest(manifest_path):
+  """Create a set of TestResult instances from the given manifest file."""
+  result_set = set()
+  ParseManifestWorker(result_set, manifest_path)
+  return result_set
+
+
 def ParseSummary(sum_fname):
   """Create a set of TestResult instances from the given summary file."""
   result_set = set()

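A self-contained sketch of the manifest grammar the patch adds (simplified:
results are plain strings here, and @remove silently ignores missing entries,
unlike the real script, which errors):

```python
import os

def parse_manifest(path, result_set=None):
    """Toy parser for the '#', '@include' and '@remove' manifest directives."""
    if result_set is None:
        result_set = set()
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if not line or line.startswith('#'):
                continue                      # blank line or comment
            if line.startswith('@include '):
                # the included path is relative to the including file
                inc = os.path.join(os.path.dirname(path),
                                   line[len('@include '):].strip())
                parse_manifest(inc, result_set)
            elif line.startswith('@remove '):
                result_set.discard(line[len('@remove '):].strip())
            else:
                result_set.add(line)          # an expected-failure line
    return result_set
```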

Re: RFA: contribute Synopsis DesignWare ARC port

2012-11-24 Thread Steven Bosscher
On Thu, Nov 22, 2012 at 9:22 PM, Joern Rennecke wrote:
(nothing but a ChangeLog :-)

Looking at the ARC port a bit, and IMHO it doesn't look very messy...

First some general comments:

This target apparently wants to play tricks with reload (e.g.
arc_preserve_reload_p). IMHO at this point ports that don't work with
LRA should not be accepted.

TARGET_EXPAND_ADDDI is not covered in any test. Actually most target
options are not covered.

Can't the following option be just removed:
mold-di-patterns
Target Var(TARGET_OLD_DI_PATTERNS)
enable use of old DI patterns that have presumably been obsoleted by
subreg lowering.

There's quite some old cruft that should be cleaned up. A few examples:

1. Things that probably worked before GCC3 but shouldn't be in a new port:
/*/\* Compute LOG_LINKS.  *\/ */
/*for (bb = 0; bb < current_nr_blocks; bb++) */
/*  compute_block_backward_dependences (bb); */

2. Old scheduler description:
;; Function units of the ARC

;; (define_function_unit {name} {num-units} {n-users} {test}
;;   {ready-delay} {issue-delay} [{conflict-list}])
(etc...)

3. Presumably no longer applicable comments if this port is supposed
to be part of the FSF repo:
;; *** N.B.: the use of '* return...' in "*movsi_insn", et al, rely
;; on ARC-local changes to genoutput.c's process_template() function



Some random observations in config/arc/arc.c:
arc_dead_or_set_postreload_p() and the related functions are not used AFAICT.
Many functions (most?) have no leading comments.



About config/arc/arc.md:

What's this comment about?
;;  ashwin : include options.h from build dir
;; (include "arc.c")

The is_NON_SIBCALL attribute appears to be unused. If it can be
removed, then so can the is_SIBCALL attribute.
How is the "*millicode_sibthunk_ld" pattern used?

What are the commented-out patterns (e.g. "*adc_0") for?

Before the sibcall and return_i patterns:
; Since the demise of REG_N_SETS, it is no longer possible to find out
; in the prologue / epilogue expanders how many times blink is set.
I don't understand this, REG_N_SETS still exists.

C-style comment in a .md file (does that even work??):
/* Store a value to directly to memory.  The location might also be cached.
   Since the cached copy can cause a write-back at unpredictable times,
   we first write cached, then we write uncached.  */

Why are there patterns that have a "switch (which_alternative)" to
emit the insn template, instead of an "@..." string? See e.g. the
"*zero_extendhisi2_i" and "*zero_extendqisi2_ac" patterns.
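
For reference, multi-alternative output templates can usually be written with
the "@" shorthand — one template line per constraint alternative — instead of
a C block switching on which_alternative.  A hypothetical sketch (not a
pattern taken from the ARC port):

```lisp
;; One output template per constraint alternative, in order.  The "@"
;; form replaces a C "switch (which_alternative)" body.
(define_insn "*movsi_example"
  [(set (match_operand:SI 0 "nonimmediate_operand" "=r,r,m")
        (match_operand:SI 1 "general_operand"      "r,I,r"))]
  ""
  "@
   mov %0,%1
   mov %0,%1
   st %1,%0")
```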


It seems to me that a bit of cleaning up is in order before this port
is accepted.

Ciao!
Steven


Re: [PATCH] Remove unused DELAY_SLOTS_FOR_EPILOGUE target macro

2012-11-24 Thread Hans-Peter Nilsson
On Sat, 24 Nov 2012, Steven Bosscher wrote:
> Hello,
>
> The DELAY_SLOTS_FOR_EPILOGUE target macro, and the related
> ELIGIBLE_FOR_EPILOGUE_DELAY macro, are unused. The attached patch
> removes them.
>
> OK for trunk?

Obviously; Jeff Law preapproved their removal long ago.
(I forgot about it.)  But please poison them too!

brgds, H-P


Re: [PATCH] Remove unused DELAY_SLOTS_FOR_EPILOGUE target macro

2012-11-24 Thread Steven Bosscher
On Sun, Nov 25, 2012 at 12:54 AM, Hans-Peter Nilsson wrote:
> On Sat, 24 Nov 2012, Steven Bosscher wrote:
>> Hello,
>>
>> The DELAY_SLOTS_FOR_EPILOGUE target macro, and the related
>> ELIGIBLE_FOR_EPILOGUE_DELAY macro, are unused. The attached patch
>> removes them.
>>
>> OK for trunk?
>
> Obviously; Jeff Law preapproved their removal long ago.
> (I forgot about it.)  But please poison them too!

Right, committed with this additional hunk:

Index: system.h
===================================================================
--- system.h	(revision 193766)
+++ system.h	(working copy)
@@ -902,7 +902,8 @@ extern void fancy_abort (const char *, i
UNALIGNED_LONG_ASM_OP UNALIGNED_DOUBLE_INT_ASM_OP  \
USE_COMMON_FOR_ONE_ONLY IFCVT_EXTRA_FIELDS IFCVT_INIT_EXTRA_FIELDS \
CASE_USE_BIT_TESTS FIXUNS_TRUNC_LIKE_FIX_TRUNC \
-GO_IF_MODE_DEPENDENT_ADDRESS
+GO_IF_MODE_DEPENDENT_ADDRESS DELAY_SLOTS_FOR_EPILOGUE  \
+ELIGIBLE_FOR_EPILOGUE_DELAY

 /* Hooks that are no longer used.  */
  #pragma GCC poison LANG_HOOKS_FUNCTION_MARK LANG_HOOKS_FUNCTION_FREE  \


Ciao!
Steven


Go patch committed: Don't use memcmp for struct == if trailing padding

2012-11-24 Thread Ian Lance Taylor
The Go frontend was using memcmp for struct equality if there was no
internal padding but there was trailing padding.  This doesn't work when
the struct values are on the stack, because there is nothing that
explicitly zeroes out the trailing padding.  This patch fixes the bug.
The patch is larger than expected because to implement this I had to
change a Type method to be non-const.  This had no real effect as it was
never called with a const pointer anyhow.  Bootstrapped and ran Go
testsuite on x86_64-unknown-linux-gnu.  Committed to mainline and 4.7
branch.

Ian

diff -r 12661f3c99d9 go/types.cc
--- a/go/types.cc	Sat Nov 24 12:20:02 2012 -0800
+++ b/go/types.cc	Sat Nov 24 16:37:16 2012 -0800
@@ -2382,7 +2382,7 @@
 
  protected:
   bool
-  do_compare_is_identity(Gogo*) const
+  do_compare_is_identity(Gogo*)
   { return false; }
 
   Btype*
@@ -2420,7 +2420,7 @@
 
  protected:
   bool
-  do_compare_is_identity(Gogo*) const
+  do_compare_is_identity(Gogo*)
   { return false; }
 
   Btype*
@@ -2458,7 +2458,7 @@
 
  protected:
   bool
-  do_compare_is_identity(Gogo*) const
+  do_compare_is_identity(Gogo*)
   { return true; }
 
   Btype*
@@ -3085,7 +3085,7 @@
 
  protected:
   bool
-  do_compare_is_identity(Gogo*) const
+  do_compare_is_identity(Gogo*)
   { return false; }
 
   Btype*
@@ -3963,7 +3963,7 @@
 
  protected:
   bool
-  do_compare_is_identity(Gogo*) const
+  do_compare_is_identity(Gogo*)
   { return false; }
 
   Btype*
@@ -4014,7 +4014,7 @@
   }
 
   bool
-  do_compare_is_identity(Gogo*) const
+  do_compare_is_identity(Gogo*)
   { return false; }
 
   Btype*
@@ -4291,7 +4291,7 @@
 // comparisons.
 
 bool
-Struct_type::do_compare_is_identity(Gogo* gogo) const
+Struct_type::do_compare_is_identity(Gogo* gogo)
 {
   const Struct_field_list* fields = this->fields_;
   if (fields == NULL)
@@ -4323,6 +4323,16 @@
 	return false;
   offset += field_size;
 }
+
+  unsigned int struct_size;
+  if (!this->backend_type_size(gogo, &struct_size))
+return false;
+  if (offset != struct_size)
+{
+  // Trailing padding may not be zero when on the stack.
+  return false;
+}
+
   return true;
 }
 
@@ -5267,7 +5277,7 @@
 // Whether we can use memcmp to compare this array.
 
 bool
-Array_type::do_compare_is_identity(Gogo* gogo) const
+Array_type::do_compare_is_identity(Gogo* gogo)
 {
   if (this->length_ == NULL)
 return false;
@@ -7968,7 +7978,7 @@
 // function.
 
 bool
-Named_type::do_compare_is_identity(Gogo* gogo) const
+Named_type::do_compare_is_identity(Gogo* gogo)
 {
   // We don't use this->seen_ here because compare_is_identity may
   // call base() later, and that will mess up if seen_ is set here.
diff -r 12661f3c99d9 go/types.h
--- a/go/types.h	Sat Nov 24 12:20:02 2012 -0800
+++ b/go/types.h	Sat Nov 24 16:37:16 2012 -0800
@@ -576,7 +576,7 @@
   // identity function which gets nothing but a pointer to the value
   // and a size.
   bool
-  compare_is_identity(Gogo* gogo) const
+  compare_is_identity(Gogo* gogo)
   { return this->do_compare_is_identity(gogo); }
 
   // Return a hash code for this type for the method hash table.
@@ -950,7 +950,7 @@
   { return false; }
 
   virtual bool
-  do_compare_is_identity(Gogo*) const = 0;
+  do_compare_is_identity(Gogo*) = 0;
 
   virtual unsigned int
   do_hash_for_method(Gogo*) const;
@@ -1458,7 +1458,7 @@
 
 protected:
   bool
-  do_compare_is_identity(Gogo*) const
+  do_compare_is_identity(Gogo*)
   { return true; }
 
   unsigned int
@@ -1535,7 +1535,7 @@
 
  protected:
   bool
-  do_compare_is_identity(Gogo*) const
+  do_compare_is_identity(Gogo*)
   { return false; }
 
   unsigned int
@@ -1604,7 +1604,7 @@
 
  protected:
   bool
-  do_compare_is_identity(Gogo*) const
+  do_compare_is_identity(Gogo*)
   { return false; }
 
   unsigned int
@@ -1664,7 +1664,7 @@
   { return true; }
 
   bool
-  do_compare_is_identity(Gogo*) const
+  do_compare_is_identity(Gogo*)
   { return false; }
 
   Btype*
@@ -1778,7 +1778,7 @@
   { return true; }
 
   bool
-  do_compare_is_identity(Gogo*) const
+  do_compare_is_identity(Gogo*)
   { return false; }
 
   unsigned int
@@ -1853,7 +1853,7 @@
   { return true; }
 
   bool
-  do_compare_is_identity(Gogo*) const
+  do_compare_is_identity(Gogo*)
   { return true; }
 
   unsigned int
@@ -2139,7 +2139,7 @@
   do_has_pointer() const;
 
   bool
-  do_compare_is_identity(Gogo*) const;
+  do_compare_is_identity(Gogo*);
 
   unsigned int
   do_hash_for_method(Gogo*) const;
@@ -2272,7 +2272,7 @@
   }
 
   bool
-  do_compare_is_identity(Gogo*) const;
+  do_compare_is_identity(Gogo*);
 
   unsigned int
   do_hash_for_method(Gogo*) const;
@@ -2365,7 +2365,7 @@
   { return true; }
 
   bool
-  do_compare_is_identity(Gogo*) const
+  do_compare_is_identity(Gogo*)
   { return false; }
 
   unsigned int
@@ -2451,7 +2451,7 @@
   { return true; }
 
   bool
-  do_compare_is_identity(Gogo*) const
+  do_compare_is_identity(Gogo*)
   { return true; }
 
   unsigned int
@@ -2582,7 +2582,7 @@
   { return true; }
 
   bool
-  do_c