Re: build failure, GMP not available

2006-10-31 Thread Alexandre Oliva
On Oct 31, 2006, Paul Brook <[EMAIL PROTECTED]> wrote:

> In my experience the first thing you do bringing up a new system is build a 
> cross compiler and use that to build glibc and coreutils/busybox. This is all 
> done on an established host that has gmp/mpfr ported to it.
> Bootstrapping a native compiler comes quite late in system bringup, and only 
> happens at all on server/workstation-class hardware.

But then, for those, if you want to make sure you got a reproducible
build, you'll want to cross-build gmp and mpfr, cross-build a native
toolchain, bootstrap it, build gmp and mpfr with it and then bootstrap
again.

It would be *so* much nicer if one could just drop gmp and mpfr into
the GCC source tree and have them built and used during bootstrap.

-- 
Alexandre Oliva http://www.lsd.ic.unicamp.br/~oliva/
Secretary for FSF Latin America   http://www.fsfla.org/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: [c++] switch ( enum ) vs. default statement.

2007-02-06 Thread Alexandre Oliva
On Jan 29, 2007, "Manuel López-Ibáñez" <[EMAIL PROTECTED]> wrote:

> * You can add a return 0 or an exit(1) at the end of the function or
> in a default label. Since in your case the code is unreachable, the
> optimiser may remove it or it will never be executed.

But this would generate additional code for no useful purpose.
See Ralf Baechle's posting with Subject: False ‘noreturn’ function
does return warnings.

/me thinks it would make sense for us to add a __builtin_unreachable()
that would indicate to GCC that no further action is needed from that
point on.  A command-line flag could then tell whether GCC should
generate abort() for such cases, or take a more general "undefined
behavior" approach without generating additional code to that end.

Meanwhile, there's __builtin_trap() already, and Ralf might use that
even to remove the asm volatile, and Paweł could use it in a default:
label.  It's still worse than a __builtin_assume(e == X || e == Y),
but it's probably much simpler to implement.  But then,
__builtin_unreachable() might very well be implemented as
__builtin_assume(0).

-- 
Alexandre Oliva http://www.lsd.ic.unicamp.br/~oliva/
FSF Latin America Board Member http://www.fsfla.org/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: Announcing Kaz as sh maintainer

2007-02-19 Thread Alexandre Oliva
On Feb 19, 2007, Gerald Pfeifer <[EMAIL PROTECTED]> wrote:

> It is my pleasure to announce that the steering committee has appointed 
> Kaz Kojima maintainer of the sh port.

Awesome!  Congratulations to Kaz, and thanks to the Steering Committee
for having accepted our recommendation.

-- 
Alexandre Oliva http://www.lsd.ic.unicamp.br/~oliva/
FSF Latin America Board Member http://www.fsfla.org/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: Updating libtool in GCC and srctree

2007-03-08 Thread Alexandre Oliva
On Mar  8, 2007, Steve Ellcey <[EMAIL PROTECTED]> wrote:

>> You'll still need libtool.m4.

> Are you sure?

Yep.  You can do away with a manually-managed libtool.m4 if you use
aclocal and trust it to always bring in the right version of
libtool.m4 (i.e., the one that is compatible with the ltmain.sh in
your tree).

Personally, I prefer to manage both libtool.m4 and ltmain.sh by hand,
because then I can make sure they are in sync.

So, yes: don't remove it; replace it with the version from the same
CVS tree.

-- 
Alexandre Oliva http://www.lsd.ic.unicamp.br/~oliva/
FSF Latin America Board Member http://www.fsfla.org/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: Updating libtool in GCC and srctree

2007-03-13 Thread Alexandre Oliva
On Mar  9, 2007, Steve Ellcey <[EMAIL PROTECTED]> wrote:

> If I just run autoconf I get errors because I am not
> including the new ltoptions.m4, ltsugar.m4, and ltversion.m4 files.

I'd just prepend them to our local copy of libtool.m4, pretty much as
aclocal would have done in aclocal.m4.

> But boehm-gc has no acinclude.m4 file

That's equivalent to an empty one.  I'd guess an existing acinclude.m4
was removed in a merge or something like that, because otherwise its
aclocal.m4 couldn't possibly contain the sinclude statements it does.
Unless someone added them to an aclocal-generated file, which doesn't
seem to match this file's history.

How about copying the m4_include statements into acinclude.m4, which
was (incorrectly) removed a while ago, and then re-creating aclocal.m4
with aclocal?

> and while libffi has an acinclude.m4 file, it doesn't have an
> include of libtool.m4.

This is a bug similar to that of boehm-gc above.

> So my question is, how is the include of libtool.m4 getting into
> aclocal.m4?

Magic ;-)

Someone probably failed to realize that they should be in acinclude.m4
in order for them to survive an aclocal run.

> This is aclocal 1.9.6.  Any idea on what I need to do here to fix this
> error?  Why do some acinclude.m4 files have explicit includes for
> libtool files (libgfortran, libgomp, etc.) but others don't (libffi,
> gcc)?

libffi/ is a bug (it's in aclocal.m4, but not in acinclude.m4).  gcc/
doesn't use libtool at all.

-- 
Alexandre Oliva http://www.lsd.ic.unicamp.br/~oliva/
FSF Latin America Board Member http://www.fsfla.org/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: PATCH: make_relative_prefix oddity

2007-03-13 Thread Alexandre Oliva
On Mar 13, 2007, Mark Mitchell <[EMAIL PROTECTED]> wrote:

> It treats only "/opt" as a common component of the two paths, rather
> than "/opt/foo".  If you use "/opt/foo/" (instead of "/opt/foo") for
> the last argument, the answer is as I expected.  This seems odd to me;
> is it the intended behavior?

IIRC this is intended behavior, to enable stuff such as
"/some/dir-suffix" and "/another/different-suffix" to generate the
correct relative pathnames, when given "/some/dir" as prefix.

-- 
Alexandre Oliva http://www.lsd.ic.unicamp.br/~oliva/
FSF Latin America Board Member http://www.fsfla.org/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: No ifcvt during ce1 pass (fails i386/ssefp-2.c)

2007-03-14 Thread Alexandre Oliva
On Mar 14, 2007, "Uros Bizjak" <[EMAIL PROTECTED]> wrote:

> A recently committed patch breaks the i386/ssefp-2.c testcase, where
> maxsd is not generated anymore.

FWIW, I saw it both before and after the patch for PR 31127.  I've
just tried reverting PR 30643 as well, but the problem remains.  So
it's unrelated.

Have you been able to narrow it down to any other patch?

-- 
Alexandre Oliva http://www.lsd.ic.unicamp.br/~oliva/
FSF Latin America Board Member http://www.fsfla.org/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: -fdump-translation-unit output and GPL

2007-03-20 Thread Alexandre Oliva
On Mar 20, 2007, [EMAIL PROTECTED] (Richard Kenner) wrote:

> infringes our copyright

> Patent law

Please be careful to not spread the confusion that already exists
between these unrelated laws.

-- 
Alexandre Oliva http://www.lsd.ic.unicamp.br/~oliva/
FSF Latin America Board Member http://www.fsfla.org/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: We're out of tree codes; now what?

2007-03-20 Thread Alexandre Oliva
On Mar 20, 2007, Mark Mitchell <[EMAIL PROTECTED]> wrote:

> (I fully expect that -- without change -- the traditional GCC build
> process, using parallel make, will become bound mostly by the serial
> nature of the configure scripts, rather than by the actual compilation
> of the compiler and runtime libraries.)

/me mumbles something about LTO and threads.

As for configure scripts...  autoconf -j is long overdue ;-)

-- 
Alexandre Oliva http://www.lsd.ic.unicamp.br/~oliva/
FSF Latin America Board Member http://www.fsfla.org/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: [Bug c++/19199] [3.3/3.4/4.0/4.1 Regression] Wrong warning about returning a reference to a temporary

2005-03-07 Thread Alexandre Oliva
On Mar  7, 2005, Roger Sayle <[EMAIL PROTECTED]> wrote:

> For example, I believe that Alex's proposed solution to PR c++/19199
> isn't an appropriate fix.  It's perfectly reasonable for fold to convert
> a C++ COND_EXPR into a MIN_EXPR or MAX_EXPR, as according to the C++
> front-end all three of these tree nodes are valid lvalues.

But not if the values it places in the MIN_EXPR or MAX_EXPR operands
aren't lvalues themselves, otherwise you're turning an lvalue into an
rvalue.  In the example of this testcase, the compare operands are
conversions to int (rvalues), whereas the cond_expr operands are the
enum variables themselves (lvalues), and the min_expr operands end up
being the converted rvalues.  So the transformation is in error.

It wouldn't be if we managed to first remove the extensions from the
compare operands, and only then turn them into min/max_expr.  Any
reason to avoid comparing the enums directly, without wrapping them in
conversions to int?

> It also doesn't help with the problematic C++ extension:

>   (a >? b) = c;

> which use MIN_EXPR/MAX_EXPR as a valid lvalue in C++ without any help
> from "fold".

As long as there isn't any conversion needed for the compare.

> The first is to see if the C++ parser provides enough information
> such that it knows whether we're parsing an lvalue or an rvalue.

It doesn't, and it can't.  Consider:

( void ) ( ( a ? b : c )  )
( void ) ( ( a ? b : c ) = 0)
                       ^ you're here

how could you possibly tell one from the other without looking too far
ahead?

> In these cases, the C++ front-end can avoid calling "fold" on the
> COND_EXPR its creating in build_conditional_expr.  This only impacts
> that C++ front-end and doesn't impact performance of C/fortran/java
> code that may usefully take advantage of min/max.

Note that I've already suggested a solution for the other languages,
in which cond_expr is never an lvalue: make sure they turn the
operands of the cond_expr into non-lvalues, even if this demands
gratuitous non_lvalue_exprs or nops.

> An improvement on this approach is for the C++ front-end never to create
> COND_EXPR, MIN_EXPR or MAX_EXPR as the target of MODIFY_EXPRs, and lower
> them itself upon creation.  By forbidding them in "generic", support for
> COND_EXPRs as lvalues can be removed from language-independent
> gimplification, fold, non_lvalue, etc... greatly simplifying the compiler.

This isn't a complete solution.  MODIFY_EXPR is not the only situation
in which lvalueness of a COND_EXPR is relevant.  You may have to bind
a reference to the result of a COND_EXPR.

> Finally, the third approach would be to introduce new tree codes for
> LCOND_EXPR, LMIN_EXPR and LMAX_EXPR that reflect the subtly different
> semantics of these operators when used as lvalues.

Why bother?  We already handle them all properly, except for this one
case that mis-handles COND_EXPRs whose compares include extensions.
We can fix that, and not lose performance by getting front-ends that
can offer guarantees about non-lvalueness of COND_EXPRs to give this
information to the middle end.

> My final comment is that the lowering of (a ? b : c) = d by using
> a pointer temporary runs into problems with bitfields "b" and "c",
> and with marking objects addressable when they need not be.

Yup.

> runs into problems.  A better approach is to use a temporary for "d".

Yup, we have a transformation that does just that at some point.  But
again, it only operates on MODIFY_EXPRs, it can't help for the other
important lvalue case, that of reference-binding.


Hmm...  Maybe we could generate the COND_EXPR without folding, and
then, when we use it as an rvalue, we fold it; when we use it as an
lvalue, we transform it into something else that is fold-resistant.
I'm not sure I like this, but I suppose it's the only way to go if we
don't want COND_EXPR, MIN_EXPR and MAX_EXPR to be handled as lvalues
in the middle end.

-- 
Alexandre Oliva http://www.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: request for timings - makedepend

2005-03-07 Thread Alexandre Oliva
On Mar  7, 2005, Zack Weinberg <[EMAIL PROTECTED]> wrote:

> (c) whether or not you would be willing to trade that much
> additional delay in an edit-compile-debug cycle for not having to
> write dependencies manually anymore.

I wouldn't.  automake has a much better solution for this that doesn't
introduce any such delays.  And, since I know you object to automake
in itself, I'll point out that you don't even need to be using
automake to enjoy that solution.

-- 
Alexandre Oliva http://www.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: request for timings - makedepend

2005-03-07 Thread Alexandre Oliva
On Mar  8, 2005, Zack Weinberg <[EMAIL PROTECTED]> wrote:

> Alexandre Oliva <[EMAIL PROTECTED]> writes:

>> I wouldn't.  automake has a much better solution for this that doesn't
>> introduce any such delays.

> Well, yes, but tell me how to make it play nice with all our generated
> header files.

You're getting confused by mixing two different issues.

One is how to generate dependencies efficiently.

The other is how to guarantee you have the generated headers in place
so that you can generate dependencies correctly.


Automake's approach to generating dependencies addresses the first
issue in a far more efficient way than yours, and has no significant
differences regarding generated headers.

As for addressing the second issue, Tom Tromey did already:
BUILT_SOURCES is one approach; explicitly coding dependencies on such
files is another.  There really isn't a magic bullet for this problem,
since you generally have to determine what other headers a generated
header might include, and some of them might be generated as well.

-- 
Alexandre Oliva http://www.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: [Bug c++/19199] [3.3/3.4/4.0/4.1 Regression] Wrong warning about returning a reference to a temporary

2005-03-08 Thread Alexandre Oliva
On Mar  8, 2005, Richard Henderson <[EMAIL PROTECTED]> wrote:

>> As has been described earlier on this thread, GCC has folded the C++
>> source "(a >= b ? a : b) = c" into "MAX_EXPR (a,b) = c" and equivalently
>> "(a > b ? a : b) = c" into "MAX_EXPR (b,a) = c" since the creation of
>> current CVS.

> Which, as we've been seeing in this thread, is also a mistake.

Not quite.  The folding above is not a mistake at all, if all the
expressions are exactly as displayed.  The problem occurs when we
turn:

  ((int)a > (int)b ? a : b) = c

into

  (__typeof(a))(MAX_EXPR ((int)a, (int)b)) = c

and avoiding this kind of lvalue-dropping transformation is exactly
what the patch I proposed fixes.

-- 
Alexandre Oliva http://www.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: [Bug c++/19199] [3.3/3.4/4.0/4.1 Regression] Wrong warning about returning a reference to a temporary

2005-03-08 Thread Alexandre Oliva
On Mar  7, 2005, Roger Sayle <[EMAIL PROTECTED]> wrote:

> For example, I believe that Alex's proposed solution to PR c++/19199
> isn't an appropriate fix.  It's perfectly reasonable for fold to convert
> a C++ COND_EXPR into a MIN_EXPR or MAX_EXPR, as according to the C++
> front-end all three of these tree nodes are valid lvalues.  Hence it's
> not this transformation in fold that's in error.

Bugzilla was kept out of the long discussion that ensued, so I'll try
to summarize.  The problem is that the conversion is turning a
COND_EXPR such as:

  ((int)a < (int)b ? a : b)

into

  (__typeof(a)) (MIN_EXPR ((int)a, (int)b))

which drops the lvalueness of the COND_EXPR.

> Simply disabling the COND_EXPR -> MIN_EXPR/MAX_EXPR transformation is
> also likely to be a serious performance penalty, especially on targets
> that support efficient sequences for "min" and "max".

This was not what I intended to do with my patch, FWIW.
Unfortunately, I goofed in the added call to operand_equal_p, limiting
too much the situations in which the optimization could be applied.
The patch fixes this problem, and updates the patch such that it
applies cleanly again.

As for other languages whose COND_EXPRs can't be lvalues, we can
probably arrange quite easily for them to ensure at least one of their
result operands is not an lvalue, so as to enable the transformation
again.

Comments?  Ok to install?

Index: gcc/ChangeLog
from  Alexandre Oliva  <[EMAIL PROTECTED]>

	* fold-const.c (non_lvalue): Split tests into...
	(maybe_lvalue_p): New function.
	(fold_ternary): Use it to avoid turning a COND_EXPR lvalue into
	a MIN_EXPR rvalue.

Index: gcc/fold-const.c
===
RCS file: /cvs/gcc/gcc/gcc/fold-const.c,v
retrieving revision 1.535
diff -u -p -r1.535 fold-const.c
--- gcc/fold-const.c 7 Mar 2005 21:24:21 - 1.535
+++ gcc/fold-const.c 8 Mar 2005 22:07:52 -
@@ -2005,16 +2005,12 @@ fold_convert (tree type, tree arg)
 }
 }
 
-/* Return an expr equal to X but certainly not valid as an lvalue.  */
+/* Return false if expr can be assumed to not be an lvalue, true
+   otherwise.  */
 
-tree
-non_lvalue (tree x)
+static bool
+maybe_lvalue_p (tree x)
 {
-  /* While we are in GIMPLE, NON_LVALUE_EXPR doesn't mean anything to
- us.  */
-  if (in_gimple_form)
-return x;
-
   /* We only need to wrap lvalue tree codes.  */
   switch (TREE_CODE (x))
   {
@@ -2054,8 +2050,24 @@ non_lvalue (tree x)
 /* Assume the worst for front-end tree codes.  */
 if ((int)TREE_CODE (x) >= NUM_TREE_CODES)
   break;
-return x;
+return false;
   }
+
+  return true;
+}
+
+/* Return an expr equal to X but certainly not valid as an lvalue.  */
+
+tree
+non_lvalue (tree x)
+{
+  /* While we are in GIMPLE, NON_LVALUE_EXPR doesn't mean anything to
+ us.  */
+  if (in_gimple_form)
+return x;
+
+  if (! maybe_lvalue_p (x))
+return x;
   return build1 (NON_LVALUE_EXPR, TREE_TYPE (x), x);
 }
 
@@ -9734,10 +9746,16 @@ fold_ternary (tree expr)
 	 of B and C.  Signed zeros prevent all of these transformations,
 	 for reasons given above each one.
 
+	 We don't want to use operand_equal_for_comparison_p here,
+	 because it might turn an lvalue COND_EXPR into an rvalue one,
+	 see PR c++/19199.
+
  Also try swapping the arguments and inverting the conditional.  */
   if (COMPARISON_CLASS_P (arg0)
-	  && operand_equal_for_comparison_p (TREE_OPERAND (arg0, 0),
-	 arg1, TREE_OPERAND (arg0, 1))
+	  && ((maybe_lvalue_p (op1) && maybe_lvalue_p (op2))
+	  ? operand_equal_p (TREE_OPERAND (arg0, 0), op1, 0)
+	  : operand_equal_for_comparison_p (TREE_OPERAND (arg0, 0),
+		arg1, TREE_OPERAND (arg0, 1)))
	  && !HONOR_SIGNED_ZEROS (TYPE_MODE (TREE_TYPE (arg1))))
 	{
 	  tem = fold_cond_expr_with_comparison (type, arg0, op1, op2);
@@ -9746,9 +9764,10 @@ fold_ternary (tree expr)
 	}
 
   if (COMPARISON_CLASS_P (arg0)
-	  && operand_equal_for_comparison_p (TREE_OPERAND (arg0, 0),
-	 op2,
-	 TREE_OPERAND (arg0, 1))
+	  && ((maybe_lvalue_p (op1) && maybe_lvalue_p (op2))
+	  ? operand_equal_p (TREE_OPERAND (arg0, 0), op2, 0)
+	  : operand_equal_for_comparison_p (TREE_OPERAND (arg0, 0),
+						op2, TREE_OPERAND (arg0, 1)))
	  && !HONOR_SIGNED_ZEROS (TYPE_MODE (TREE_TYPE (op2))))
 	{
 	  tem = invert_truthvalue (arg0);
Index: gcc/testsuite/ChangeLog
from  Alexandre Oliva  <[EMAIL PROTECTED]>

	PR c++/19199
	* g++.dg/expr/lval2.C: New.

Index: gcc/testsuite/g++.dg/expr/lval2.C
===
RCS file: gcc/testsuite/g++.dg/expr/lval2.C
diff -N gcc/testsuite/g++.dg/expr/lval2.C
--- /dev/null	1 Jan 1970 00:00:00 -
+++ gcc/testsuite/g++.dg/expr/lval2.C 8 Mar 2005 22:08:07 -
@@ -0,0 +1,26 @@
+// PR c++/19199
+
+// { dg-do run }
+
+// We used to turn the COND_EXPR

Re: Merging calls to `abort'

2005-03-15 Thread Alexandre Oliva
On Mar 14, 2005, Eric Christopher <[EMAIL PROTECTED]> wrote:

>> > Now, I wouldn't object to hacking GCC to avoid cross-jumping calls to
>> > abort.  It's just that I don't think that the common GNU use of abort
>> > serves the users.
>> Agreed.  And as someone suggested, rather than treating abort
>> specially within GCC, I think we'd be better off with a function
>> attribute which prevented cross jumping to any function with
>> the attribute set.

> I think it makes sense to just not crossjump noreturn attribute
> functions if we're going to do this.

I think this is a slippery slope.  Crossjumping calls to abort(), or
even to arbitrary noreturn functions, is just one more compiler
optimization that makes debugging more difficult.  Crossjumping calls
to returning functions can be just as confusing when you're looking at
a backtrace in a debugger, especially if a function gets called from
lots of different functions that all get inlined into one that ends up
being identified as the frame owner; if those calls then get
cross-jumped, things get very confusing.

Then there's sibcalling, which can also contribute to more confusing
backtraces.

To sum up my point: I don't think avoiding crossjumping of calls to
abort will address the underlying issue, which is that backtraces
obtained from optimized programs may surprise people.  Such avoidance
will only paper over the issue, making the compiler behavior less
consistent and thus even more confusing.  If people want accurate
stack traces, they shouldn't be using optimizations that can mess with
backtraces.

What we might want to do is provide an option to disable all such
optimizations.

-- 
Alexandre Oliva http://www.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: A plan for eliminating cc0

2005-03-23 Thread Alexandre Oliva
On Mar 23, 2005, Ian Lance Taylor  wrote:

>2a) Define the clobbercc attribute to be "yes" for each insn which
>changes the condition codes unpredictably.  Typically the
>default would be "yes", and then either clobbercc would be
>written to use cond to select instruction for which it should
>be "no", or those instructions would be marked specifically.

Currently, mn10300 and h8300 use `cc' as the attribute to indicate how
an instruction affects cc0.  If you look at say mn10300 closely,
you'll see that different alternatives for several instructions have
significantly different behavior regarding cc0.  For example, `none'
and `none_0hit' don't change the flags at all, whereas `set_*' change
some flags, and `compare' and `clobber' set all flags.

Replacing them all with a clobber is sub-optimal, because some
alternatives don't really clobber cc0.  I'm not sure how much of a
problem this would be, or even if it actually makes any difference,
but it feels like it would be appropriate to somehow model such
conditional clobbering and, perhaps, even the actual operation
performed on cc0.

>2b) Convert conditional branch instructions from separate
>cmpMODE/bCOND instructions to a single conditional branch
>instruction, either by saving condition codes in cmpMODE or
>tstMODE or by using cbranch.

The problem here IIRC is a combinatorial explosion on the number of
alternatives for the now-merged compare&branch instructions.  Have a
look, for example, at the cc0-setting patterns on h8300.md, including
the zero-extend and zero-extract ones.  There are many of them, with
operands that aren't easy to get into a single cbranch instruction.

>2c) Similarly convert setCOND, movCOND, and addCOND instructions.

And then, there are these, which add to the combinatorial explosion.

> At this point we have eliminated cc0 for the target.  The generated
> code should be correct.

I think a better approach would be to enable cc0 to be used in
patterns, but have some machinery behind the scenes that combines
expanders for cc0-setting and cc0-using instructions, creates the
post-reload splitters and makes the actual insns conditional on
reload_completed.  Then one can still model the port in the most
natural way, but not overburden the rest of the compiler with cc0
support.

> The instruction which sets the condition
> codes will be kept in synch with the instruction which uses the
> condition codes, because most other instructions will have an
> automatically inserted clobber.

If you have combined all cc0 setters and users, the clobber is mostly
pointless, at least in the absence of scheduling.  Sure enough, when
the time comes to split the setters and readers, they must be in
place, so perhaps the addition of the clobbers could be taken care of
at that time as well, taking cc alternatives into account?  The same
machinery that adds the combined patterns and splitters could also
generate splitters for the potential-cc0-setting patterns to be
applied at that same time, adding a clobber or set pattern based on
the alternative selected by reload, according to rules specified by a
mechanism equivalent to NOTICE_CC_UPDATE.  Then we'd run new code (the
new pass you suggested) to find redundant cc0 setters and remove them.

> The main ideas here are to use the clobbercc attribute to avoid having
> to modify every single instruction,

How does it avoid this?  On cc0 targets, almost every single
instruction modifies the flags anyway.  How does having to add the
clobbercc attribute help?

Or do you mean we'd introduce the clobbercc attribute with a default
to yes, and then set it to no on the rare instructions/alternatives
that don't clobber it?  Why not use the relatively-common-in-cc0-ports
cc attribute for this purpose?

-- 
Alexandre Oliva http://www.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: A plan for eliminating cc0

2005-03-24 Thread Alexandre Oliva
_set_operand, or whether we could just
accept all possible variations in cc0_set_operand.  I'm not even sure
it actually makes sense to handle anything but clobber, since we're
going to retain the NOTICE_CC_UPDATE machinery unchanged.

One more thing to note is that we have to force splitters to run after
reload, with cc0_expanding_in_progress, such that patterns that don't
have the clobbers or some dummy pattern in its stead don't survive
past the point in which we split cc0 setters and users out of single
combined insns.  It might make sense to use cc0_expanding_in_progress
and cc0_expanding_completed in the conditions of the modified cmpsi
and condbranch patterns above, but it doesn't seem to be necessary.

The port would have to implement the gen_cc0_set function (or just a
cc0_set pattern) and the cc0_set_operand predicate, but these should
be fairly trivial to add given knowledge about the attributes.

Ideally, we should have some means to avoid duplicating all insns with
and without the cc0_set operand, but I don't see a simple way to tell
whether an attribute holds a value for all alternatives of an insn, in
the general case.

>> Why not use the relatively-common-in-cc0-ports
>> cc attribute for this purpose?

> That would be fine.  It would be trivial to define clobbercc in terms
> of cc for ports which have it.

See the problem of alternatives above.  Would it not be a problem in
your proposal?

-- 
Alexandre Oliva http://www.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


bootstrap compare failure in ada/targparm.o on i686-pc-linux-gnu?

2005-03-26 Thread Alexandre Oliva
: 75 c9   jne13ca 
> 
> 1401: e9 19 fb ff ff  jmpf1f 
> 
> 1406: 8b 75 08mov0x8(%ebp),%esi
> 1409: 01 de   add%ebx,%esi
> 140b: 89 b5 c4 fd ff ff   mov%esi,0xfdc4(%ebp)
> 1411: 0f b6 c1movzbl %cl,%eax
> 1414: 89 04 24mov%eax,(%esp)
> 1417: e8 fc ff ff ff  call   1418 
> 
>   1418: R_386_PC32output__write_char
> 141c: ff 85 c4 fd ff ff   incl   0xfdc4(%ebp)
> 1422: 8b 95 d8 fd ff ff   mov0xfdd8(%ebp),%edx
> 1428: 8b 85 c4 fd ff ff   mov0xfdc4(%ebp),%eax
> 142e: 42  inc%edx
> 142f: 89 95 d8 fd ff ff   mov%edx,0xfdd8(%ebp)
> 1435: 0f b6 08movzbl (%eax),%ecx
> 1438: 80 f9 0acmp$0xa,%cl
> 143b: 0f 95 c2setne  %dl
> 143e: 80 f9 29cmp$0x29,%cl
> 1441: 0f 95 c0setne  %al
> 1444: 84 d0   test   %dl,%al
> 1446: 75 c9   jne1411 
> 
> 1448: e9 d2 fa ff ff  jmpf1f 
> 
> 144d: 8b 75 08mov0x8(%ebp),%esi
> 1450: 8b 85 d8 fd ff ff   mov0xfdd8(%ebp),%eax
> 1456: 80 3c 06 22 cmpb   $0x22,(%esi,%eax,1)
> 145a: 75 07   jne1463 
> 
> 145c: 80 7c 30 01 3b  cmpb   $0x3b,0x1(%eax,%esi,1)
> 1461: 74 38   je 149b 
> 
> 1463: e8 fc ff ff ff  call   1464 
> 
>   1464: R_386_PC32output__set_standard_error
> 1468: b8 88 02 00 00  mov$0x288,%eax
>   1469: R_386_32  .rodata
> 146d: 89 85 20 fd ff ff   mov%eax,0xfd20(%ebp)
> 1473: b8 40 00 00 00  mov$0x40,%eax
1437,1458c1438,1458
< 1478: 89 85 20 fd ff ff   mov%eax,0xfd20(%ebp)
< 147e: b8 40 00 00 00  mov$0x40,%eax
<   147f: R_386_32  .rodata
< 1483: 89 85 24 fd ff ff   mov%eax,0xfd24(%ebp)
< 1489: 8b 8d 20 fd ff ff   mov0xfd20(%ebp),%ecx
< 148f: 8b 9d 24 fd ff ff   mov0xfd24(%ebp),%ebx
< 1495: 89 0c 24mov%ecx,(%esp)
< 1498: 89 5c 24 04 mov%ebx,0x4(%esp)
< 149c: e8 fc ff ff ff  call   149d 

<   149d: R_386_PC32output__write_line
< 14a1: e9 7e fa ff ff  jmpf24 

< 14a6: 0f b6 44 30 02  movzbl 0x2(%eax,%esi,1),%eax
< 14ab: 3c 0a   cmp$0xa,%al
< 14ad: 0f 95 c2setne  %dl
< 14b0: 3c 0d   cmp$0xd,%al
< 14b2: 0f 95 c0setne  %al
< 14b5: 84 d0   test   %dl,%al
< 14b7: 75 b5   jne146e 

< 14b9: e8 fc ff ff ff  call   14ba 

<   14ba: R_386_PC32namet__name_enter
< 14be: a3 00 00 00 00  mov%eax,0x0
<   14bf: R_386_32  targparm__run_time_name_on_target
---
> 1478: 89 85 24 fd ff ff   mov%eax,0xfd24(%ebp)
> 147e: 8b 8d 20 fd ff ff   mov0xfd20(%ebp),%ecx
> 1484: 8b 9d 24 fd ff ff   mov0xfd24(%ebp),%ebx
> 148a: 89 0c 24mov%ecx,(%esp)
> 148d: 89 5c 24 04 mov%ebx,0x4(%esp)
> 1491: e8 fc ff ff ff  call   1492 
> 
>   1492: R_386_PC32output__write_line
> 1496: e9 89 fa ff ff  jmpf24 
> 
> 149b: 0f b6 44 30 02  movzbl 0x2(%eax,%esi,1),%eax
> 14a0: 3c 0a   cmp$0xa,%al
> 14a2: 0f 95 c2setne  %dl
> 14a5: 3c 0d   cmp$0xd,%al
> 14a7: 0f 95 c0setne  %al
> 14aa: 84 d0   test   %dl,%al
> 14ac: 75 b5   jne1463 
> 
> 14ae: e8 fc ff ff ff  call   14af 
> 
>   14af: R_386_PC32namet__name_enter
> 14b3: a3 00 00 00 00  mov%eax,0x0
>   14b4: R_386_32  targparm__run_time_name_on_target
> 14b8: e9 22 f1 ff ff  jmp5df 
> 
> 14bd: 8b a5 04 fd ff ff   mov0xfd04(%ebp),%esp


Anyone else seeing this?

-- 
Alexandre Oliva http://www.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: bootstrap compare failure in ada/targparm.o on i686-pc-linux-gnu?

2005-04-04 Thread Alexandre Oliva
On Mar 26, 2005, Graham Stott <[EMAIL PROTECTED]> wrote:

> I do regular bootstraps of mainline, all languages, on FC3
> i686-pc-linux-gnu and haven't seen any problems up to Friday.  I'm
> using --enable-checking=tree,misc,rtl,rtlflag which might make a
> difference.
> difference.

I'm still observing this problem every now and then.  It's not
consistent or easily reproducible, unfortunately.  I suspect we're
using pointers somewhere, and that stack/mmap/whatever address
randomization is causing different results.  I'm looking into it.

-- 
Alexandre Oliva http://www.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: bootstrap compare failure in ada/targparm.o on i686-pc-linux-gnu?

2005-04-04 Thread Alexandre Oliva
On Apr  4, 2005, Dale Johannesen <[EMAIL PROTECTED]> wrote:

> On Apr 4, 2005, at 2:32 PM, Alexandre Oliva wrote:
>> On Mar 26, 2005, Graham Stott <[EMAIL PROTECTED]> wrote:
>>> I do regular bootstraps of mainline all languages on FC3
>>> i686-pc-linux-gnu and haven't seen any problems up to Friday. I'm
>>> using --enable-checking=tree,misc,rtl,rtlflag which might make a
>>> difference.

>> I'm still observing this problem every now and then.  It's not
>> consistent or easily reproducible, unfortunately.  I suspect we're
>> using pointers somewhere, and that stack/mmap/whatever address
>> randomization is causing different results.  I'm looking into it.

> I've found 2 bugs over the last 6 months where the problem is exposed
> only if two pointers happen to hash to the same bucket.  It's occurred
> to me that doing a bootstrap with all hashtable sizes set to 1 might be
> a good idea.

Perhaps.  But the fundamental problem is that we shouldn't be hashing
on pointers, and tree-eh.c does just that for finally_tree and
throw_stmt_table.

-- 
Alexandre Oliva http://www.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: bootstrap compare failure in ada/targparm.o on i686-pc-linux-gnu?

2005-04-04 Thread Alexandre Oliva
On Apr  4, 2005, Dale Johannesen <[EMAIL PROTECTED]> wrote:

> On Apr 4, 2005, at 3:21 PM, Alexandre Oliva wrote:

>> On Apr  4, 2005, Dale Johannesen <[EMAIL PROTECTED]> wrote:
>> 
>>> On Apr 4, 2005, at 2:32 PM, Alexandre Oliva wrote:
>>>> On Mar 26, 2005, Graham Stott <[EMAIL PROTECTED]> wrote:
>>>>> I do regular bootstraps of mainline all languages on FC3
>>>>> i686-pc-linux-gnu and haven't seen any problems up to Friday. I'm
>>>>> using --enable-checking=tree,misc,rtl,rtlflag which might make a
>>>>> difference.
>> 
>>>> I'm still observing this problem every now and then.  It's not
>>>> consistent or easily reproducible, unfortunately.  I suspect we're
>>>> using pointers somewhere, and that stack/mmap/whatever address
>>>> randomization is causing different results.  I'm looking into it.
>> 
>>> I've found 2 bugs over the last 6 months where the problem is exposed
>>> only if two pointers happen to hash to the same bucket.  It's occurred
>>> to me that doing a bootstrap with all hashtable sizes set to 1
>>> might be
>>> a good idea.
>> 
>> Perhaps.  But the fundamental problem is that we shouldn't be hashing
>> on pointers, and tree-eh.c does just that for finally_tree and
>> throw_stmt_table.

> Hmm.  Of the earlier bugs, in
> http://gcc.gnu.org/ml/gcc-patches/2004-12/msg01760.html
> the hash table in question is built by DOM, and in
> http://gcc.gnu.org/ml/gcc-patches/2005-03/msg01810.html
> it's built by PRE (VN).  I don't think there's general agreement
> that "we shouldn't be hashing on pointers"

Odd...  I was pretty sure there was.  Maybe it's only in binutils?  I
thought we'd been sufficiently bitten by pointer-hashing problems in
GCC to know better.

Anyhow...  The only way I can think of to enable us to hash on
pointers that doesn't involve adding an id to every tree node
(assuming the issue only shows up with tree nodes) is to create a
separate table that maps arbitrary pointers to an intptr_t counter that
gets incremented every time a pointer is added to this hash table.
Computing the hash of a pointer would then amount to looking the
pointer up in this table and obtaining its value.  We could set a
number aside (say 0 or -1) to use in NO_INSERT lookups.

My head hurts about the GGC implications of opaque pointers in such a
hash table, and retaining pointers in the hash table that have already
been otherwise freed.  It surely doesn't hurt, unless you actually
want to save/restore this table; it's perfectly ok to reuse an id for
a pointer to a recycled memory area.  But this table would cost
memory and slow things down.  Maybe we should only use it for
bootstraps (enabled with a flag in the default BOOT_CFLAGS), and
simply hash pointers if the flag is not given?

-- 
Alexandre Oliva http://www.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: bootstrap compare failure in ada/targparm.o on i686-pc-linux-gnu?

2005-04-11 Thread Alexandre Oliva
On Apr  4, 2005, Richard Henderson <[EMAIL PROTECTED]> wrote:

> On Mon, Apr 04, 2005 at 07:57:09PM -0300, Alexandre Oliva wrote:
>> My head hurts about the GGC implications of opaque pointers in such a
>> hash table, and retaining pointers in the hash table that have already
>> been otherwise freed.

> These are solved problems.

Only in the mathematical sense.  We still have such incorrect uses in
our tree, as the bootstrap problem I reported shows.

-- 
Alexandre Oliva http://www.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: Can't build gcc cvs trunk 20050409 gnat tools on sparc-linux: tree check: accessed operand 2 of view_convert_expr with 1 operands in visit_assignment, at tree-ssa-ccp.c:1074

2005-04-11 Thread Alexandre Oliva
On Apr 10, 2005, Andreas Jaeger <[EMAIL PROTECTED]> wrote:

> Laurent GUERBY <[EMAIL PROTECTED]> writes:

>> stage1/xgcc -Bstage1/ 
>> -B/home/guerby/work/gcc/install/install-20050410T003153/i686-pc-linux-gnu/bin/
>>  -c -O2 -g -fomit-frame-pointer  -gnatpg -gnata -I- -I. -Iada 
>> -I/home/guerby/work/gcc/version-head/gcc/ada 
>> /home/guerby/work/gcc/version-head/gcc/ada/errout.adb -o ada/errout.o
>> +===GNAT BUG DETECTED==+
>> | 4.1.0 20050409 (experimental) (i686-pc-linux-gnu) GCC error: |
>> | tree check: accessed operand 2 of view_convert_expr with 1 operands  |
>> |in visit_assignment, at tree-ssa-ccp.c:1074   |
>> | Error detected at errout.adb:2563:1  |

> And on x86_64-linux-gnu I see now:

> stage1/xgcc -Bstage1/ -B/opt/gcc/4.1-devel/x86_64-suse-linux-gnu/bin/ -c -g 
> -O2  -gnatpg -gnata -I- -I. -Iada -I/cvs/gcc/gcc/ada 
> /cvs/gcc/gcc/ada/ada.ads -o ada/ada.o
> +===GNAT BUG DETECTED==+
> | 4.1.0 20050409 (experimental) (x86_64-suse-linux-gnu) Segmentation fault |
> | Error detected at ada.ads:17:1   |
> | Please submit a bug report; see http://gcc.gnu.org/bugs.html.|
> | Use a subject line meaningful to you and us to track the bug.|
> | Include the entire contents of this bug box in the report.   |
> | Include the exact gcc or gnatmake command that you entered.  |
> | Also include sources listed below in gnatchop format |
> | (concatenated together with no headers between files).   |
> +==+

I'm testing the following patches to fix these two problems.  I'll
submit them formally when bootstrap is complete, but early review and
maybe even pre-approval wouldn't hurt :-)

Index: tree-ssa-ccp.c
===
RCS file: /cvs/gcc/gcc/gcc/tree-ssa-ccp.c,v
retrieving revision 2.64
diff -u -p -d -u -p -d -u -p -r2.64 tree-ssa-ccp.c
--- tree-ssa-ccp.c  9 Apr 2005 01:37:23 -   2.64
+++ tree-ssa-ccp.c  11 Apr 2005 19:54:39 -
@@ -1071,7 +1071,7 @@ visit_assignment (tree stmt, tree *outpu
   TREE_TYPE (TREE_OPERAND (orig_lhs, 0)),
   val.value));
 
-   orig_lhs = TREE_OPERAND (orig_lhs, 1);
+   orig_lhs = TREE_OPERAND (orig_lhs, 0);
if (w && is_gimple_min_invariant (w))
  val.value = w;
else
Index: varasm.c
===
RCS file: /cvs/gcc/gcc/gcc/varasm.c,v
retrieving revision 1.495
diff -u -p -d -u -p -d -u -p -r1.495 varasm.c
--- varasm.c9 Apr 2005 20:41:49 -   1.495
+++ varasm.c11 Apr 2005 19:55:05 -
@@ -307,7 +307,7 @@ in_unlikely_text_section (void)
   cfun = DECL_STRUCT_FUNCTION (current_function_decl);
 
   ret_val = ((in_section == in_unlikely_executed_text)
-|| (in_section == in_named
+|| (in_section == in_named && cfun
 && cfun->unlikely_text_section_name
 && strcmp (in_named_name, 
cfun->unlikely_text_section_name) == 0));


-- 
Alexandre Oliva http://www.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: bootstrap compare failure in ada/targparm.o on i686-pc-linux-gnu?

2005-04-11 Thread Alexandre Oliva
On Apr 11, 2005, Alexandre Oliva <[EMAIL PROTECTED]> wrote:

> On Apr  4, 2005, Richard Henderson <[EMAIL PROTECTED]> wrote:
>> On Mon, Apr 04, 2005 at 07:57:09PM -0300, Alexandre Oliva wrote:
>>> My head hurts about the GGC implications of opaque pointers in such a
>>> hash table, and retaining pointers in the hash table that have already
>>> been otherwise freed.

>> These are solved problems.

> Only in the mathematical sense.  We still have such incorrect uses in
> our tree, as the bootstrap problem I reported shows.

I take that back.  The hash tables seem to be fine.  I suspect it's
the sorting on pointers in the goto_queue that triggers the problem.
In fact, I'm pretty sure comparing pointers that are not guaranteed to
be in the same array invokes undefined behavior, and I do remember
having run into errors because of such abuse in bfd many moons ago.
Ugh...

-- 
Alexandre Oliva http://www.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: bootstrap compare failure in ada/targparm.o on i686-pc-linux-gnu?

2005-04-11 Thread Alexandre Oliva
On Apr 11, 2005, Richard Henderson <[EMAIL PROTECTED]> wrote:

> On Mon, Apr 11, 2005 at 05:41:56PM -0300, Alexandre Oliva wrote:
>> I take that back.  The hash tables seem to be fine.  I suspect it's
>> the sorting on pointers in the goto_queue that triggers the problem.
>> In fact, I'm pretty sure comparing pointers that are not guaranteed to
>> be in the same array invokes undefined behavior, and I do remember
>> having run into errors because of such abuse in bfd many moons ago.

> Technically, yes.  I deny that it's a problem in practice except
> for segmented architectures.  Which includes no gcc hosts.

The comparison of pointers may not be a problem for GCC hosts, but
it's indeed what breaks bootstraps.  We sort the goto queue on
the address of its stmt, and then walk the array in sequential order
in lower_try_finally_switch().  This is exactly the sort of dependency
on pointers that we've just agreed we should avoid.

In case the problem isn't clear, we assign switch indexes growing from
0, then we sort the goto_queue VARRAY by stmt address, then generate
the switch stmt along with new artificial labels.  Even though the
switch statement is later sorted by index, its label names and
redirecting BBs that follow are in a different order, so bootstrap
comparison fails.  Here's a diff between two compiles of
targparm.adb's t12.eh dump:

@@ -1526,8 +1526,8 @@
   system__soft_links__set_jmpbuf_address_soft (JMPBUF_SAVED.1373);
   switch (finally_tmp.174D.3471)
 {
-  case 0: goto ;
-  case 2: goto ;
+  case 0: goto ;
+  case 1: goto ;
   case 3: goto ;
   default : goto ;
 }
@@ -1544,13 +1544,13 @@
   goto ;
   :;
   __builtin_stack_restore (saved_stack.54D.1849);
-  goto ;
+  goto ploop_continueD.1185;
   :;
   __builtin_stack_restore (saved_stack.54D.1849);
-  goto ploop_continueD.1185;
+  goto line_loop_continueD.1186;
   :;
   __builtin_stack_restore (saved_stack.54D.1849);
-  goto line_loop_continueD.1186;
+  goto ;
   :;
   ploop_continueD.1185:;
   kD.1369 = kD.1369 + 1;

See how we've simply rotated the final target of the switch indexes?
The intermediate group of labels that are goto targets from the switch
statement are each followed by a goto statement to a label in the
second hunk of the diff, in the same order.


It looks like it wouldn't be too hard to overcome this problem by
generating the artificial labels in case_index order, instead of in
goto_queue order, but it's not obvious to me that the potential
randomness from sorting of stmt addresses in the goto_queue that have
the same index couldn't possibly affect the outcome.

-- 
Alexandre Oliva http://www.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: bootstrap compare failure in ada/targparm.o on i686-pc-linux-gnu?

2005-04-11 Thread Alexandre Oliva
On Apr 12, 2005, Alexandre Oliva <[EMAIL PROTECTED]> wrote:

> It looks like it wouldn't be too hard to overcome this problem by
> generating the artificial labels in case_index order, instead of in
> goto_queue order, but it's not obvious to me that the potential
> randomness from sorting of stmt addresses in the goto_queue that have
> the same index couldn't possibly affect the outcome.

This is what I had in mind with the paragraph above.  Does it feel
like a reasonable approach?  (Note that the two sets of
last_case_index were dead, so the patch removes them)

Index: gcc/tree-eh.c
===
RCS file: /cvs/gcc/gcc/gcc/tree-eh.c,v
retrieving revision 2.28
diff -u -p -r2.28 tree-eh.c
--- gcc/tree-eh.c 1 Apr 2005 03:42:44 - 2.28
+++ gcc/tree-eh.c 12 Apr 2005 05:51:36 -
@@ -1194,7 +1194,6 @@ lower_try_finally_switch (struct leh_sta
   q = tf->goto_queue;
   qe = q + tf->goto_queue_active;
   j = last_case_index + tf->may_return;
-  last_case_index += nlabels;
   for (; q < qe; ++q)
 {
   tree mod;
@@ -1217,20 +1216,37 @@ lower_try_finally_switch (struct leh_sta
 
   case_index = j + q->index;
   if (!TREE_VEC_ELT (case_label_vec, case_index))
-	{
-	  last_case = build (CASE_LABEL_EXPR, void_type_node,
-			 build_int_cst (NULL_TREE, switch_id), NULL,
-			 create_artificial_label ());
-	  TREE_VEC_ELT (case_label_vec, case_index) = last_case;
-
-	  x = build (LABEL_EXPR, void_type_node, CASE_LABEL (last_case));
-	  append_to_statement_list (x, &switch_body);
-	  append_to_statement_list (q->cont_stmt, &switch_body);
-	  maybe_record_in_goto_queue (state, q->cont_stmt);
-	}
+	TREE_VEC_ELT (case_label_vec, case_index)
+	  = build (CASE_LABEL_EXPR, void_type_node,
+		   build_int_cst (NULL_TREE, switch_id), NULL,
+		   /* We store the cont_stmt in the
+		  CASE_LABEL, so that we can recover it
+		  in the loop below.  We don't create
+		  the new label while walking the
+		  goto_queue because pointers don't
+		  offer a stable order.  */
+		   q->cont_stmt);
+}
+  for (j = last_case_index; j < last_case_index + nlabels; j++)
+{
+  tree label;
+  tree cont_stmt;
+
+  last_case = TREE_VEC_ELT (case_label_vec, j);
+
+  gcc_assert (last_case);
+
+  cont_stmt = CASE_LABEL (last_case);
+
+  label = create_artificial_label ();
+  CASE_LABEL (last_case) = label;
+
+  x = build (LABEL_EXPR, void_type_node, label);
+  append_to_statement_list (x, &switch_body);
+  append_to_statement_list (cont_stmt, &switch_body);
+  maybe_record_in_goto_queue (state, cont_stmt);
 }
   replace_goto_queue (tf);
-  last_case_index += nlabels;
 
   /* Make sure that the last case is the default label, as one is required.
  Then sort the labels, which is also required in GIMPLE.  */

-- 
Alexandre Oliva http://www.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: bootstrap compare failure in ada/targparm.o on i686-pc-linux-gnu?

2005-04-14 Thread Alexandre Oliva
On Apr 12, 2005, Alexandre Oliva <[EMAIL PROTECTED]> wrote:

> On Apr 12, 2005, Alexandre Oliva <[EMAIL PROTECTED]> wrote:
>> It looks like it wouldn't be too hard to overcome this problem by
>> generating the artificial labels in case_index order, instead of in
>> goto_queue order, but it's not obvious to me that the potential
>> randomness from sorting of stmt addresses in the goto_queue that have
>> the same index couldn't possibly affect the outcome.

> This is what I had in mind with the paragraph above.  Does it feel
> like a reasonable approach?  (Note that the two sets of
> last_case_index were dead, so the patch removes them)

The patch I posted before wasn't enough to fix the problem, there was
another portion of code that walked the goto queue in order.  This one
fixes the problem, and enables GCC mainline as of a week ago or so to
bootstrap on i686-pc-linux-gnu (Fedora Core devel, with exec-shield
mmap randomization enabled).  I haven't used a newer tree to test this
because I've been unable to bootstrap it for other reasons.

Ok to install?

Index: gcc/ChangeLog
from  Alexandre Oliva  <[EMAIL PROTECTED]>

	* tree-eh.c (lower_try_finally_copy): Generate new code in
	response to goto_queue entries as if the queue was sorted by
	index, not pointers.
	(lower_try_finally_switch): Likewise.

Index: gcc/tree-eh.c
===
RCS file: /cvs/gcc/gcc/gcc/tree-eh.c,v
retrieving revision 2.28
diff -u -p -r2.28 tree-eh.c
--- gcc/tree-eh.c 1 Apr 2005 03:42:44 - 2.28
+++ gcc/tree-eh.c 14 Apr 2005 01:15:45 -
@@ -1038,47 +1038,72 @@ lower_try_finally_copy (struct leh_state
 {
   struct goto_queue_node *q, *qe;
   tree return_val = NULL;
-  int return_index;
-  tree *labels;
+  int return_index, index;
+  struct
+  {
+	struct goto_queue_node *q;
+	tree label;
+  } *labels;
 
   if (tf->dest_array)
 	return_index = VARRAY_ACTIVE_SIZE (tf->dest_array);
   else
 	return_index = 0;
-  labels = xcalloc (sizeof (tree), return_index + 1);
+  labels = xcalloc (sizeof (*labels), return_index + 1);
 
   q = tf->goto_queue;
   qe = q + tf->goto_queue_active;
   for (; q < qe; q++)
 	{
-	  int index = q->index < 0 ? return_index : q->index;
-	  tree lab = labels[index];
-	  bool build_p = false;
+	  index = q->index < 0 ? return_index : q->index;
 
-	  if (!lab)
-	{
-	  labels[index] = lab = create_artificial_label ();
-	  build_p = true;
-	}
+	  if (!labels[index].q)
+	labels[index].q = q;
+	}
+
+  for (index = 0; index < return_index + 1; index++)
+	{
+	  tree lab;
+
+	  q = labels[index].q;
+	  if (! q)
+	continue;
+
+	  lab = labels[index].label = create_artificial_label ();
 
 	  if (index == return_index)
 	do_return_redirection (q, lab, NULL, &return_val);
 	  else
 	do_goto_redirection (q, lab, NULL);
 
-	  if (build_p)
-	{
-	  x = build1 (LABEL_EXPR, void_type_node, lab);
-	  append_to_statement_list (x, &new_stmt);
+	  x = build1 (LABEL_EXPR, void_type_node, lab);
+	  append_to_statement_list (x, &new_stmt);
 
-	  x = lower_try_finally_dup_block (finally, state);
-	  lower_eh_constructs_1 (state, &x);
-	  append_to_statement_list (x, &new_stmt);
+	  x = lower_try_finally_dup_block (finally, state);
+	  lower_eh_constructs_1 (state, &x);
+	  append_to_statement_list (x, &new_stmt);
 
-	  append_to_statement_list (q->cont_stmt, &new_stmt);
-	  maybe_record_in_goto_queue (state, q->cont_stmt);
-	}
+	  append_to_statement_list (q->cont_stmt, &new_stmt);
+	  maybe_record_in_goto_queue (state, q->cont_stmt);
+	}
+
+  for (q = tf->goto_queue; q < qe; q++)
+	{
+	  tree lab;
+
+	  index = q->index < 0 ? return_index : q->index;
+
+	  if (labels[index].q == q)
+	continue;
+
+	  lab = labels[index].label;
+
+	  if (index == return_index)
+	do_return_redirection (q, lab, NULL, &return_val);
+	  else
+	do_goto_redirection (q, lab, NULL);
 	}
+	
   replace_goto_queue (tf);
   free (labels);
 }
@@ -1194,7 +1219,6 @@ lower_try_finally_switch (struct leh_sta
   q = tf->goto_queue;
   qe = q + tf->goto_queue_active;
   j = last_case_index + tf->may_return;
-  last_case_index += nlabels;
   for (; q < qe; ++q)
 {
   tree mod;
@@ -1217,20 +1241,37 @@ lower_try_finally_switch (struct leh_sta
 
   case_index = j + q->index;
   if (!TREE_VEC_ELT (case_label_vec, case_index))
-	{
-	  last_case = build (CASE_LABEL_EXPR, void_type_node,
-			 build_int_cst (NULL_TREE, switch_id), NULL,
-			 create_artificial_label ());
-	  TREE_VEC_ELT (case_label_vec, case_index) = last_case;
-
-	  x = build (LABEL_EXPR, void_type_node, CASE_LABEL (last_case));
-	  append_to_statement_list 

Re: bootstrap compare failure in ada/targparm.o on i686-pc-linux-gnu?

2005-04-14 Thread Alexandre Oliva
On Apr 14, 2005, Richard Henderson <[EMAIL PROTECTED]> wrote:

> On Thu, Apr 14, 2005 at 12:13:59PM -0300, Alexandre Oliva wrote:
>> * tree-eh.c (lower_try_finally_copy): Generate new code in
>> response to goto_queue entries as if the queue was sorted by
>> index, not pointers.
>> (lower_try_finally_switch): Likewise.

> Ok.

Mark, ok for 4.0 as well?

-- 
Alexandre Oliva http://www.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: bootstrap compare failure in ada/targparm.o on i686-pc-linux-gnu?

2005-04-14 Thread Alexandre Oliva
On Apr 14, 2005, Mark Mitchell <[EMAIL PROTECTED]> wrote:

> Alexandre Oliva wrote:
>> On Apr 14, 2005, Richard Henderson <[EMAIL PROTECTED]> wrote:
>> 
>>> On Thu, Apr 14, 2005 at 12:13:59PM -0300, Alexandre Oliva wrote:
>>> 
>>>> * tree-eh.c (lower_try_finally_copy): Generate new code in
>>>> response to goto_queue entries as if the queue was sorted by
>>>> index, not pointers.
>>>> (lower_try_finally_switch): Likewise.
>> 
>>> Ok.
>> Mark, ok for 4.0 as well?

> Richard, what's your level of confidence here?  I'd rather not break
> C++ or Java...

If you look closely, you'll see it's just enforcing a known order to
the operations.  If you prefer, I can prepare a patch that copies the
goto_queue, sorts that by index, and iterates on that instead of the
sorted-by-stmt goto_queue.

If it helps any, bootstrap and regtest passed on 4.0 branch,
amd64-linux-gnu.

-- 
Alexandre Oliva http://www.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: Heads-up: volatile and C++

2005-04-14 Thread Alexandre Oliva
On Apr 15, 2005, Marcin Dalecki <[EMAIL PROTECTED]> wrote:

> On 2005-04-15, at 01:10, Richard Henderson wrote:

>> template T acquire(T *ptr);
>> template void release(T *ptr, T val);
>> 
>> where the functions do the indirection plus the memory ordering?

> Templates are a no-go for a well known and well defined subset for C++
> for embedded programming known commonly as well embedded C++.

It doesn't really have to be templates.  Consider static_cast et al.
They look like template function calls, but aren't.

-- 
Alexandre Oliva http://www.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: bootstrap compare failure in ada/targparm.o on i686-pc-linux-gnu?

2005-04-15 Thread Alexandre Oliva
On Apr 15, 2005, Mark Mitchell <[EMAIL PROTECTED]> wrote:

> Richard Henderson wrote:
>> On Thu, Apr 14, 2005 at 05:26:15PM -0700, Mark Mitchell wrote:
>> 
>>> Richard, what's your level of confidence here?  I'd rather not
>>> break C++ or Java...
>> I think it's pretty safe.

> OK, Alexandre, please install the patch.

It's in.

-- 
Alexandre Oliva http://www.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: A plan for eliminating cc0

2005-04-26 Thread Alexandre Oliva
Sorry, I dropped the ball on this one.

On Mar 24, 2005, Ian Lance Taylor  wrote:

> Alexandre Oliva <[EMAIL PROTECTED]> writes:
>> I realize the sequence construct is already taken for delayed
>> branches, but that's only in the outermost insn pattern.  We could
>> overload the meaning, or just enclose the (sequence) in some other
>> construct (a dummy parallel?), give it a different mode (a CC mode?)
>> or something else to indicate that this is a combined pattern.  Or
>> create a different rtl code for cc0 sequences of instructions.  The
>> syntax is not all that important at this point.

> I don't understand why you are suggesting a sequence at all.  Why not
> this:

> (define_insn_and_split "cc0_condbranch_cmpsi"
>   [(set (pc)
> (if_then_else (match_operator 3 "comparison_operator"
>(match_operand:SI 0 "register_operand" "!*d*a*x,dax")
>(match_operand:SI 1 "nonmemory_operand" "*0,daxi"
>  (label_ref (match_operand 2 "" ""))
>  (pc)))]
>   )]

Because the above assumes the insn that sets cc is a single set,
without anything else in parallel.  If you have to handle parallels,
you may be tempted to just add them to the combined insn, and then in
some ugly cases you can end up having ambiguities.  Consider a
(clobber (reg X)) in parallel with the (set (cc)), for example.
Consider that some instructions that use cc may also have such
clobbers, and some don't, and that the clobber is the only difference
between them.  How would you deal with this?  Consider, for the worst
case, that whatever strategy you use to decide where to add the
clobber is the strategy the port maintainer took when deciding where
to add the explicit clobber to the two insn variants that use cc.


Besides, I feel the combined sequences might reduce the actual effects
of the pattern explosion, since you won't have to generate code to
match the whole thing and the components, you just match a sequence
with the given components.  But maybe it doesn't make any significant
difference in this regard.

-- 
Alexandre Oliva http://www.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: A plan for eliminating cc0

2005-04-26 Thread Alexandre Oliva
On Mar 28, 2005, Paul Schlie <[EMAIL PROTECTED]> wrote:

> More specifically, if GCC enabled set to optionally specify multiple targets
> for a single rtl source expression, i.e.:

>   (set ((reg:xx %0) (reg CC) ...) (some-expression:xx ...))

There's always (set (parallel (...)) (some-expression)).  We use
parallels with similar semantics in cumulative function arguments, so
this wouldn't be entirely new, but I suppose most rtl handlers would
need a bit of work to fully understand the implications of this.

Also, the fact that reg CC has a different mode might require some
further tweaking.

Given this, we could figure out some way to create lisp-like macros to
translate input shorthands such as (set_cc (match_operand) (value))
into the uglier (set (parallel (match_operand) (cc0)) (value)),
including the possibility of a port introducing multiple
variants/modes for set_cc, corresponding to different sets of flags
that various instructions may set.

-- 
Alexandre Oliva http://www.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-03 Thread Alexandre Oliva
On Apr 28, 2005, David Edelsohn <[EMAIL PROTECTED]> wrote:

>>>>>> Joe Buck writes:
Joe> Is there a reason why we aren't using a recent libtool?

>   Porting and testing effort to upgrade. 

FWIW, I'd love to switch to a newer version of libtool, but I don't
have easy access to as many OSs as I used to several years ago, so
whatever testing I could offer would be quite limited.

The other issue is that I'm aware of some changes that we've adopted
in GCC libtool that are in libtool CVS mainline (very unstable), but
not in the libtool 1.5 branch (stable releases come out of it) nor in
the 2.0 branch (where the next major stable release is hopefully soon
coming from).

As much as I'd rather avoid switching from one random CVS snapshot of
libtool, now heavily patched, to another random CVS snapshot, it's
either that or waiting a long time until 2.0 is released, then
backport whatever features from libtool mainline we happen to be
relying on.  Or even wait for 2.2.

At this point, it doesn't feel like switching to 1.5.16 is worth the
effort.  2.0 should be far more maintainable, and hopefully
significantly more efficient on hosts where shell functions,
optimized for properties of the build and/or host machine, can bring
such improvements.

Thoughts?

-- 
Alexandre Oliva http://www.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-03 Thread Alexandre Oliva
On Apr 29, 2005, Jakub Jelinek <[EMAIL PROTECTED]> wrote:

> On Fri, Apr 29, 2005 at 10:47:06AM +0100, Andrew Haley wrote:
>> Ian Lance Taylor writes:
>> > 
>> > And, yes, we clearly need to do something about the libjava build.
>> 
>> OK, I know nothing about libtool so this might not be possible, but
>> IMO the easiest way of making a dramatic difference is to cease to
>> compile every file twice, once with PIC and once without.  There would
>> be a small performance regression for statically linked Java apps, but
>> in practice Java is very hard to use with static linkage.

> Definitely.  For -static you either have the choice of linking the
> binary with -Wl,--whole-archive for libgcj.a (and likely other Java libs),
> or spend a lot of time adding more and more objects that are really
> needed, but linker doesn't pick them up.

> For the distribution, we simply remove all Java *.a libraries, but it would
> be a build time win if we don't have to compile them altogether.

We had a patch that did exactly this at some point, but RTH said it
broke GNU/Linux/alpha and never gave me the details on what the
failure mode was, and I wasn't able to trigger the error myself.  I
still have the patch in my tree, and it does indeed save lots of
cycles.

-- 
Alexandre Oliva http://www.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-03 Thread Alexandre Oliva
On Apr 29, 2005, Richard Henderson <[EMAIL PROTECTED]> wrote:

> On Fri, Apr 29, 2005 at 01:30:13PM -0400, Ian Lance Taylor wrote:
>> I don't know of a way to tell libtool to not do duplicate compiles.
>> You can use -prefer-pic, but at least from looking at the script it
>> will still compile twice, albeit with -fPIC both times.

> Incidentally, libtool does not compile things twice when you use
> convenience libraries.

Yes, it does, because at compile time libtool still doesn't know
that you're going to use the object file only for convenience
libraries.  Besides, the fact that only the PIC version of object
files is used for convenience libraries is effectively a limitation
of libtool that should eventually be addressed.

We should try to reinstate that --tag disable-static patch and get
detailed information on what broke for you, and fix that.

-- 
Alexandre Oliva http://www.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-05 Thread Alexandre Oliva
On May  5, 2005, Andrew Haley <[EMAIL PROTECTED]> wrote:

> Per Bothner writes:
>> 
>> We could also save time by making --disable-static the default.
>> Building static libraries is not very useful on other than
>> embedded-class systems.

> I strongly agree.

The savings of creating static libraries would be small if we
refrained from building non-PIC object files.  This is exactly what
--tag disable-static for compilations accomplishes, and we had a patch
to use that in our tree for some time, but RTH ran into a (probably
libtool) bug on alpha that led him to revert the change, and then
didn't provide enough info for anyone else without access to an alpha
box to figure out what the problem was and then try to fix it :-(

-- 
Alexandre Oliva http://www.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: RFC: (use) useful for avoiding unnecessary compare instructions during cc0->CCmode ?!

2005-05-14 Thread Alexandre Oliva
On May 14, 2005, Björn Haase <[EMAIL PROTECTED]> wrote:

> I.e. expand
> would insert two instructions after the double-set instruction that contain
> the two individual sets and an additional "use" statement. I.e. above
> sequence after expand then would look like

> (parallel [
>   (set (reg:SI 100) (minus:SI (reg:SI 101) (reg:SI 102)))
>   (set (reg:CC CC_xxx) (compare (reg:SI 101) (reg:SI 102)))])
> (parallel [ (set (reg:SI 100) (minus:SI (reg:SI 101) (reg:SI 102)))
>             (use (reg:SI 100))])
> (parallel [ (set (reg:CC CC_xxx) (compare (reg:SI 101) (reg:SI 102)))
>             (use (reg:CC))])

You'd then have to some how arrange for the second and third insns to
not be removed as redundant, and come up with some additional work
around for the case when there's an overlap between output and input.

-- 
Alexandre Oliva http://www.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Alexandre Oliva
On May 16, 2005, Robert Dewar <[EMAIL PROTECTED]> wrote:

> After all, you can buy from Dell today a 2.4GHz machine with a 17"
> monitor, DVD drive, and 256Meg memory for $299 complete. Sure, some
> people cannot even afford that, but it is not clear that the gcc
> project can regard this as a major user segment that should be taken
> into account.

Just step back for a second and consider that the most common
computation platform these days is cell phones.  Also consider that a
number of cell phone manufacturers are adopting, or considering
adopting, GNU/Linux.  Consider that at least some of them are going to
enable users to download programs into the cell phones and run them.
Also consider that not all cell phones are identical.

Now wouldn't it be nice to be able to download some useful program in
source form and build it locally on your cell phone, while on the
road?  Sure, few people might be able to accomplish that without a
nice building wizard front-end, but that's doable.  Would we want GCC
to be tool that prevents this vision from coming true?

-- 
Alexandre Oliva http://www.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Alexandre Oliva
On May 16, 2005, Russ Allbery <[EMAIL PROTECTED]> wrote:

> And package maintainers will never take cross-compilation seriously even
> if they really want to because they, for the most part, can't test it.

configure --build=i686-pc-linux-gnu \
--host=i686-somethingelse-linux-gnu 

should be enough to exercise most of the cross-compilation issues, if
you're using a sufficiently recent version of autoconf, but I believe
you already knew that.

The most serious problem regarding cross compilation is that it's
regarded as hard, so many people would rather not even bother to try
to figure it out.  So it indeed becomes a hard problem, because then
you have to fix a lot of stuff in order to get it to work.

-- 
Alexandre Oliva http://www.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Alexandre Oliva
On May 16, 2005, Russ Allbery <[EMAIL PROTECTED]> wrote:

> Alexandre Oliva <[EMAIL PROTECTED]> writes:
>> On May 16, 2005, Russ Allbery <[EMAIL PROTECTED]> wrote:

>>> And package maintainers will never take cross-compilation seriously
>>> even if they really want to because they, for the most part, can't test
>>> it.

>> configure --build=i686-pc-linux-gnu \
>> --host=i686-somethingelse-linux-gnu 

>> should be enough to exercise most of the cross-compilation issues, if
>> you're using a sufficiently recent version of autoconf, but I believe
>> you already knew that.

> What, you mean my lovingly hacked upon Autoconf 2.13 doesn't work?

No, just that it doesn't have the code that just compares build with
host to decide whether to enter cross-compilation mode.  Unless you
back-ported that from autoconf 2.5x, that is.

> Seriously, though, I think the above only tests things out to the degree
> that Autoconf would already be warning about no default specified for
> cross-compiling, yes?

I believe so, yes.  A configure script written with no regard to
cross-compilation may still fail to fail in catastrophic ways if
tested with native-cross.

> Wouldn't you have to at least cross-compile from a
> system with one endianness and int size to a system with a different
> endianness and int size and then try to run the resulting binaries to
> really see if the package would cross-compile?

Different endianness is indeed a harsh test on a package's
cross-compilation suitability.  Simple reliance on size of certain
types can already get you enough breakage.  Cross-building to x86 on
an x86_64 system may already catch a number of these.

> A scary number of packages, even ones that use Autoconf, bypass Autoconf
> completely when checking certain things or roll their own broken macros to
> do so.

+1

> I have never once gotten a single bug report, request, or report of
> anyone cross-compiling INN.  Given that, it's hard to care except in
> some abstract cleanliness sense

But see, you do care, and you're aware of the issues, so it just
works.  Unfortunately not all maintainers have as much knowledge or
even awareness about the subject as you do.

-- 
Alexandre Oliva http://www.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Alexandre Oliva
On May 16, 2005, Florian Weimer <[EMAIL PROTECTED]> wrote:

> Is this really necessary?  I would think that a LD_PRELOADed DSO which
> prevents execution of freshly compiled binaries would be sufficient to
> catch the most obvious errors.

This would break legitimate tests on the build environment, that use
e.g. CC_FOR_BUILD.

-- 
Alexandre Oliva http://www.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: GCC 3.4.4 RC2

2005-05-16 Thread Alexandre Oliva
On May 16, 2005, Georg Bauhaus <[EMAIL PROTECTED]> wrote:

> - cd ada/doctools && gnatmake -q xgnatugn
> + cd ada/doctools && gnatmake -q --GCC=$(CC) xgnatugn -largs --GCC=$(CC)

Don't you need quotes around $(CC), for the general case in which it's
not as simple as `gcc', but rather something like `ccache distcc gcc'
or just `gcc -many -Options'?

-- 
Alexandre Oliva http://www.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-17 Thread Alexandre Oliva
On May 17, 2005, Karel Gardas <[EMAIL PROTECTED]> wrote:

> you see that 4.0 added "embedded" platforms like arm-none-elf and
> mips-none-elf to the primary platforms list.

These are only embedded targets.  You can't run GCC natively on them,
so they don't help embedded hosts in any way.

-- 
Alexandre Oliva http://www.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: The utility of standard's semantics for overflow

2005-07-01 Thread Alexandre Oliva
On Jun 29, 2005, Olivier Galibert <[EMAIL PROTECTED]> wrote:

>   char x = (char)((unsigned char)y + (unsigned char)z)

> is too ugly to live.

Yeah, indeed, one shouldn't assume char is signed :-) :-)

(strictly speaking, you don't, but I thought this would be funny so I
went ahead and posted it)

-- 
Alexandre Oliva http://www.lsd.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: The utility of standard's semantics for overflow

2005-07-01 Thread Alexandre Oliva
On Jun 29, 2005, "Dave Korn" <[EMAIL PROTECTED]> wrote:

>   In fact, doesn't this suggest that in _most_ circumstances, *saturation*
> would be the best behaviour?

If that's for user-introduced overflows, it could be useful in some
cases, but I wouldn't go as far as `most'.

For compiler-introduced overflows (e.g., re-association), wrap
semantics enable some optimizations that you just couldn't perform
otherwise.  Consider the already-mentioned example:

  (a + b) + c

if you re-group this as:

  a + (b + c)

and `b + c' overflows, and then the addition with a underflows, that's
not a problem, since modulo semantics will give you the correct
result.

Turning:

  a * b + a * c

into

  a * (b + c)

might be a problem should b + c overflow, but if you use modulo
semantics, you'll always get the correct result for cases in which
none of the original operations overflowed.


What you must be careful to avoid, however, is deriving further
assumptions from `(b + c) does not overflow'.  Since it's a
compiler-introduced operation in both cases, that's only valid if you
can assume modulo semantics, assuming it doesn't overflow will lead
you to wrong conclusions.

-- 
Alexandre Oliva http://www.lsd.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: updating libtool, etc.

2005-07-01 Thread Alexandre Oliva
On Jun 30, 2005, Geoff Keating <[EMAIL PROTECTED]> wrote:

> Does anyone mind if I update libtool to the latest released version,
> 1.5.18, and regenerate everything with automake 1.9.5?

I'm pretty sure the latest released version would introduce some
regressions for features that we added to the libtool 2.0 branch, but
not to the 1.5 branch :-(

-- 
Alexandre Oliva http://www.lsd.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: Some notes on the Wiki

2005-07-12 Thread Alexandre Oliva
On Jul 11, 2005, Daniel Berlin <[EMAIL PROTECTED]> wrote:

> In fact, a lot of projects don't even bother to distribute anything but
> HTML docs anymore (regardless of how they browse it).

And that's a pity, because it's a bit of a pain to turn the output of
grep -r regexp docs/HTML into something the browser will display
properly, especially when there are multiple hits.

The stand-alone info tool just rules at that; it's invaluable to
search GCC docs like that.  Having dozens of web pages instead would
make such searches intolerable.

-- 
Alexandre Oliva http://www.lsd.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: 4.2 Project: "@file" support

2005-08-27 Thread Alexandre Oliva
On Aug 25, 2005, DJ Delorie <[EMAIL PROTECTED]> wrote:

> If "@string" is seen, but "string" does not represent an existing
> file, the string "@string" is passed to the program as-is.

With the terrible side effect of letting people think their
applications will just work, but introducing the very serious risk of
security problems, leading to, say:

gcc: dj:yourpassword:1234:567:DJ: invalid argument

instead of 

gcc: @/etc/passwd: invalid argument


Sure this is probably not so much of an issue for GCC (although remote
compile servers are not totally unheard of), but it could easily
become a very serious problem for other applications that might take
filenames from the network and worry about quoting - but not @; those
would then need fixing.

-- 
Alexandre Oliva http://www.lsd.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: When is it legal to compare any pair of pointers?

2005-09-14 Thread Alexandre Oliva
On Sep 13, 2005, Daniel Jacobowitz <[EMAIL PROTECTED]> wrote:

> This bit binutils, in the form of a crash in a hash function on
> Solaris.  I think that was pointer subtraction, rather than comparison,
> however.

> Perhaps someone who remembers this problem more clearly than
> I do can chip in if I've gotten it totally wrong.

Yep, it was pointer subtraction, and GCC actually optimized the
division, which could in theory be assumed to be exact, into a
multiplication by a large constant (aah, the wonders of modulo
arithmetics :-), and that's what broke some sorting function on
Solaris.  And I was the lucky guy who got to debug that :-)

-- 
Alexandre Oliva http://www.lsd.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: checksum files not ^C safe

2005-09-15 Thread Alexandre Oliva
On Sep 15, 2005, Geoffrey Keating <[EMAIL PROTECTED]> wrote:

> On 14/09/2005, at 5:32 PM, Mike Stump wrote:

>> If you output to a temp file, and then mv them to the final file,
>> they will be (I think) safe.

> From the 'make' documentation, node 'Interrupts':

>> If `make' gets a fatal signal while a command is executing, it may
>> delete the target file that the command was supposed to update.
>> This is
>> done if the target file's last-modification time has changed since
>> `make' first checked it.

> So, I think this is safe.  The file will be deleted and then re-built
> next time you run 'make'.

That unfortunately doesn't cover power failures and the like, which may
leave an incomplete file behind.  The use of a temp file has been the
right approach, recommended forever, and used all over the place in
the GCC build machinery.

-- 
Alexandre Oliva http://www.lsd.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: When is it legal to compare any pair of pointers?

2005-09-15 Thread Alexandre Oliva
On Sep 14, 2005, Joe Buck <[EMAIL PROTECTED]> wrote:

> On Wed, Sep 14, 2005 at 02:15:43PM -0300, Alexandre Oliva wrote:

>> Yep, it was pointer subtraction, and GCC actually optimized the
>> division, that could in theory be assumed to be exact, into a
>> multiplication by a large constant (aah, the wonders of modulo
>> arithmetics :-)

> People that don't like the GCC optimization

I'm not sure how you got the impression that I didn't like it.
I have no objection at all; I was just providing the additional
details on the bug we'd run into because of unspecified uses of
pointer subtraction, as requested by DanJ.

-- 
Alexandre Oliva http://www.lsd.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


RFC: TLS improvements for IA32 and AMD64/EM64T

2005-09-15 Thread Alexandre Oliva
Over the past few months, I've been working on porting to IA32 and
AMD64/EM64T the interesting bits of the TLS design I came up with for
FR-V, achieving some impressive speedups along with slight code size
reductions in the most common cases.

Although the design is not set in stone yet, it's fully implemented
and functional with patches I'm about to post for binutils, gcc and
glibc mainline, as follow-ups to this message, except that the GCC
patch will go to gcc-patches, as expected.

The specs RFC is attached.  Comments are welcome.

Thread Local Storage Descriptors for IA32 and AMD64/EM64T

  Version 0.9.2 - 2005-09-15

 Alexandre Oliva <[EMAIL PROTECTED], [EMAIL PROTECTED]>


Introduction


While porting NPTL to the FR-V architecture, an idea occurred to me
that would enable significant improvements to the General Dynamic and
Local Dynamic access models to thread-local variables.  These methods
are known to be extremely inefficient because of the need to call a
function to obtain the address of a thread-local variable, a call
that can often be quite expensive.

The reason for calling such a function is that, when code is compiled
for a dynamic library (the cases in which these access models are
used), it is not generally possible to know whether a thread-local
variable is going to be allocated in the Static Thread Local Storage
Block or not.  Dynamic libraries that are loaded at program start up
can have their thread local storage sections assigned to this static
block, since their TLS requirements are all known before the program
is initiated.  Libraries loaded at a later time, however, may need
dynamically-allocated storage for their TLS blocks.

In the former case, the offset from the Thread Pointer, usually held
in a register, to the thread-local variable is going to be the same
for all threads, whereas in the latter case, such offset may vary, and
it may even be necessary to allocate storage for the variable at the
time it is accessed.

Existing implementations of GD and LD access models did not take
advantage of this run-time constant to speed up the case of libraries
loaded before program start up, a case that is certainly the most
common.

Even though libraries can choose more efficient access models, they
can only do so by giving up the ability for the modules that define
such variables to be loaded after program start up, since different
code sequences have to be generated in this case.

The method proposed here doesn't make a compile-time trade off; it
rather decides, at run time, how each variable can be accessed most
efficiently, without any penalty on code or data sizes in case the
variable is defined in an initially-loaded module, and with some
additional data, allocated dynamically, for the case of late-loaded
modules.  In both cases, performance is improved over the traditional
access models.

Another advantage of this novel design for such access models is that
it enables relocations to thread-local variables to be resolved
lazily.


Background
==

Thread-local storage is organized as follows: for every thread, two
blocks of memory are allocated: a Static TLS block and a Dynamic
Thread Vector.  A thread pointer, normally a reserved register, points
to some fixed location in the Static TLS block, that contains a
pointer to the dynamic thread vector at some fixed location as well.

TLS for the main executable, if it has a TLS section, is also at a
fixed offset from the thread pointer.  Other modules loaded before the
program starts will also have their TLS sections assigned to the
Static TLS block.

The dynamic thread vector starts with a generation count, followed by
pointers to TLS blocks holding thread-specific copies of the TLS
sections of each module.

If modules are loaded at run time, the dynamic thread vector may need
to grow, and the corresponding TLS blocks may have to be allocated
dynamically.  The generation count is used to control whether the DTV
is up-to-date with regard to newly-loaded or unloaded modules,
enabling the reuse of entries without confusion on whether a TLS block
has been created and initialized for a newly-loaded module or whether
that block was used by a module already unloaded, that is still
waiting for deallocation.


Programs can access thread-local variables by using code that follows
4 different models: Local Exec, Initial Exec, General Dynamic and
Local Dynamic.

Local Exec is only applicable when code in the main executable
accesses a thread-local variable also defined in the main executable.
In such cases, the offset from the Thread Pointer to the variable can
be computed by the linker, so the access model consists in simply
adding this offset to the Thread Pointer to compute the address of the
variable.

Initial Exec is applicable when code in the main executable accesses
thread-local variables that are defined in other loadable modules that
are dependencies of t

Re: Error: Local symbol '.LTHUNK0' can't be equated to ......

2005-10-01 Thread Alexandre Oliva
On Sep 30, 2005, Benjamin Redelings <[EMAIL PROTECTED]> wrote:

>   Recently I've been getting strange errors on ill-formed code.  It looks
> as if the compiler is not stopping after an error, but running the
> assembler anyway:

Are you compiling with -pipe, by any chance?

-- 
Alexandre Oliva http://www.lsd.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: Running ranlib after installation - okay or not?

2005-10-10 Thread Alexandre Oliva
On Sep 29, 2005, "Peter O'Gorman" <[EMAIL PROTECTED]> wrote:

> I posted a patch that nobody has had time to look at for this, even if
> it is not acceptable (it would probably be better if it reset the
> permissions after calling ranlib) I'd appreciate some feedback :)

> <http://gcc.gnu.org/ml/gcc-patches/2005-09/msg00201.html>

I'd missed it, sorry.

The patch is ok, but it's not in yet because the format of the
ChangeLog entries will require manual application in every one of the
files, and that takes time.  If you'd repost the patch in a format
that enables it to be installed with say cl2patch, or even with the
raw ChangeLog diffs such that I use clcleanup and cl2patch to apply it
to the current tree, I'll check it in when the mainline policy allows
it.

Thanks,

-- 
Alexandre Oliva http://www.lsd.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


RFC: weakref GCC attribute and .weakref assembly directive

2005-10-10 Thread Alexandre Oliva
I'll probably post a patch for the assembler, that implements this,
tonight.  The compiler still needs a bit more work in the testsuite
(or in the implementation, to make it independent from assembler
support) so it might take longer.

Comments are welcome.

 Using weakrefs to avoid weakening strong references
   
 Alexandre Oliva <[EMAIL PROTECTED]>
  2005-10-10
   
Introduction


Consider a header file that defines inline functions that would like
to use (or just test for a definition of) a certain symbol (function,
variable, whatever), if it is defined in the final program or one of
the libraries it links with, but that have alternate code paths in
case the symbol is not defined, so it would like to not force the
symbol to be defined.

This is the case of gthr-* headers in GCC, that libstdc++ uses and
exposes to users, creating a number of problems.

Such a header has traditionally been impossible to implement without
declaring the symbol as weak, which has the effect that any references
to the symbol in the user's code will also be regarded as weak.  This
has two negative side effects:

- if the function is defined in a static library, and the library is
linked into the program, the object file containing the definition may
not be linked in, because all references to it are weak, even
references that should have been strong.

- if the user accidentally fails to link in the library providing the
referenced symbol, she won't get an error message, and the code that
assumed strong references is likely to crash.


Existing solutions
==

One way to avoid this problem is to move the direct reference to the
symbol from the inline function into a function in a separate library,
or even move the entire function there.  The library references the
symbol weakly, without affecting user code.  This probably impacts
performance negatively, and may require a new library to be linked in,
which an all-inline header file (say, C++ template definitions) would
rather avoid.

Another way to avoid the problem is to create a variable in a
separate library, initialized with a weak reference to the symbol, and
access the variable in the inline function.  This still has a small
impact on performance and may require a new library, but the most
serious problem is that it defines a variable as part of the interface
of a library, which is generally regarded as poor practice.


Weakrefs


The idea to address the problem is to enable the compiler to
distinguish references that are intended to be weak from those that
are to be strong, and combine them in the same way that the linker
would combine an object file with a weak undefined symbol and another
object containing a symbol with the same name.  The idea was to enable
people to write code as if they had combined two such object files
into a single translation unit.

The idea of a weak alias may immediately come to mind, but this is not
what we are looking for.  A weak alias is a definition that is in
itself weak (i.e., it yields to other definitions), that holds the
same value as another definition in the same translation unit.  This
other definition can be strong or weak, but it must be a definition.
A weak alias cannot reference an undefined symbol, weak or strong.

What we need, in contrast, is some means to define an alias that
doesn't, by itself, cause an external definition of the symbol to be
brought in.  If the symbol is referenced directly elsewhere, however,
then it must be defined.  This is similar to the notion of weak
references in garbage collection literature, in which a strong
reference stops an object from being garbage-collected, but a weak
reference does not.  I've decided to name this kind of alias a
weakref.


I could have introduced means in the compiler to create such weakrefs,
and handled them entirely within the compiler, as long as it could see
the entire translation unit before deciding whether or not to issue a
.weak directive for the referenced symbol.

However, since the notion can be useful in the assembler as well,
especially for large or complex preprocessed assembly sources, I went
ahead and decided to implement it in the assembler, and get the
compiler to use that.

This notion may also be useful for compilers that combine multiple
translation units into a single assembly output file.


Assembler implementation


The following syntax was chosen for assembly code:

  .weakref <alias>, <target>

The semantics are as follows:

- if <target> is referenced or defined, then .weakref has no effect
whatsoever on its symbol;

- if <target> is never referenced or defined other than in .weakref
directives, but <alias> is, then <target> is marked as weak undefined
in the symbol table;

- multiple aliases may be weakrefs to the same target, and the effect
is equivalent to having a single 

Re: RFC: weakref GCC attribute and .weakref assembly directive

2005-10-18 Thread Alexandre Oliva
On Oct 13, 2005, Daniel Jacobowitz <[EMAIL PROTECTED]> wrote:

> On Thu, Oct 13, 2005 at 08:33:01AM -0500, Aaron W. LaFramboise wrote:
>> Could you compare your novel weak references to PECOFF's notion of "weak 
>> externals"?
>> 
>> .weak sym1 = sym2  # Analogous to: .weakref sym1, sym2

> The difference is that ".weak sym1 = sym2" resolves to sym1 (if
> available) else sym2; but ".weakref sym1, sym2" resolves to sym2 (if
> available) else zero.  Also sym1 does not become an external, only a
> local alias, IIRC.

Yep.  In `.weak sym1 = sym2', sym1 is a weak alias, which I actually
contrast with a weakref in the spec text I posted.

-- 
Alexandre Oliva http://www.lsd.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: A couple more subversion notes

2005-10-20 Thread Alexandre Oliva
On Oct 20, 2005, Daniel Berlin <[EMAIL PROTECTED]> wrote:

> svn diff -r1:r2 is only slow in the very small diff case, where ssh
> handshake time dominates the amount of data to be transferred.

And then, cvs diff -r1 -r2 also requires a ssh handshake, so I don't
get what it is that people have been objecting to.  Can anyone please
clarify?

-- 
Alexandre Oliva http://www.lsd.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: A couple more subversion notes

2005-10-20 Thread Alexandre Oliva
On Oct 20, 2005, [EMAIL PROTECTED] (Richard Kenner) wrote:

> I'm very concerned that we're greating increasing the barrier to entry for
> work on GCC.  cvs is very intuitive and simple to use.

The same can be said of svn, so it's hardly a great increase in the barrier.

> I'm not seeing the same thing for svn/svk, but instead a series of
> increasingly complex suggestions on how to do things efficiently.

Make that *more* efficiently.  AFAIK svn is much more efficient than
cvs by default in all cases, except for disk space use.  I suppose if
you feel strongly about duplicate copies of files in your tree,
there's always hardlink and similar solutions, which will then require
more discipline from you in not accidentally modifying the -base
files.  Yes, that's yet another complex suggestion to make svn even
more efficient, but you're not *required* to use any of them.

> Saying "casual developers of GCC can use snapshots" is not something I think
> we ought to be doing.

Totally agreed.  Fortunately, installing svn in such a way that it
works fine by default is pretty easy, and that leaves room for
additional efficiency improvements that might not even be possible
with cvs.

-- 
Alexandre Oliva http://www.lsd.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: Problem building svn on x86-64

2005-10-25 Thread Alexandre Oliva
On Oct 25, 2005, [EMAIL PROTECTED] (Richard Kenner) wrote:

> cd subversion/libsvn_subr && /bin/sh /gcc/gcc/subversion-1.2.3/libtool 
> --tag=CC --silent --mode=link gcc  -g -O2  -g -O2 -pthread  -DNEON_ZLIB   
> -rpath /usr/local/lib -o libsvn_subr-1.la  auth.lo cmdline.lo config.lo 
> config_auth.lo config_file.lo config_win.lo ctype.lo date.lo error.lo hash.lo 
> io.lo lock.lo md5.lo opt.lo path.lo pool.lo quoprint.lo sorts.lo stream.lo 
> subst.lo svn_base64.lo svn_string.lo target.lo time.lo utf.lo utf_validate.lo 
> validate.lo version.lo xml.lo 
> /gcc/gcc/subversion-1.2.3/apr-util/libaprutil-0.la -lgdbm -ldb-4.2 -lexpat 
> /gcc/gcc/subversion-1.2.3/apr/libapr-0.la -lrt -lm -lcrypt -lnsl  -lpthread 
> -ldl 
> /usr/lib/libgdbm.so: could not read symbols: Invalid operation
> collect2: ld returned 1 exit status

> The problem is that /usr/lib/libgdbm is a 32-bit file.  It should be using
> /usr/lib64 and ld should know that.  But it looks like something is
> short-circuiting stuff.  Does anybody know enough about svn build
> procedures to know how to fix this?

Libtool searches /usr/lib instead of /usr/lib64 because the Debian
folks decided to not follow the AMD64 ABI, and libtool remained
neutral in this regard (i.e., broken for both Debian and ABI-compliant
distros).  Try adding -L/usr/lib64 or something along these lines to
LDFLAGS, so that libtool finds libs in the right place, and it should
work.

-- 
Alexandre Oliva http://www.lsd.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: backslash whitespace newline

2005-10-27 Thread Alexandre Oliva
On Oct 27, 2005, Robert Dewar <[EMAIL PROTECTED]> wrote:

> When you step out of customs at the Bangkok airport, you see a big
> booth saying "ISO 9000 compliant taxi service" :-)

``Sorry, sir, can't drive you $there, my procedure says I can only take
passengers $elsewhere.  I'm afraid the driver whose procedure is to
take people $there is on leave today.''  :-)

-- 
Alexandre Oliva http://www.lsd.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: Exception propagation problem on IA-64/Linux

2005-10-28 Thread Alexandre Oliva
On Oct 28, 2005, Mark Mitchell <[EMAIL PROTECTED]> wrote:

> In general, comparison of type_info objects is supposed to be done by
> checking for address equality of the type info strings.

> In the situation where we
> do not use strcmp, I would not expect to see that bug -- because I would
> expect that the type_info objects and the corresponding type_info
> strings are local symbols.

If the strings turn out to be identical and the linker merges them, we
fail...

-- 
Alexandre Oliva http://www.lsd.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: weakref and static

2005-12-11 Thread Alexandre Oliva
On Dec  1, 2005, [EMAIL PROTECTED] (Geoffrey Keating) wrote:

> The 'weakref' attribute is defined in terms of aliases.  Now,
> if the user writes

> void foo(void) { }
> void bar(void) __attribute__((alias ("foo")));

> then that causes 'bar' to be defined.  Other translation units can use
> 'bar'.  If 'weakref' is to define an alias, it should behave the same
> way.

weakref does not define an alias.  It defines a weakref.  It happens
to use some of GCC's aliasing machinery under the covers, but that's
all.  It's not an alias like non-weakref aliases, anyway.

> The easiest solution to this is to require that weakrefs must be
> 'static', because the name that they define is not visible outside
> this translation unit.

While this is true, not all properties of static names hold for
weakrefs.  If the name they refer to is not itself static, none of the
local-binding analysis properties will apply correctly if the weakref
is marked as static.  I felt it was safer to keep it extern.

>   * doc/extend.texi (Function Attributes): Mention that an alias
>   attribute creates a definition for the thing it's attached to.

Except for weakrefs, that may introduce a local (weakref) definition,
if the assembler supports .weakref, or no definition whatsoever, if it
does not.

>   Change the documentation for weakref to say that the thing
>   it's attached to must be static.

Err...  The above is a bit misleading, in that it at first appeared to
be referring to the target of the weakref, not to the weakref itself.
The weakref may alias to something that is static or not (the whole
point is being able to refer to symbols in other translation units
with weak and non-weak semantics).  The weakref itself could be static
or extern, and both possibilities could be argued for and match
certain uses.  Since attribute alias has traditionally required the
extern keyword, I figured it would make sense to keep it as such, but
if you prefer to change that and adjust all cases in which the use of
static might cause problems, that's certainly fine with me.  I don't
see that you're taking care of such cases, though.

-- 
Alexandre Oliva http://www.lsd.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: weakref and static

2005-12-17 Thread Alexandre Oliva
On Dec 11, 2005, Alexandre Oliva <[EMAIL PROTECTED]> wrote:

> On Dec  1, 2005, [EMAIL PROTECTED] (Geoffrey Keating) wrote:

>> The easiest solution to this is to require that weakrefs must be
>> 'static', because the name that they define is not visible outside
>> this translation unit.

> While this is true, not all properties of static names hold for
> weakrefs.  If the name they refer to is not itself static, none of the
> local-binding analysis properties will apply correctly if the weakref
> is marked as static.  I felt it was safer to keep it extern.

As evidenced by the following testcase:

extern int i;
static int j __attribute__((weakref("i")));
int f() {
  return j;
}

whose output assembly for AMD64 is:
[...]
movl    j(%rip), %eax
[...]
.weakref    j,i
[...]

So you see, j(%rip) is only valid when a symbol is known to be defined
in the same loadable module, but this is definitely not known for the
testcase above.

I thus propose your change to be reverted, and request you to explain
what you were trying to fix with this patch so that I can try to do
something about it.

-- 
Alexandre Oliva http://www.lsd.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: weakref and static

2005-12-17 Thread Alexandre Oliva
On Dec 17, 2005, Alexandre Oliva <[EMAIL PROTECTED]> wrote:

> On Dec 11, 2005, Alexandre Oliva <[EMAIL PROTECTED]> wrote:
>> On Dec  1, 2005, [EMAIL PROTECTED] (Geoffrey Keating) wrote:

>>> The easiest solution to this is to require that weakrefs must be
>>> 'static', because the name that they define is not visible outside
>>> this translation unit.

>> While this is true, not all properties of static names hold for
>> weakrefs.  If the name they refer to is not itself static, none of the
>> local-binding analysis properties will apply correctly if the weakref
>> is marked as static.  I felt it was safer to keep it extern.

> As evidenced by the following testcase:

Grr, weakrefs, weak evidence :-)

As it turned out, this is an effect of COPY relocs or something along
these lines, that would indeed cause the data object to be local to
the executable.  As soon as I added -fpic (or even -fpie), it no
longer emitted code that assumed the data object to bind locally, as
expected.

What a great way to get oneself embarrassed in public! :-)

> I thus propose your change to be reverted, and request you to explain
> what you were trying to fix with this patch so that I can try to do
> something about it.

Nevermind this bit :-)


Jakub, do you have any further details on the s390 bootstrap failure
introduced with this patch?  I'm not aware of it, and I feel I should
do something about it, if it's not fixed yet.

That said, I do feel that testing for weakrefs in the binding test is
probably slower than ideal, and could possibly lead to incorrect
decisions elsewhere, in places that should be using the binding
function but aren't, which is why the change still makes me very
uncomfortable.  Geoff, did you actually search for any such places, or
had reasons to expect them to not exist?

Thanks,

-- 
Alexandre Oliva http://www.lsd.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: Status and rationale for toplevel bootstrap (was Re: Example of debugging GCC with toplevel bootstrap)

2006-01-18 Thread Alexandre Oliva
On Jan 16, 2006, [EMAIL PROTECTED] (Richard Kenner) wrote:

>  What it used to be "make" and "make bootstrap" are (and will be)
> "./configure --disable-bootstrap && make" and "./configure && make".

> Rerunning configure is a pain!  It's never just "./configure", but
> has the source directory plus any options.

./config.status --recheck will show you exactly what it is.  If you
find that too much trouble, try this:

sed -n '/^#.*\/configure/ s,^#,,p' config.status | sh

-- 
Alexandre Oliva http://www.lsd.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: ChangeLog files - server and client scripts

2020-05-25 Thread Alexandre Oliva
On May 25, 2020, Martin Liška  wrote:

> On 5/21/20 5:14 PM, Rainer Orth wrote:
>> * In changelog_location, you allow only (among others) "a/b/c/" and
>> "\ta/b/c/".  Please also accept the "a/b/c:" and "\ta/b/c:" forms
>> here: especially the second seems quite common.

> Ok, I believe these formats are supported as well. Feel free to mention
> some git revisions that are not recognized.

I've long used the following syntax to start ChangeLog entries:

for  <dir>/ChangeLog

It was introduced over 20 years ago, with the (so far never formally
released) GNU CVS-Utilities.  Among other goodies, there were scripts to
turn diffs for ChangeLog files into the above format, and vice-versa,
that I've used to this day.  It went through cvs, svn and git.  It would
be quite nice if I could keep on using it with GCC.

The patch below seems to be enough to pass gcc-verify, and to recognize
and print the expected ChangeLog files.  I suppose I'll have to adjust
the formatting to be able to push it, but, aside from that, is it ok to
install?

Do any hooks need to be adjusted to match?


I'm also a little concerned about '*/ChangeLog.*' files.  Are we no
longer supposed to introduce them, or new ChangeLog entries to them?  Or
should the scripts be extended to cover them?


for  contrib/ChangeLog

* gcc-changelog/git_commit.py (changelog_regex): Accept optional
'for' prefix.

diff --git a/contrib/gcc-changelog/git_commit.py b/contrib/gcc-changelog/git_commit.py
index 2cfdbc8..b8362c1 100755
--- a/contrib/gcc-changelog/git_commit.py
+++ b/contrib/gcc-changelog/git_commit.py
@@ -144,7 +144,7 @@ misc_files = [
 author_line_regex = \
 re.compile(r'^(?P<datetime>\d{4}-\d{2}-\d{2})\ {2}(?P<name>.*  <.*>)')
 additional_author_regex = re.compile(r'^\t(?P<spaces>\ *)?(?P<name>.*  <.*>)')
-changelog_regex = re.compile(r'^([a-z0-9+-/]*)/ChangeLog:?')
+changelog_regex = re.compile(r'^(?:[fF]or +)([a-z0-9+-/]*)/ChangeLog:?')
 pr_regex = re.compile(r'\tPR (?P<component>[a-z+-]+\/)?([0-9]+)$')
 dr_regex = re.compile(r'\tDR ([0-9]+)$')
 star_prefix_regex = re.compile(r'\t\*(?P<spaces>\ *)(?P<content>.*)')



-- 
Alexandre Oliva, freedom fighter    he/him    https://FSFLA.org/blogs/lxo/
Free Software Evangelist  Stallman was right, but he's left :(
GNU Toolchain Engineer   Live long and free, and prosper ethically


Re: Writing automated tests for the GCC driver

2020-05-25 Thread Alexandre Oliva
Hello, Giuliano,

On May 25, 2020, Giuliano Belinassi via Gcc  wrote:

> gcc a.c b.o -o a.out
> gcc a.c b.c
> gcc a.S

> and so on. So if you do some radical change to the GCC driver, making
> sure everything is correct get somewhat painful because you have to do
> a clean bootstrap and find out what is going wrong.

I'm about to install a patch that introduces significant changes to the
GCC driver and a large number of tests that covers some of the above.

Its main focus was to check that aux outputs got named as per the
changes introduced in the patch, but I added some baseline tests that
would cover some of the above.

Feel free to extend gcc/testsuite/gcc.misc-tests/outputs.exp to cover
other such baseline cases you may think of.

-- 
Alexandre Oliva, freedom fighter    he/him    https://FSFLA.org/blogs/lxo/
Free Software Evangelist  Stallman was right, but he's left :(
GNU Toolchain Engineer   Live long and free, and prosper ethically


Re: ChangeLog files - server and client scripts

2020-05-26 Thread Alexandre Oliva
Hi, Martin,

On May 26, 2020, Martin Liška  wrote:

>> I've long used the following syntax to start ChangeLog entries:
>> 
>> for  <dir>/ChangeLog

> Ah, it's new for me.

>> 
>> It was introduced over 20 years ago, with the (so far never formally
>> released) GNU CVS-Utilities.  Among other goodies, there were scripts to
>> turn diffs for ChangeLog files into the above format, and vice-versa,
>> that I've used to this day.  It went through cvs, svn and git.  It would
>> be quite nice if I could keep on using it with GCC. ^^

For clarity, I meant the syntax in the last sentence above.  The
ChangeLog-related functionality in the scripts now becomes mostly
obsolete.

>> The patch below seems to be enough to pass gcc-verify, and to recognize
>> and print the expected ChangeLog files.

'cept it broke cases without 'for' because I missed a '?' in the
regexp.  Good thing I had to adjust for the old format to be able to
push it ;-)  2x0 ;-)

>> Do any hooks need to be adjusted to match?

> Yes, we sync the script from the GCC repository.

Here's what I'm about to push


accept for dir/ChangeLog entries

From: Alexandre Oliva 

I've long introduced ChangeLog entries as "for  dir/ChangeLog", a
format adopted by GNU CVS-Utilities some 20 years ago.  My commits
have been formatted like this forever.

This patch makes it acceptable for git gcc-verify.


contrib/ChangeLog:

* gcc-changelog/git_commit.py (changelog_regex): Accept optional
'for' prefix.
---
 contrib/gcc-changelog/git_commit.py |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/contrib/gcc-changelog/git_commit.py b/contrib/gcc-changelog/git_commit.py
index 2cfdbc8..732a9bd8 100755
--- a/contrib/gcc-changelog/git_commit.py
+++ b/contrib/gcc-changelog/git_commit.py
@@ -144,7 +144,7 @@ misc_files = [
 author_line_regex = \
 re.compile(r'^(?P<datetime>\d{4}-\d{2}-\d{2})\ {2}(?P<name>.*  <.*>)')
 additional_author_regex = re.compile(r'^\t(?P<spaces>\ *)?(?P<name>.*  <.*>)')
-changelog_regex = re.compile(r'^([a-z0-9+-/]*)/ChangeLog:?')
+changelog_regex = re.compile(r'^(?:[fF]or +)?([a-z0-9+-/]*)/ChangeLog:?')
 pr_regex = re.compile(r'\tPR (?P<component>[a-z+-]+\/)?([0-9]+)$')
 dr_regex = re.compile(r'\tDR ([0-9]+)$')
 star_prefix_regex = re.compile(r'\t\*(?P<spaces>\ *)(?P<content>.*)')
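
For illustration, the amended pattern can be exercised on its own; this is a
quick standalone sketch (the regex is copied from the patch above, the sample
entry lines are mine):

```python
import re

# Regex from the patch: the "[fF]or " prefix is now optional, thanks to
# the '?' after the non-capturing group.
changelog_regex = re.compile(r'^(?:[fF]or +)?([a-z0-9+-/]*)/ChangeLog:?')

# Both the traditional form and the "for <dir>/ChangeLog" form should
# match, with the directory in group 1.
for line in ('contrib/ChangeLog:', 'for  contrib/ChangeLog',
             'For contrib/ChangeLog'):
    m = changelog_regex.match(line)
    assert m is not None and m.group(1) == 'contrib', line
print('all forms matched')
```

Dropping the '?' reproduces the breakage mentioned above: plain
'contrib/ChangeLog:' entries then no longer match.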


-- 
Alexandre Oliva, freedom fighter    he/him    https://FSFLA.org/blogs/lxo/
Free Software Evangelist  Stallman was right, but he's left :(
GNU Toolchain Engineer   Live long and free, and prosper ethically


Re: GCC Testsuite patches break AIX

2020-05-27 Thread Alexandre Oliva
Hello, David,

On May 26, 2020, David Edelsohn  wrote:

> Complaints about -dA, -dD, -dumpbase, etc.

This was the main symptom of the problem fixed in the follow-up commit
r11-635-g6232d02b4fce4c67d39815aa8fb956e4b10a4e1b

Could you please confirm that you did NOT have this commit in your
failing build, and that the patch above fixes the problem for you as it
did for others?


> This patch was not properly tested on all targets.

This problem had nothing to do with targets.  Having Ada enabled, as
I've nearly always done, and for a very long time, to increase test
coverage, was what kept the preexisting bug latent in my testing.


Sorry that I failed to catch it before the initial check in.

-- 
Alexandre Oliva, freedom fighter    he/him    https://FSFLA.org/blogs/lxo/
Free Software Evangelist  Stallman was right, but he's left :(
GNU Toolchain Engineer   Live long and free, and prosper ethically


Re: GCC Testsuite patches break AIX

2020-05-27 Thread Alexandre Oliva
On May 27, 2020, Christophe Lyon via Gcc  wrote:

> On Wed, 27 May 2020 at 16:26, Jeff Law via Gcc  wrote:

>> Any thoughts on the massive breakage on the embedded ports in the testsuite?

I wasn't aware of any.  Indeed, one of my last steps before submitting
the patchset was to fix problems that had come up in embedded ports,
with gcc_adjust_linker_flags and corresponding changes to outputs.exp
itself.

>> Essentially every test that links is failing like this:


>> 
>> > Executing on host: /home/jenkins/workspace/c6x-elf/c6x-elf-obj/gcc/gcc/xgcc
>> > -B/home/jenkins/workspace/c6x-elf/c6x-elf-obj/gcc/gcc/
>> > /home/jenkins/gcc/gcc/testsuite/gcc.c-torture/execute/2112-1.c
>> > gcc_tg.o-fno-diagnostics-show-caret -fno-diagnostics-show-line-numbers
>> > -fdiagnostics-color=never  -fdiagnostics-urls=never-O0  -w   -msim {} 
>> > {}  -
>> > Wl,-wrap,exit -Wl,-wrap,_exit -Wl,-wrap,main -Wl,-wrap,abort -lm  -o
>> > ./2112-1.exe(timeout = 300)
>> > spawn -ignore SIGHUP 
>> > /home/jenkins/workspace/c6x-elf/c6x-elf-obj/gcc/gcc/xgcc
>> > -B/home/jenkins/workspace/c6x-elf/c6x-elf-obj/gcc/gcc/
>> > /home/jenkins/gcc/gcc/testsuite/gcc.c-torture/execute/2112-1.c gcc_tg.o
>> > -fno-diagnostics-show-caret -fno-diagnostics-show-line-numbers 
>> > -fdiagnostics-
>> > color=never -fdiagnostics-urls=never -O0 -w -msim   -Wl,-wrap,exit -Wl,-
>> > wrap,_exit -Wl,-wrap,main -Wl,-wrap,abort -lm -o ./2112-1.exe^M
>> > xgcc: error: : No such file or directory^M

>> Sadly there's no additional output that would help us figure out what went 
>> wrong.

> If that helps, I traced this down to the new gcc_adjust_linker_flags function.

Thanks.  Yeah, H-P observed and submitted a similar report that made me
wonder about empty arguments being passed to GCC.  Jeff's report
confirms the suspicion.  See how there are a couple of {}s after -msim
in the "Executing on host" line, that in the "spawn" line are completely
invisible, only suggested by the extra whitespace.  That was not quite
visible in H-P's report, but Jeff's makes it clear.

I suppose this means there are consecutive blanks in e.g. board's
ldflags, and the split function is turning each consecutive pair of
blanks into an empty argument.  I'm testing a fix (kludge?) in
refs/users/aoliva/heads/testme 169b13d14d3c1638e94ea7e8f718cdeaf88aed65
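
The testsuite machinery here is Tcl/DejaGnu, but the suspected mechanism is
easy to illustrate in Python (the flag string is made up; the point is only how
naive splitting behaves):

```python
# A board's ldflags with an accidental double space between options.
ldflags = '-Wl,-wrap,exit  -msim'

# Splitting on every single blank keeps one empty argument per extra
# blank, which then reaches the compiler driver as the bare '' seen in
# the "Executing on host" logs above.
naive = ldflags.split(' ')
print(naive)   # ['-Wl,-wrap,exit', '', '-msim']

# Splitting on runs of whitespace discards the empty fields instead.
fixed = ldflags.split()
print(fixed)   # ['-Wl,-wrap,exit', '-msim']
```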

-- 
Alexandre Oliva, freedom fighter    he/him    https://FSFLA.org/blogs/lxo/
Free Software Evangelist  Stallman was right, but he's left :(
GNU Toolchain Engineer   Live long and free, and prosper ethically


Re: PCH test errors

2020-05-28 Thread Alexandre Oliva
't seem to work for this target.


- regardless, it might make sense to implement the elaborate version of
the gcc.c change above, to get shorter dump names in pch compilation
from B.X to B.X.gch, though the present behavior is quite defensible; we
might prefer to just document it.


Thoughts?

-- 
Alexandre Oliva, freedom fighter    he/him    https://FSFLA.org/blogs/lxo/
Free Software Evangelist  Stallman was right, but he's left :(
GNU Toolchain Engineer   Live long and free, and prosper ethically


Re: Automatically generated ChangeLog files - script

2020-06-22 Thread Alexandre Oliva
On May 26, 2020, Martin Liška  wrote:

> On 5/26/20 12:15 PM, Pierre-Marie de Rodat wrote:
>>>     * contracts.adb, einfo.adb, exp_ch9.adb, sem_ch12.adb,

> It's not supported right now and it will make the filename parsing
> much more complicated.

Another colleague recently ran into a problem with either:

* $filename <$case>:

or

* $filename [$condition]:

I can't recall which one it was, but the following patch is supposed to
implement both.  Alas, I couldn't figure out how to test it:
git_check_commit.py is failing with:

Traceback (most recent call last):
  File "contrib/gcc-changelog/git_check_commit.py", line 38, in <module>
    not args.non_strict_mode):
  File "/l/tmp/build/gcc/contrib/gcc-changelog/git_repository.py", line 57, in parse_git_revisions
    elif file.renamed_file:
AttributeError: 'Diff' object has no attribute 'renamed_file'


accept <case> and [cond] in ChangeLog

From: Alexandre Oliva 

Only '(' and ':' currently terminate file lists in ChangeLog entries
in the ChangeLog parser.  This rules out such legitimate entries as:

* filename <case>:
* filename [COND]:

This patch extends the ChangeLog parser to recognize these forms.


for  contrib/ChangeLog

* gcc-changelog/git_commit.py: Support CASE and COND.
---
 contrib/gcc-changelog/git_commit.py |   16 
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/contrib/gcc-changelog/git_commit.py b/contrib/gcc-changelog/git_commit.py
index 4a78793..537c667 100755
--- a/contrib/gcc-changelog/git_commit.py
+++ b/contrib/gcc-changelog/git_commit.py
@@ -154,6 +154,7 @@ changelog_regex = re.compile(r'^(?:[fF]or +)?([a-z0-9+-/]*)ChangeLog:?')
 pr_regex = re.compile(r'\tPR (?P<component>[a-z+-]+\/)?([0-9]+)$')
 dr_regex = re.compile(r'\tDR ([0-9]+)$')
 star_prefix_regex = re.compile(r'\t\*(?P<spaces>\ *)(?P<content>.*)')
+end_of_location_regex = re.compile(r'[[<(:]')
 
 LINE_LIMIT = 100
 TAB_WIDTH = 8
@@ -203,14 +204,13 @@ class ChangeLogEntry:
 line = m.group('content')
 
 if in_location:
-# Strip everything that is not a filename in "line": entities
-# "(NAME)", entry text (the colon, if present, and anything
-# that follows it).
-if '(' in line:
-line = line[:line.index('(')]
-in_location = False
-if ':' in line:
-line = line[:line.index(':')]
+# Strip everything that is not a filename in "line":
+# entities "(NAME)", cases "<CASE>", conditions
+# "[COND]", entry text (the colon, if present, and
+# anything that follows it).
+m = end_of_location_regex.search(line)
+if m:
+line = line[:m.start()]
 in_location = False
 
 # At this point, all that's left is a list of filenames


-- 
Alexandre Oliva, freedom fighter    he/him    https://FSFLA.org/blogs/lxo/
Free Software Evangelist  Stallman was right, but he's left :(
GNU Toolchain Engineer   Live long and free, and prosper ethically


Re: TLS Implementation Across Architectures

2020-06-29 Thread Alexandre Oliva
On Jun 25, 2020, Joel Sherrill  wrote:

> Is there some documentation on how it is implemented on architectures not
> in Ulrich's paper?

Uli's paper pre-dates GNU2 TLS, and I'm not sure whether he updated it to
cover it, so https://www.fsfla.org/~lxoliva/writeups/TLS/ might be useful.

-- 
Alexandre Oliva, freedom fighter    he/him    https://FSFLA.org/blogs/lxo/
Free Software Evangelist  Stallman was right, but he's left :(
GNU Toolchain Engineer   Live long and free, and prosper ethically


Re: Automatically generated ChangeLog files - script

2020-07-06 Thread Alexandre Oliva
Sorry it took me so long to get back to this.

On Jun 24, 2020, Martin Liška  wrote:

> Please escape the '[':
> +end_of_location_regex = re.compile(r'[\[<(:]')

check

> and please a test-case for it.

check

Thanks, I've made the changes; sorry it took me so long.

I couldn't figure out how to run the internal gcc-changelog test.


accept <case> and [cond] in ChangeLog

From: Alexandre Oliva 

Only '(' and ':' currently terminate file lists in ChangeLog entries
in the ChangeLog parser.  This rules out such legitimate entries as:

* filename <case>:
* filename [COND]:

This patch extends the ChangeLog parser to recognize these forms.


for  contrib/ChangeLog

* gcc-changelog/git_commit.py: Support CASE and COND.
* gcc-changelog/test_patches.txt: Add test.
---
 contrib/gcc-changelog/git_commit.py|   16 +++
 contrib/gcc-changelog/test_patches.txt |   35 
 2 files changed, 43 insertions(+), 8 deletions(-)

diff --git a/contrib/gcc-changelog/git_commit.py b/contrib/gcc-changelog/git_commit.py
index 4a78793..900a294 100755
--- a/contrib/gcc-changelog/git_commit.py
+++ b/contrib/gcc-changelog/git_commit.py
@@ -154,6 +154,7 @@ changelog_regex = re.compile(r'^(?:[fF]or +)?([a-z0-9+-/]*)ChangeLog:?')
 pr_regex = re.compile(r'\tPR (?P<component>[a-z+-]+\/)?([0-9]+)$')
 dr_regex = re.compile(r'\tDR ([0-9]+)$')
 star_prefix_regex = re.compile(r'\t\*(?P<spaces>\ *)(?P<content>.*)')
+end_of_location_regex = re.compile(r'[\[<(:]')
 
 LINE_LIMIT = 100
 TAB_WIDTH = 8
@@ -203,14 +204,13 @@ class ChangeLogEntry:
 line = m.group('content')
 
 if in_location:
-# Strip everything that is not a filename in "line": entities
-# "(NAME)", entry text (the colon, if present, and anything
-# that follows it).
-if '(' in line:
-line = line[:line.index('(')]
-in_location = False
-if ':' in line:
-line = line[:line.index(':')]
+# Strip everything that is not a filename in "line":
+# entities "(NAME)", cases "<CASE>", conditions
+# "[COND]", entry text (the colon, if present, and
+# anything that follows it).
+m = end_of_location_regex.search(line)
+if m:
+line = line[:m.start()]
 in_location = False
 
 # At this point, all that's left is a list of filenames
diff --git a/contrib/gcc-changelog/test_patches.txt b/contrib/gcc-changelog/test_patches.txt
index 1463fb9..2bf5d1a 100644
--- a/contrib/gcc-changelog/test_patches.txt
+++ b/contrib/gcc-changelog/test_patches.txt
@@ -3160,3 +3160,38 @@ index 823eb539993..4ec22162c12 100644
 -- 
 2.27.0
 
+=== 0001-Check-for-more-missing-math-decls-on-vxworks.patch ===
+From 0edfc1fd22405ee8e946101e44cd8edc0ee12047 Mon Sep 17 00:00:00 2001
+From: Douglas B Rupp 
+Date: Sun, 31 May 2020 13:25:28 -0700
+Subject: [PATCH] Check for more missing math decls on vxworks.
+
+Use the GLIBCXX_CHECK_MATH_DECL macro to check for the full list of
+vxworks math decls.
+
+for libstdc++-v3/ChangeLog:
+
+   * crossconfig.m4 <*-vxworks>: Check for more math decls.
+   * configure [FAKEPATCH]: Rebuild.
+---
+ libstdc++-v3/configure  | 255 
+ libstdc++-v3/crossconfig.m4 |   3 +-
+ 2 files changed, 257 insertions(+), 1 deletion(-)
+
+diff --git a/libstdc++-v3/configure b/libstdc++-v3/configure
+index b5beb45..4ef678e 100755
+--- a/libstdc++-v3/configure
 b/libstdc++-v3/configure
+@@ -1 +1,2 @@
+ 
++
+diff --git a/libstdc++-v3/crossconfig.m4 b/libstdc++-v3/crossconfig.m4
+index fe18288..313f84d 100644
+--- a/libstdc++-v3/crossconfig.m4
 b/libstdc++-v3/crossconfig.m4
+@@ -1 +1,2 @@
+ 
++
+-- 
+2.7.4
+
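
As a standalone sanity check of the stripping step, here is a sketch that
reimplements just the location-cutting logic around the new regex (the helper
name and the sample entry lines are mine, not part of the patch):

```python
import re

# Regex from the patch: everything from the first '[', '<', '(' or ':'
# on is not part of the file list.
end_of_location_regex = re.compile(r'[\[<(:]')

def strip_location(line):
    # Sketch of the patched stripping step: cut the line at the first
    # "(NAME)", "<CASE>", "[COND]" or ':', and drop trailing blanks.
    m = end_of_location_regex.search(line)
    return line[:m.start()].rstrip() if m else line

for entry in ('crossconfig.m4 <*-vxworks>: Check for more math decls.',
              'configure [FAKEPATCH]: Rebuild.',
              'einfo.adb (Some_Subprogram): Likewise.'):
    print(strip_location(entry))
```

With all three terminator styles, only the filename survives, which is what the
later list-of-filenames parsing expects.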


-- 
Alexandre Oliva, freedom fighter    he/him    https://FSFLA.org/blogs/lxo/
Free Software Evangelist  Stallman was right, but he's left :(
GNU Toolchain Engineer   Live long and free, and prosper ethically


Re: gnu-gabi group

2016-02-15 Thread Alexandre Oliva
On Feb 12, 2016, Pedro Alves  wrote:

> On 02/11/2016 06:20 PM, Mark Wielaard wrote:
>> If we could ask overseers to setup a new group/list gnu-gabi on sourceware
>> where binutils, gcc, gdb, glibc and other interested parties could join
>> to maintain these extensions and ask for clarifications that would be
>> wonderful. I am not a big fan of google groups mailinglists, they seem
>> to make it hard to subscribe and don't have easy to access archives.
>> Having a local gnu-gabi group on sourceware.org would be better IMHO.

> +1

+1

Since it's GNU tools we're talking about, we'd better use a medium that
we've all already agreed to use, than one that a number of us object
to.  I, for one, have closed my Google account several Valentine's Days
ago, for privacy reasons, and this makes the archives of lists hidden
there unusable for me.

-- 
Alexandre Oliva, freedom fighter    http://FSFLA.org/~lxoliva/
You must be the change you wish to see in the world. -- Gandhi
Be Free! -- http://FSFLA.org/   FSF Latin America board member
Free Software Evangelist|Red Hat Brasil GNU Toolchain Engineer


Re: gnu-gabi group

2016-02-15 Thread Alexandre Oliva
On Feb 15, 2016, "H.J. Lu"  wrote:

> On Mon, Feb 15, 2016 at 7:37 AM, Alexandre Oliva  wrote:
>> I, for one, have closed my Google account several Valentine's Days
>> ago, for privacy reasons, and this makes the archives of lists hidden
>> there unusable for me.

> Anyone can subscribe Linux-ABI group

I didn't say that was not possible, did I?

> and its archive is to open to everyone

I'm a bit surprised by that, since I recall having trouble accessing
archives of other google groups that I still happen to be in, much to my
disappointment.  Maybe it's not because of the account, whose lack
prevents me from even complaining to Google about being abusively
subscribed to lists I don't want to be in (nothing to do with the one
you proposed, mind you); it could be because I refuse to run proprietary
Javascript on my browser, or because I reject cookies and use Tor, and
so Google requires me to solve a CAPTCHA that I can't even see before it
might or might not grant me access to the archives.  Who knows?  The
important question is: why should we collectively choose that over
what we're already using, which is aligned with the interests of our
community?

-- 
Alexandre Oliva, freedom fighter    http://FSFLA.org/~lxoliva/
You must be the change you wish to see in the world. -- Gandhi
Be Free! -- http://FSFLA.org/   FSF Latin America board member
Free Software Evangelist|Red Hat Brasil GNU Toolchain Engineer


Re: gnu-gabi group

2016-02-15 Thread Alexandre Oliva
Mike,

On Feb 15, 2016, Mike Frysinger  wrote:

> On 15 Feb 2016 16:18, Szabolcs Nagy wrote:
>> they need to allow google to execute javascript code on their
>> machine.

> complaining that the web interface executes JS is a bit luddite-ish.

See https://www.gnu.org/philosophy/javascript-trap.html

Do you see anything familiar in the part of the URL above between 'www.'
and '.org/'? ;-)

Yeah, the GNU that denounces the Javascript trap is the same GNU that
started and encompasses all the Free Software development communities
you addressed in your email, and then some.


Now, should we be surprised that at least *some* of the GNU contributors
take the positions of the political and social movement that motivated
it in earnest?

Should we then proceed to exclude them from technical discussions by
choosing communication media that we can't use without betraying those
very positions?

-- 
Alexandre Oliva, freedom fighterhttp://FSFLA.org/~lxoliva/
You must be the change you wish to see in the world. -- Gandhi
Be Free! -- http://FSFLA.org/   FSF Latin America board member
Free Software Evangelist|Red Hat Brasil GNU Toolchain Engineer


Re: gnu-gabi group

2016-02-15 Thread Alexandre Oliva
On Feb 15, 2016, f...@redhat.com (Frank Ch. Eigler) wrote:

> mark wrote:

>> [...]
>>> [...]
>>> >> Having a local gnu-gabi group on sourceware.org would be better IMHO.
>>> > +1
>>> +1
>> 
>> Great. I'll ask overseers to create a mailinglist. [...]

> Done [1] [2].  If y'all need a wiki too, just ask.

Thanks!

-- 
Alexandre Oliva, freedom fighter    http://FSFLA.org/~lxoliva/
You must be the change you wish to see in the world. -- Gandhi
Be Free! -- http://FSFLA.org/   FSF Latin America board member
Free Software Evangelist|Red Hat Brasil GNU Toolchain Engineer


Re: TLSDESC clobber ABI stability/futureproofness?

2018-10-10 Thread Alexandre Oliva
On Oct 10, 2018, Rich Felker  wrote:

> It's recently come up in musl libc development that the tlsdesc asm
> functions, at least for some archs, are potentially not future-proof,
> in that, for a given fixed version of the asm in the dynamic linker,
> it seems possible for a future ISA level and compiler supporting that
> ISA level to produce code, in the C functions called in the dynamic
> fallback case, instructions which clobber registers which are normally
> call-clobbered, but which are non-clobbered in the tlsdesc ABI. This
> does not risk breakage when an existing valid build of libc/ldso is
> used on new hardware and new applications that provide new registers,
> but it does risk breakage if an existing source version of libc/ldso
> is built with a compiler supporting new extensions, which is difficult
> to preclude and not something we want to try to preclude.

I understand the concern.  I considered it back when I designed TLSDesc,
and my reasoning was that if the implementation of the fallback dynamic
TLS descriptor allocator could possibly use some register, the library
implementation should know about it as well, and work to preserve it.  I
realize this might not be the case for an old library built by a new
compiler for a newer version of the library's target.  Other pieces of
the library may fail as well, if registers unknown to it are available
and used by the compiler (setjmp/longjmp, *context, dynamic PLT
resolution come to mind), besides the usual difficulties building old
code with newer tools, so I figured it wasn't worth sacrificing the
performance of the normal TLSDesc case to make this aspect of register
set extensions easier.

There might be another particularly risky case, namely, that the memory
allocator used by TLS descriptors be overridden by code that uses more
registers than the library knows to preserve.  Memory allocation within
the dynamic loader, including lazy TLS Descriptor relocation resolution,
is a context in which we should probably use internal, non-overridable
memory allocators, if we don't already.  This would reduce the present
risky case to the one in the paragraph above.

> For aarch64 at least, according to discussions I had with Szabolcs
> Nagy, there is an intent that any new extensions to the aarch64
> register file be treated as clobbered by tlsdesc functions, rather
> than preserved.

That's unfortunate.  I'm not sure I understand the reasoning behind this
intent.  Maybe we should discuss it further?

> In the x86 spec, the closest I can find are the phrasing:

> "being able to assume no registers are clobbered by the call"

> and the comment in the pseudo-C:

> /* Preserve any call-clobbered registers not preserved because of
>the above across the call below.  */

> Source: https://www.fsfla.org/~lxoliva/writeups/TLS/RFC-TLSDESC-x86.txt

> What is the policy for i386 and x86_64?

I don't know that my proposal ever became authoritative policy, but even
if it is, I guess I have to agree it is underspecified and the reasoning
above could be added.

> Are normally call-clobbered registers from new register file
> extensions intended to be preserved by the tlsdesc functions, or
> clobberable by them?

My thinking has always been that they should be preserved, which doesn't
necessarily mean they have to be saved and restored.  Only if the
implementation of tlsdesc could possibly modify them should it arrange
for their entry-point values to be restored before returning.  This
implies not calling overridable functions in the internal
implementation, and compiling at least the bits used by the tlsdesc
implementation so as to use only the register set known and supported by
the library.

Anyway, thanks for bringing this up.  I'm amending the x86 TLSDesc
proposal to cover this with the following footnote:

(*) Preserving a register does not necessarily imply saving and
restoring it.  If the system library implementation does not use or
even know about a certain extended register set, it needs not save it,
because it will presumably not modify it.  This assumes the TLS
Descriptor implementation is self-contained within the system library,
with no overridable callbacks.  A consequence is that, even if
other parts of the system library are compiled so as to use an
extended register set, those used by the implementation of TLS
Descriptors, including lazy relocations, should be limited to using
the register set that the interfaces are known to preserve.

after:

[...] This penalizes
the case that requires dynamic TLS, since it must preserve (*) all
call-clobbered registers [...]

Please let me know your thoughts about this change, e.g., whether it's
enough to address your concerns or if you envision a need for more than
that.  Thanks,

-- 
Alexandre Oliva, freedom fighter   https://FSFLA.org/blogs/lxo
Be the change, be Free! FSF Latin America board member

Re: TLSDESC clobber ABI stability/futureproofness?

2018-10-11 Thread Alexandre Oliva
On Oct 11, 2018, Rich Felker  wrote:

> This is indeed the big risk for glibc right now (with lazy,
> non-fail-safe allocation of dynamic TLS)

Yeah, dynamic TLS was a can of worms in that regard even before lazy TLS
relocations.

> that it's unlikely for vector-heavy code to be using TLS where the TLS
> address load can't be hoisted out of the blocks where the
> call-clobbered vector regs are in use. Generally, if such hoisting is
> performed, the main/only advantage of avoiding clobbers is for
> registers which may contain incoming arguments.

I see.  Well, the more registers are preserved, the better for the ideal
fast path, but even if some are not, you're still better off than
explicitly calling tls_get_addr...

> unless there is some future-proof approach to
> save-all/restore-all that works on all archs with TLSDESC

Please don't single-out TLSDESC as if the problem affected it alone.
Lazy relocation with traditional PLT entries for functions are also
supposed to save and restore all registers, and the same issues arise,
except they're a lot more common.  The only difference is that failures
to preserve registers are less visible, because most of the time you're
resolving them to functions that abide by the normal ABI, but once
specialized calling conventions kick in, the very same issues arise.
TLS descriptors are just one case of such specialized calling
conventions.  Indeed, one of the reasons that made me decide this
arrangement was acceptable was precisely because the problem already
existed with preexisting lazy PLT resolution.

-- 
Alexandre Oliva, freedom fighter   https://FSFLA.org/blogs/lxo
Be the change, be Free! FSF Latin America board member
GNU Toolchain Engineer    Free Software Evangelist
Hay que enGNUrecerse, pero sin perder la terGNUra jamás - GNUChe


Re: TLSDESC clobber ABI stability/futureproofness?

2018-10-13 Thread Alexandre Oliva
On Oct 11, 2018, Rich Felker  wrote:

> However the only way to omit this path from TLSDESC is
> installing the new TLS to all live threads at dlopen time

Well, one could just as easily drop dynamic TLS altogether, forcing all
TLS into the Static TLS block until it fills up, and failing for good if
it does.  But then, you don't need TLS Descriptors at all, just go with
Initial Exec.  It helps if init can set the Static TLS block size from
an environment variable or somesuch.

But your statement appears to be conflating two separate issues, namely
the allocation of a TLS Descriptor during lazy TLS resolution, and the
allocation of per-thread dynamic TLS blocks upon first access.

The former is just as easy and harmless to disable as lazy function
relocations.  The latter is not exclusive to TLSDesc: __tls_get_addr has
historically used malloc to grow the DTV and to allocate dynamic TLS
blocks, and if overriders to malloc end up using/clobbering unexpected
registers, even if just because of lazy PLT resolution for calls in its
implementation, things might go wrong.  Sure enough, __tls_get_addr
doesn't use a specialized ABI, so this is usually not an issue.

> That's actually not a bad idea -- it drops the compare/branch from the
> dynamic tlsdesc code path, and likewise in __tls_get_addr, making both
> forms of dynamic TLS (possibly considerably) faster.

But then you have to add some form of synchronization so that other
threads can actually mess with your DTV without conflicts, from
releasing dlclose()d dynamic blocks to growing the DTV and releasing the
old DTV while its owner thread is using it.


I wonder if it would make sense to introduce an overridable
call-clobbered-regs-preserving wrapper function, that lazy PLT resolvers
and Dynamic TLSDesc calls would call, and that could be easily extended
to preserve extended register files without having to modify the library
proper.  LD_PRELOAD could bring it in, and it could even use ifunc
relocations, to be able to cover all available registers on arches with
multiple register file extensions.

-- 
Alexandre Oliva, freedom fighter   https://FSFLA.org/blogs/lxo
Be the change, be Free! FSF Latin America board member
GNU Toolchain Engineer    Free Software Evangelist
Hay que enGNUrecerse, pero sin perder la terGNUra jamás - GNUChe


Re: Improving GCC's line table information to help GDB

2019-10-18 Thread Alexandre Oliva
On Oct 16, 2019, Luis Machado  wrote:

> It seems, from reading the blog post about SFN's, that it was meant to
> help with debugging optimized binaries.

Indeed.  Getting rid of the dummy jumps would be one kind of
optimization, and then SFN might help preserve some of the loss of
location info in some cases.  However, SFN doesn't kick in at -O0
because the dummy jumps and all other artifacts of unoptimized code are
retained anyway, so SFN wouldn't have a chance to do any of the good
it's meant to do there.

-- 
Alexandre Oliva, freedom fighter  he/him   https://FSFLA.org/blogs/lxo
Be the change, be Free!    FSF VP & FSF Latin America board member
GNU Toolchain Engineer    Free Software Evangelist
Hay que enGNUrecerse, pero sin perder la terGNUra jamás - Che GNUevara


Re: Proposal for the transition timetable for the move to GIT

2019-12-25 Thread Alexandre Oliva
On Dec 16, 2019, Jeff Law  wrote:

> Yet Joseph just indicated today Maxim's conversion is missing some
> branches.  While I don't consider any of the missed branches important,
> others might.   More importantly, it raises the issue of what other
> branches might be missing and what validation work has been done on
> that conversion.

It also raises another issue, namely the ability to *incrementally* fix
such problems should we find them after the switch-over.

I've got enough experience with git-svn to tell that, if it missed a
branch for whatever reason, it is reasonably easy to create a
configuration that will enable it to properly identify the point of
creation of the branch, and bring in subsequent changes to the branch,
in such a way that the newly-converted branch can be easily pushed onto
live git so that it becomes indistinguishable from other branches that
had been converted before.

I know very little about reposurgeon, but I'm concerned that, should we
make the conversion with it, and later identify e.g. missed branches, we
might be unable to make such an incremental recovery.  Can anyone
alleviate my concerns and let me know we could indeed make such an
incremental recovery of a branch missed in the initial conversion, in
such a way that its commit history would be shared with that of the
already-converted branch it branched from?


Anyway, hopefully we won't have to go through that.  Having not just one
but *two* fully independent conversions of the SVN repo to GIT, using
different tools, makes it a lot less likely that whatever result we
choose contains a significant error, as both can presumably help catch
conversion errors in each other, and the odds that both independent
implementations make the same error are pretty thin, I'd think.

Now, would it be too much of a burden to insist that the commit graphs
out of both conversions be isomorphic, and maybe mappings between the
commit ids (if they can't be made identical to begin with, that is) be
generated and shared, so that the results of both conversions can be
efficiently and mechanically compared (disregarding expected
differences) not only in terms of branch and tag names and commit
graphs, but also tree contents, commit messages and any other metadata?
Has anything like this been done yet?

-- 
Alexandre Oliva, freedom fighter   he/him   https://FSFLA.org/blogs/lxo
Free Software Evangelist   Stallman was right, but he's left :(
GNU Toolchain Engineer    FSMatrix: It was he who freed the first of us
FSF & FSFLA board memberThe Savior shall return (true);


Re: Proposal for the transition timetable for the move to GIT

2019-12-25 Thread Alexandre Oliva
On Dec 25, 2019, "Eric S. Raymond"  wrote:

> Reposurgeon has a reparent command.  If you have determined that a
> branch is detached or has an incorrect attachment point, patching the
> metadata of the root node to fix that is very easy.

Thanks, I see how that can enable a missed branch to be converted and
added incrementally to a converted repo even after it went live, at
least as long as there aren't subsequent merges from a converted branch
to the missed one.  I don't quite see how this helps if there are,
though.

> If you're talking about a commit-by-commit comparison between two
> conversions that assumes one or the other is correct

Yeah, minus the assumption; the point would be to find errors in either
one, maybe even in both.  With git, given that the converted trees
should look the same, at least the tree diffs would likely be pretty
fast, since the top-level dir blobs are likely to be identical even if
the commits don't share the same hash, right?  And, should they differ,
a bisection to find the divergence point should be very fast too.

Could make it a requirement that at least the commits associated with
head branches and published tags compare equal in both conversions, or
that differences are known, understood and accepted, before we switch
over to either one?  Going over all corresponding commits might be too
much, but at least a representative random sample would be desirable to
check IMHO.

Of course, besides checking trees, it would be nice to compare metadata
as well.  Alas, the more either conversion diverges from the raw
metadata in svn, the harder it becomes to mechanically ignore expected
differences and identify unexpected ones.  Unless both conversions agree
on transformations to make, such metadata fixes end up conflicting with
the (proposed) goal of enabling mechanical verification of the
conversion results against each other.


> Well, except for split commits. That one would be solvable, albeit
> painful.

Even for split SVN commits, that will amount to at most one GIT commit
per GIT branch/tag, no?  That extra info should make it easy to identify
corresponding GIT commits between two conversions, so as to compare
trees, metadata and DAGs.

> The real problem here would be mergeinfo links.

*nod*.  I don't consider this all that important, considering that GIT
doesn't keep track of cherry-picks at all.  On the same note, it's nice
to identify merges, but since the info is NOT readily available in SVN,
it's arguably not essential that a SVN merge commit be represented as a
GIT merge commit rather than as multi cherry picking, at least provided
that merge metadata is somehow preserved/mapped across the conversion,
perhaps as GIT annotations or so.

I suppose if there are active branches that get merges frequently,
coming up with a merge parent that names at least the latest merged
commit would make the first merge after the transition a lot easier.

> There is another world of hurt lurking in "(disregarding expected
> differences)".  How do you know what differences to expect?

I was thinking someone would look at the differences, possibly
investigate a bit, and then decide whether they indicate a problem in
either conversion or something to be expected, ideally that could be
mechanically identified as expected in subsequent compares, until we
converge on a pair of conversions with only expected differences, if
any.

I suppose we're sort of doing that in a distributed but not very
organized fashion, as repos converted by both tools are made available
for assessment and verification.  Alas, the specification of expected
differences is not (to my knowledge) consolidated in a
publicly-available way, so there may be plenty of duplicate effort
filtering out differences that, if we organized the comparing effort by
sharing configuration data, scripts and tools to compare and to filter
out expected differences, we might be able to do that more efficiently.

-- 
Alexandre Oliva, freedom fighter   he/him   https://FSFLA.org/blogs/lxo
Free Software Evangelist   Stallman was right, but he's left :(
GNU Toolchain Engineer    FSMatrix: It was he who freed the first of us
FSF & FSFLA board memberThe Savior shall return (true);


Re: Proposal for the transition timetable for the move to GIT

2019-12-26 Thread Alexandre Oliva
On Dec 26, 2019, "Eric S. Raymond"  wrote:

> Alexandre Oliva :
>> On Dec 25, 2019, "Eric S. Raymond"  wrote:
>> 
>> > Reposurgeon has a reparent command.  If you have determined that a
>> > branch is detached or has an incorrect attachment point, patching the
>> > metadata of the root node to fix that is very easy.
>> 
>> Thanks, I see how that can enable a missed branch to be converted and
>> added incrementally to a converted repo even after it went live, at
>> least as long as there aren't subsequent merges from a converted branch
>> to the missed one.  I don't quite see how this helps if there are,
>> though.

> There's also a command for cutting parent links, if that helps.

I don't see that it does (help).  Incremental conversion of a missed
branch should include the very same parent links that the conversion of
the entire repo would, just linking to the proper commits in the adopted
conversion.  git-svn can do that incrementally, after the fact; I'm not
sure whether either conversion tool we're contemplating does, but being
able to undertake such recovery seems like a desirable feature to me.

> repotool compare does that, and there's a production in the conversion
> makefile that applies it.

> As Joseph says in another reply, he's already doing a lot of the
> verifications you are suggesting.

From what I read, he's doing verifications against SVN.  What I'm
suggesting, at this final stage, is for us to verify one git-converted
repo against the other.

Since both claim to be nearing readiness for adoption, I gather it's the
time for both to be comparing with each other (which should be far more
efficient than comparing with SVN) and attempting to narrow down on
differences and converge, so that the community can choose one repo or
another on the actual merits of the converted repositories (e.g. slight
policy differences in metadata conversion), rather than on allegations
by developers of either conversion tool about the reliability of the
tool used by the each other.

Maxim appears to be doing so and finding (easy-to-fix) problems in the
reposurgeon conversion; it would be nice for reposurgeon folks to
reciprocate and maybe even point out problems in the gcc-pretty
conversion, if they can find any, otherwise the allegations of
unsuitability of the tools would have to be taken on blind faith.

I wouldn't like the community to have to decide based on blind faith,
rather than hard data.  I'd much rather we had two great, maybe even
equivalent repos to choose from, possibly with a coin toss if they're
close enough, than pick one over the other on unsubstantiated faith.  It
appears to me that this final stage of collaboration and coopetition,
namely comparing the converted repos proposed for adoption and aiming at
convergence, is in the best interest of our community, even if seemingly
at odds with the promotion of either conversion tool.  I hope we can set
aside these slight conflicts of interest, and do what's best for the
community.

-- 
Alexandre Oliva, freedom fighter   he/him   https://FSFLA.org/blogs/lxo
Free Software Evangelist   Stallman was right, but he's left :(
GNU Toolchain Engineer    FSMatrix: It was he who freed the first of us
FSF & FSFLA board memberThe Savior shall return (true);


Re: Proposal for the transition timetable for the move to GIT

2019-12-27 Thread Alexandre Oliva
On Dec 26, 2019, Joseph Myers  wrote:

> We should ensure we don't have missing branches in the first place (for 
> whatever definition of what branches we should have).

*nod*

> Adding a branch after the fact is a fundamentally different kind of
> operation

That depends on the tool used.  A reproducible one, or at least one that
aimed at stability across multiple conversions, could make this easier,
but I guess reposurgeon is not such a tool.  Which suggests to me we
have to be even more reassured of the correctness of its moving-target
output before we adopt it, unlike other conversion tools that have long
had a certain stability of output built into their design.


I understand you're on it, and I thank you for undertaking much of that
validation and verification work.  Your well-known attention to detail
is very valuable.

-- 
Alexandre Oliva, freedom fighter   he/him   https://FSFLA.org/blogs/lxo
Free Software Evangelist   Stallman was right, but he's left :(
GNU Toolchain Engineer    FSMatrix: It was he who freed the first of us
FSF & FSFLA board memberThe Savior shall return (true);


Re: Proposal for the transition timetable for the move to GIT

2020-01-01 Thread Alexandre Oliva
On Dec 30, 2019, "Richard Earnshaw (lists)"  wrote:

> Right, (and wrong).  You have to understand how the release branches and
> tags are represented in CVS to understand why the SVN conversion is done
> this way.

I'm curious and ignorant, is the convoluted representation that Maxim
described what SVN normally uses for tree copies, that any conversion
tool from SVN to GIT thus ought to be able to figure out, or is it just
an unusual artifact of the conversion from CVS to SVN, that we'd like to
fix in the conversion from SVN to GIT with some specialized recovery for
such errors in repos poorly converted from CVS?

Thanks in advance for cluing me in,

-- 
Alexandre Oliva, freedom fighter   he/him   https://FSFLA.org/blogs/lxo
Free Software Evangelist   Stallman was right, but he's left :(
GNU Toolchain Engineer    FSMatrix: It was he who freed the first of us
FSF & FSFLA board memberThe Savior shall return (true);


finding bugs deferred from previous releases

2018-02-06 Thread Alexandre Oliva
Hi,

In this round of GCC stabilization, I've noticed a larger than usual
number of bugs that carried over from earlier cycles, with notes
indicating it was too late to fix them during stabilization.

I wish we had some means to mark such bugs clearly, so that they could
be found easily earlier in the development cycles, instead of lingering
on until we find it's too late again.

Just targeting it to a future release might be enough during
stabilization, but later on, it will be hard to tell these cases apart
from other bugs for which we didn't make such an assessment, and that could
in theory still be addressed during the next round of stabilization.

What I wish for is some means to find these bugs easily, while we're
earlier in a development cycle.  We could mark them as we go through the
current regressions list, so that others wouldn't look at them again in
this cycle, but could find them at any point in the next.

Just bumping the target milestone would address part of the problem: it
would help exclude it from bug lists that need fixing in this cycle.
However, it wouldn't help locate these bugs during the next cycle, since
at some point GCC8 will be out and the target milestone for all unfixed
bugs will be bumped.  Also, when we get to GCC9 stabilization, it would
be nice to have some means to find any bugs that, during earlier
stabilization cycles, were deemed unfixable in the stabilization phase,
so as to defer them again.

Any thoughts on how to mark such bugs in bugzilla?

-- 
Alexandre Oliva, freedom fighter    http://FSFLA.org/~lxoliva/
You must be the change you wish to see in the world. -- Gandhi
Be Free! -- http://FSFLA.org/   FSF Latin America board member
Free Software Evangelist|Red Hat Brasil GNU Toolchain Engineer


Re: finding bugs deferred from previous releases

2018-02-27 Thread Alexandre Oliva
On Feb  8, 2018, Richard Biener  wrote:

> Add a 'deferred' keyword?

Done:

  deferred:

This bug was deemed too risky to attempt to fix during stabilization
stages.  Deferred to a development stage of a subsequent release.

-- 
Alexandre Oliva, freedom fighter    http://FSFLA.org/~lxoliva/
You must be the change you wish to see in the world. -- Gandhi
Be Free! -- http://FSFLA.org/   FSF Latin America board member
Free Software Evangelist|Red Hat Brasil GNU Toolchain Engineer


Re: Repo conversion troubles.

2018-07-09 Thread Alexandre Oliva
On Jul  9, 2018, Jeff Law  wrote:

> On 07/09/2018 01:57 PM, Eric S. Raymond wrote:
>> Jeff Law :
>>> I'm not aware of any such merges, but any that occurred most likely
>>> happened after mid-April when the trunk was re-opened for development.

>> I'm pretty certain things were still good at r256000.  I've started that
>> check running.  Not expecting results in less than twelve hours.

> r256000 would be roughly Christmas 2017.

When was the RAID/LVM disk corruption incident?  Could it possibly have
left any of our svn repo metadata in a corrupted way that confuses
reposurgeon, and that leads to such huge differences?

On Jul  9, 2018, "Eric S. Raymond"  wrote:

> Bernd Schmidt :
>> So what are the diffs? Are we talking about small differences (like one
>> change missing) or large-scale mismatches?

> Large-scale, I'm afraid.  The context diff is about a GLOC.

-- 
Alexandre Oliva, freedom fighter   https://FSFLA.org/blogs/lxo
Be the change, be Free! FSF Latin America board member
GNU Toolchain Engineer    Free Software Evangelist


Re: Proposal: Improving patch tracking and review using Rietveld

2011-02-01 Thread Alexandre Oliva
On Jan 28, 2011, Diego Novillo  wrote:

> Technically, Rietveld solves the ENOPATCH problem because the patch is
> *always* available at the URL produced in the patch message.

Hi, Diego,

I just got your e-mail with the patch.  It didn't look that big, but it
will give me something useful to do in the plane.  You'll have the
review as soon as get back on line, some 16 hours from now.  Last call,
gotta board now.

[16 hours later...]

Hi, Diego,

Sorry, but I couldn't review the patch; the e-mail only had the URL.
I'll be at conferences all day till the end of the week.  I'll see if I
can get on line and download the patch at the conference (the network
connection at the hotel sucks rocks, I'm not even sure this e-mail will
go out), but I probably won't be able to review it before the return
flight.

Gotta get some sleep now.  I'm as exhausted as the IPv4 address space.

:-)

If it's not in my personal, local e-mail archives, it doesn't exist.
IMNSHO, the cloud is smoke, and mirrors only help so much ;-)

-- 
Alexandre Oliva, freedom fighter    http://FSFLA.org/~lxoliva/
You must be the change you wish to see in the world. -- Gandhi
Be Free! -- http://FSFLA.org/   FSF Latin America board member
Free Software Evangelist  Red Hat Brazil Compiler Engineer


Re: Bumping DATESTAMP

2011-02-03 Thread Alexandre Oliva
On Feb  3, 2011, Gerald Pfeifer  wrote:

> On Wed, 2 Feb 2011, Dongsheng Song wrote:

>> +BRANCHES=`svnlook -r ${REV} dirs-changed "${REPOS}" \

> Do we really need to worry about more than one branch being hit in one
> commit?  I wasn't aware that SVN supports this, but I guess it's
> defensive programming. :-)

SVN doesn't even know what a branch is, that's left up for conventions
in the versioned filesystem that svn exposes.  It's perfectly ok to
check out the root of the filesystem, containing trunk, branch/* etc,
make whatever changes you want and commit them all at the same time.
SVN couldn't care less, so our scripts that implement the conventions we
set may have to.

>> +  if ! svn commit -m "Daily bump." gcc/DATESTAMP; then
>> +# If we could not commit the files, indicate failure.
>> +RESULT=1
>> +  fi

> Can we also issue an error message here?

I'm a bit concerned that changing gcc/DATESTAMP in a post-commit hook
might make subsequent commits in a “git svn dcommit” pack to fail.  That
would be unfortunate, though not a show-stopper, but I figured I'd point
it out in case this hadn't been taken into account.

-- 
Alexandre Oliva, freedom fighter    http://FSFLA.org/~lxoliva/
You must be the change you wish to see in the world. -- Gandhi
Be Free! -- http://FSFLA.org/   FSF Latin America board member
Free Software Evangelist  Red Hat Brazil Compiler Engineer


Re: RFC: A new MIPS64 ABI

2011-02-15 Thread Alexandre Oliva
On Feb 14, 2011, David Daney  wrote:

> Current MIPS 32-bit ABIs (both o32 and n32) are restricted to 2GB of
> user virtual memory space.  This is due the way MIPS32 memory space is
> segmented.  Only the range from 0..2^31-1 is available.  Pointer
> values are always sign extended.

> The proposed new ABI would only be available on MIPS64 platforms.  It
> would be identical to the current MIPS n32 ABI *except* that pointers
> would be zero-extended rather than sign-extended when resident in
> registers.

FTR, I don't really know why my Yeeloong is limited to 31-bit addresses,
and I kind of hoped an n32 userland would improve that WRT o32, without
wasting memory with longer pointers like n64 would.

So, sorry if this is a dumb question, but wouldn't it be much easier to
keep on using sign-extended addresses, and just make sure the kernel
never allocates a virtual memory range that crosses a sign-bit change,
or whatever other reason there is for addresses to be limited to the
positive 2GB range in n32?

-- 
Alexandre Oliva, freedom fighter    http://FSFLA.org/~lxoliva/
You must be the change you wish to see in the world. -- Gandhi
Be Free! -- http://FSFLA.org/   FSF Latin America board member
Free Software Evangelist  Red Hat Brazil Compiler Engineer


Re: RFC: A new MIPS64 ABI

2011-05-06 Thread Alexandre Oliva
On Feb 15, 2011, David Daney  wrote:

> On 02/15/2011 09:56 AM, Alexandre Oliva wrote:
>> On Feb 14, 2011, David Daney  wrote:

>> So, sorry if this is a dumb question, but wouldn't it be much easier to
>> keep on using sign-extended addresses, and just make sure the kernel
>> never allocates a virtual memory range that crosses a sign-bit change,

> No, it is not possible.  The MIPS (and MIPS64) hardware architecture
> does not allow userspace access to addresses with the high bit (two
> bits for mips64) set.

Interesting.  I guess this makes it far easier to transition to the u32
ABI: n32 addresses all have the 32-bit MSB clear, so n32 binaries
can be used within u32 environments, as long as the environment refrains
from using addresses that have the MSB bit set.

So we could switch lib32 to u32, have a machine-specific bit set for u32
binaries, and if the kernel starts an executable or interpreter that has
that bit clear, it will refrain from allocating any n32-invalid address
for that process.  Furthermore, libc, upon loading a library, should be
able to notify the kernel when an n32 library is to be loaded, to which
the kernel would respond either with failure (if that process already
uses u32-valid but n32-invalid addresses) or success (switching to n32
mode if not in it already).

Am I missing any other issues?

-- 
Alexandre Oliva, freedom fighter    http://FSFLA.org/~lxoliva/
You must be the change you wish to see in the world. -- Gandhi
Be Free! -- http://FSFLA.org/   FSF Latin America board member
Free Software Evangelist  Red Hat Brazil Compiler Engineer


-fno-inline-functions vs glibc's initfini

2012-01-30 Thread Alexandre Oliva
glibc 2.15 won't build with GCC 4.7ish on ppc64: -fno-inline-functions
is no longer enough to prevent call_gmon_start from being inlined into
initfini.c's _init, as required by glibc's somewhat convoluted
compilation of initfini.c into crt*.o.  As a result of the inlining, a
branch and its target label end up in separate object files, after the
compiler's assembly output is broken up in pieces.

I suppose this could be easily worked around in glibc, by using
attribute noinline (and noclone, for good measure, for partial inlining
of the gmon_start undef-weak test would be just as damaging), or
compiling sysdeps/generic/initfini.c with
-fno-inline-functions-called-once (if available), and perhaps also
-fno-inline-small-functions, -fno-indirect-inlining and
-fno-partial-inlining, with room for whatever other options we GCC
developers come up with to control other cases or kinds of inlining.

I'm a bit surprised that -fno-inline-functions doesn't imply all of
these, as long as they're not specified explicitly.  IMHO it should.

I'm also surprised that some parts of GCC appear to assume that
-fno-inline was supposed to behave that way, preventing any inlining
whatsoever.  Grepping for flag_no_inline shows some hits that appear to
indicate some confusion as to its meaning.

To make matters worse for glibc, it appears that at least
-finline-functions-called-once is already present in earlier releases of
GCC, which means we might have no choice but to bite the bullet and use
this option if it's available, even though I have no evidence that the
implementation controlled by the option caused any problems to glibc's
initfini compilation in already-released versions of GCC.


So, where do we go from here?  Is there any reason why glibc doesn't use
the noinline attribute in sysdeps/generic/initfini.c, or for glibc not
to auto-detect -fno-inline-functions-called-once et al and use them in
addition to -fno-inline-functions to compile initfini.c?

As for GCC, shouldn't we aim at providing a stable, reliable option that
prevents inlining of functions not marked always_inline, regardless of
special cases and exceptions to the general inlining rules we might come
up with in the future?  Also, shouldn't the implementation of
-finline/-fno-inline be adjusted to match the documentation, and have
flag_no_inline_functions become what we test for as flag_no_inline in
some functions that make decisions about whether or not to perform
inlining?

Thanks in advance for feedback and suggestions,

-- 
Alexandre Oliva, freedom fighter    http://FSFLA.org/~lxoliva/
You must be the change you wish to see in the world. -- Gandhi
Be Free! -- http://FSFLA.org/   FSF Latin America board member
Free Software Evangelist  Red Hat Brazil Compiler Engineer


Re: -fno-inline-functions vs glibc's initfini

2012-01-31 Thread Alexandre Oliva
On Jan 31, 2012, Richard Guenther  wrote:

> What's probably confusing you is the "Don't pay attention to the
> @code{inline} keyword" sentence.

What really set me down the wrong path were the comments in
gcc/common.opt, that got me the idea it had something to do with C99
inline.

; Nonzero means that functions declared `inline' will be treated
; as `static'.  Prevents generation of zillions of copies of unused
; static inline functions; instead, `inlines' are written out
; only when actually used.  Used in conjunction with -g.  Also
; does the right thing with #pragma interface.
finline
Common Report Var(flag_no_inline,0) Init(0)
Pay attention to the \"inline\" keyword

> I suppose we should clarify the documentation and I will prepare a patch.

Thanks.  Would you please take care of adjusting the comments in
common.opt accordingly?  TIA,

> The implementation is exactly right

Phew! :-)

-- 
Alexandre Oliva, freedom fighter    http://FSFLA.org/~lxoliva/
You must be the change you wish to see in the world. -- Gandhi
Be Free! -- http://FSFLA.org/   FSF Latin America board member
Free Software Evangelist  Red Hat Brazil Compiler Engineer


Re: -fno-inline-functions vs glibc's initfini

2012-01-31 Thread Alexandre Oliva
On Jan 31, 2012, Roland McGrath  wrote:

> I think we can do that right away without trouble, and get it onto
> release branches too.

*nod*

Want me to prepare a s/-fno-inline-functions/-fno-inline/ patch?

> On the libc side more generally, I've become skeptical that the generic C
> version of initfini is worth continuing with.

Maybe rather than using the generic version, we should have a Makefile
rule that generates the machine-dependent .s files for developers'
perusal in creating the machine-specific asm sources.

-- 
Alexandre Oliva, freedom fighter    http://FSFLA.org/~lxoliva/
You must be the change you wish to see in the world. -- Gandhi
Be Free! -- http://FSFLA.org/   FSF Latin America board member
Free Software Evangelist  Red Hat Brazil Compiler Engineer


Re: GPL (probably a FAQ)

2009-07-24 Thread Alexandre Oliva
On Jul 24, 2009, graham_k  wrote:

> Can someone tell me definitively - if I use a ten line GPLed function, say
> quicksort, in 500,000 lines of code which I write myself, do I need to GPL
> all of my source code and make the code free for all?

The FSF offers an e-mail based service to answer this kind of question,
but this mailing list is not the way to request it.  The service is
offered gratuitously if you're developing Free Software, and for a fee
otherwise.  Send your question to licens...@fsf.org.

-- 
Alexandre Oliva, freedom fighter    http://FSFLA.org/~lxoliva/
You must be the change you wish to see in the world. -- Gandhi
Be Free! -- http://FSFLA.org/   FSF Latin America board member
Free Software Evangelist  Red Hat Brazil Compiler Engineer


merging VTA: what does it take?

2009-07-24 Thread Alexandre Oliva
So...  It's been a long journey, but I think I'm at a point in which,
even though VTA is not completely finished, it's already enough of an
improvement that it could go in, be useful and get wider testing.

To the best of my knowledge, all of the concerns and objections that
were raised have already been addressed: we have low memory and
compile-time overhead in general, and we have significantly improved
debug information.

Besides all the data that Jakub has already posted, Mark Wielaard has
done some Systemtap testing, comparing debug information for parameters
of inlined functions using a pristine Fedora kernel vs results using a
vta4.4 compiler to compile the same kernel sources.

Out of 42249 inlined function parameters for which some debug
information was emitted, GCC 4.4 emitted location information for only
280, whereas the backport of VTA to GCC 4.4 emitted location information
for as many as 7544.

The careful reader will note that 34705 parameters still don't get
location information.  That's a lot.  No investigation has been done as
to how many of them are indeed unavailable, and therefore couldn't
possibly and shouldn't have got debug location/value information, but
I'd be surprised if it's this many.  As I pointed out before, the code
is not perfect, and there is certainly room for further improvements,
but waiting for perfection will take too long ;-)

Seriously, if VTA could be merged soonish, or at least accepted soon for
a merge at a later date, Fedora would adopt it right away in its
development tree, and then we'd get much broader testing.  So, what does
it take to get it merged soonish, even if not enabled by default?

Thanks,

-- 
Alexandre Oliva, freedom fighter    http://FSFLA.org/~lxoliva/
You must be the change you wish to see in the world. -- Gandhi
Be Free! -- http://FSFLA.org/   FSF Latin America board member
Free Software Evangelist  Red Hat Brazil Compiler Engineer

