Re: Solaris gcc 4.0.0 static linking of libgcc.a

2005-03-08 Thread Roland Lengfeldner
thanks for your fast answer, 

I must admit that my first description of the problem was a bit outdated,
but I will give you some more information: I first tried the
gcc-4.0-10012005 snapshot, together with the -fno-exceptions flag, and there
was no dynamic linking of libgcc_s.so necessary. gcc-3.4.3 works in the same
way, no dynamic linking necessary. With the newest snapshot gcc-4.0-07032005
there is dynamic linking necessary. The code is written in C++, but there is
no exception handling.

I know I should use -static-libgcc/-shared-libgcc, but there were some
reasons why I didn't use it, which I can't remember now, but I will try.

I am just doing a fresh installation of gcc-4.0-10012005 to verify the
behaviour; unfortunately, this will take some time.

kind regards,
Roland Lengfeldner

> > I have a question about the different behaviour of the Linux and Solaris
> > versions of gcc (3.4.x and 4.0.x) regarding static linking of libgcc.
> > I do the static linking by adding the libgcc.a library.
> 
> Ideally you should not.  Use -shared-libgcc or -static-libgcc instead.
> 
> > The Linux versions link libgcc statically, as do the Solaris versions.
> > But then the Solaris version additionally requires loading libgcc_s.so,
> > although there are no undefined references to GCC.  When I change the
> > specs file (for 3.4.x) so that the libgcc section of the Solaris version
> > matches the Linux version, the Solaris version also behaves like *I*
> > expect.
> 
> The shared version is required on Solaris to properly support exception 
> handling across shared libraries.
> 
> > Now my question: which behaviour is the correct one?
> 
> Both.
> 
> -- 
> Eric Botcazou
> 



Re: __builtin_cpow((0,0),(0,0))

2005-03-08 Thread Duncan Sands
Hi Robert,

> > It's not true because it's neither true nor false. It's a not well
> > formulated statement. (Mathematically).
> 
> I disagree with this, we certainly agree that 0.0 ** negative value
> is undefined, i.e. that this is outside the domain of the ** function,
> and I think normally in mathematics one would say the same thing,
> and simply say that 0**0 is outside the domain of the function.
> 
> However, we indeed extend domains for convenience. After all,
> typically on computers 1.0/0.0 yields infinity, and that
> certainly does not correspond to the behavior of the division
> operator over the reals in mathematics, but it is convenient :-)

the problem with 1.0/0.0 is not so much the domain but the range:
it is the range which needs to be extended to contain an infinite
value (representing both + and - infinity), at which point the
definition 1.0/0.0 = infinity is an example of the standard notion
of "extension by continuity".  The problem with x^y is that the
range of limits as (x,y) converges to zero (through x>0,y>0) is
the entire interval of real numbers between 0 and 1 inclusive.
Attempts to extend by continuity are doomed in this case (the fact
that the limiting values also arise as values of x^y for x,y>0
means that attempts to "fix up" the range bork the usual function
meaning).
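
(Editorial worked example, not part of the original mail: approach (0,0)
along x = e^{-c/y} with y -> 0+ for a constant c >= 0; then
x^y = (e^{-c/y})^y = e^{-c}, which can be made any value in (0, 1], while
the path x = e^{-1/y^2} gives x^y = e^{-1/y} -> 0, filling out [0, 1].)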

Ciao,

Duncan.



Re: __builtin_cpow((0,0),(0,0))

2005-03-08 Thread Robert Dewar
Marcin Dalecki wrote:
> Are we a bit too obedient today? Look, I was talking about the paper
> presented above, not about the author thereof.

But a paper like this must be read in context, and if you don't
know who the author is, you
a) don't have the context to read the paper
b) show yourself to be remarkably ignorant about the field



Re: Solaris gcc 4.0.0 static linking of libgcc.a

2005-03-08 Thread Eric Botcazou
> I must admit that my first description of the problem was a bit outdated,
> but I will give you some more information: I first tried the
> gcc-4.0-10012005 snapshot, together with the -fno-exceptions flag, and
> there was no dynamic linking of libgcc_s.so necessary. gcc-3.4.3 works in
> the same way, no dynamic linking necessary. With the newest snapshot
> gcc-4.0-07032005 there is dynamic linking necessary. The code is written in
> C++, but there is no exception handling.

That's a bit odd.  However, we would need more information to properly
diagnose the problem, if any.

-- 
Eric Botcazou


Re: __builtin_cpow((0,0),(0,0))

2005-03-08 Thread Florian Weimer
* Robert Dewar:

> Marcin Dalecki wrote:
>
>> There is no reason here and you presented no reasoning. But still
>> there is a *convenient* extension of the definition domain for the
>> power function for the zero exponent.
>
> The trouble is that there are *two* possible convenient extensions.

From a mathematical point of view, 0^0 = 1 is the more convenient one
in most contexts.  Otherwise, you suddenly lack a compact notation for
polynomials (and power series).  However, this definition is only used
in contexts where the exponent is an integer, so it's not really
relevant to the current discussion.


Re: __builtin_cpow((0,0),(0,0))

2005-03-08 Thread Robert Dewar
Florian Weimer wrote:
> * Robert Dewar:
>
>> Marcin Dalecki wrote:
>>
>>> There is no reason here and you presented no reasoning. But still
>>> there is a *convenient* extension of the definition domain for the
>>> power function for the zero exponent.
>>
>> The trouble is that there are *two* possible convenient extensions.
>
> From a mathematical point of view, 0^0 = 1 is the more convenient one
> in most contexts.  Otherwise, you suddenly lack a compact notation for
> polynomials (and power series).  However, this definition is only used
> in contexts where the exponent is an integer, so it's not really
> relevant to the current discussion.

Right, and if the exponent is an integer, then of course there is only
one limit that is relevant, and the result is indeed "obviously" 1.



Re: __builtin_cpow((0,0),(0,0))

2005-03-08 Thread Duncan Sands
Hi Florian,

> From a mathematical point of view, 0^0 = 1 is the more convenient one
> in most contexts.  Otherwise, you suddenly lack a compact notation for
> polynomials (and power series).  However, this definition is only used
> in contexts where the exponent is an integer, so it's not really
> relevant to the current discussion.

if you restrict the domain of x^y to: x>=0 (real), y an integer >=0,
and (x,y) not equal to zero, then there is a unique limit as (x,y)
converges to zero, namely 1.  So this is an example of extending by
continuity.

Ciao,

Duncan.



Re: __builtin_cpow((0,0),(0,0))

2005-03-08 Thread Ronny Peine
Well, this article was referenced by http://grouper.ieee.org/groups/754/,
so I don't think it's an unreliable source.

It would be nice if you wouldn't try to insult me, Joe Buck; that's not
very productive.

Robert Dewar wrote:
> Marcin Dalecki wrote:
>> Are we a bit too obedient today? Look, I was talking about the paper
>> presented above, not about the author thereof.
>
> But a paper like this must be read in context, and if you don't
> know who the author is, you
> a) don't have the context to read the paper
> b) show yourself to be remarkably ignorant about the field



Re: __builtin_cpow((0,0),(0,0))

2005-03-08 Thread Robert Dewar
Ronny Peine wrote:
> Well, this article was referenced by http://grouper.ieee.org/groups/754/,
> so I don't think it's an unreliable source.

Since Kahan is one of the primary movers behind 754, that's not so
surprising.  For me, 754 is authoritative significantly because of this
connection.  If there were a case where Kahan disagreed with 754, I would
suspect that the standard had made a mistake :-)


Re: Pascal front-end integration

2005-03-08 Thread Frank Heckenbach
> I also don't recommend trying to keep compatibility before 4.0; working
> with 4.0 would suffice to keep the new GPC developments usable with a GCC 
> release and the differences between 4.0 and earlier compilers are 
> sufficiently large that the saving from not trying to be compatible with 
> earlier versions would be substantial.  In general, compatibility with the 
> most recent release series should suffice; if the 4.0 series is 
> insufficiently stable, effort would better be devoted to improving it than 
> to keeping compatibility with older and less-maintained series.

I'm not only talking of genuine backend bugs (though, of course,
there may be some that are not exercised by other backends, as there
have been in various 3.x versions), but also of frontend bugs that
are not exercised with older backends. And of course, the new code
to adapt gpc to 4.x needs to be tested and fixed as well.

So to make it short, for my own productive work I'm not going to use
gpc with a backend that hasn't been tested with gpc for at least
several months. Therefore, I'm not going to do my own frontend work
on such a version, as I want to be able to try it immediately. So if
you think dropping older backends is the only way to support 4.x, a
fork would be inevitable. But I'm not convinced this is really
necessary.

Waldek Hebisch wrote:

> James A. Morrison wrote:
> >  I believe function-at-a-time goes back to gcc 3.0.  Creating function
> > trees and then passing them off to an expand function within the Pascal
> > front-end would probably work with most gcc backends.  The cgraph stuff
> > is a bit more of an issue.  I think cgraph appeared in gcc 3.4.
> 
> AFAIK 4.0 is the first back-end which can handle the whole function
> as a tree. All earlier versions had a tree walker in the front-end
> which did tree-to-RTL conversion. Since such a tree walker is not
> needed in 4.0, I think that GPC should not support function-at-a-time
> for 3.x.

Agreed, this would just be unnecessary extra work.

So IMHO the best thing for a smooth transition would be to add 4.x
support as far as we can, with conditionals, so everyone can test it,
and we can drop the earlier backends as soon as (safely) possible.

Frank

-- 
Frank Heckenbach, [EMAIL PROTECTED]
http://fjf.gnu.de/
GnuPG and PGP keys: http://fjf.gnu.de/plan (7977168E)


are link errors caused by mixing of versions?

2005-03-08 Thread Michael Cieslinski
I tried the current snapshot of gcc 4.1.0 on ACE 5.4.2 and got some ICEs.
To complete the compilation I used gcc from last week to compile the
erroneous files.  Later on I got link errors like:


/usr/bin/ld: Warning: size of symbol
`ACE_At_Thread_Exit::~ACE_At_Thread_Exit()' changed from 46 in
.shobj/POSIX_Proactor.o to 48 in .shobj/Proactor.o

`typeinfo name for ACE_Sbrk_Memory_Pool' referenced in section
`.gnu.linkonce.d._ZTI20ACE_Sbrk_Memory_Pool[typeinfo for
ACE_Sbrk_Memory_Pool]' of .shobj/Local_Name_Space.o: defined in discarded
section `.gnu.linkonce.r._ZTS20ACE_Sbrk_Memory_Pool[typeinfo name for
ACE_Sbrk_Memory_Pool]' of .shobj/Priority_Reactor.o

`vtable for ACE_Sig_Adapter' referenced in section
`.gnu.linkonce.t._ZN15ACE_Sig_AdapterD0Ev[ACE_Sig_Adapter::~ACE_Sig_Adapter()]'
of .shobj/Local_Name_Space.o: defined in discarded section
`.gnu.linkonce.d._ZTV15ACE_Sig_Adapter[vtable for ACE_Sig_Adapter]' of
.shobj/POSIX_Proactor.o

/usr/bin/ld: BFD 2.15.91.0.2 20040727 internal error, aborting at
../../bfd/elf64-x86-64.c line 1873 in elf64_x86_64_relocate_section

/usr/bin/ld: Please report this bug.


Is this behavior to be expected or should I report a bug?

Michael Cieslinski


Re: [OT] __builtin_cpow((0,0),(0,0))

2005-03-08 Thread Chris Jefferson
Ronny Peine wrote:
> Well, I'm studying mathematics, and as far as I know 0^0 is always 1
> (for real and complex numbers) and well defined even in numerical and
> theoretical mathematics. Could you point me to some publications which
> say otherwise?
>
> cu, Ronny

Just wanting to put in my mathematical opinion as well (sorry): I'm
personally of the opinion that you can define 0^0 to be whatever you
like. Define it to be 0, 1 or 27. Also feel free to define 1^1 to be
whatever you like as well; make it 400 if you like.

Maths is much less written in stone than a lot of people think. However,
the main argument here is which definition of 0^0 would be most useful.

One of the most important things, I think, is that I usually
consider floating point arithmetic to be closely linked to range
arithmetic. For this reason it is very important that the various
functions involved are continuous, as you hope that a small perturbation
of the input values will lead to a small perturbation of the output
values, else the error will grow too quickly.

Any definition of 0^0 will break this condition, as there are places
where you can approach it and be equal to 0, and places where you can
approach it and be equal to 1. Therefore it is probably best to leave it
undefined.

What we are debating here isn't really maths at all, just the definition
which will be most useful and least surprising (and perhaps also what
various standards tell us to use).

Chris



Re: [OT] __builtin_cpow((0,0),(0,0))

2005-03-08 Thread Paolo Carlini
Chris Jefferson wrote:
> What we are debating here isn't really maths at all, just the
> definition which will be most useful and least surprising (and perhaps
> also what various standards tell us to use).

Also, since we are definitely striving to consistently implement the
current C99 and C++ Standards, it's *totally* pointless discussing 0^0
in the real domain: it *must* be one.  Please, people, don't overflow the
gcc development list with this kind of discussion.  I feel guilty because
of that, by the way: please accept my apologies.  My original question
was *only* about consistency between the real case (pow) and the complex
case (cpow, __builtin_cpow, std::complex::pow).

Paolo.


Re: [Bug c++/19199] [3.3/3.4/4.0/4.1 Regression] Wrong warning about returning a reference to a temporary

2005-03-08 Thread Richard Henderson
On Mon, Mar 07, 2005 at 08:29:48PM -0700, Roger Sayle wrote:
> Which docs?!  There's currently *no* documentation for MIN_EXPR
> or MAX_EXPR in c-tree.texi.

Ah, I misremembered the docs for the rtl patterns.

  Signed minimum and maximum operations.  When used with floating point,
  if both operands are zeros, or if either operand is @code{NaN}, then
  it is unspecified which of the two operands is returned as the result.

But I think the tree patterns should be the same.

> As has been described earlier on this thread, GCC has folded the C++
> source "(a >= b ? a : b) = c" into "MAX_EXPR (a,b) = c" and equivalently
> "(a > b ? a : b) = c" into "MAX_EXPR (b,a) = c" since the creation of
> current CVS.

Which, as we've been seeing in this thread, is also a mistake.



r~


Re: [OT] __builtin_cpow((0,0),(0,0))

2005-03-08 Thread Ronny Peine
Well, you are right, this discussion is becoming a bit off topic.
I think 0^0 should be 1 in the complex case too; otherwise the complex
and real definitions would collide.
Example:
take the complex number 0+i*0; this should be handled equivalently to the
real number 0.  Otherwise the programmer would get quite irritated if he
transformed real numbers into equivalent complex numbers (a -> a+i*0).
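
(Editorial aside: a tiny C99 check of the consistency argument above; what
cpow(0,0) returns varies by libm -- some return 1+0i, others NaN+NaN*i,
which is exactly the discrepancy this thread is about.)

#include <complex.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    double r = pow(0.0, 0.0);                           /* C99 Annex F: 1.0 */
    double complex c = cpow(0.0 + 0.0*I, 0.0 + 0.0*I);  /* implementation-dependent */
    printf("pow(0,0)  = %g\n", r);
    printf("cpow(0,0) = %g + %gi\n", creal(c), cimag(c));
    return 0;
}
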
Paolo Carlini wrote:
> Chris Jefferson wrote:
>> What we are debating here isn't really maths at all, just the
>> definition which will be most useful and least surprising (and perhaps
>> also what various standards tell us to use).
>
> Also, since we are definitely striving to consistently implement the
> current C99 and C++ Standards, it's *totally* pointless discussing 0^0
> in the real domain: it *must* be one.  Please, people, don't overflow
> the gcc development list with this kind of discussion.  I feel guilty
> because of that, by the way: please accept my apologies.  My original
> question was *only* about consistency between the real case (pow) and
> the complex case (cpow, __builtin_cpow, std::complex::pow).
>
> Paolo.




Re: [OT] __builtin_cpow((0,0),(0,0))

2005-03-08 Thread Duncan Sands
Hi Paolo,

> > What we are debating here isn't really maths at all, just the 
> > definition which will be most useful and least surprising (and perhaps 
> > also what various standards tell us to use).
> 
> Also, since we are definitely striving to consistently implement the 
> current C99 and C++ Standards, it's *totally* pointless discussing 0^0 
> in the real domain: it *must* be one. Please, people, don't overflow the 
> gcc development list with this kind of discussion. I feel guilty because 
> of that, by the way: please, accept my apologies. My original question 
> was *only* about consistency between the real case (pow) and the complex 
> case (cpow, __builtin_cpow, std::complex::pow).

aren't __builtin_cpow and friends language independent?  I mean, if a
front-end sees an x^y then presumably it ends up being turned into a
call to a __builtin_?pow by the back-end.  If so, then conforming to
the C99 and C++ standards isn't enough: the standards for all gcc
supported languages need to be checked.  Since some of them require
one, as long as none of the others requires something else then it is
clear that one should be returned.  But do any other languages require
something else?

All the best,

Duncan.



Re: [OT] __builtin_cpow((0,0),(0,0))

2005-03-08 Thread Paolo Carlini
Duncan Sands wrote:
> aren't __builtin_cpow and friends language independent?  I mean, if a
> front-end sees an x^y then presumably it ends up being turned into a
> call to a __builtin_?pow by the back-end.  If so, then conforming to
> the C99 and C++ standards isn't enough: the standards for all gcc
> supported languages need to be checked.

This is a good question, indeed.  I don't think Ada, for instance, is using
__builtin_cpow internally, or cpow, for that matter, since many widespread
libc implementations have cpow(0,0) returning (NaN, NaN), contra the Ada
RM, as mentioned by Robert Dewar.
Paolo.


Re: __builtin_cpow((0,0),(0,0))

2005-03-08 Thread Ronny Peine
Maybe I should make it clearer why 0^x is not defined for real
exponents x, and not continuous in any way.

Let G be a set ("Menge" in German) and op : G x G -> G, (a,b) -> a op b.
If op is associative, then (G,op) is called a semigroup ("Halbgruppe").
Exponentiation is then defined as:
a from G, n from |N>0:
a^1 = a; a^n = a op a^(n-1)
If a neutral element is in G (usually called the "1"), then a^0 is defined
as 1.

Example: (Z,+) is a semigroup (it's even a group).  There a^n = a + a
+ a + ... + a (n times).

For real exponents this is not covered by the above (example: what would
2^pi be?), so a definition in accordance with the previous one was
introduced:
For A,X from |R, A>0:
A^X = exp(X*ln(A))

With exp(N*X) = exp(X)^N (which can be proved by induction) it can
be seen that this is in accordance with the previous definition (if X is
from |N).

The rule a^(1/n) = n-th root of a comes from the proof:
Let a be from |R, a>0, and p from Z, q from |N>1; then:
a^p = exp(p * ln(a)) = exp(q * (p/q) * ln(a)) = exp(p/q * ln(a))^q =
(a^(p/q))^q => a^(p/q) = q-th root of a^p (note that this is only true
for a>0).

For 0^x there is no such definition except when x is from |N.  Therefore
0^0 is defined, according to the first rule, as 1 (because we look at
(|R,*) with a^n = a*a*a* ... *a (n times) and the neutral element 1,
so a^0 = 1 for every element in |R).

I hope this makes things clearer for those who don't believe 0^0 = 1
in the real case.

cu, Ronny
Robert Dewar wrote:
> Ronny Peine wrote:
>> Well, this article was referenced by
>> http://grouper.ieee.org/groups/754/, so I don't think it's an
>> unreliable source.
>
> Since Kahan is one of the primary movers behind 754, that's not so
> surprising.  For me, 754 is authoritative significantly because of this
> connection.  If there were a case where Kahan disagreed with 754, I would
> suspect that the standard had made a mistake :-)




Re: [OT] __builtin_cpow((0,0),(0,0))

2005-03-08 Thread Robert Dewar
Paolo Carlini wrote:
> Duncan Sands wrote:
>> aren't __builtin_cpow and friends language independent?  I mean, if a
>> front-end sees an x^y then presumably it ends up being turned into a
>> call to a __builtin_?pow by the back-end.  If so, then conforming to
>> the C99 and C++ standards isn't enough: the standards for all gcc
>> supported languages need to be checked.
>
> This is a good question, indeed.  I don't think Ada, for instance, is using
> __builtin_cpow internally, or cpow, for that matter, since many widespread
> libc implementations have cpow(0,0) returning (NaN, NaN), contra the Ada
> RM, as mentioned by Robert Dewar.
>
> Paolo.

Well, if you tell me there are people out there implementing cpow
with log and exp, that's enough for me to decide that Ada should
continue to stay away (the Ada RM has accuracy requirements that
would preclude a broken implementation of this kind) :-)


Question about "#pragma pack(n)" and "-fpack-struct"

2005-03-08 Thread feng qiu
Hello!
Consider the following simple code.
#pragma pack(2)
struct TEST {
char x1;
int x2;
};
#pragma pack()
main()
{
struct TEST t;
printf("%d\n", sizeof(t));
}
When I compile with gcc, the output is 6.  But the output is 5 when building
with "-fpack-struct", and then the "#pragma pack(2)" seems to be ignored.
What is the reason?  Is there any way to combine them?  If I want to modify
the gcc source code, what should I do?
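
(Editorial illustration of the two layouts being compared; the struct names
below are made up, and the offsets assume a typical 32-bit target with a
4-byte int, matching the sizes reported above.)

#include <stdio.h>
#include <stddef.h>

#pragma pack(2)                 /* members aligned to at most 2 bytes */
struct PACK2 { char x1; int x2; };
#pragma pack()

/* roughly what -fpack-struct does for every struct */
struct PACK1 { char x1; int x2; } __attribute__((packed));

int main(void)
{
    /* #pragma pack(2): x1 at offset 0, x2 at offset 2 -> size 6 */
    printf("pack(2): x2 at %u, size %u\n",
           (unsigned) offsetof(struct PACK2, x2), (unsigned) sizeof(struct PACK2));
    /* fully packed:   x1 at offset 0, x2 at offset 1 -> size 5 */
    printf("packed : x2 at %u, size %u\n",
           (unsigned) offsetof(struct PACK1, x2), (unsigned) sizeof(struct PACK1));
    return 0;
}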

Thanks in advance,
Diterlish



Deprecating min/max extension in C++

2005-03-08 Thread Giovanni Bajo
Andrew Pinski <[EMAIL PROTECTED]> wrote:

>> Well, that sounds largely impossible. Can you point exactly which bug
>> are you talking of? I know for a fact that the extension itself has
>> always worked for basic rvalue usage, with basic types. Instead, I would
>> not be surprised if some more complex usage of it used to be (or still
>> is) broken, like weird lvalue contexts, usage in templates, operator
>> overloading or similar.
>
> Yes this was PR 19068 and bug 18548.


Thanks. Nonetheless, both are regressions, and both show a rather complex
situation which includes pointer tricks. My statement that basic usage of
the extension has always worked still holds.
-- 
Giovanni Bajo



Deprecating min/max extension in C++

2005-03-08 Thread Giovanni Bajo
Mark Mitchell <[EMAIL PROTECTED]> wrote:

> IMO, if these are C++-only, it's relatively easy to deprecate these
> extensions -- but I'd like to hear from Jason and Nathan, and also the
> user community before we do that.  Of all the extensions we've had, this
> one really hasn't been that problematic.

I would prefer them to stay. My reasons:

1) std::min() and std::max() are not exact replacements.  For instance, you
cannot do std::min(3, 4.0f) because the arguments are of different types.
Also, you cannot use references to non-const types as arguments.  The min/max
extensions do not suffer from these problems (I consider the former very
problematic, and the latter just annoying).

2) The min/max assignments are very useful. I'm speaking of the
(undocumented?) ">?=" and "<?=" operators.  For instance:

for (int i=0;i<100;i++)
  max_computed >?= Compute(i) * factor;

instead of:

for (int i=0;i<100;i++)
{
  float cur = Compute(i) * factor;
  if (max_computed < cur)
    max_computed = cur;
}

I find the former more compact, more expressive and much easier to read (of
course, you have to know the syntax). I find it also less error-prone since
there is no duplication, nor the use of a variable to prevent side-effects.
I suppose that if we drop ">?" and "<?", we should drop ">?=" and "<?="
as well.

Re: __builtin_cpow((0,0),(0,0))

2005-03-08 Thread Robert Dewar
Ronny Peine wrote:
> I hope this makes things clearer for those who don't believe 0^0 = 1
> in the real case.

Believe??? So now it's a matter of religion.  Anyway, your bogus proof is
irrelevant for the real case, since the language standard is clear in
the real case anyway.  It really is completely pointless to argue this
from a mathematical point of view; the only proper viewpoint is that
of the standard.  You would do better to go read that!


Re: __builtin_cpow((0,0),(0,0))

2005-03-08 Thread Ronny Peine
This proof is absolutely correct and in no way bogus; it is taught to
nearly every mathematics student, PERIOD.
But you are right: if the standard handles this otherwise, then this
doesn't help in any case.

Robert Dewar wrote:
> Ronny Peine wrote:
>> I hope this makes things clearer for those who don't believe 0^0 = 1
>> in the real case.
>
> Believe??? So now it's a matter of religion.  Anyway, your bogus proof is
> irrelevant for the real case, since the language standard is clear in
> the real case anyway.  It really is completely pointless to argue this
> from a mathematical point of view; the only proper viewpoint is that
> of the standard.  You would do better to go read that!



Re: [OT] __builtin_cpow((0,0),(0,0))

2005-03-08 Thread Duncan Sands
Hi Robert,

> Well, if you tell me there are people out there implementing cpow
> with log and exp, that's enough for me to decide that Ada should
> continue to stay away (the Ada RM has accuracy requirements that
> would preclude a broken implementation of this kind) :-)

the reference manual allows for a "relaxed mode", which doesn't have
those accuracy requirements.  I guess -ffast-math and the use of
builtins would be appropriate in the relaxed mode.  Do you plan to
implement such a mode one day?

Just curious.

All the best,

Duncan.



Re: Deprecating min/max extension in C++

2005-03-08 Thread Jonathan Wakely
On Tue, Mar 08, 2005 at 02:06:48PM +0100, Giovanni Bajo wrote:

> Mark Mitchell <[EMAIL PROTECTED]> wrote:
> 
> > IMO, if these are C++-only, it's relatively easy to deprecate these
> > extensions -- but I'd like to hear from Jason and Nathan, and also the
> > user community before we do that.  Of all the extensions we've had, this
> > one really hasn't been that problematic.
> 
> I would prefer them to stay. My reasons:
> 
> 1) std::min() and std::max() are not exact replacements. For instance, you
> cannot do std::min(3, 4.0f) because the arguments are of different type.
> Also, you cannot use reference to non-const types as arguments. The min/max
> extensions do not suffer from these problems (I consider the former very
> problematic, and the latter just annoying).

I was about to reply making the same point about template argument
deduction.


Whether or not the extensions get deprecated, shouldn't the docs for
them at least mention std::min and std::max, rather than only referring
to the infamous, flawed macros?

* gcc/doc/extend.texi: Mention std::min and std::max in docs for
min/max operators.

Patch OK for mainline?

jon

-- 
"In theory, practice and theory are the same,
 but in practice they are different."
- Larry McVoy
Index: gcc/doc/extend.texi
===================================================================
RCS file: /cvs/gcc/gcc/gcc/doc/extend.texi,v
retrieving revision 1.241
diff -u -p -b -B -r1.241 extend.texi
--- gcc/doc/extend.texi 25 Feb 2005 18:29:28 -  1.241
+++ gcc/doc/extend.texi 8 Mar 2005 13:00:45 -
@@ -9157,6 +9157,10 @@ Since @code{<?} and @code{>?} are built
 handle expressions with side-effects;  @w{@code{int min = i++ <? j++;}}
 works correctly.
 
+The C++ standard library functions (@code{std::min} and @code{std::max})
+also correctly handle expressions with side-effects,
+e.g. @w{@code{int min = std::min(i++, j++);}}
+
 @node Volatiles
 @section When is a Volatile Object Accessed?
 @cindex accessing volatiles


Re: [OT] __builtin_cpow((0,0),(0,0))

2005-03-08 Thread Robert Dewar
Duncan Sands wrote:
> Hi Robert,
>
>> Well, if you tell me there are people out there implementing cpow
>> with log and exp, that's enough for me to decide that Ada should
>> continue to stay away (the Ada RM has accuracy requirements that
>> would preclude a broken implementation of this kind) :-)
>
> the reference manual allows for a "relaxed mode", which doesn't have
> those accuracy requirements.  I guess -ffast-math and the use of
> builtins would be appropriate in the relaxed mode.  Do you plan to
> implement such a mode one day?
>
> Just curious.

No plans, but also note that the use of log/exp for **, besides
being horribly inaccurate, is also inefficient.  Fast accurate
math is achievable; we don't see a need for a relaxed mode.


problem with the scheduler

2005-03-08 Thread Kunal Parmar
Hello,
I am working with c6x processor from TI. It has a VLIW architecture.
It has 32 registers namedly a0-a15 and b0-b15. b15 is used as the SP
in the current port.
I am facing a problem with the scheduler of GCC. 
Following is the c code I was compiling - 

***
int mult(int a,int b) {
  int result=0,flag;

  if(b<0)
flag=1;
  else
flag=-1;
  for(;b;b+=flag)
result += a;
  return result;
}

int main() {
  return mult(5,4);
}


Following is part of the assembly generated by GCC - 

*
mult:
stw .D2T1   a15,*--b15
||  mvk 0,  b4
mv  b15,a15
ldw .D1T2   *+a15[3],   b1
ldw .D1T1   *+a15[2],   a3
nop 3
cmplt   b1, b4, b0
[ b0] mvkl  L2, b4   ;
[ b0] mvkh  L2, b4
[ b0] b b4
nop 5
[ b1] mvkl  L5, b4
||  mvk 0,  a4
[ b1] mvkh  L5, b4
[ b1] b b4
nop 5
;; problem - this should have been scheduled before
;; the branch instruction
mvk -1, b3
L9:
ldw .D1T2   *+a15[1],   b14
||  mv  a15,b15
ldw .D2T1   *b15++, a15
add 4,  b15,b15
nop 2
b   .S2 b14
nop 5
L2:
mvk 0,  a4
||  mvk 1,  b3
L5:
add b3, b1, b1
||  add a3, a4, a4
[ b1] mvkl  L5, b4
[ b1] mvkh  L5, b4
[ b1] b b4
nop 5
b   .S2 L9
nop 5
***


problem with the scheduler in gcc-4.0-20040911

2005-03-08 Thread Kunal Parmar
Hello,
I am working with the c6x processor from TI.  It has a VLIW architecture.
It has 32 registers, namely a0-a15 and b0-b15; b15 is used as the SP
in the current port.
I am facing a problem with the scheduler of GCC.

Following is the C code I was compiling -

***
int mult(int a,int b) {
 int result=0,flag;

 if(b<0)
   flag=1;
 else
   flag=-1;
 for(;b;b+=flag)
   result += a;
 return result;
}

int main() {
 return mult(5,4);
}



Following is part of the assembly generated by GCC.  The code was compiled with -O2.


*
mult:
    stw .D2T1   a15,*--b15     ; D1,D2 are functional units,
                               ; T1,T2 are transmission paths
||  mvk 0,  b4                 ; || means this instruction is executed
                               ; in parallel with the previous instruction
    mv  b15,a15
    ldw .D1T2   *+a15[3],   b1
    ldw .D1T1   *+a15[2],   a3
    nop 3                      ; equivalent to 3 nops
    cmplt   b1, b4, b0
    [ b0] mvkl  L2, b4         ; [] means conditional execution: the
                               ; instruction is executed if b0 is TRUE
    [ b0] mvkh  L2, b4
    [ b0] b b4
    nop 5
    [ b1] mvkl  L5, b4
||  mvk 0,  a4
    [ b1] mvkh  L5, b4
    [ b1] b b4
    nop 5
;; problem - the instruction below should have been scheduled before
;; the branch instruction, because it will not be executed if the
;; branch is taken
    mvk -1, b3
L9:
   ldw .D1T2   *+a15[1],   b14
||  mv  a15,b15
   ldw .D2T1   *b15++, a15
   add 4,  b15,b15
   nop 2
   b   .S2 b14
   nop 5
L2:
   mvk 0,  a4
||  mvk 1,  b3
L5:
   add b3, b1, b1
||  add a3, a4, a4
   [ b1] mvkl  L5, b4
   [ b1] mvkh  L5, b4
   [ b1] b b4
   nop 5
   b   .S2 L9
   nop 5
***


Following is the debugging dump by the scheduler -

**
;;   ==
;;   -- basic block 1 from 17 to 89 -- after reload
;;   ==

;;   --- forward dependences: 

;;   --- Region Dependences --- b 1 bb 0
;;      insn  code  bb  dep  prio  cost  reservation
;;      ----  ----  --  ---  ----  ----  -----------
;;      17     5    0    0    1     1    S1   :
;;      18     5    0    0    1     1    S2   :
;;      90    67    0    0    8     1    S2   : 89 91
;;      91    66    0    1    7     1    S2   : 89
;;      89   100    0    2    6     6    S2   :

;;  Ready list after queue_to_ready:90  18  17
;;  Ready list after ready_sort:18  17  90
;;  Ready list (t =  0):18  17  90
;;0--> 90   (b1) b4=b4+low(L25):S2
;;  dependences resolved: insn 91 into queue with cost=1
;;  Ready-->Q: insn 91: queued for 1 cycles.
;;  Ready list (t =  0):18  17
;;0--> 17   a4=0x0 :S1
;;  Ready list (t =  0):18
;;  Ready-->Q: insn 18: queued for 1 cycles.
;;  Ready list (t =  0):
;;  Second chance
;;  Q-->Ready: insn 18: moving to ready without stalls
;;  Q-->Ready: insn 91: moving to ready without stalls
;;  Ready list after queue_to_ready:91  18
;;  Ready list after ready_sort:18  91
;;  Ready list (t =  1):18  91
;;1--> 91   (b1) {b4=high(L25);use b4;}:S2
;;  dependences resolved: insn 89 into queue with cost=1
;;  Ready-->Q: insn 89: queued for 1 cycles.
;;  Ready list (t =  1):18
;;  Ready-->Q: insn 18: queued for 1 cycles.
;;  Ready list (t =  1):
;;  Second chance
;;  Q-->Ready: insn 18: moving to ready without stalls
;;  Q-->Ready: insn 89: moving to ready without stalls
;;  Ready list after queue_to_ready:89  18
;;  Ready list after ready_sort:18  89
;;  Ready list (t =  2):18  89
;;2--> 89   (b1) pc=b4 :S2
;;  Ready list (t =  2):18
;;  Ready-->Q: insn 18: queued for 1 cycles.
;;  Ready list (t =  2):
;;   

Re: Deprecating min/max extension in C++

2005-03-08 Thread Andrew Pinski
On Mar 8, 2005, at 8:04 AM, Giovanni Bajo wrote:
> Andrew Pinski <[EMAIL PROTECTED]> wrote:
>>> Well, that sounds largely impossible. Can you point exactly which bug
>>> are you talking of? I know for a fact that the extension itself has
>>> always worked for basic rvalue usage, with basic types. Instead, I
>>> would not be surprised if some more complex usage of it used to be
>>> (or still is) broken, like weird lvalue contexts, usage in templates,
>>> operator overloading or similar.
>>
>> Yes this was PR 19068 and bug 18548.
>
> Thanks. Nonetheless, both are regressions, and both show a rather complex
> situation which includes pointer tricks. My statement that basic usage
> of the extension has always worked still holds.

The pointer tricks were very simple.  There is more likely another
way to reproduce the bug, but it would be hard to find one which is as
simple as the testcases in those PRs.

-- Pinski


problem with dependencies in gcc-4.0-20040911

2005-03-08 Thread Kunal Parmar
Hello,
I am working with a VLIW processor and GCC-4.0-20040911. There is a
problem in the dependency calculation of GCC. GCC is giving
write-after-read a higher priority than write-after-write. Thus, as in
the following code, GCC gives a write-after-read dependency between
the 2 instructions. Due to this the 2 instructions are scheduled
together.

stw --b15,*a15  ;; pre-decrement b15 and store its contents in memory
add 4,a15,b15  ;; b15 = a15+4

The first instruction reads b15 and then writes b15(pre-decrement).
The second writes to b15. Thus there exists a write-after-read and a
write-after-write dependency from the second instruction on the first.
But GCC keeps only the more restrictive dependency, which is determined
by reg-notes.def (line 98).  The updating of the dependency takes place
in sched-deps.c (line 304).  Accordingly, GCC puts a write-after-read
dependency between the 2 instructions.  Due to this, the 2 instructions
are scheduled for execution in parallel.  This results in 2 writes to
the same register in the same clock cycle.

For now, I have interchanged the 2 lines in reg-notes.def (lines 98
and 99).

Please correct me if I am wrong.
Thanks in advance.
Regards,
Kunal.


Re: problem with the scheduler in gcc-4.0-20040911

2005-03-08 Thread Vladimir Makarov
Kunal Parmar wrote:
> Following is the debugging dump by the scheduler -
>
> **
> ;;   ==
> ;;   -- basic block 1 from 17 to 89 -- after reload
> ;;   ==
> ;;   --- forward dependences:
> ;;   --- Region Dependences --- b 1 bb 0
> ;;      insn  code  bb  dep  prio  cost  reservation
> ;;      ----  ----  --  ---  ----  ----  -----------
> ;;      17     5    0    0    1     1    S1   :
> ;;      18     5    0    0    1     1    S2   :
> ;;      90    67    0    0    8     1    S2   : 89 91
> ;;      91    66    0    1    7     1    S2   : 89
> ;;      89   100    0    2    6     6    S2   :
>
> As can be seen in the assembly dump, one instruction is scheduled
> after the branch instruction.  The branch is a conditionally executed
> branch instruction.  This is incorrect because if the branch is
> executed then the instruction after that will not be executed.
> Please help.

There is no dependency between insn 18 and 89 (the jump).  The scheduler
automatically adds anti-dependencies between a jump and previous sets.  So
I think your jump insn is not represented as an RTL JUMP_INSN.  The RTL
dump after the scheduler would help more.

Vlad



Re: matching constraints in asm operands question

2005-03-08 Thread Michael Matz
Hi,

On Sat, 5 Mar 2005 [EMAIL PROTECTED] wrote:

> > Well, I assumed the same thing when I started poking at that code, but
> > then someone pointed out that it didn't actually work that way, and as
> > I recall the code does in fact assume a register.  I certainly would
> > not object to making '+' work properly for memory operands, but simply
> > asserting that it already does is wrong.
> 
> The code in reload to make non-matching operands match assumes a
> register. However, a match from a plus should always kept in sync
> (except temporarily half-way through a substitution, since we now
> unshare).  If it isn't, that's a regression.

In former times an in-out constraint was simply translated into a matching
constraint.  So it broke when no register was allowed for it.
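
(Editorial illustration of the kind of in-out operand under discussion: an
x86 read-modify-write on a memory operand.  Whether "+m" was translated
correctly is precisely what was in question here.)

int counter;

void bump (void)
{
  /* "+m": the same memory location is both read and written.  */
  __asm__ ("incl %0" : "+m" (counter));
}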

Jason added a warning to that effect in 2003
  http://gcc.gnu.org/ml/gcc-patches/2003-12/msg01491.html
RTH fixed the problem of this translation in tree-ssa and hence also 
removed the warning
  http://gcc.gnu.org/ml/gcc-patches/2004-05/msg00438.html

I now see that I had an objection to that one, as he also applied the
removal of the warning to 3.4 without it being clear to me that we were
doing the right thing as Richard claimed, but that seems to have fallen
through the cracks.

> Do you have a testcase, and/or can point out the code that introduces
> the inconsistency in the rtl?


Ciao,
Michael.


Re: Solaris gcc 4.0.0 static linking of libgcc.a

2005-03-08 Thread Roland Lengfeldner
> > I must admit that my first description of the problem was a bit outdated,
> > but I will give you some more information: I first tried the
> > gcc-4.0-10012005 snapshot, together with the -fno-exceptions flag, and
> > there was no dynamic linking of libgcc_s.so necessary. gcc-3.4.3 works
> > in the same way, no dynamic linking necessary. With the newest snapshot
> > gcc-4.0-07032005 there is dynamic linking necessary. The code is written
> > in C++, but there is no exception handling.
> 
> That's a bit odd.  However, we would need more information to properly
> diagnose the problem, if any.
> 

ok, it seems that it was my error: I recompiled gcc-4.0-10012005 to verify
the behaviour, but libgcc_s.so needs to be linked in any case.  gcc-3.4
works because I changed the specs file, but at first I thought it was due
to the compiler flags, which wasn't the case.

But what I don't understand is why libgcc_s.so is needed.  I have no
exception handling in my sources, and furthermore I use -fno-exceptions.  I
understand that I should link libgcc_eh.so dynamically for exception
handling.  I did an nm on the resulting .so, and there were no references to
GCC_xx, only to SYSVABI_xx and SUNW_xx.  What information would you need for
further investigation, or is it simply the case that on Solaris, libgcc_s.so
is always linked into C++ programs?

thanks for your help,

regards,
Roland Lengfeldner



Re: problem with dependencies in gcc-4.0-20040911

2005-03-08 Thread Vladimir Makarov
Kunal Parmar wrote:
> Hello,
> I am working with a VLIW processor and GCC-4.0-20040911.  There is a
> problem in the dependency calculation of GCC.  GCC is giving
> write-after-read a higher priority than write-after-write.  Thus, as in
> the following code, GCC gives a write-after-read dependency between
> the 2 instructions.  Due to this the 2 instructions are scheduled
> together.
>
> stw --b15,*a15  ;; pre-decrement b15 and store its contents in memory
> add 4,a15,b15  ;; b15 = a15+4
>
> The first instruction reads b15 and then writes b15 (pre-decrement).
> The second writes to b15.  Thus there exists a write-after-read and a
> write-after-write dependency from the second instruction on the first.
> But GCC keeps only the more restrictive dependency, which is determined
> by reg-notes.def (line 98).  The updating of the dependency takes place
> in sched-deps.c (line 304).  Accordingly, GCC puts a write-after-read
> dependency between the 2 instructions.  Due to this, the 2 instructions
> are scheduled for execution in parallel.  This results in 2 writes to
> the same register in the same clock cycle.
>
> For now, I have interchanged the 2 lines in reg-notes.def (lines 98
> and 99).

Please use the variable_issue or dfa_new_cycle hook to solve the problem.
You can use the Itanium port as an example of how to solve WAW conflicts
for VLIW processors.
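
(Editorial sketch of the direction suggested above, not taken from any real
port: the variable_issue hook is called for each insn issued in the current
cycle, so a port can keep per-cycle bookkeeping there; rejecting a
conflicting insn would typically be done in a companion hook such as
dfa_new_cycle.  The c6x_* names and the helper below are made up.)

static int
c6x_sched_variable_issue (FILE *dump ATTRIBUTE_UNUSED,
                          int verbose ATTRIBUTE_UNUSED,
                          rtx insn, int can_issue_more)
{
  if (INSN_P (insn))
    c6x_note_regs_written_this_cycle (insn);  /* hypothetical bookkeeping helper */
  return can_issue_more - 1;
}

#undef  TARGET_SCHED_VARIABLE_ISSUE
#define TARGET_SCHED_VARIABLE_ISSUE c6x_sched_variable_issue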

Vlad



Bug 20375 - ia64 varadic regression

2005-03-08 Thread Nathan Sidwell
Bug 20375 is logged as a C++ bug, but it is a middle end
bug that cannot be expressed in C.  Here's a reduced testcase
union U
{
  void *m[7];
};
struct C;
void f(struct C *c, float f, union U, ...)
{ }
Notice that the last specified argument 'union U' has no name.  When
compiled for ia64-hp-hpux11.23 with -mlp64 this ICEs because of this
bit of code in assign_parm_find_data_types:
  /* Set LAST_NAMED if this is last named arg before last anonymous args.  */
  if (current_function_stdarg)
{
  tree tem;
  for (tem = TREE_CHAIN (parm); tem; tem = TREE_CHAIN (tem))
if (DECL_NAME (tem))
  break;
  if (tem == 0)
data->last_named = true;
}
That triggers on the float argument, not the union.  Naming the union
makes it trigger on the union, and compilation succeeds.  This is
clearly wrong.
The comment doesn't make sense; arguments with and without names can
be freely intermixed (in C++), and should not affect the ABI.  As this
is to do with variadic parameters, is this really talking about the
last typed argument before the variadic ones?  If that's so, why isn't
the test just to see if TREE_CHAIN (parm) is NULL?  Later comments in
function.c mention that LAST_NAMED is misnamed.  I'm confused about what
this is really testing for.
nathan
--
Nathan Sidwell::   http://www.codesourcery.com   :: CodeSourcery LLC
[EMAIL PROTECTED]:: http://www.planetfall.pwp.blueyonder.co.uk


Re: problem with the scheduler in gcc-4.0-20040911

2005-03-08 Thread Kunal Parmar
Hello,
I have attached the dump after the scheduler. The branch instruction
is a conditionally executed branch instruction. So it is represented
as RTL COND_EXEC.
Regards,
Kunal



On Tue, 08 Mar 2005 10:14:05 -0500, Vladimir Makarov
<[EMAIL PROTECTED]> wrote:
> Kunal Parmar wrote:
> 
> >Following is the debugging dump by the scheduler -
> >
> >**
> >;;   ==
> >;;   -- basic block 1 from 17 to 89 -- after reload
> >;;   ==
> >
> >;;   --- forward dependences: 
> >
> >;;   --- Region Dependences --- b 1 bb 0
> >;;      insn  code  bb  dep  prio  cost  reservation
> >;;      ----  ----  --  ---  ----  ----  -----------
> >;;      17     5    0    0    1     1    S1   :
> >;;      18     5    0    0    1     1    S2   :
> >;;      90    67    0    0    8     1    S2   : 89 91
> >;;      91    66    0    1    7     1    S2   : 89
> >;;      89   100    0    2    6     6    S2   :
> >
> >
> >As can be seen in the assembly dump, one instruction is scheduled
> >after the branch instruction. The branch is a conditionally executed
> >branch instruction. This is incorrect because if the branch is
> >executed then the instruction after that will not be executed.
> >Please help.
> >
> >
> >
> There is no dependency between insn 18 and 89 (the jump).  The scheduler
> automatically adds anti-dependencies between a jump and previous sets.  So
> I think your jump insn is not represented as an RTL JUMP_INSN.  The RTL
> dump after the scheduler would help more.
> 
> Vlad
> 
>


a.c.32.sched2
Description: Binary data


Re: problem with the scheduler in gcc-4.0-20040911

2005-03-08 Thread Daniel Jacobowitz
On Tue, Mar 08, 2005 at 09:38:19PM +0530, Kunal Parmar wrote:
> Hello,
> I have attached the dump after the scheduler. The branch instruction
> is a conditionally executed branch instruction. So it is represented
> as RTL COND_EXEC.

Vladimir was right.  It's an INSN, when it should be a JUMP_INSN.
Your backend is probably using emit_insn when it should be using
emit_jump_insn.
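
(Editorial sketch of the distinction pointed out above, with made-up operand
names; the point is only that a conditional branch has to be emitted with
emit_jump_insn so that it becomes a JUMP_INSN rather than a plain INSN.)

/* Wrong: the branch is a plain INSN, so the scheduler may move other
   insns past it.  */
emit_insn (gen_rtx_SET (VOIDmode, pc_rtx,
                        gen_rtx_IF_THEN_ELSE (VOIDmode, cond,
                                              label_ref, pc_rtx)));

/* Right: the same pattern emitted as a jump.  */
emit_jump_insn (gen_rtx_SET (VOIDmode, pc_rtx,
                             gen_rtx_IF_THEN_ELSE (VOIDmode, cond,
                                                   label_ref, pc_rtx)));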

-- 
Daniel Jacobowitz
CodeSourcery, LLC


Re: Bug 20375 - ia64 varadic regression

2005-03-08 Thread Andrew Pinski
On Mar 8, 2005, at 10:59 AM, Nathan Sidwell wrote:
> Bug 20375 is logged as a C++ bug, but it is a middle end
> bug that cannot be expressed in C.  Here's a reduced testcase:
>
> union U
> {
>   void *m[7];
> };
>
> struct C;
>
> void f(struct C *c, float f, union U, ...)
> { }

I almost want to say this is undefined, as there is no way to get
at the variadic arguments.
-- Pinski


Re: Bug 20375 - ia64 varadic regression

2005-03-08 Thread Andreas Schwab
Andrew Pinski <[EMAIL PROTECTED]> writes:

> On Mar 8, 2005, at 10:59 AM, Nathan Sidwell wrote:
>
>> Bug 20375 is logged as a C++ bug, but it is a middle end
>> bug that cannot be expressed in C.  Here's a reduced testcase
>>
>> union U
>> {
>>   void *m[7];
>> };
>>
>> struct C;
>>
>> void f(struct C *c, float f, union U, ...)
>> { }
>
> I almost want to say this is undefined as there is no way to get
> at the variadic arguments.

If you never invoke va_start this is not a problem.

Andreas.

-- 
Andreas Schwab, SuSE Labs, [EMAIL PROTECTED]
SuSE Linux Products GmbH, Maxfeldstraße 5, 90409 Nürnberg, Germany
Key fingerprint = 58CA 54C7 6D53 942B 1756  01D3 44D5 214B 8276 4ED5
"And now for something completely different."


Re: [Bug c++/19199] [3.3/3.4/4.0/4.1 Regression] Wrong warning about returning a reference to a temporary

2005-03-08 Thread Alexandre Oliva
On Mar  8, 2005, Richard Henderson <[EMAIL PROTECTED]> wrote:

>> As has been described earlier on this thread, GCC has folded the C++
>> source "(a >= b ? a : b) = c" into "MAX_EXPR (a,b) = c" and equivalently
>> "(a > b ? a : b) = c" into "MAX_EXPR (b,a) = c" since the creation of
>> current CVS.

> Which, as we've been seeing in this thread, is also a mistake.

Not quite.  The folding above is not a mistake at all, if all the
expressions are exactly as displayed.  The problem occurs when we
turn:

  ((int)a > (int)b ? a : b) = c

into

  (__typeof(a))(MAX_EXPR ((int)a, (int)b)) = c

and avoiding this kind of lvalue-dropping transformation is exactly
what the patch I proposed fixes.
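
(A small editorial example of the distinction drawn above; the variable
names are arbitrary.)

void example ()
{
  int a = 1, b = 2, c = 3;
  (a >= b ? a : b) = c;            /* folding to MAX_EXPR (a, b) = c keeps an lvalue */

  long d = 1, e = 2;
  ((int)d > (int)e ? d : e) = c;   /* an lvalue of type long in the source; folding it to
                                      (long) MAX_EXPR ((int)d, (int)e) = c would wrap the
                                      result in a cast and drop the lvalue */
}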

-- 
Alexandre Oliva http://www.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: problem with the scheduler in gcc-4.0-20040911

2005-03-08 Thread Kunal Parmar
Hello,
Thanks alot Vladimir and Daniel.
Regards,
Kunal


On Tue, 8 Mar 2005 11:12:46 -0500, Daniel Jacobowitz <[EMAIL PROTECTED]> wrote:
> On Tue, Mar 08, 2005 at 09:38:19PM +0530, Kunal Parmar wrote:
> > Hello,
> > I have attached the dump after the scheduler. The branch instruction
> > is a conditionally executed branch instruction. So it is represented
> > as RTL COND_EXEC.
> 
> Vladimir was right.  It's an INSN, when it should be a JUMP_INSN.
> Your backend is probably using emit_insn when it should be using
> emit_jump_insn.
> 
> --
> Daniel Jacobowitz
> CodeSourcery, LLC
>


Re: Bug 20375 - ia64 varadic regression

2005-03-08 Thread Mark Mitchell
Nathan Sidwell wrote:
> Notice that the last specified argument 'union U' has no name.  When
> compiled for ia64-hp-hpux11.23 with -mlp64 this ICEs because of this
> bit of code in assign_parm_find_data_types:
>
>   /* Set LAST_NAMED if this is last named arg before last anonymous args.  */
>   if (current_function_stdarg)
>     {
>       tree tem;
>       for (tem = TREE_CHAIN (parm); tem; tem = TREE_CHAIN (tem))
>         if (DECL_NAME (tem))
>           break;
>       if (tem == 0)
>         data->last_named = true;
>     }
>
> That triggers on the float argument, not the union.  Naming the union
> makes it trigger on the union, and compilation succeeds.  This is
> clearly wrong.
>
> The comment doesn't make sense; arguments with and without names can
> be freely intermixed (in C++), and should not affect the ABI.  As this
> is to do with variadic parameters, is this really talking about the
> last typed argument before the variadic ones?

Yes, it must be.  As you say, the current code is just plain bogus; the
idea that the ABI would change for a C++ function depending on whether
or not the argument is named is wrong, as, for example, function
pointers would cease to work.  I suspect this is a relic of our old
implementation of varargs, and depended on the fact that C functions do
not have unnamed arguments.

However, the "named_arg" bit (which depends on last_named) is indeed
passed around to all kinds of functions.  It looks like that argument is
unused in most backends.  (For example, alpha_pass_by_reference ignores
the "named_arg" flag.)

I suppose you could start by going through the hooks which take a
named_arg flag, verifying that the flag is unused in all back ends, and
removing the flag.  Then, you could also remove it from function.c.

Oh, dear, the SH back end actually uses it.  So, SH is broken, at least
for C++.

I'm not sure what that means, but I'd be tempted just to declare it broken.
--
Mark Mitchell
CodeSourcery, LLC
[EMAIL PROTECTED]
(916) 791-8304


Re: Bug 20375 - ia64 varadic regression

2005-03-08 Thread Jeffrey A Law
On Tue, 2005-03-08 at 09:06 -0800, Mark Mitchell wrote:
> Nathan Sidwell wrote:
> 
> > Notice that the last specified argument 'union U' has no name.  when
> > compiled for ia64-hp-hpux11.23 with -mlp64 this ICEs because of this
> > bit of code in assign_parm_find_data_types
> > 
> >   /* Set LAST_NAMED if this is last named arg before last anonymous 
> > args.  */
> >   if (current_function_stdarg)
> > {
> >   tree tem;
> >   for (tem = TREE_CHAIN (parm); tem; tem = TREE_CHAIN (tem))
> > if (DECL_NAME (tem))
> >   break;
> >   if (tem == 0)
> > data->last_named = true;
> > }
> > 
> > That triggers on the float argument, not union.  Naming the union
> > makes it trigger on the union, and compilation succeeds.  This is
> > clearly wrong.
> > 
> > The comment doesn't make sense, arguments with and without names can
> > be freely intermixed (in C++), and should not affect the ABI.  As this
> > is to do with variadic parameters, is this really talking about the
> > last typed argument before the variadic ones? 
> 
> Yes, it must be.  As you say, the current code is just plain bogus; the 
> idea that the ABI would change for a C++ function depending on whether 
> or not the argument is named is wrong, as, for example, function 
> pointers would cease to work.  I suspect this is a relic of our old 
> implementation of varargs, and depended on the fact that C functions do 
> not have unnamed arguments.
FWIW, there is actually a system which varies its ABI based on whether
or not an argument is named -- my old favorite, the 32bit PA SOM ABI
behaves in this manner.  In fact, I believe it is the only port which
gives a hoot about this kind of thing.

Is it lame?  Absolutely.  It was one of the many annoying discoveries
in my decade of PA hacking.

Jeff



Re: request for timings - makedepend

2005-03-08 Thread Paul Brook
> (a) the numbers reported by the "time" command,

real3m52.604s
user3m15.490s
sys 0m29.550s

> (b) what sort of machine this is and how old

hp-pa 712/80.  At least 7 years old, probably more.  This machine takes many 
hours to bootstrap gcc.

> (c) whether or not you would be willing to trade that much additional
> delay in an edit-compile-debug cycle for not having to write
> dependencies manually anymore. 

Yes.

Paul


RE: Bug 20375 - ia64 varadic regression

2005-03-08 Thread Dave Korn
Original Message
>From: Andrew Pinski
>Sent: 08 March 2005 16:13

> On Mar 8, 2005, at 10:59 AM, Nathan Sidwell wrote:
> 
>> Bug 20375 is logged as a C++ bug, but it is a middle end
>> bug that cannot be expressed in C.  Here's a reduced testcase
>> 
>> union U
>> {
>>   void *m[7];
>> };
>> 
>> struct C;
>> 
>> void f(struct C *c, float f, union U, ...)
>> { }
> 
> I almost want to say this is undefined as there is no way to get
> at the varaidic arguments.
> 
> 
> -- Pinski


  There was under varargs, which didn't require passing a named argument to
va_start; it's only with stdargs that it would be impossible.

  I suspect that this is the underlying reason for the code having developed
this way:  sometimes the first variadic arg is the last named arg (stdargs),
sometimes it is the first arg _after_ the last named arg.

cheers,
  DaveK
-- 
Can't think of a witty .sigline today



Re: request for timings - makedepend

2005-03-08 Thread Jeffrey A Law
On Tue, 2005-03-08 at 17:25 +, Paul Brook wrote:
> > (a) the numbers reported by the "time" command,
> 
> real3m52.604s
> user3m15.490s
> sys 0m29.550s
> 
> > (b) what sort of machine this is and how old
> 
> hp-pa 712/80. At least 7 years only, probably more. This machine takes many 
> hours to bootstrap gcc.
A 712/80 would be approximately 10 years old.  I've got one sitting in 
my basement :-)  It belongs to the UofU, but every time I ask if they
want it back, they say no way!

jeff




Re: Bug 20375 - ia64 varadic regression

2005-03-08 Thread Nathan Sidwell
Dave Korn wrote:
>   There was under varargs, which didn't require passing a named argument
> to va_start; it's only with stdargs that it would be impossible.
>
>   I suspect that this is the underlying reason for the code having
> developed this way: sometimes the first variadic arg is the last named
> arg (stdargs), sometimes it is the first arg _after_ the last named arg.

ah, yes, that explains the later comment

   /* Handle stdargs.  LAST_NAMED is a slight mis-nomer; it's also true
      for the unnamed dummy argument following the last named argument.
      See ABI silliness wrt strict_argument_naming and NAMED_ARG.  So
      we only want to do this when we get to the actual last named
      argument, which will be the first time LAST_NAMED gets set.  */

I was trying to work out what the 'unnamed dummy argument' was.  As we
no longer support varargs, this can be excised.
nathan
--
Nathan Sidwell::   http://www.codesourcery.com   :: CodeSourcery LLC
[EMAIL PROTECTED]:: http://www.planetfall.pwp.blueyonder.co.uk


Re: request for timings - makedepend

2005-03-08 Thread Tom Tromey
> "Zack" == Zack Weinberg <[EMAIL PROTECTED]> writes:

>> Computed headers are dealt with somewhat clumsily in automake.  As a
>> user you specify "BUILT_SOURCES", and then these are built by 'all'
>> before anything else is done.

Zack> This might not be all that bad in gcc land.  It's good to generate all
Zack> the generated sources up front, because certain of them tend to
Zack> bottleneck a parallel build anyway (I think you know which ones I
Zack> mean).  One problem I see is that the set of gt-*.h files is large and
Zack> cannot be easily determined up front.

Personally I wouldn't have a problem just requiring them to be listed
in a variable in Make-lang.in.  If you add a new '#include
"gt-blah.h"', then you add a new line to Make-lang.in.  (But then,
gcjx only has two gt-*.h files... so it is no big deal for me.)


Right now the Makefiles have things like this:

gt-java-hooks.h gt-java-langhooks.h : s-gtype ; @true

We could just replace this with:

$(all_gt_h_files) : s-gtype ; @true

... meaning that the only difference to the maintainer is where the
file gets listed.

Tom


Re: Bug 20375 - ia64 varadic regression

2005-03-08 Thread Richard Henderson
On Tue, Mar 08, 2005 at 10:15:09AM -0700, Jeffrey A Law wrote:
> FWIW, there is actually a system which varies its ABI based on whether
> or not an argument is named -- my old favorite, the 32bit PA SOM ABI
> behaves in this manner.  In fact, I believe it is the only port which
> gives a hoot about this kind of thing.
> 

Actually, there are quite a few of these: sparc64 and ia64 at least.
I'm pretty sure there are others.  But of course, what they all mean by
"named" is "in the variable portion of the argument list or not"; in no
case does it mean arguments without names a la C++.


r~


Re: Bug 20375 - ia64 varadic regression

2005-03-08 Thread Zack Weinberg
Nathan Sidwell <[EMAIL PROTECTED]> writes:

> Dave Korn wrote:
>
>> There was under varargs, which didn't require passing a named
>> argument to va_start; it's only with stdargs that it would be
>> impossible.  I suspect that this is the underlying reason for the
>> code having developed this way: sometimes the first variadic arg is
>> the last named arg (stdargs), sometimes it is the first arg _after_
>> the last named arg.

This is exactly backward.  See below.

> ah, yes, that explains the later comment
> /* Handle stdargs.  LAST_NAMED is a slight mis-nomer; it's also true
>for the unnamed dummy argument following the last named argument.
>See ABI silliness wrt strict_argument_naming and NAMED_ARG.  So
>we only want to do this when we get to the actual last named
>argument, which will be the first time LAST_NAMED gets set.  */
> I was trying to work out what the 'unnamed dummy argument' was.  As we
> no longer support varargs, this can be excised.

The way you wrote a variadic function definition under the original
K+R <varargs.h> was

variadic(va_alist)
 va_dcl
{
}

which would expand to something like

variadic(va_alist)
 int va_alist;
{
}

va_start() would then take the address of va_alist, which was equal to
the address of the true first parameter (we are talking VAX-era
everything-on-the-stack calling conventions here).  So a variadic
function, to the compiler, appeared to have one named argument.

Now when someone tried to implement <varargs.h> compatibility for GCC,
back in the day, they had to do something to put the compiler on
notice that this was, in fact, a variadic function being defined here
(since they were trying to support more complex calling conventions by
then).  So what GCC's varargs.h used to do with va_alist and va_dcl
was

variadic(__va_alist)
 int __va_alist; ...
{
}

and the ellipsis had roughly the same effect that it did for a
variadic function defined to the <stdarg.h> convention.  But note the
key difference: if this were a stdarg function, "int __va_alist" would
be a true, named argument to the function, and the variable, anonymous
arguments would start at position 2.  It is a varargs function, so
"int __va_alist" is a dummy, and the anonymous arguments start at
position 1.  This is what the comment above refers to.  It's
inaccurate insofar as the dummy argument does actually have a name,
but I don't blame whoever wrote it for getting confused.

(I suspect that there was some attempt to support named arguments
before va_alist in varargs mode; hence the stuff about the dummy
argument coming after named arguments.)
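
For contrast, here is a minimal <stdarg.h>-style sketch (the function name
and values are made up, not taken from the thread): under stdarg the last
named parameter is a real argument, it is the one handed to va_start, and
the anonymous arguments begin after it.

#include <stdarg.h>
#include <stdio.h>

/* stdarg convention: "count" is a true named parameter; the anonymous
   arguments start after it, and its name is passed to va_start.  */
static int
sum (int count, ...)
{
  va_list ap;
  int total = 0;

  va_start (ap, count);
  while (count-- > 0)
    total += va_arg (ap, int);
  va_end (ap);
  return total;
}

int
main (void)
{
  printf ("%d\n", sum (3, 1, 2, 3));  /* prints 6 */
  return 0;
}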

It would certainly be nice to get rid of this mess, but Jim Wilson
expressed concerns last time it came up:


zw


Re: [OT] __builtin_cpow((0,0),(0,0))

2005-03-08 Thread Paul Schlie
> Ronny Peine 
>
> Maybe i found something:
>
> http://www.cs.berkeley.edu/~wkahan/ieee754status/ieee754.ps
> page 9 says:
>
> "A number of real expressions are sometimes implemented as INVALID
> by mistake, or declared Undefined by ill-considered
> language standards; a few examples are ...
> 0.0**0.0 = inf**0.0 = NaN**0.0 = 1.0, not Nan;"
> 
> I'm not really sure if he means that it should be 1.0 or it should be NaN
> but I think he means 1.0.

It seems like an acknowledgement that run-time-generated NaNs are much less
useful than the results which would otherwise follow from the interpretation
that +/-0 is more conveniently and consistently considered equivalent to the
reciprocal of its +/-inf counterpart, and vice versa; implying among other
things:

0/0 == (x/inf)/(x/inf) == inf/inf == x/x == x^0 == 1

And further observing that NaN real-valued results are also less useful
than simply returning the real part of an otherwise complex-valued result,
i.e. sqrt(-1) == 0, vs NaN; just as an assignment of a complex value to a
real-valued variable would simply return the real-valued component, not NaN
(seemingly enabling most, if not all, uses of NaNs to be eliminated).

But acknowledge that for such an interpretation to be cleanly consistent,
its value-representation implementation should be symmetric, whereby the
reciprocal of every representable value x has a corresponding representable
value y such that x ~ 1/y, inclusive of +/-0 and +/-inf at the
representation's limits; thereby also enabling the correction of the
inconsistent interpretation that -0 == +0, as clearly +1/0 == +inf and
-1/0 == -inf, so the two are not equivalent, although +0 == |+/-0| would be.

Unfortunately, however, it's not clear that the industry's vested interest
in preserving the legitimacy of present product implementations will allow
the arguably misguided introduction of NaN to be largely corrected.
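
For what it's worth, C99 Annex F (the IEC 60559 bindings) does specify
pow(x, +/-0) == 1 for every x, including NaN and infinity, which matches
the reading of the quoted passage above.  A minimal sketch to check what a
given libm actually returns for the real-valued cases (assuming a
C99-conforming <math.h>; none of this is from the original message):

#include <math.h>
#include <stdio.h>

int
main (void)
{
  /* Under C99 Annex F all three of these should print 1.  */
  printf ("pow (0.0, 0.0)      = %g\n", pow (0.0, 0.0));
  printf ("pow (INFINITY, 0.0) = %g\n", pow (INFINITY, 0.0));
  printf ("pow (NAN, 0.0)      = %g\n", pow (NAN, 0.0));
  return 0;
}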




RE: Bug 20375 - ia64 varadic regression

2005-03-08 Thread Dave Korn
Original Message
>From: Richard Henderson
>Sent: 08 March 2005 18:24

> On Tue, Mar 08, 2005 at 10:15:09AM -0700, Jeffrey A Law wrote:
>> FWIW, there is actually a system which varies its ABI based on whether
>> or not an argument is named -- my old favorite, the 32bit PA SOM ABI
>> behaves in this manner.  In fact, I believe it is the only port which
>> gives a hoot about this kind of thing.
>> 
> 
> Actually, there are quite a few of these: sparc64, ia64 at least.
> I'm pretty sure there are others.  


  And PPC passes floats differently according to whether they're among the
variadic or named-args of a function, IIRC.


cheers,
  DaveK
-- 
Can't think of a witty .sigline today



RE: Bug 20375 - ia64 varadic regression

2005-03-08 Thread Dave Korn
Original Message
>From: Zack Weinberg
>Sent: 08 March 2005 18:25

> Nathan Sidwell <[EMAIL PROTECTED]> writes:
> 
>> Dave Korn wrote:
>> 
>>> There was under varargs, which didn't require passing a named
>>> argument to va_start; it's only with stdargs that it would be
>>> impossible.  I suspect that this is the underlying reason for the
>>> code having developed this way: sometimes the first variadic arg is
>>> the last named arg (stdargs), sometimes it is the first arg _after_
>>> the last named arg.
> 
> This is exactly backward.  

  Hence my heartfelt expression of regret "...started well but had turned to
nonsense by the time I got this far"...  :)  Thanks for the in-depth
explanation.

cheers,
  DaveK
-- 
Can't think of a witty .sigline today



Shall we take a short break from my fold changes?

2005-03-08 Thread Kazu Hirata
Hi,

Roger Sayle and I are wondering if we should take a short break (maybe
a few days!?) from my fold changes.  The reason is that I've already
broken fold once.  (Thanks goes to David Edelsohn for fixing the
problem!)  If we wait for a few days, it's likely that people will
bootstrap and test GCC on their favorite platforms.  Some people might
even try to build their favorite applications with mainline GCC.

Plus, as Roger Sayle puts it, applying parts 16 and 17

http://gcc.gnu.org/ml/gcc-patches/2005-03/msg00629.html
http://gcc.gnu.org/ml/gcc-patches/2005-03/msg00630.html

would make it harder to revert any of parts 1 through 15.

On the other hand, after applying parts 16 and 17, it's trivial to
make fold_build[12] available although fold_build3 will take some
time, and it would be a bit awkward to provide fold_build[12] but not
fold_build3.

Thoughts?

Kazu Hirata


Re: Solaris gcc 4.0.0 static linking of libgcc.a

2005-03-08 Thread Eric Botcazou
> ok, it seems that it was my error. I recompiled gcc-4.0-10012005 to verify
> the behaviour, but libgcc_s.so needs to be linked in any case. gcc-3.4
> works because I changed the specs file, but then I thought it was due to
> the compiler flags, which wasn't the case.

OK, thanks for the clarification.

> But what I don't understand is why libgcc_s.so is needed. I have no
> exception handling in my sources, and furthermore I use -fno-exceptions. I
> understand that I should link libgcc_eh.so dynamically for exception
> handling. I did a nm on the resulting so, and there were no references to
> GCC_xx, only to SYSVABI_xx and SUNW_xx. What information would you need for
> further investigation, or is it simple to say: On Solaris, libgcc_s.so is
> always linked to c++ programs?

Yes, by default the g++ driver uses -shared-libgcc.  Pass -static-libgcc if 
you want to use the static libgcc, at the cost of proper EH support.

-- 
Eric Botcazou


Re: Building GCC on Native Platform

2005-03-08 Thread James E Wilson
On Fri, 2005-03-04 at 21:32, Vivek Takalkar wrote:
> Could someone help me in building GCC as NATIVE GCC on a platform other
> than INTEL PC, say ARM, SPARC, etc.

The procedure for building a native gcc is generally the same regardless
of target.  If you know how to build it on one machine, then you can
build it on any machine.

See the install instructions, either in the sources or on the web site;
they give more info.

There are some targets that have special requirements.  These are
documented in the same place, in the install docs.

I didn't see the original message on the gcc list.  If you had a problem
with the spam filter, I'd suggest using fewer (preferably none) capital
letters in the subject line.
-- 
Jim Wilson, GNU Tools Support, http://www.SpecifixInc.com




Re: IA64 record alignment rules, and modes?

2005-03-08 Thread James E Wilson
Gary Funck wrote:
Question: If we assume that a TImode would've been a more efficient mode
to represent the record type above, would it not have been acceptable for
the compiler to promote the alignment of this type to 128, given there
are no apparent restrictions otherwise, or are there other C conventions
at work that dictate otherwise?  Is there a configuration tweak that
would've led to using TImode rather than BLKmode?
The ABI says that the type has alignment of 64-bits, because that is the 
largest alignment among the types used in the structure.

It is not OK to promote the alignment of the type to 128-bits, because 
we might be given a pointer to an object created by a different compiler 
which has only the 64-bit alignment required by the ABI.  If we assume a 
larger alignment, then we might generate code that fails (or rather in 
this case triggers an unwanted kernel unaligned access fixup).  Also, 
this might break the ABI, since structures with different alignments may 
be passed differently as arguments.  Also, as others mentioned, this 
breaks structure layout when this structure is nested inside other 
structures.  There may also be other problems.

It would be OK to promote the alignment of a variable with this type to 
128-bits.  We could then perhaps generate more efficient code to access 
this variable, and if we take a pointer and pass it to another compiler, 
there won't be any problems.
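
A small C sketch of that distinction (the structure and names here are
hypothetical, not from the original question): raise the alignment of an
individual object if that helps, but leave the type's ABI alignment alone.

/* The ABI alignment of this type is 8 bytes (the largest member
   alignment), and must stay that way.  */
struct rec
{
  double d;
  long l;
};

/* OK: only this particular object is over-aligned to 16 bytes.  Code
   compiled elsewhere may still assume the ABI-mandated 8-byte alignment
   when handed a pointer to it, and nothing breaks.  */
static struct rec r __attribute__ ((aligned (16)));

/* Not OK: giving the *type* 16-byte alignment would change structure
   layout and argument passing relative to the ABI, e.g.
   typedef struct rec rec16 __attribute__ ((aligned (16)));  */

struct rec *
get_rec (void)
{
  return &r;
}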
--
Jim Wilson, GNU Tools Support, http://www.SpecifixInc.com


Re: [Bug c++/19199] [3.3/3.4/4.0/4.1 Regression] Wrong warning about returning a reference to a temporary

2005-03-08 Thread Alexandre Oliva
On Mar  7, 2005, Roger Sayle <[EMAIL PROTECTED]> wrote:

> For example, I believe that Alex's proposed solution to PR c++/19199
> isn't an appropriate fix.  It's perfectly reasonable for fold to convert
> a C++ COND_EXPR into a MIN_EXPR or MAX_EXPR, as according to the C++
> front-end all three of these tree nodes are valid lvalues.  Hence it's
> not this transformation in fold that's in error.

Bugzilla was kept out of the long discussion that ensued, so I'll try
to summarize.  The problem is that the conversion is turning a
COND_EXPR such as:

  ((int)a < (int)b ? a : b)

into

  (__typeof(a)) ((int)a <? (int)b)

which is an rvalue rather than the lvalue the original COND_EXPR was.

> Simply disabling the COND_EXPR -> MIN_EXPR/MAX_EXPR transformation is
> also likely to be a serious performance penalty, especially on targets
> that support efficient sequences for "min" and "max".

This was not what I intended to do with my patch, FWIW.
Unfortunately, I goofed in the added call to operand_equal_p, limiting
too much the situations in which the optimization could be applied.
This patch fixes this problem, and updates the earlier patch such that
it applies cleanly again.

As for other languages whose COND_EXPRs can't be lvalues, we can
probably arrange quite easily for them to ensure at least one of their
result operands is not an lvalue, so as to enable the transformation
again.

Comments?  Ok to install?

Index: gcc/ChangeLog
from  Alexandre Oliva  <[EMAIL PROTECTED]>

	* fold-const.c (non_lvalue): Split tests into...
	(maybe_lvalue_p): New function.
	(fold_ternary): Use it to avoid turning a COND_EXPR lvalue into
	a MIN_EXPR rvalue.

Index: gcc/fold-const.c
===
RCS file: /cvs/gcc/gcc/gcc/fold-const.c,v
retrieving revision 1.535
diff -u -p -r1.535 fold-const.c
--- gcc/fold-const.c 7 Mar 2005 21:24:21 - 1.535
+++ gcc/fold-const.c 8 Mar 2005 22:07:52 -
@@ -2005,16 +2005,12 @@ fold_convert (tree type, tree arg)
 }
 }
 
-/* Return an expr equal to X but certainly not valid as an lvalue.  */
+/* Return false if expr can be assumed to not be an lvalue, true
+   otherwise.  */
 
-tree
-non_lvalue (tree x)
+static bool
+maybe_lvalue_p (tree x)
 {
-  /* While we are in GIMPLE, NON_LVALUE_EXPR doesn't mean anything to
- us.  */
-  if (in_gimple_form)
-return x;
-
   /* We only need to wrap lvalue tree codes.  */
   switch (TREE_CODE (x))
   {
@@ -2054,8 +2050,24 @@ non_lvalue (tree x)
 /* Assume the worst for front-end tree codes.  */
 if ((int)TREE_CODE (x) >= NUM_TREE_CODES)
   break;
-return x;
+return false;
   }
+
+  return true;
+}
+
+/* Return an expr equal to X but certainly not valid as an lvalue.  */
+
+tree
+non_lvalue (tree x)
+{
+  /* While we are in GIMPLE, NON_LVALUE_EXPR doesn't mean anything to
+ us.  */
+  if (in_gimple_form)
+return x;
+
+  if (! maybe_lvalue_p (x))
+return x;
   return build1 (NON_LVALUE_EXPR, TREE_TYPE (x), x);
 }
 
@@ -9734,10 +9746,16 @@ fold_ternary (tree expr)
 	 of B and C.  Signed zeros prevent all of these transformations,
 	 for reasons given above each one.
 
+	 We don't want to use operand_equal_for_comparison_p here,
+	 because it might turn an lvalue COND_EXPR into an rvalue one,
+	 see PR c++/19199.
+
  Also try swapping the arguments and inverting the conditional.  */
   if (COMPARISON_CLASS_P (arg0)
-	  && operand_equal_for_comparison_p (TREE_OPERAND (arg0, 0),
-	 arg1, TREE_OPERAND (arg0, 1))
+	  && ((maybe_lvalue_p (op1) && maybe_lvalue_p (op2))
+	  ? operand_equal_p (TREE_OPERAND (arg0, 0), op1, 0)
+	  : operand_equal_for_comparison_p (TREE_OPERAND (arg0, 0),
+		arg1, TREE_OPERAND (arg0, 1)))
 	  && !HONOR_SIGNED_ZEROS (TYPE_MODE (TREE_TYPE (arg1))))
 	{
 	  tem = fold_cond_expr_with_comparison (type, arg0, op1, op2);
@@ -9746,9 +9764,10 @@ fold_ternary (tree expr)
 	}
 
   if (COMPARISON_CLASS_P (arg0)
-	  && operand_equal_for_comparison_p (TREE_OPERAND (arg0, 0),
-	 op2,
-	 TREE_OPERAND (arg0, 1))
+	  && ((maybe_lvalue_p (op1) && maybe_lvalue_p (op2))
+	  ? operand_equal_p (TREE_OPERAND (arg0, 0), op2, 0)
+	  : operand_equal_for_comparison_p (TREE_OPERAND (arg0, 0),
+		op2, TREE_OPERAND (arg0, 1)))
 	  && !HONOR_SIGNED_ZEROS (TYPE_MODE (TREE_TYPE (op2))))
 	{
 	  tem = invert_truthvalue (arg0);
Index: gcc/testsuite/ChangeLog
from  Alexandre Oliva  <[EMAIL PROTECTED]>

	PR c++/19199
	* g++.dg/expr/lval2.C: New.

Index: gcc/testsuite/g++.dg/expr/lval2.C
===
RCS file: gcc/testsuite/g++.dg/expr/lval2.C
diff -N gcc/testsuite/g++.dg/expr/lval2.C
--- /dev/null	1 Jan 1970 00:00:00 -
+++ gcc/testsuite/g++.dg/expr/lval2.C 8 Mar 2005 22:08:07 -
@@ -0,0 +1,26 @@
+// PR c++/19199
+
+// { dg-do run }
+
+// We used to turn the COND_EXPR lvalue into a MIN_EXPR rvalue, and
+// then return a reference to a temporary in qMin.
+
+#include 
+
+enum Foo { A, B };
+
+template <typename T> T &qMin(T &a, T &b)
+{
+  return a < b ? a : b;

Re: Bug 20375 - ia64 varadic regression

2005-03-08 Thread Mark Mitchell
Zack Weinberg wrote:
It would certainly be nice to get rid of this mess, but Jim Wilson
expressed concerns last time it came up:

Well, sidestepping that, what the compiler really seems to want is "the 
last argument that was declared by the user" rather than "the last 
parameter with a name".  We have a good way of determining that: it's 
just the last parameter, nowadays, given that we've no longer got 
varargs to worry about.  So can't we just fix this loop:

  if (current_function_stdarg)
    {
      tree tem;
      for (tem = TREE_CHAIN (parm); tem; tem = TREE_CHAIN (tem))
        if (DECL_NAME (tem))
          break;
      if (tem == 0)
        data->last_named = true;
    }
to iterate until the end of the loop, without checking DECL_NAME?
In C, this is not an ABI change because all parameters are named.  And, 
here, we're looking at the definition of the function, not a prototype. 
 In C++, this is an ABI change in that an unnamed parameter directly 
preceding the ellipsis will now be treated as named, rather than 
unnamed.  That's clearly correct, but it will change the C++ ABI on 
those platforms where this makes a difference.

I would call this enough of a corner case to say that we 
should just go ahead and do it.  If we're truly paranoid, we can make 
this dependent on flag_abi_version.

--
Mark Mitchell
CodeSourcery, LLC
[EMAIL PROTECTED]
(916) 791-8304


Re: Bug 20375 - ia64 varadic regression

2005-03-08 Thread Zack Weinberg
Mark Mitchell <[EMAIL PROTECTED]> writes:

> Zack Weinberg wrote:
>
>> It would certainly be nice to get rid of this mess, but Jim Wilson
>> expressed concerns last time it came up:
>> 
>
> Well, sidestepping that, what the compiler really seems to want is
> "the last argument that was declared by the user" rather than "the
> last parameter with a name".  We have a good way of determining that:
> it's just the last parameter, nowadays, given that we've no longer got
> varargs to worry about.  So can't we just fix this loop:
>
>   if (current_function_stdarg)
>     {
>       tree tem;
>       for (tem = TREE_CHAIN (parm); tem; tem = TREE_CHAIN (tem))
>         if (DECL_NAME (tem))
>           break;
>       if (tem == 0)
>         data->last_named = true;
>     }
>
> to iterate until the end of the loop, without checking DECL_NAME?

So, in other words,

  if (current_function_stdarg)
    data->last_named = true;

?

It sounds like a good plan to me but I don't know that I know all the
issues.

zw


Re: Bug 20375 - ia64 varadic regression

2005-03-08 Thread Mark Mitchell
Zack Weinberg wrote:
So, in other words,
  if (current_function_stdarg)
    data->last_named = true;
Actually, no:
  data->last_named = !TREE_CHAIN (parm);
(This is the last "named" parameter iff it's the last parameter.)
But, right idea. :-)
--
Mark Mitchell
CodeSourcery, LLC
[EMAIL PROTECTED]
(916) 791-8304


Re: GCC 3.4.3 static constants, named sections, and -fkeep-static-consts

2005-03-08 Thread James E Wilson
Gary Funck wrote:
When compiled with GCC 3.4.3, at -O2, the ident string above will _not_
appear in the executable.  This is apparently expected behavior.
See the docs for __attribute__ ((used)).  Contrary to the docs, it works 
for more than functions.

However, interestingly,
  gcc -fkeep-static-consts -O2 t.c
did not retain the ident string, rcsid, defined above.  Shouldn't
-fkeep-static-consts have ensured that this static constant would appear
in the executable?
Try re-reading the docs.  -fkeep-static-consts is the default.  The 
purpose of this is that we don't perform this optimization at -O0 
normally, but if you use -fno-keep-static-consts, then we do.  So this 
option can let you remove static consts in extra cases, but will never 
prevent the compiler from removing them.
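
A minimal sketch of the __attribute__ ((used)) suggestion (the identifier
and the string below are illustrative, not from the original report):
marking the object with the attribute keeps it in the output even when it
is otherwise unreferenced and optimization is enabled.

/* Emitted even at -O2, despite being otherwise unreferenced, because
   of the "used" attribute.  */
static const char rcsid[] __attribute__ ((used)) =
  "$Id: t.c,v 1.1 2005/03/08 12:00:00 example Exp $";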
--
Jim Wilson, GNU Tools Support, http://www.SpecifixInc.com


Re: Pascal front-end integration

2005-03-08 Thread Frank Heckenbach
Joseph S. Myers wrote:

> > So IMHO the best thing for a smooth transition would be to add 4.x
> > support as far as we can, with conditionals, so everyone can test it
> > and we can drop earlier backend as soon as (safely) possible.
> 
> If you can make such conditionals work, then fine.  I'm just doubtful of 
> how clean the result will be given the extent of the changes to the 
> front-end interface with tree-ssa,

I guess we'll just have to see. Waldek has looked into these changes
already more than I have, and if Jim is willing to help in this
course, perhaps we can get some preliminary results soon ...

Frank

-- 
Frank Heckenbach, [EMAIL PROTECTED]
http://fjf.gnu.de/
GnuPG and PGP keys: http://fjf.gnu.de/plan (7977168E)


Re: RFC: Plan for cleaning up the "Addressing Modes" macros

2005-03-08 Thread Zack Weinberg
Ian Lance Taylor  writes:

> I think this change is a great idea.  I want to point out something
> you probably already noticed: some definitions of
> LEGITIMIZE_RELOAD_ADDRESS rely on the fact that they appear in
> reload.c in the only caller, find_reloads_address.  For example, the
> definition in avr.h calls make_memloc, which is a static function in
> reload.c.  I thought there was at least one other case, but maybe it
> has been fixed.

Yes, I noticed.  My planning only went so far as intending to put the
transitional default definition of the target hook into reload.c, so
nothing breaks, and then dump the hard part on the target maintainers.

If we have to export more functions from reload.c in order to make it
possible for the target maintainers to do their part, so be it.  I
don't think that makes anything worse.

> In general writing LEGITIMIZE_RELOAD_ADDRESS requires a good
> knowledge of what reload does and does not do.  It is possible to
> call push_reload on some portion of X, not change X, and still jump
> to WIN.

I thought the documentation might be saying that, but I wasn't sure.
Thanks for the clarification.

zw


Re: RFC: New pexecute interface

2005-03-08 Thread Zack Weinberg

The interface looks sound to me with one exception: it's not safe to
conflate !-pipe with -save-temps, because that opens up the
possibility of a tempfile race -- if an attacker sees that the
compiler is producing /tmp/ccQWERTY.s, then they should not be able to
predict that the assembler will produce /tmp/ccQWERTY.o.
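
(Not part of the proposed interface itself, but as a reminder of the usual
mitigation, assuming a POSIX mkstemp: unpredictable temporary names come
from creating the file atomically from a randomized template, rather than
deriving one file name from another.)

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int
main (void)
{
  char templ[] = "/tmp/ccXXXXXX";
  int fd = mkstemp (templ);   /* atomically creates a uniquely named file */

  if (fd < 0)
    {
      perror ("mkstemp");
      return 1;
    }
  printf ("created %s\n", templ);
  close (fd);
  unlink (templ);
  return 0;
}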

Also, why the parentheses around the numbers?

zw


Re: Deprecating min/max extension in C++

2005-03-08 Thread Gabriel Dos Reis
"Giovanni Bajo" <[EMAIL PROTECTED]> writes:

| Mark Mitchell <[EMAIL PROTECTED]> wrote:
| 
| > IMO, if these are C++-only, it's relatively easy to deprecate these
| > extension -- but I'd like to hear from Jason and Nathan, and also the
| > user community before we do that.  Of all the extensions we've had, this
| > one really hasn't been that problematic.
| 
| I would prefer them to stay. My reasons:
| 
| 1) std::min() and std::max() are not exact replacements. For instance, you
| cannot do std::min(3, 4.0f) because the arguments are of different type.

That is a rather weak argument.  What is the type of the argument if
it were possible?  If float, why can't you write 3.0f?  If int, why can't
you write 4?  With the omnipresence of function templates and the
rather picky template-argument deduction process, how useful are such
fuzzy-typed constructs with rather dubious semantics and implementation?

I would like to see those extensions deprecated and go with no return.

-- Gaby


Re: RFC: New pexecute interface

2005-03-08 Thread Ian Lance Taylor
Zack Weinberg <[EMAIL PROTECTED]> writes:

> The interface looks sound to me with one exception: it's not safe to
> conflate !-pipe with -save-temps, because that opens up the
> possibility of a tempfile race -- if an attacker sees that the
> compiler is producing /tmp/ccQWERTY.s, then they should not be able to
> predict that the assembler will produce /tmp/ccQWERTY.o.

It is not necessarily obvious, but on a system which supports pipes
you will only get temporary files if you explicitly request them via
PEX_SAVE_TEMPS.  If you don't use PEX_SAVE_TEMPS, then the code will
always use pipes for communication.  If you do use PEX_SAVE_TEMPS,
then the caller provides the base name and the suffix, and the caller
is responsible for making good choices.

The weasel words about using temporary files when PEX_SAVE_TEMPS is
not set are for systems which do not support pipes.

> Also, why the parentheses around the numbers?

Habit.

Ian


Re: RFC: New pexecute interface

2005-03-08 Thread Zack Weinberg
Ian Lance Taylor  writes:

> If you do use PEX_SAVE_TEMPS, then the caller provides the base name
> and the suffix, and the caller is responsible for making good
> choices.

It doesn't look like the caller can specify a different base name for
each stage in the pipeline, is the thing.

zw