Re: -fprofile-arcs changes the structure of basic blocks

2005-06-27 Thread Liu Haibin
I found that the optimization must be on in order to see the frequency.
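For reference, here is a minimal sketch of the kind of walk described in the
quoted mail below.  It assumes it runs inside a GCC pass after the CFG has
been built, with the usual headers (config.h, system.h, coretypes.h, tm.h,
basic-block.h) already included; the function name is made up.

/* Dump the estimated frequency of every basic block in the current
   function.  bb->frequency is only an estimate; bb->count holds real
   execution counts only when profile feedback is available.  */
static void
dump_bb_frequencies (FILE *file)
{
  basic_block bb;

  FOR_EACH_BB (bb)
    fprintf (file, "bb %d: frequency = %d\n", bb->index, bb->frequency);
}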


Timothy

On 6/24/05, Liu Haibin <[EMAIL PROTECTED]> wrote:
> Then I think I shouldn't use -fprofile-arcs. The reason why I used
> -fprofile-arcs is when I debugged a program without any flags, I saw
> the frequency was zero. When I added this flag, I saw frequency with
> values.
> 
> I checked the frequency after life_analysis and before
> combine_instructions. I used
> 
> FOR_EACH_BB(bb) {
>  // some code
> }
> 
> and checked the bb->frequency.
> 
> So now the question is how I can see the frequency without any flags.
> The following was the small program I used to check the frequency.
> 
> int foo(int i)
> {
> if (i < 2)
> return 2;
> else
> return 0;
> }
> int main()
> {
> int i;
> 
> i = 0;
> if (i < 100)
> i = 3;
> else
> i = foo(i);
> 
> return 0;
> }
> 
> 
> 
> 
> On 6/24/05, Daniel Berlin <[EMAIL PROTECTED]> wrote:
> > On Thu, 23 Jun 2005, Liu Haibin wrote:
> >
> > > Hi,
> > >
> > > I want to use profiling information. I know there're two relevent
> > > fields in each basic block, count and frequency. I want to use
> > > frequency because the compiled program is for another architecture so
> > > it cannot run on the host.
> >
> > Besides the fact that, as Zdenek has pointed out, this is not a useful
> > situation for -fprofile-arcs, ...
> > >
> > > My question is why it is so? I want to know the profiling info, but if
> > > profiling info I get is for another different structure of basic
> > > block, it's useless to me.
> > >
> >
> > This is because it's inserting profiling code.
> >
> > This isn't magic, it's inserting code to do the profiling, which
> > necessarily changes the basic blocks.
> > The profiling info you get is for the original set of basic blocks.
> >
> >
>


Re: [RFC] gcov tool, comparing coverage across platforms

2005-06-27 Thread Nathan Sidwell

[EMAIL PROTECTED] wrote:

Current questions include whether this tool needs to be used on 
platforms for which a bourne shell script is inappropriate and whether 
this tool needs to be coded in C instead.


As you're somewhat deadline bound, write it in whatever language suits your
needs.  bash would certainly be acceptable, but I wouldn't particularly mind
if it were in perl or python, which might be somewhat easier to work with.


Also, whether the -a, -b, -c 
and -f output types from gcov all need to be accounted for or whether 
only some of these outputs are of types for which cross-platform 
comparison makes sense. We have little doubt that regular users of gcov 

Seems a reasonable choice.

One other use of such a tool, that I suggested to Janis when she mentioned you 
all, is to compare gcov results for the same target machine but at different 
optimization levels.  This might or might not change the block structure to a 
greater or lesser extent.


nathan

--
Nathan Sidwell::   http://www.codesourcery.com   :: CodeSourcery LLC
[EMAIL PROTECTED]:: http://www.planetfall.pwp.blueyonder.co.uk



Re: [Ada] Current patch needed to build Ada as of 20050626

2005-06-27 Thread Eric Botcazou
> If no one is looking at this, may be it's better to just commit the
> workaround patch?
>
> Laurent
>
> Index: misc.c
> ===
> RCS file: /cvs/gcc/gcc/gcc/ada/misc.c,v
> retrieving revision 1.103
> diff -u -r1.103 misc.c
> --- misc.c  16 Jun 2005 09:05:06 -  1.103
> +++ misc.c  26 Jun 2005 20:30:01 -
> @@ -339,6 +339,8 @@
>/* Uninitialized really means uninitialized in Ada.  */
>flag_zero_initialized_in_bss = 0;
>
> +  flag_wrapv = 1;
> +
>return CL_Ada;
>  }

The flag_wrapv problem doesn't come from the Ada front-end, which generates a 
perfectly reasonable construct, but from fold so the workaround should be 
installed in fold instead, if it is agreed that a workaround is needed.

But Diego is now working on the problem.

-- 
Eric Botcazou


Re: Do C++ signed types have modulo semantics?

2005-06-27 Thread Nathan Sidwell

Michael Veksler wrote:

According to the (very) long discussion on VRP, signed char/short/int/etc
do not have modulo semantics; they have undefined behavior on overflow.
However, <limits> defines numeric_limits<int>::is_modulo = true.


signed types are undefined on overflow. [5/5] and [3.9.1/2,3]


1. Is that a bug in <limits>, a bug in the standard, or is C++ just
   different than C in this respect?

a bug in limits, probably


2. Maybe because overflow is undefined, is_modulo may be
   considered "unspecified". I don't like this option, because it does not
   help generic programming.
it's also, I believe, wrong, in that some gcc optimizations will not preserve 
such behaviour. (I guess this is the whole VRP conversation you mention.)



3. Do I understand what is_modulo stands for?

yes


4. What should be done (libstdc++ PR, C++ PR, DR, other)?


18.2.1.2/57 claims is_modulo is true 'for signed types on most machines'.  Such 
an assertion is false when optimizations rely on the undefinedness of signed 
overflow.  A DR should probably be filed (maybe one is, I'm not at all familiar 
with library DRs).


nathan

--
Nathan Sidwell::   http://www.codesourcery.com   :: CodeSourcery LLC
[EMAIL PROTECTED]:: http://www.planetfall.pwp.blueyonder.co.uk



Re: Do C++ signed types have modulo semantics?

2005-06-27 Thread Gabriel Dos Reis
Nathan Sidwell <[EMAIL PROTECTED]> writes:

| Michael Veksler wrote:
| > According to the (very) long discussion on VRP, signed char/short/int/etc
| > do not have modulo semantics; they have undefined behavior on overflow.
| > However, <limits> defines numeric_limits<int>::is_modulo = true.
| 
| signed types are undefined on overflow. [5/5] and [3.9.1/2,3]

But a compiler could define them to be modulo -- that is the whole
point.  The paragraph does not say they don't "modulo".

| > 1. Is that a bug in <limits>, a bug in the standard, or is C++ just
| >    different than C in this respect?
| a bug in limits, probably
| 
| > 2. Maybe because overflow is undefined, is_modulo may be
| >    considered "unspecified". I don't like this option, because it does not
| >    help generic programming.
| it's also, I believe, wrong, in that some gcc optimizations will not
| preserve such behaviour. (I guess this is the whole VRP conversation
| you mention.)
| 
| > 3. Do I understand what is_modulo stands for?
| yes
| 
| > 4. What should be done (libstdc++ PR, C++ PR, DR, other)?
| 
| 18.2.1.2/57 claims is_modulo is true 'for signed types on most
| machines'.  Such an assertion is false when optimizations rely on the
| undefinedness of signed overflow.  A DR should probably be filed
| (maybe one is, I'm not at all familiar with library DRs).

Well, undefined behaviour does not mean unconditional hell or evil.
It is just behaviour left up to the compiler to do whatever it wants.
And all useful programs we write rely on undefined behaviour of one
sort or the other, starting with GCC.  In the case of
numeric_limits<>, it may help remembering two things: 

  (1) 18.2.1/1
  The numeric_limits component provides a C++ program with
  information about various properties of the implementation's
  representation  of the fundamental types.

  (2) LIA-1, 5.1.2

  If /bounded/ is true, the mathematical operations +, -, *, /
  (after rounding) can produce results that lie outside of the set
  I.  In such cases, the computations add_I, sub_I, mul_I and
  div_I shall either cause a notification (if modulo == false), or
  return a "wrapped" result (if modulo = true)

(1) is something that C++ provides the user with (it contains more
information than C's <limits.h> and <float.h>).  The intended semantics is
that the compiler informs users about its representation choices.
(2) gives the rationale behind "modulo".  C++ does not require
notification (the passage you pointed to), so it is up to GCC to
define what the behaviour should be and correspondingly amend the paragraph
so as to display consistent semantics.


Back in the dark ages, we used to define is_modulo false but we did not
raise a notification, on the grounds that since C++ did not require
anything, it was already sufficient to tell the user that we do not
promise modulo arithmetic.

When RTH helped cleanup the numeric_limits implementation in September
2002, he made a very good point (which I held to be true, since
obviously he is The Middle-end and Back-end Guy)

   http://gcc.gnu.org/ml/libstdc++/2002-09/msg00207.html

Quote:

   First, GCC does not support any targets for which is_modulo is false
   for an integer type.  This is made obvious by the fact that we do not
   distinguish between signed and unsigned rtl modes, and eg do not have
   different patterns for signed and unsigned addition.  Thus is_modulo
   should always be true for the integral types.


I don't think there is a DR for that (from the standard's point of view).  We do
have a consistency problem in GCC.  I believe optimizations we do
should be consistent with semantics unless we also provide for
-finconsistent-semantics.

-- Gaby


expanding builtins

2005-06-27 Thread James Lemke
I have a situation where a structure is not properly aligned and I want
to copy it to fix this.

I'm aware that -fno-builtin-memcpy will suppress the expansion of
memcpy() (force library calls) for a whole module.  Is it possible to
suppress the expansion for a single invocation?

-- 
James Lemke   [EMAIL PROTECTED]   Orillia, Ontario
1992 ST1100, STOC #3750;   FWD# M:245401 H:246889
Life is what happens while you're busy making other plans. --John Lennon



Re: expanding builtins

2005-06-27 Thread Jakub Jelinek
On Mon, Jun 27, 2005 at 10:11:50AM -0400, James Lemke wrote:
> I have a situation where a structure is not properly aligned and I want
> to copy it to fix this.
> 
> I'm aware that -fno-builtin-memcpy will suppress the expansion of
> memcpy() (force library calls) for a whole module.  Is it possible to
> suppress the expansion for a single invocation?

You can:
#include <string.h>
...
extern __typeof(memcpy) my_memcpy __asm ("memcpy");

and use my_memcpy instead of memcpy in the place where you want to force
library call.

Or you can use the memcpy builtin; just tell GCC it should forget everything
it knows about alignment of whatever you know is not aligned.
void *psrc = (void *) src;
__asm ("" : "+r" (psrc));
memcpy (dest, psrc, len);
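Wrapped up as a self-contained function, the second trick might look like
this (just a sketch; the function and parameter names are made up):

#include <string.h>

/* Copy LEN bytes from a possibly misaligned SRC to DEST.  The empty asm
   hides the pointer's origin, so GCC no longer assumes anything about its
   alignment when expanding the memcpy.  */
void
copy_unaligned (void *dest, const void *src, size_t len)
{
  void *psrc = (void *) src;

  __asm__ ("" : "+r" (psrc));
  memcpy (dest, psrc, len);
}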

Jakub


Re: Do C++ signed types have modulo semantics?

2005-06-27 Thread Nathan Sidwell

Gabriel Dos Reis wrote:


But a compiler could define them to be modulo -- that is the whole
point.  The paragraph does not say they don't "modulo".


of course it could do so, but then to be useful it should document it, and it 
would be an extension.



| 18.2.1.2/57 claims is_modulo is true 'for signed types on most
| machines'.  Such an assertion is false when optimizations rely on the
| undefinedness of signed overflow.  A DR should probably be filed
| (maybe one is, I'm not at all familiar with library DRs).

Well, undefined behaviour does not mean unconditional hell or evil.
It is just behaviour left up to the compiler to do whatever it wants.


correct.  However the std *itself* says in one place 'this is undefined' and in 
another place 'this is usually modulo'. I find that confusing at best.



And all useful programs we write rely on undefined behaviour of one
sort or the other, starting with GCC.  In the case of


They do? I thought they usually relied on implementation defined, documented 
extensions or were part of the implementation.  Now I'm sure you'll prove me 
wrong in some way or other, but please stick to the point -- do real important 
programs that must not break and cannot be changed rely on signed modulo behaviour?



When RTH helped cleanup the numeric_limits implementation in September
2002, he made a very good point (which I held to be true, since
obviously he is The Middle-end and Back-end Guy)

   http://gcc.gnu.org/ml/libstdc++/2002-09/msg00207.html


thanks for that.  I was under the impression that some of the loop optimizers 
relied on the fact that iterating over a signed type did not do odd modulo 
related things.  Indeed this comment in loop.c concerning BIVs leads me to 
believe we already fail to honor the is_modulo value


   Note that treating the entire pseudo as a BIV will result in making
   simple increments to any GIVs based on it.  However, if the variable
   overflows in its declared mode but not its promoted mode, the result will
   be incorrect.  This is acceptable if the variable is signed, since
   overflows in such cases are undefined, but not if it is unsigned, since
   those overflows are defined.  So we only check for SIGN_EXTEND and
   not ZERO_EXTEND.
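To make that comment concrete, here is the kind of loop (my own sketch, not
taken from loop.c) where the optimizers already assume a signed induction
variable does not wrap:

/* If signed overflow wrapped, then for n == INT_MAX the increment would
   wrap i to INT_MIN and the loop would never terminate; treating overflow
   as undefined lets the compiler assume the loop runs exactly n + 1
   times.  */
int
sum_upto (int n)
{
  int i, s = 0;

  for (i = 0; i <= n; i++)
    s += i;
  return s;
}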

Anyway, this doesn't answer Michael's question.  He asked whether C and C++ 
differ in this regard.  The answer is the standards are the same, and the 
implementation is the same (because it is the same backend).  So, if whatever 
optimizations he is turning on change the behaviour rth cited, then limits 
should change too.


I don't particularly care what behaviour is chosen, except that

1) C and C++ implementations should behave the same way

2) we should pick the behaviour that leads to better code generation in real 
life.

3) if modulo behaviour is chosen, it should be well documented in a place more 
prominent than numeric_limits<>::is_modulo.


nathan

--
Nathan Sidwell::   http://www.codesourcery.com   :: CodeSourcery LLC
[EMAIL PROTECTED]:: http://www.planetfall.pwp.blueyonder.co.uk



Re: Do C++ signed types have modulo semantics?

2005-06-27 Thread Morten Welinder
| signed types are undefined on overflow. [5/5] and [3.9.1/2,3]

> But a compiler could define them to be modulo -- that is the whole
> point.  The paragraph does not say they don't "modulo".

True, but you are going to have to deal with the run-time version of

(int)0x80000000 / -1

which is unpleasant in the sense that Intel processors will trap and not
do anything modulo-like.
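A minimal run-time demonstration of that corner case (a sketch; the
volatiles are only there to keep the division from being folded away at
compile time):

#include <limits.h>
#include <stdio.h>

int
main (void)
{
  volatile int n = INT_MIN;
  volatile int d = -1;

  /* On x86 the idiv instruction raises #DE here, because the true result
     (-INT_MIN) does not fit in an int, so the process typically dies with
     SIGFPE instead of producing a wrapped value.  */
  printf ("%d\n", n / d);
  return 0;
}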

Morten


Re: Do C++ signed types have modulo semantics?

2005-06-27 Thread Paul Koning
> "Nathan" == Nathan Sidwell <[EMAIL PROTECTED]> writes:

 >> And all useful programs we write rely on undefined behaviour of
 >> one sort or the other, starting with GCC.  In the case of

 Nathan> They do? I thought they usually relied on implementation
 Nathan> defined, documented extensions or were part of the
 Nathan> implementation.  Now I'm sure you'll prove me wrong in some
 Nathan> way or other, but please stick to the point -- do real
 Nathan> important programs that must not break and cannot be changed
 Nathan> rely on signed modulo behaviour?

I'm sure they do.  Obviously they should not, since the standard says
not to.  But most programmers are not language lawyers.  Most
programmers "know" that arithmetic is modulo wordsize.  And those few
who know the right answer (only unsigned arithmetic is modulo) will
from time to time slip up and omit the "unsigned" keyword in their
declarations. 

So I can't point to a direct example but I am certain such examples
exist. 

  paul



re: [RFC] gcov tool, comparing coverage across platforms

2005-06-27 Thread Dan Kegel

We are a group of undergrads at Portland State University who accepted as our 
senior capstone software engineering project a proposed tool for use with gcov 
for summarizing gcov outputs for a given piece of source code tested on 
multiple architecture/OS platforms. A summary of the initial proposal is here:
http://www.clutchplate.org/gcov/gcov_proposal.txt

A rough overview of our proposed design is as follows:
We would build a tool which would accept as input, on the command line, paths
to each .gcov file to be included in the summary, each of these to be
followed by a string which would be the platform identifier for that .gcov
file.  The .gcov files would be combined so that the format would parallel
the existing output, with the summarized report listing each line of the
source once, followed immediately by a line for each platform id and the
coverage data for that platform.


Sounds like a fun project.

Rather than taking the path to each .gcov file on
the command line, you might consider searching
for them, as lcov does.
Come to think of it, maybe you could steal
some ideas or even code from lcov. See
http://ltp.sourceforge.net/coverage/lcov.php
lcov is written in perl, for what it's worth.

I like using Bourne shell for projects it's a good
fit for, but you may find yourself needing
something like perl, since you'll be wrangling
lots of files and lots of text.
- Dan

--
Trying to get a job as a c++ developer?  See 
http://kegel.com/academy/getting-hired.html


Re: Do C++ signed types have modulo semantics?

2005-06-27 Thread Gabriel Dos Reis
Nathan Sidwell <[EMAIL PROTECTED]> writes:

| Gabriel Dos Reis wrote:
| 
| > But a compiler could define them to be modulo -- that is the whole
| > point.  The paragraph does not say they don't "modulo".
| 
| of course it could do so, but then to be useful it should document it,
| and it would be an extension.

We're in violent agreement there.

| > | 18.2.1.2/57 claims is_modulo is true 'for signed types on most
| > | machines'.  Such an assertion is false when optimizations rely on the
| > | undefinedness of signed overflow.  A DR should probably be filed
| > | (maybe one is, I'm not at all familiar with library DRs).
| > Well, undefined behaviour does not mean unconditional hell or evil.
| > It is just behaviour left up to the compiler to do whatever it wants.
| 
| correct.  However the std *itself* says in one place 'this is
| undefined' and in another place 'this is usually modulo'. I find that
| confusing at best.

Well, you could send a message to -lib.  The standard does not
prescribe "this is always true", so saying "it is usually true" does
not contradict the previous statement that it is undefined -- it is just
as vague.

| > And all useful programs we write rely on undefined behaviour of one
| > sort or the other, starting with GCC.  In the case of
| 
| They do?

Well, just check out GCC for starters :-)

| I thought they usually relied on implementation defined,
| documented extensions or were part of the implementation.  Now I'm
| sure you'll prove me wrong in some way or other, but please stick to
| the point -- do real important programs that must not break and cannot
| be changed rely on signed modulo behaviour?

I don't think that is a useful question, because first we would need
to agree on what is considered "real important programs" and when and
where they could or should be changed.

I think what we should address is 

  (1) Can we make it useful, not just leave it to chance?
  (2) Can it be done in a reasonable way?

Just saying, "ah, but the standard says it is undefined behaviour" does
not sound to me like a satisfying answer.  If one takes the standards
literally, then GCC is not required to be useful; just conformant.
But then you just need "cp /bin/sh gcc" and appropriate documentation.

| > When RTH helped cleanup the numeric_limits implementation in September
| > 2002, he made a very good point (which I held to be true, since
| > obviously he is The Middle-end and Back-end Guy)
| >http://gcc.gnu.org/ml/libstdc++/2002-09/msg00207.html
| 
| thanks for that.  I was under the impression that some of the loop
| optimizers relied on the fact that iterating over a signed type did
| not do odd modulo related things. 

yes, but in the case of the *concrete* targets supported by GCC, do
you know of any that "do odd modulo related things"?  
GCC's documentation GCC/gcc/doc/implement-c.texi says:

   GCC supports only two's complement integer types, and all bit patterns
   are ordinary values.

And, in the case of 2's complement, the relation between the min and max of a
signed integer type is well-defined.  It is not left up to the will of
the Go'aulds.

|  Indeed this comment in loop.c
| concerning BIVs leads me to believe we already fail to honor the
| is_modulo value
| 
| Note that treating the entire pseudo as a BIV will result in making
| simple increments to any GIVs based on it.  However, if the variable
| overflows in its declared mode but not its promoted mode, the result will
| be incorrect.  This is acceptable if the variable is signed, since
| overflows in such cases are undefined, but not if it is unsigned, since
| those overflows are defined.  So we only check for SIGN_EXTEND and
| not ZERO_EXTEND.

As I said earlier, we do have a consistency problem in GCC and this is
an area where we would be more useful by improving things.  I do not
consider lengthy discussions, exemplified by the late VRP thread, very
useful.  It does not really help much to argue "but it is undefined
behaviour".  Yes, we can define it to something useful where we can
(and we should).

| Anyway, this doesn't answer Michael's question.  He asked whether C
| and C++ differ in this regard.  The answer is the standards are the
| same, and the implementation is the same (because it is the same
| backend). 

With the difference that C++ gives users a hook to ask for more information.
And that makes a real difference -- as opposed to a mere "undefined
behaviour".

| So, if whatever optimizations he is turning on change the
| behaviour rth cited, then limits should change too.

As I said, I'm for consistent semantics.  The question remains
whether we should reflect reality -- the arithmetic is modulo -- or
just stay in the abstract tower of "the standard says it is undefined
behaviour".

| I don't particularly care what behaviour is chosen, except that
| 
| 1) C and C++ implementations should behave the same way
| 
| 2) we should pick the behaviour that leads to better code generation in real 
life.

W

Re: Do C++ signed types have modulo semantics?

2005-06-27 Thread Gabriel Dos Reis
Morten Welinder <[EMAIL PROTECTED]> writes:

| | signed types are undefined on overflow. [5/5] and [3.9.1/2,3]
| 
| > But a compiler could define them to be modulo -- that is the whole
| > point.  The paragraph does not say they don't "modulo".
| 
| True, but you are going to have to deal with the run-time version of
| 
| (int)0x80000000 / -1
| 
| which is unpleasant in the sense that Intel processors will trap and not
| do anything modulo-like.

If such things really yield undefined behaviour on Intel's, then
numeric_limits<> for Intel's should be changed accordingly.  It does
not imply that numeric_limits<>::is_modulo is false for all targets
supported by GCC.

-- Gaby


Re: Do C++ signed types have modulo semantics?

2005-06-27 Thread Michael Veksler






Paul Koning <[EMAIL PROTECTED]> wrote on 27/06/2005 17:47:12:

> > "Nathan" == Nathan Sidwell <[EMAIL PROTECTED]> writes:
>
>  >> And all useful programs we write rely on undefined behaviour of
>  >> one sort or the other, starting with GCC.  In the case of
>
>  Nathan> They do? I thought they usually relied on implementation
>  Nathan> defined, documented extensions or were part of the
>  Nathan> implementation.  Now I'm sure you'll prove me wrong in some
>  Nathan> way or other, but please stick to the point -- do real
>  Nathan> important programs that must not break and cannot be changed
>  Nathan> rely on signed modulo behaviour?
>
> I'm sure they do.  Obviously they should not, since the standard says
> not to.  But most programmers are not language lawyers.  Most
> programmers "know" that arithmetic is modulo wordsize.  And those few
> who know the right answer (only unsigned arithmetic is modulo) will
> from time to time slip up and omit the "unsigned" keyword in their
> declarations.
>
> So I can't point to a direct example but I am certain such examples
> exist.
>


Please, these arguments have been beaten to death in previous
threads. We don't want to slip to long discussions on the merits
of either one of the decisions, again.

So from what I understand, gcc does something very inconsistent,
and this has to be fixed. Anyway, as someone mentioned, it is not
clear whether the compiler can have reasonable support for modulo on
x86 without a penalty (the INT_MIN/-1 case). What this means is that
numeric_limits<int>::is_modulo was inconsistent with gcc for
all older gcc/libstdc++ versions.

It is now PR 22200, the potentially long debate can move there.

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=22200

  Michael



Re: expanding builtins

2005-06-27 Thread Andreas Schwab
Jakub Jelinek <[EMAIL PROTECTED]> writes:

> On Mon, Jun 27, 2005 at 10:11:50AM -0400, James Lemke wrote:
>> I have a situation where a structure is not properly aligned and I want
>> to copy it to fix this.
>> 
>> I'm aware that -fno-builtin-memcpy will suppress the expansion of
>> memcpy() (force library calls) for a whole module.  Is it possible to
>> suppress the expansion for a single invocation?
>
> You can:
> #include <string.h>
> ...
> extern __typeof(memcpy) my_memcpy __asm ("memcpy");
>
> and use my_memcpy instead of memcpy in the place where you want to force
> library call.

Except that sometimes the function is actually called _memcpy at the
assembler level.

Andreas.

-- 
Andreas Schwab, SuSE Labs, [EMAIL PROTECTED]
SuSE Linux Products GmbH, Maxfeldstraße 5, 90409 Nürnberg, Germany
Key fingerprint = 58CA 54C7 6D53 942B 1756  01D3 44D5 214B 8276 4ED5
"And now for something completely different."


Re: expanding builtins

2005-06-27 Thread James Lemke
> You can:
> #include <string.h>
> ...
> extern __typeof(memcpy) my_memcpy __asm ("memcpy");
> 
> and use my_memcpy instead of memcpy in the place where you want to force
> library call.
Thanks Jakub!  That worked very well.

Jim.

> Or you can use memcpy builtin, just tell GCC it should forget everything
> it knows about alignment of whatever you know is not aligned.
> void *psrc = (void *) src;
> __asm ("" : "+r" (psrc));
> memcpy (dest, psrc, len);


-- 
James Lemke   [EMAIL PROTECTED]   Orillia, Ontario
1992 ST1100, STOC #3750;   FWD# M:245401 H:246889
Life is what happens while you're busy making other plans. --John Lennon



Newbie question: Offset for a pseudo.

2005-06-27 Thread N V Krishna
Hello All,
I am trying to implement a new register allocator scheme, and at the moment I
am trying to spill some of the pseudos. I have two questions:

- For globals, how do I find the offset of a pseudo from the beginning of the
variable section? For example, for constants in the constant pool,
get_pool_offset() would do that.

- Is there a corresponding function for pseudos that hold locals?

Any help in this regard would be greatly appreciated. Thanks in advance.

Warm regards,
N V Krishna.


Re: [RFC] gcov tool, comparing coverage across platforms

2005-06-27 Thread Joe Buck
On Thu, Jun 23, 2005 at 11:41:04AM -0700, [EMAIL PROTECTED] wrote:
> We are a group of undergrads at Portland State University who accepted 
> as our senior capstone software engineering project a proposed tool for 
> use with gcov for summarizing gcov outputs for a given piece of source 
> code tested on multiple architecture/OS platforms. A summary of the 
> initial proposal is here:
> http://www.clutchplate.org/gcov/gcov_proposal.txt

It seems that you may be imposing a restriction on your tool that puts
an unnecessary limitation on its usefulness.

What you are really producing is a mechanism to combine information from
gcov reports, that allows attributes to be placed on the gcov reports.
You have identified one possible attribute: the architecture/OS platform.
But that's only one possibility.

Remember, gcov produces one report per .o file.  But that .o file might
be linked into many different possible programs.  If you were testing
something Gnome or KDE, you might be interested in which lines of code
in libraries are touched by calls from which user applications, for
example.  A software development project might want to know which lines
are hit only by unit tests, and which are actually used by the full
application.  The list goes on.

The thing is, you don't need to do any additional work to handle this
more general application, just be less restrictive about what the
property means that you are calling "architecture/OS platform".



Re: makeinfo 4.8 generates non-standard HTML for @[EMAIL PROTECTED]

2005-06-27 Thread Karl Berry
>   You should substitute `i686
>   ' in the above command with the appropriate processor 
>   for your host.
> Thanks for the report, I'll work on fixing that.

have you had a chance to look into this?  

Not yet, but it's again at the top of my Texinfo list due to your
message :).

It's certainly not a top priority issue 

I'm glad, because my time for Texinfo in general is unfortunately rather
lacking these days :(.

Best,
Karl


LCOV

2005-06-27 Thread Dickson Patton

All,

LCOV looks like what we were planning.  Let's steal it.

See you at 7:00.


Dickson


Re: [RFH] - Less than optimal code compiling 252.eon -O2 for x86

2005-06-27 Thread Fariborz Jahanian
FYI, the change to rtl in -O2 vs. -O1 is that -O2 includes -fforce-mem,
which forces memory operands to registers to make memory references common
sub-expressions. In this case, the constant double float value is assigned to
an xmm register which is used where it is needed. So, I would say this
behavior is as expected but not ideal for x86, where a couple of
'movl $0x0, mem' may be preferred to a single 'movsd %xmm7, mem' for 252.eon
on x86-darwin.


- fariborz

On Jun 24, 2005, at 3:07 PM, Fariborz Jahanian wrote:

A source file mrSurfaceList.cc of 252.eon produces less efficient code
initializing instance objects to 0 at -O2 than at -O1. The behavior is random:
it does not happen on all x86 platforms, and making the test smaller makes the
problem go away. But here is what I found to be the cause.


When the source is compiled with -O1 -march=pentium4, the 'cse' phase sees
the following pattern initializing a 'double' with 0:

(insn 18 13 19 0 (set (reg:SF 109)
        (mem/u/i:SF (symbol_ref/u:SI ("*LC11") [flags 0x2]) [0 S4 A32])) -1 (nil)
    (nil))

(insn 19 18 20 0 (set (mem/s/j:DF (plus:SI (reg/f:SI 20 frame)
                (const_int -32 [0xffffffe0])) [0 objectBox.pmin.e+16 S8 A128])
        (float_extend:DF (reg:SF 109))) 86 {*extendsfdf2_sse} (nil)
    (nil))

Then the fold_rtx routine converts it into its reduced form, resulting in
optimal code:

(insn 19 13 21 0 (set (mem/s/j:DF (plus:SI (reg/f:SI 20 frame)
                (const_int -32 [0xffffffe0])) [0 objectBox.pmin.e+16 S8 A128])
        (const_double:DF 0.0 [0x0.0p+0])) 64 {*movdf_nointeger} (nil)
    (nil))


But when the same source is compiled with -O2 -march=pentium4, the 'cse'
phase sees a slightly different pattern (note that the float_extend:DF has
moved):

(insn 18 13 19 0 (set (reg:DF 109)
        (float_extend:DF (mem/u/i:SF (symbol_ref/u:SI ("*LC13") [flags 0x2]) [0 S4 A32]))) -1 (nil)
    (nil))

(insn 19 18 20 0 (set (mem/s/j:DF (plus:SI (reg/f:SI 20 frame)
                (const_int -32 [0xffffffe0])) [0 objectBox.pmin.e+16 S8 A128])
        (reg:DF 109)) 64 {*movdf_nointeger} (nil)
    (nil))

This cannot be simplified by fold_rtx, resulting in less efficient  
code.
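For reference, a minimal C fragment of the same general shape as the code
being discussed (hypothetical; the actual 252.eon source is C++ and more
involved, and the report above notes that small tests do not reliably
reproduce the problem):

struct box { double e[3]; };

/* A double field initialized to 0.0, the kind of store whose expansion is
   being compared at -O1 and -O2 above.  */
void
init_box (struct box *b)
{
  b->e[2] = 0.0;
}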


The change in pattern is most likely because of additional tree optimization
phases running at -O2. If so, should cse be taught to simplify the new rtl
pattern? Or should the tree optimizer phase responsible for the
less-than-optimal tree be tweaked to generate the same tree as with -O1?


Thanks, fariborz






Re: [RFH] - Less than optimal code compiling 252.eon -O2 for x86

2005-06-27 Thread Richard Henderson
On Mon, Jun 27, 2005 at 12:21:01PM -0700, Fariborz Jahanian wrote:
> FYI, the change to rtl  in -O2 vs. -O1 is that -O2 includes -fforce- 
> mem which forces memory operands to registers to make memory  
> references common sub-expressions.

Hmm.  I would suspect this is obsolete now.  We'll have forced
everything into "registers" (or something equivalent that we
can work with) during tree optimization.  Any CSEs that can be
made should have been made.


r~


Re: Q about Ada and value ranges in types

2005-06-27 Thread Richard Kenner
Sorry it took me so long to get to this.

> You're not showing where this comes from, so it's hard to say.  However
> D.1480 is created by the gimplifier, not the Ada front end.  There could
> easily be a typing problem in the tree there (e.g., that of the
> subtraction) but I can't tell for sure.

As it turned out, there was.

So, after calling sinfo__chars() and subtracting 30361, the
FE is emitting that range check.  AFAICT, the call to
sinfo__chars(e_5) comes from ada/sem_intr.adb:148

 Nam : constant Name_Id   := Chars (E);

and 'if (D.1480_32 <= 1)' is at line 155:

I'd also assumed this was where the bogus tree came from, but I was wrong.
The node in question was not made by the Ada front end but by
build_range_check in clearly incorrect code that does the subtraction in the
wrong type.

This fixes that problem.  Are you in a position to check if it fixes the
original issue?

*** fold-const.c	25 Jun 2005 01:59:57 -0000	1.599
--- fold-const.c	27 Jun 2005 20:44:56 -0000
*** build_range_check (tree type, tree exp,
*** 4027,4034 ****

    if (value != 0 && ! TREE_OVERFLOW (value))
!     return build_range_check (type,
!                               fold_build2 (MINUS_EXPR, etype, exp, low),
!                               1, fold_convert (etype, integer_zero_node),
!                               value);

    return 0;
--- 4027,4045 ----

    if (value != 0 && ! TREE_OVERFLOW (value))
!     {
!       /* There is no requirement that LOW be within the range of ETYPE
!          if the latter is a subtype.  It must, however, be within the base
!          type of ETYPE.  So be sure we do the subtraction in that type.  */
!       if (TREE_TYPE (etype))
!         {
!           etype = TREE_TYPE (etype);
!           value = fold_convert (etype, value);
!         }
!
!       return build_range_check (type,
!                                 fold_build2 (MINUS_EXPR, etype, exp, low),
!                                 1, fold_convert (etype, integer_zero_node),
!                                 value);
!     }

    return 0;


Re: [RFH] - Less than optimal code compiling 252.eon -O2 for x86

2005-06-27 Thread Fariborz Jahanian


On Jun 27, 2005, at 12:56 PM, Richard Henderson wrote:


Hmm.  I would suspect this is obsolete now.  We'll have forced
everything into "registers" (or something equivalent that we
can work with) during tree optimization.  Any CSEs that can be
made should have been made.



I will do a sanity check followed by SPEC runs (x86 and ppc darwin) and see
if behavior changes by obsoleting -fforce-mem in -O2 (or higher).


- Thanks, fariborz



r~





Re: Q about Ada and value ranges in types

2005-06-27 Thread James A. Morrison

[EMAIL PROTECTED] (Richard Kenner) writes:

> Sorry it took me so long to get to this.
> 
> > You're not showing where this comes from, so it's hard to say.  However
> > D.1480 is created by the gimplifier, not the Ada front end.  There could
> > easily be a typing problem in the tree there (e.g., that of the
> > subtraction) but I can't tell for sure.
> 
> As it turned out, there was.
> 
> So, after calling sinfo__chars() and subtracting 30361, the
> FE is emitting that range check.  AFAICT, the call to
> sinfo__chars(e_5) comes from ada/sem_intr.adb:148
> 
>Nam : constant Name_Id   := Chars (E);
> 
> and 'if (D.1480_32 <= 1)' is at line 155:
> 
> I'd also assumed this was where the bogus tree came from, but I was wrong.
> The node in question was not made by the Ada front end but by
> build_range_check in clearly incorrect code that does the subtraction in the
> wrong type.
> 
> This fixes that problem.  Are you in a position to check if it fixes the
> original issue?
> 
> *** fold-const.c  25 Jun 2005 01:59:57 -  1.599
> --- fold-const.c  27 Jun 2005 20:44:56 -
> *** build_range_check (tree type, tree exp, 
> *** 4027,4034 
>   
> if (value != 0 && ! TREE_OVERFLOW (value))
> ! return build_range_check (type,
> !   fold_build2 (MINUS_EXPR, etype, exp, low),
> !   1, fold_convert (etype, integer_zero_node),
> !   value);
>   
> return 0;
> --- 4027,4045 
>   
> if (value != 0 && ! TREE_OVERFLOW (value))
> ! {
> !   /* There is no requirement that LOW be within the range of ETYPE
> !  if the latter is a subtype.  It must, however, be within the base
> !  type of ETYPE.  So be sure we do the subtraction in that type.  */
> !   if (TREE_TYPE (etype))
> ! {
> !   etype = TREE_TYPE (etype);
> !   value = fold_convert (etype, value);
> ! }
> ! 
> !   return build_range_check (type,
> ! fold_build2 (MINUS_EXPR, etype, exp, low),
> ! 1, fold_convert (etype, integer_zero_node),
   

 RTH has been suggesting to use build_int_cst (etype, 0) instead.

> ! value);
> ! }
>   
> return 0;
> 


-- 
Thanks,
Jim

http://www.csclub.uwaterloo.ca/~ja2morri/
http://phython.blogspot.com
http://open.nit.ca/wiki/?page=jim


errors when compiling gcc-3.4.4 and gcc-4.0.0 on i386 freebsd -5.2.1.

2005-06-27 Thread wangxiuli
Some errors appear when compiling gcc-3.4.4 and gcc-4.0.0 on i386
freebsd-5.2.1. Those errors are caused by byacc's argument conventions. How
do I solve them?




%make
rm -f stamp-h1
/bin/sh ./config.status config.h
config.status: creating config.h
config.status: config.h is unchanged
test -f config.h || (rm -f stamp-h1 && make stamp-h1)
gcc -c  -g -O2 -DHAVE_CONFIG_H  -I. -I../../gcc-4.0.0/intl ../../gcc-4.0.0/intl/
bindtextdom.c
gcc -c  -g -O2 -DHAVE_CONFIG_H  -I. -I../../gcc-4.0.0/intl ../../gcc-4.0.0/intl/
dcgettext.c
gcc -c  -g -O2 -DHAVE_CONFIG_H  -I. -I../../gcc-4.0.0/intl ../../gcc-4.0.0/intl/
dgettext.c
gcc -c  -g -O2 -DHAVE_CONFIG_H  -I. -I../../gcc-4.0.0/intl ../../gcc-4.0.0/intl/
gettext.c
gcc -c  -g -O2 -DHAVE_CONFIG_H  -I. -I../../gcc-4.0.0/intl ../../gcc-4.0.0/intl/
finddomain.c
gcc -c  -g -O2 -DHAVE_CONFIG_H  -I. -I../../gcc-4.0.0/intl ../../gcc-4.0.0/intl/
loadmsgcat.c
gcc -c  -g -O2 -DHAVE_CONFIG_H -DLOCALE_ALIAS_PATH="\"/usr/local/newgcc-4/share/
locale\"" -I. -I../../gcc-4.0.0/intl ../../gcc-4.0.0/intl/localealias.c
gcc -c  -g -O2 -DHAVE_CONFIG_H  -I. -I../../gcc-4.0.0/intl ../../gcc-4.0.0/intl/
textdomain.c
gcc -c  -g -O2 -DHAVE_CONFIG_H  -I. -I../../gcc-4.0.0/intl ../../gcc-4.0.0/intl/
l10nflist.c
gcc -c  -g -O2 -DHAVE_CONFIG_H  -I. -I../../gcc-4.0.0/intl ../../gcc-4.0.0/intl/
explodename.c
gcc -c  -g -O2 -DHAVE_CONFIG_H -DLOCALEDIR="\"/usr/local/newgcc-4/share/locale\"
" -I. -I../../gcc-4.0.0/intl ../../gcc-4.0.0/intl/dcigettext.c
gcc -c  -g -O2 -DHAVE_CONFIG_H  -I. -I../../gcc-4.0.0/intl ../../gcc-4.0.0/intl/
dcngettext.c
gcc -c  -g -O2 -DHAVE_CONFIG_H  -I. -I../../gcc-4.0.0/intl ../../gcc-4.0.0/intl/
dngettext.c
gcc -c  -g -O2 -DHAVE_CONFIG_H  -I. -I../../gcc-4.0.0/intl ../../gcc-4.0.0/intl/
ngettext.c
byacc --name-prefix=__gettext --output plural.c ../../gcc-4.0.0/intl/plural.y
usage: yacc [-dlrtv] [-b file_prefix] [-o output_filename]
[-p symbol_prefix] filename
*** Error code 1

Stop in /usr/home/freeman/gcc-build/intl.
*** Error code 1

Stop in /usr/home/freeman/gcc-build.
%


wangxiuli
[EMAIL PROTECTED]
  2005-06-28





some errors compiling gcc-3.4.4 and gcc-4.0.0 on i386 freebsd -5.2.

2005-06-27 Thread wangxiuli
Hi,
Some errors appear when compiling gcc-3.4.4 and gcc-4.0.0 on i386
freebsd-5.2.1. Those errors are caused by byacc's argument conventions. How
do I solve them?
Best regards



%make
rm -f stamp-h1
/bin/sh ./config.status config.h
config.status: creating config.h
config.status: config.h is unchanged
test -f config.h || (rm -f stamp-h1 && make stamp-h1)
gcc -c  -g -O2 -DHAVE_CONFIG_H  -I. -I../../gcc-4.0.0/intl ../../gcc-4.0.0/intl/
bindtextdom.c
gcc -c  -g -O2 -DHAVE_CONFIG_H  -I. -I../../gcc-4.0.0/intl ../../gcc-4.0.0/intl/
dcgettext.c
gcc -c  -g -O2 -DHAVE_CONFIG_H  -I. -I../../gcc-4.0.0/intl ../../gcc-4.0.0/intl/
dgettext.c
gcc -c  -g -O2 -DHAVE_CONFIG_H  -I. -I../../gcc-4.0.0/intl ../../gcc-4.0.0/intl/
gettext.c
gcc -c  -g -O2 -DHAVE_CONFIG_H  -I. -I../../gcc-4.0.0/intl ../../gcc-4.0.0/intl/
finddomain.c
gcc -c  -g -O2 -DHAVE_CONFIG_H  -I. -I../../gcc-4.0.0/intl ../../gcc-4.0.0/intl/
loadmsgcat.c
gcc -c  -g -O2 -DHAVE_CONFIG_H -DLOCALE_ALIAS_PATH="\"/usr/local/newgcc-4/share/
locale\"" -I. -I../../gcc-4.0.0/intl ../../gcc-4.0.0/intl/localealias.c
gcc -c  -g -O2 -DHAVE_CONFIG_H  -I. -I../../gcc-4.0.0/intl ../../gcc-4.0.0/intl/
textdomain.c
gcc -c  -g -O2 -DHAVE_CONFIG_H  -I. -I../../gcc-4.0.0/intl ../../gcc-4.0.0/intl/
l10nflist.c
gcc -c  -g -O2 -DHAVE_CONFIG_H  -I. -I../../gcc-4.0.0/intl ../../gcc-4.0.0/intl/
explodename.c
gcc -c  -g -O2 -DHAVE_CONFIG_H -DLOCALEDIR="\"/usr/local/newgcc-4/share/locale\"
" -I. -I../../gcc-4.0.0/intl ../../gcc-4.0.0/intl/dcigettext.c
gcc -c  -g -O2 -DHAVE_CONFIG_H  -I. -I../../gcc-4.0.0/intl ../../gcc-4.0.0/intl/
dcngettext.c
gcc -c  -g -O2 -DHAVE_CONFIG_H  -I. -I../../gcc-4.0.0/intl ../../gcc-4.0.0/intl/
dngettext.c
gcc -c  -g -O2 -DHAVE_CONFIG_H  -I. -I../../gcc-4.0.0/intl ../../gcc-4.0.0/intl/
ngettext.c
byacc --name-prefix=__gettext --output plural.c ../../gcc-4.0.0/intl/plural.y
usage: yacc [-dlrtv] [-b file_prefix] [-o output_filename]
[-p symbol_prefix] filename
*** Error code 1

Stop in /usr/home/freeman/gcc-build/intl.
*** Error code 1

Stop in /usr/home/freeman/gcc-build.
%


wangxiuli
[EMAIL PROTECTED]
  2005-06-28





Re: some errors compiling gcc-3.4.4 and gcc-4.0.0 on i386 freebsd -5.2.

2005-06-27 Thread Zack Weinberg
"wangxiuli" <[EMAIL PROTECTED]> writes:

> Hi some errors appear when compiling gcc-3.4.4 and gcc-4.0.0 on i386
> freebsd -5.2.1.those errrors are caused by byacc's convention of
> arguments .how to solve them?

You must use Bison; we do not support byacc.

zw


Re: Q about Ada and value ranges in types

2005-06-27 Thread Richard Kenner
RTH has been suggesting to use build_int_cst (etype, 0) instead.

Indeed.  I was trying to minimize the change, but such cleanups are always
useful.  This was also missing a protection on INTEGER_TYPE_P.  I just got
a good bootstrap of Ada on x86_64 with this and a patch from Diego to fix
the other problem and also remove the kludge that's fixed by this.
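For illustration only, here is a sketch of what the guarded subtraction
might look like with both cleanups folded in.  This is an assumption on my
part, not the committed patch; in particular the INTEGER_TYPE check is only
a guess at what the INTEGER_TYPE_P protection refers to.

  if (value != 0 && ! TREE_OVERFLOW (value))
    {
      /* Do the subtraction in the base type when ETYPE is a subtype.  */
      if (TREE_TYPE (etype)
          && TREE_CODE (TREE_TYPE (etype)) == INTEGER_TYPE)
        {
          etype = TREE_TYPE (etype);
          value = fold_convert (etype, value);
        }

      return build_range_check (type,
                                fold_build2 (MINUS_EXPR, etype, exp, low),
                                1, build_int_cst (etype, 0), value);
    }

  return 0;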


Re: Do C++ signed types have modulo semantics?

2005-06-27 Thread Mark Mitchell

Michael Veksler wrote:


Most programmers "know" that arithmetic is modulo wordsize.  And those few
who know the right answer (only unsigned arithmetic is modulo) will
from time to time slip up and omit the "unsigned" keyword in their
declarations.


I agree.

Although the standard clearly makes signed overflow undefined, I think 
it would be better if GCC defined it to be modulo arithmetic.  The 
degree to which that would result in inferior code seems likely to be 
somewhat small, and the amount of user confusion we would eliminate, and 
the number of programs that would not break with GCC, even though they 
work with other compilers, seems likely to be high.


--
Mark Mitchell
CodeSourcery, LLC
[EMAIL PROTECTED]
(916) 791-8304


Re: Do C++ signed types have modulo semantics?

2005-06-27 Thread Gabriel Dos Reis
Mark Mitchell <[EMAIL PROTECTED]> writes:

| Michael Veksler wrote:
| 
| >> Most programmers "know" that arithmetic is modulo wordsize.  And those few
| >>who know the right answer (only unsigned arithmetic is modulo) will
| >>from time to time slip up and omit the "unsigned" keyword in their
| >>declarations.
| 
| I agree.
| 
| Although the standard clearly makes signed overflow undefined, I think
| it would be better if GCC defined it to be modulo arithmetic.  The
| degree to which that would result in inferior code seems likely to be
| somewhat small, and the amount of user confusion we would eliminate,
| and the number of programs that would not break with GCC, even though
| they work with other compilers, seems likely to be high.

Amen.

-- Gaby


signed is undefined and has been since 1992 (in GCC)

2005-06-27 Thread Andrew Pinski
The first change in GCC which changed signed overflow/wrapping to be
undefined was added back in 1992 in loop.c.  The next change was in 1999 with
the addition of simplify-rtx.c.  Why are we talking about this now, instead
of back when they were added?  (Note both of these changes were before fwrapv
came into play.)

-- Pinski



Re: signed is undefined and has been since 1992 (in GCC)

2005-06-27 Thread Michael Veksler






Andrew Pinski wrote on 28/06/2005 07:08:33:

> The first change in GCC which changed signed overflow/wrapping to be
> undefined
> was added back in 1992 in loop.c.  The next change was in 1999 with the
> addition of simplify-rtx.c.  Why are we talking about this now, instead
> of back
> when they were added?  (note both of these changes were before fwrapv
> came into
> play).
>
I don't mind INT_MAX+1 being undefined by gcc. I object to drawing on
"undefined" to conclude that is_modulo should be true. This does not
make practical sense. Drawing conclusions from "undefined" can yield
absurd results.



Re: signed is undefined and has been since 1992 (in GCC)

2005-06-27 Thread Gabriel Dos Reis
Andrew Pinski <[EMAIL PROTECTED]> writes:

| The first change in GCC which changed signed overflow/wrapping to be
| undefined
| was added back in 1992 in loop.c.  The next change was in 1999 with the
| addition of simplify-rtx.c.  Why are we talking about this now,
| instead of back
| when they were added?  (note both of these changes were before fwrapv
| came into
| play).

Because the world has evolved, we have gained more experience, more
users and there are opportunities to make GCC useful to more people?  

But, you do have a point; in 1992, you weren't here, I wasn't here, GCC
development was not as open as it is today for a wider audience to scrutinize
and contribute, and we could not have discussed it.  But it does not really
matter that we did not discuss it in 1992 or 1999.  We don't have a time
travel machine to change the past.  But we can make a difference for the
future.  The attitude that "undefined behaviour" should be interpreted
as "we should not make things more useful when we can" is beyond
understanding. 

-- Gaby


Re: signed is undefined and has been since 1992 (in GCC)

2005-06-27 Thread Andrew Pinski


On Jun 28, 2005, at 12:34 AM, Gabriel Dos Reis wrote:


The attitude that "undefined behaviour" should be interpreted
as "we should not make things more useful when we can" is beyond
understanding.


Then C/C++ aliasing rules go out the window really, or maybe I misunderstand
what you are trying to say?

And what about casting functions to a different function type and calling
that?  We just declared it as calling a trap in the last couple of years.
That is not very useful really.  What about var_args with shorts?  That is
not useful, but since it is undefined, we just call trap on it.

Or even sequence points: we get fewer of those bugs than C/C++ aliasing rule
violations, but still get some, even with documenting that they are undefined
and change with optimizations.

The list can go on with the undefined behavior we have changed in recent
years, the past 5 years.  Part of the C++ aliasing rules was not implemented
until at least 3.3, which was only 2 years ago.

-- Pinski



Re: signed is undefined and has been since 1992 (in GCC)

2005-06-27 Thread Gabriel Dos Reis
Andrew Pinski <[EMAIL PROTECTED]> writes:

| On Jun 28, 2005, at 12:34 AM, Gabriel Dos Reis wrote:
| 
| > The attitude that "undefined behaviour" should be interpreted
| > as "we should not make things more useful when we can" is beyond
| > understanding.
| 
| Then C/C++ aliasing rules go out the window really or maybe I
| misunderstand
| what you are trying to say?

yes, you misunderstand what I'm saying.

| And what about casting functions to a different function type and
| calling
| that, we just declared it as calling a trap in the last couple of years.

That is a type constraint violation that leads to subtle runtime
errors, so we did actually improve things by catching (potential)
errors earlier. 

As a concrete case in point, the C++ committee just decided at the
last meeting in Norway to "upgrade" casts between void* and pointer to
function types from "undefined behaviour" to "conditionally supported"
-- and interestingly it led to vigorous requests from library and
application programmers that compilers do document what they are
doing in that area.  GCC has been a leader there.

For the concrete case at issue, if the hardware I'm writing the C/C++
programs for consistently displays modulo arithmetic for signed
integer types, Andrew, can you tell me why GCC should deny me access
to that functionality where it actually can?  "Denying" here means that
it does not give me access to that consistent hardware behaviour.
None of the items on the list you gave falls into that category.
Please, do remember that "undefined behaviour" is a catch-all basket
and two things in that basket are not necessarily "equally evil".  So,
please, do refrain from reasoning like "since we did X for Y and Y was
undefined behaviour, we should do the same for Z."  "Undefined
behaviour" isn't a 0 or 1 thingy, even though it is about computers.
You need to evaluate them on case-by-case basis.

-- Gaby


Re: signed is undefined and has been since 1992 (in GCC)

2005-06-27 Thread Andrew Pinski


On Jun 28, 2005, at 1:12 AM, Gabriel Dos Reis wrote:


Andrew Pinski <[EMAIL PROTECTED]> writes:

| On Jun 28, 2005, at 12:34 AM, Gabriel Dos Reis wrote:
|
| > The attitude that "undefined behaviour" should be interpreted
| > as "we should not make things more useful when we can" is beyond
| > understanding.
|
| Then C/C++ aliasing rules go out the window really or maybe I
| misunderstand
| what you are trying to say?

yes, you misunderstand what I'm saying.


But you did not explain yourself fully then; I still don't understand.
Here is the full quote from the C99 standard about undefined behavior:


1 behavior, upon use of a nonportable or erroneous program construct or 
of erroneous data, for which this International Standard imposes no 
requirements


2 NOTE  Possible undefined behavior ranges from ignoring the situation 
completely with unpredictable results, to behaving during translation 
or program execution in a documented manner characteristic of the 
environment (with or without the issuance of a diagnostic message), to 
terminating a translation or execution (with the issuance of a 
diagnostic message).
3 EXAMPLE  An example of undefined behavior is the behavior on integer 
overflow.


See, it even points out integer overflow as a good example.  See also how it
says the standard imposes no requirements, which means the compiler would be
allowed to erase the hard drive each and every time you invoke undefined
behavior.



| And what about casting functions to a different function type and
| calling
| that, we just declared it as calling a trap in the last couple of 
years.


That is a type constraint violation that leads to subtle runtime
errors, so we did actually improve things by catching (potential)
errors earlier.


So is wrapping; what is the difference?  If I multiply a large positive
number by another large positive number, I will get an overflow, and since it
is undefined I could get a positive number, a negative one, or a trap (or even
my hard drive erased, which is what I deserved).


For the concrete case at issue, if the hardware I'm writing the C/C++
programs for consistently displays modulo arithmetic for signed
integer types, Andrew, can you tell me why GCC should deny me access
to that functionality where it actually can?


It does not; use -fwrapv if you want that behavior.  GCC is not denying
you anything; at best it is giving you two different options: a fast
optimizing option and one that follows what you want.
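For what it's worth, here is a small sketch of the kind of code where the
two modes differ (my example, not from the thread):

#include <limits.h>

/* Without -fwrapv the compiler may fold this to 1, because signed overflow
   is undefined and x + 1 > x then holds for every valid x.  With -fwrapv,
   INT_MAX + 1 wraps to INT_MIN, so the function must return 0 for
   x == INT_MAX.  */
int
next_is_larger (int x)
{
  return x + 1 > x;
}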



"Denying" here means that
it does not give me access to that consistent hardware behaviour.
None of the items on the list you gave falls into that category.
Please, do remember that "undefined behaviour" is a catch-all basket
and two things in that basket are not necessarily "equally evil".


Well then why are there implementation-defined behaviors?  It sounds
to me like you want this to be included there instead.


 So,
please, do refrain from reasoning like "since we did X for Y and Y was
undefined behaviour, we should do the same for Z."  "Undefined
behaviour" isn't a 0 or 1 thingy, even though it is about computers.
You need to evaluate them on case-by-case basis.


No, reread what the standard says: we don't need to evaluate them case by
case; that is what implementation-defined behavior is for.  Maybe this should
have been made implementation-defined, but it was not.  So file a DR for it
instead of saying GCC should do something when it is already doing what the
standard says it can do.

-- Pinski



Re: signed is undefined and has been since 1992 (in GCC)

2005-06-27 Thread Gabriel Dos Reis
Andrew Pinski <[EMAIL PROTECTED]> writes:

| On Jun 28, 2005, at 1:12 AM, Gabriel Dos Reis wrote:
| 
| > Andrew Pinski <[EMAIL PROTECTED]> writes:
| >
| > | On Jun 28, 2005, at 12:34 AM, Gabriel Dos Reis wrote:
| > |
| > | > The attitude that "undefined behaviour" should be interpreted
| > | > as "we should not make things more useful when we can" is beyond
| > | > understanding.
| > |
| > | Then C/C++ aliasing rules go out the window really or maybe I
| > | misunderstand
| > | what you are trying to say?
| >
| > yes, you misunderstand what I'm saying.
| 
| But you did not explain yourself fully then; I still don't understand.
| Here is the full quote from the C99 standard about undefined behavior:

Andrew --

  Nobody is denying that signed integer overflow is "undefined behaviour".
So, your repeatedly saying "but, look, the standard says it is undefined
behaviour" is irrelevant to the discussion; it is only recitation that
does not help make progress.

What people are saying is that "undefined behaviour" does not
necessarily mean "Go'auld semantics".  Is that hard to understand?

[...]

| See it even points out integer overflow as a good example.  See also
| how it says
| the standard imposes no requirement, which means the compiler should
| be able
| to erase the hard drive each and every time you invoke undefined
| behavior.

and it should also be able to take your life.  Do you want it to actually
do it?  If yes, I suggest you create your own compiler that does that
and leave us to work on a compiler that does something more positive.

-- Gaby


Re: signed is undefined and has been since 1992 (in GCC)

2005-06-27 Thread Steven Bosscher
On Tuesday 28 June 2005 07:12, Gabriel Dos Reis wrote:
> For the concrete case at issue, if the hardware I'm writing the C/C++
> programs for consistently displays modulo arithmetic for signed
> integer types, Andrew, can you tell me why GCC should deny me access
> to that functionality where it actually can?

Because it disallows compiler transformations?  E.g. suddenly a
loop with a signed variable as the loop counter may wrap around,
which means that some transformations that are safe now would
no longer be safe.
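For instance (a sketch of the same point):

/* If 'i' could wrap, this loop would not be guaranteed to terminate for
   limit == INT_MAX, so rewriting it around a precomputed trip count of
   limit + 1 iterations would no longer be a safe transformation.  */
void
clear_upto (int limit, int *a)
{
  int i;

  for (i = 0; i <= limit; i++)
    a[i] = 0;
}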

Gr.
Steven