Re: Failures in tests for obj-c++....

2005-06-07 Thread Ziemowit Laski


On 6 Jun 2005, at 22.26, Christian Joensson wrote:


I get a few failures when trying to run the obj-c++ testsuite...

See, e.g., http://gcc.gnu.org/ml/gcc-testresults/2005-06/msg00375.html

This is what I see in the log file, and it shows up all over... :)

Setting LD_LIBRARY_PATH to
.:/usr/local/src/trunk/objdir32/sparc-linux/./libstdc++-v3/src/.libs:/usr/local/src/trunk/objdir32/gcc:/usr/local/src/trunk/objdir32/gcc:.:/usr/local/src/trunk/objdir32/sparc-linux/./libstdc++-v3/src/.libs:/usr/local/src/trunk/objdir32/gcc:/usr/local/src/trunk/objdir32/gcc:.:/usr/local/src/trunk/objdir32/sparc-linux/./libstdc++-v3/src/.libs:/usr/local/src/trunk/objdir32/gcc

./bitfield-4.exe: error while loading shared libraries: libobjc.so.1:
cannot open shared object file: No such file or directory
FAIL: obj-c++.dg/bitfield-4.mm execution test

Any ideas of what might go wrong?


No idea. :-(  Perhaps someone maintaining the GNU runtime could take a  
look, and also address the i686-pc-linux-gnu ObjC/ObjC++ failures  
(http://gcc.gnu.org/ml/gcc/2005-05/msg01513.html)...


--Zem
--
Ziemowit Laski                 1 Infinite Loop, MS 301-2K
Mac OS X Compiler Group        Cupertino, CA  USA  95014-2083
Apple Computer, Inc.           +1.408.974.6229  Fax .5477



Re: Ada front-end depends on signed overflow

2005-06-07 Thread Robert Dewar

Paul Schlie wrote:


- Agreed, I would classify any expression as being ambiguous if any of
  its operand values (or side effects) were sensitive to the allowable
  order of evaluation of its remaining operands, but not otherwise.


But this predicate cannot be evaluated at compile time!


Now you seem to suggest that the optimizer should simply avoid
"optimizing" in such cases (where it matters).


- No, I simply assert that if an expression is unambiguous (assuming
  my definition above for the sake of discussion), then the compiler
  may choose to order the evaluation in any way it desires as long as
  it does not introduce such an ambiguity by doing so.


But this predicate cannot be evaluated at compile time!


- I fully agree that if a compiler does not maintain records of the
  program state which a function may alter or be dependent on, as
  would be required to determine if any resulting operand/side-effect
  interdependences may exist upon its subsequent use as an operand
  within an expression itself; then the compiler would have no choice
  but to maintain its relative order of evaluation as hypothetically
  specified, as it may otherwise introduce an ambiguity.


Fine, but then you are designing a different language from C. It is
fine to argue this language point in the context of language design,
but this is irrelevant to the discussion of the implementation of
C, since the language is already defined, and the design decision
is contrary to what you want. Any C programmer who programs with
the conceptions you suggest is simply not a competent C programmer.

Note also the critical word in your above paragraph: "may". That's
the whole point, the compiler can't tell, and if it has to make
worst case assumptions, the impact on code efficiency is
significant. So it is no problem for the compiler to "maintain
records ...", but it is not good enough (please reread my examples
in the previous message!)
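
A minimal sketch of why the compiler cannot tell, assuming the usual
separate-compilation model (the names f and counter are hypothetical):

extern int counter;
int f(void);                /* defined in another translation unit;
                               may or may not modify counter          */

int use(void)
{
  /* The operands are interdependent only if f() writes counter.
     Compiling this unit alone, the compiler cannot decide that, so it
     must either assume the worst case or rely on the language leaving
     the order unspecified. */
  return f() + counter;
}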


  Although I believe I appreciate the relative complexity this introduces
  to both the compiler, as well as the requirements imposed on "pre-compiled"
  libraries, etc., I don't believe that it justifies a language definition
  legitimizing the specification of otherwise non-deterministic programs.


Fine, as I say, an argument for some other forum.


- As you've specified the operations as distinct statements, I would argue
  that such an optimization would only be legitimate if it were known to
  produce the same result as if the statements were evaluated in
  sequence as specified by the standard (which of course would be target
  specific). 


You can argue this all you like, but it is just a plain wrong
argument for C which is a language defined by the ANSI/ISO
standard, not by Paul Schlie.


  d = (a + b) / 8;

  would be ambiguous if the compiler were able to restructure evaluation
  of the expression in any way which may alter its effective result
  for a given target, as a program which has non-deterministic behavior
  doesn't seem very useful.


That's a judgment with which I, many others, and the designers of
C disagree, so too bad for you!
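
As a hedged illustration of the target dependence at issue (assuming a
32-bit int and values chosen so that a + b overflows):

#include <limits.h>

int main(void)
{
  int a = INT_MAX, b = 1;
  int d1 = (a + b) / 8;                    /* overflow is undefined; on wraparound
                                              hardware this gives INT_MIN / 8     */
  long long d2 = ((long long)a + b) / 8;   /* a compiler that keeps the sum in a
                                              wider register sees 2147483648 / 8  */
  return d1 == d2;                         /* 0: the two readings disagree         */
}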



Now it is legitimate to argue about how much quality is hurt, and
whether the resulting non-determinism is worth the efficiency hit.



- Or rather, is non-determinism ever legitimately acceptable? (as I can't
  honestly think of a single instance where it would be, except if it may
  only result in the loss of a measurable few bits of fp precision, which
  are imprecise representations to begin with, but not otherwise?)


If you can't appreciate the argument on the other side, you can't
very effectively argue your own position. Most language designers
are quite ready to accept non-determinate behavior in peculiar cases
to ensure that the common cases can be compiled efficiently. The basic
conclusion in design of such languages (which includes C, C++, Ada,
Fortran, Pascal, PL/1, etc) is that no reasonable programmer writes
the kind of side effects that cause trouble, so why cripple efficiency
for all reasonable programmers?

Really the only language in common use that follows your line of
thinking is Java, and that is a case where a very conscious
decision is made to sacrifice efficiency for reproducibility
and portability of peculiar cases.
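
A small C example of the kind of peculiar case being traded off (f and i
are hypothetical; Java defines the order, C does not):

extern int f(int, int);   /* hypothetical */

void demo(void)
{
  int i = 0;
  int r = f(i++, i++);    /* C: the argument evaluation order is unspecified,
                             and the two unsequenced modifications of i make
                             the call undefined; Java mandates left-to-right
                             evaluation of the arguments.                    */
  (void) r;
}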


But overall I do agree with your earlier statement, that each language has
made a choice, for better or worse.


Yes, of course, and the job of the compiler writer is to implement
the languages with the choices they made, not second-guess them.

Interesting postscript. Even APL as originally designed had undefined
order of operations. However, early implementations were naive
interpreters which not only associated right to left (as required
by the language) but also executed in this order (which was not
required). Programmers got used to expecting this order and
relying on it (partly because it worked and most of them did
not even know it was wrong to rely on it, and partly because
people got c

Re: Making GCC faster

2005-06-07 Thread Paolo Bonzini
There has been a lot of work recently on making GCC output faster 
code.  But GCC isn't very fast.  On my slow 750MHz Linux box (whose PIII is
now R.I.P.), it took a whole night to compile 3.4.3.


Sometimes I wonder if Sam Lauber is a Markov generator...

Please read the release notes for 4.0.0.  You'll see that it is *much* 
faster on C++, and about as fast as 3.4.0 on other languages despite 
having added dozens of new passes.


Paolo


Re: Ada front-end depends on signed overflow

2005-06-07 Thread Florian Weimer
* Robert Dewar:

> Definitely not, integer overflow generates async traps on very few
> architectures. Certainly synchronous traps can be generated (e.g.
> with into on the x86).

Or the JO jump instruction.  Modern CPUs choke on the INTO
instruction.


Subversion migration plans

2005-06-07 Thread Bernardo Innocenti
Hello,

browsing the mailing-list archives, I can't find what's
the current status of the much discussed Subversion
migration.  The topic just appears to have been abandoned
about three months ago.

Was there agreement on the migration?  What would be the most
appropriate time to do it?

I'm asking because I was planning to replace CVS in our
company, but first I wanted to see how good or badly
Subversion would perform on a largish project such as GCC.
Yes, you can call me a vile coward if you want! :-)

-- 
  // Bernardo Innocenti - Develer S.r.l., R&D dept.
\X/  http://www.develer.com/



Re: Value range propagation pass in 4.0.1 RC1 (or not)

2005-06-07 Thread nkavv
Dear Diego,

is the newest version of your pass (including the June 01 modifications at
gcc-patches) applicable to all statements in a basic block and not only to the
conditionals?

I mean, I saw the gcc-4.0.0 release version (VRP in tree-ssa-dom.c). In this
version two things happen:

a. -fdump-tree-vrp not applicable
b. VRP applied only to variables in conditional statements

Nikolaos Kavvadias

Quoting Diego Novillo <[EMAIL PROTECTED]>:

> On Tue, Jun 07, 2005 at 02:38:26AM +0300, [EMAIL PROTECTED] wrote:
>
> > does the 4.0.1 RC1 include the value range propagation (VRP) ssa-based pass
> > developed by Diego Novillo?
> >
> No.
>
> > If not what is the VRP status at the CVS for the C language? Is it
> basically
> > working?
> >
> Essentially, yes.  It's enabled by default at -O2 and you can see
> what the pass does with -fdump-tree-vrp.
>
>
> Diego.
>





Re: Ada front-end depends on signed overflow

2005-06-07 Thread Robert Dewar

Florian Weimer wrote:

* Robert Dewar:



Definitely not, integer overflow generates async traps on very few
architectures. Certainly synchronous traps can be generated (e.g.
with into on the x86).



Or the JO jump instruction.  Modern CPUs choke on the INTO
instruction.


Well INTO is still the best choice if you worry about code size.
Furthermore, it would be interesting to know what the actual
pragmatic effect of "choke" is, by comparing:

1. no checks
2. current software checks
3. into style checks
4. checks using jo

on real applications (not an easy test to do unfortunately)
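
For comparison, a rough sketch in C of what the "current software checks"
variant (2) amounts to; the handler name here is made up, and variants (3)
and (4) would instead let the add set the hardware overflow flag and test it
with into or jo:

#include <limits.h>

extern void constraint_error(void);   /* hypothetical runtime handler */

int add_checked(int a, int b)
{
  /* Pre-test done entirely in software, before the addition. */
  if ((b > 0 && a > INT_MAX - b) || (b < 0 && a < INT_MIN - b))
    constraint_error();
  return a + b;
}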



Re: Subversion migration plans

2005-06-07 Thread Haren Visavadia

--- Bernardo Innocenti  wrote:

> Hello,
> 
> browsing the mailing-list archives, I can't find
> what's
> the current status of the much discussed Subversion
> migration.  The topic just appears to have been
> abandoned
> about three months ago.

It's not abandoned, it can be found at
http://gcc.gnu.org/wiki/SvnPlan








Re: Subversion migration plans

2005-06-07 Thread Andreas Schwab
Bernardo Innocenti <[EMAIL PROTECTED]> writes:

> I'm asking because I was planning to replace CVS in our
> company, but first I wanted to see how good or badly
> Subversion would perform on a largish project such as GCC.

You might want to ask the KDE project about their experience.  It's
probably the biggest repository so far that has been converted to
Subversion.

Andreas.

-- 
Andreas Schwab, SuSE Labs, [EMAIL PROTECTED]
SuSE Linux Products GmbH, Maxfeldstraße 5, 90409 Nürnberg, Germany
Key fingerprint = 58CA 54C7 6D53 942B 1756  01D3 44D5 214B 8276 4ED5
"And now for something completely different."


OCODE Backend

2005-06-07 Thread Rasmussen, David Ravn
www.opentv.com has a compiler for developing for their platform,
generating OCODE (stack based virtual machine code) from C. Their
compiler is an old gcc 2.6.0 which has been altered somehow.

As I understand it, there were certain problems at the time because gcc
2.6.0 was naturally suited for register machines, but not ideally suited
for stack based machines.

1) Is current gcc more suited for stack machines than was 2.6.0?
2) Is current gcc a suitable starting point for a C/C++ to OCODE
compiler?

/David

***
Information contained in this email message is intended only for use of the 
individual or entity named above. If the reader of this message is not the 
intended recipient, or the employee or agent responsible to deliver it to the 
intended recipient, you are hereby notified that any dissemination, 
distribution or copying of this communication is strictly prohibited. If you 
have received this communication in error, please immediately notify the 
[EMAIL PROTECTED] and destroy the original message.
***


Re: Failures in tests for obj-c++....

2005-06-07 Thread David Ayers
Ziemowit Laski wrote:
> 
> On 6 Jun 2005, at 22.26, Christian Joensson wrote:
> 
[snip]
>> ./bitfield-4.exe: error while loading shared libraries: libobjc.so.1:
>> cannot open shared object file: No such file or directory
>> FAIL: obj-c++.dg/bitfield-4.mm execution test
>>
>> Any ideas of what might go wrong?
> 
> 
> No idea. :-(  Perhaps someone maintaining the GNU runtime could take a 
> look, and also address the i686-pc-linux-gnu ObjC/ObjC++ failures 
> (http://gcc.gnu.org/ml/gcc/2005-05/msg01513.html)...

I cannot reproduce this, yet I think this patch is probably correct
anyway.  Does it help?

Bootstrapped and passed all supported Objective-C tests on
i686-pc-linux-gnu.
If so, OK for mainline and 4.0 branch?

Cheers,
David
2005-06-07  David Ayers  <[EMAIL PROTECTED]>

* archive.c, init.c, selector.c: Include hash.h.

Index: archive.c
===
RCS file: /cvs/gcc/gcc/libobjc/archive.c,v
retrieving revision 1.10
diff -u -r1.10 archive.c
--- archive.c   2 Mar 2005 19:37:02 -   1.10
+++ archive.c   7 Jun 2005 10:59:43 -
@@ -28,6 +28,7 @@
 #include "runtime.h"
 #include "typedstream.h"
 #include "encoding.h"
+#include "hash.h"
 #include 
 
 extern int fflush (FILE *);
Index: init.c
===
RCS file: /cvs/gcc/gcc/libobjc/init.c,v
retrieving revision 1.10
diff -u -r1.10 init.c
--- init.c  2 Mar 2005 19:37:02 -   1.10
+++ init.c  7 Jun 2005 10:59:43 -
@@ -25,6 +25,7 @@
covered by the GNU General Public License.  */
 
 #include "runtime.h"
+#include "hash.h"
 
 /* The version number of this runtime.  This must match the number 
defined in gcc (objc-act.c).  */
Index: selector.c
===
RCS file: /cvs/gcc/gcc/libobjc/selector.c,v
retrieving revision 1.10
diff -u -r1.10 selector.c
--- selector.c  2 Mar 2005 19:37:03 -   1.10
+++ selector.c  7 Jun 2005 10:59:43 -
@@ -26,6 +26,7 @@
 #include "runtime.h"
 #include "sarray.h"
 #include "encoding.h"
+#include "hash.h"
 
 /* Initial selector hash table size. Value doesn't matter much */
 #define SELECTOR_HASH_SIZE 128


Re: Failures in tests for obj-c++....

2005-06-07 Thread David Ayers
David Ayers wrote:

> Ziemowit Laski wrote:
> 
>>On 6 Jun 2005, at 22.26, Christian Joensson wrote:
>>
> 
> [snip]
> 
>>>./bitfield-4.exe: error while loading shared libraries: libobjc.so.1:
>>>cannot open shared object file: No such file or directory
>>>FAIL: obj-c++.dg/bitfield-4.mm execution test
>>>
>>>Any ideas of what might go wrong?
>>
>>
>>No idea. :-(  Perhaps someone maintaining the GNU runtime could take a 
>>look, and also address the i686-pc-linux-gnu ObjC/ObjC++ failures 
>>(http://gcc.gnu.org/ml/gcc/2005-05/msg01513.html)...
> 
> 
> I cannot reproduce this, yet I think this patch is probably correct
> anyway.  Does it help?

OK, besides the fact that the patch was supposed to go to gcc-patches@,
I think this may not really address the issue.  I could imagine that the
problem lies in using: #include  in some header files
instead of #include "hash.h".  (Or an issue with the ordering of the -I
directives.)  Could you test whether it's pulling the wrong headers by
poisoning the installed headers and maybe posting the compile invocation
for selector.c, archive.c or init.c?

Thanks,
David Ayers




Re: Value range propagation pass in 4.0.1 RC1 (or not)

2005-06-07 Thread Diego Novillo
On Tue, Jun 07, 2005 at 11:41:34AM +0300, [EMAIL PROTECTED] wrote:
> Dear Diego,
> 
> is the newest version of your pass (including the June 01 modifications at
> gcc-patches) applicable to all statements in a basic block and not only to the
> conditionals?
> 
You'll have to be more specific.  What do you mean by "applicable
to all statements"?  VRP is mostly used to fold predicates whose
ranges can be computed at compile time, but single-valued ranges
are replaced in non-predicates:

if (a == 4)
  return a + 2;

-->

if (a == 4)
  return 6;
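
Ranges that are not single-valued still fold later predicates; a hedged
example (not taken from an actual dump):

if (a > 10)
  if (a > 5)      /* a is known to be in [11, +INF], so this folds to if (1) */
    return a;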

> I mean, I saw the gcc-4.0.0 release version (VRP in tree-ssa-dom.c). In this
> version two things happen:
> 
> a. -fdump-tree-vrp not applicable
>
Correct.  This is a 4.1 switch.

> b. VRP applied only to variables in conditional statements
> 
You'll need to give me an example.  Alternately, why don't you
try a CVS snapshot post Jun03?


Diego.


Re: Ada front-end depends on signed overflow

2005-06-07 Thread Paul Schlie
> From: Robert Dewar <[EMAIL PROTECTED]>
> Paul Schlie wrote:
> Fine, but then you are designing a different language from C.

- I'm not attempting to design a language, but just defend the statement
  that I made earlier; which was in effect that I contest the assertion
  that undefined evaluation semantics enable compilers to generate more
  efficient useful code by enabling them to arbitrarily destructively alter
  evaluation order of interdependent sub-expressions, and/or base the
  optimizations on behaviors which are not representative of their target
  machines.

  Because I simply observe that since an undefined behavior may also be
  non-deterministic even within a single program, it can't be relied upon;
  therefore enabling a compiler to produce garbage more efficiently
  seems basically worthless, and actually even dangerous when the compiler
  can't even warn about resulting potentially non-deterministic ambiguities
  because it can't differentiate between garbage and reliably deterministic
  useful code, as it considers them equivalently legitimate.

  (With an exception being FP optimization, as FP is itself based
   only on the approximate not absolute representation of values.)

>> - Agreed, I would classify any expression as being ambiguous if any of
>>   its operand values (or side effects) were sensitive to the allowable
>>   order of evaluation of its remaining operands, but not otherwise.
> 
> But this predicate cannot be evaluated at compile time!

- Why not? The compiler should be able to statically determine if an
  expression's operands are interdependent, by determining if any of
  its operands' sub-expressions are themselves dependent on a variable
  value potentially modifiable by any of the other operands' sub-
  expressions. (Which is basically the same constraint imposed when
  rescheduling instructions: an assignment cannot be moved past a
  reference to the same variable value without potentially corrupting
  the effective semantics of the specified program, but assignments and
  references to distinct variable values may freely be rescheduled
  past each other safely.)




Re: Ada front-end depends on signed overflow

2005-06-07 Thread Florian Weimer
* Paul Schlie:

> - I'm not attempting to design a language, but just defend the statement
>   that I made earlier; which was in effect that I contest the assertion
>   that undefined evaluation semantics enable compilers to generate more
>   efficient useful code by enabling them to arbitrarily destructively alter
>   evaluation order of interdependent sub-expressions, and/or base the
>   optimizations on behaviors which are not representative of their target
>   machines.

But the assertion is trivially true.  If you impose fewer constraints
on an implementation by leaving some cases undefined, it always has
got more choices when generating code, and some choices might yield
better code.  So code generation never gets worse.

Whether an implementation should exercise the liberties granted by the
standard in a particular case is a different question, and has to be
decided on a case-by-case basis.

>   (With an exception being FP optimization, as FP is itself based
>only on the approximate not absolute representation of values.)

FP has well-defined semantics, and it's absolutely required for
compilers to implement them correctly because otherwise, a lot of
real-world code will break.

Actually, this is a very interesting example.  You don't care about
proper floating point arithmetic and are willing to sacrifice obvious
behavior for a speed or code size gain.  Others feel the same about
signed integer arithmetic.

>>> - Agreed, I would classify any expression as being ambiguous if any of
>>>   its operand values (or side effects) were sensitive to the allowable
>>>   order of evaluation of its remaining operands, but not otherwise.
>> 
>> But this predicate cannot be evaluated at compile time!
>
> - Why not?

In general, it's undecidable.

>   The compiler should be able to statically determine if an
>   expression's operands are interdependent, by determining if any of
>   its operands' sub-expressions are themselves dependent on a variable
>   value potentially modifiable by any of the other operands' sub-
>   expressions.

Phrased this way, you make a lot of code illegal.  I doubt this is
feasible.
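
One hedged example of how much ordinary code the proposed predicate would
flag: any expression whose operands reach memory through pointers is
potentially interdependent, and a function compiled in isolation usually
cannot rule that out:

int f(int *p, int *q)
{
  /* Interdependent exactly when p and q alias, which is in general
     unknowable at compile time. */
  return *p + (*q = 0);
}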


Re: OCODE Backend

2005-06-07 Thread Ian Lance Taylor
"Rasmussen, David Ravn" <[EMAIL PROTECTED]> writes:

> ***
> Information contained in this email message is intended only for use of the 
> individual or entity named above. If the reader of this message is not the 
> intended recipient, or the employee or agent responsible to deliver it to the 
> intended recipient, you are hereby notified that any dissemination, 
> distribution or copying of this communication is strictly prohibited. If you 
> have received this communication in error, please immediately notify the 
> [EMAIL PROTECTED] and destroy the original message.
> ***

Please do not send e-mail with this type of disclaimer to mailing
lists @gcc.gnu.org.  These disclaimers potentially give us legal
liability, as we have already violated the terms of the message by
archiving it.  They are against list policy, as described here:
http://gcc.gnu.org/lists.html

If you are unable to disable this disclaimer, then please send your
message from a free e-mail account, such as @yahoo.com.

Thanks.

Ian


Andreas Schwab m68k Maintainer

2005-06-07 Thread Joel Sherrill <[EMAIL PROTECTED]>


I'm happy to announce Andreas Schwab <[EMAIL PROTECTED]>
as the new m68k port maintainer.

I, for one, thank him and wish him well in this effort. :)

--
Joel Sherrill, Ph.D. Director of Research & Development
[EMAIL PROTECTED] On-Line Applications Research
Ask me about RTEMS: a free RTOS  Huntsville AL 35805
   Support Available (256) 722-9985



Re: Ada front-end depends on signed overflow

2005-06-07 Thread Paul Schlie
> From: Florian Weimer <[EMAIL PROTECTED]>
> * Paul Schlie:
> 
>> - I'm not attempting to design a language, but just defend the statement
>>   that I made earlier; which was in effect that I contest the assertion
>>   that undefined evaluation semantics enable compilers to generate more
>>   efficient useful code by enabling them to arbitrarily destructively alter
>>   evaluation order of interdependent sub-expressions, and/or base the
>>   optimizations on behaviors which are not representative of their target
>>   machines.
> 
> But the assertion is trivially true.  If you impose fewer constraints
> on an implementation by leaving some cases undefined, it always has
> got more choices when generating code, and some choices might yield
> better code.  So code generation never gets worse.

- yes, it certainly enables an implementation to generate more efficient
  code which has no required behavior; so in effect it basically produces more
  efficient programs which don't reliably do anything in particular, which
  doesn't seem particularly useful?

>>   (With an exception being FP optimization, as FP is itself based
>>only on the approximate not absolute representation of values.)
> 
> Actually, this is a very interesting example.  You don't care about
> proper floating point arithmetic and are willing to sacrifice obvious
> behavior for a speed or code size gain.  Others feel the same about
> signed integer arithmetic.

- Essentially yes; as FP is an approximate, not absolute, representation
  of a value, it therefore seems reasonable to accept optimizations which
  may result in some least significant bits of ambiguity.

  Where integer operations are relied upon for state representations,
  these in general must remain precisely and deterministically
  calculated, as otherwise catastrophic semantic divergences may result.

  (i.e. a single lsb divergence in an address calculation is not acceptable,
   although a similar divergence in an FP value is likely harmless.)

>>   The compiler should be able to statically determine if an
>>   expression's operands are interdependent, by determining if any of
>>   its operands' sub-expressions are themselves dependent on a variable
>>   value potentially modifiable by any of the other operands' sub-
>>   expressions.
> 
> Phrased this way, you make a lot of code illegal.  I doubt this is
> feasible.

- No, exactly the opposite, the definition of an order of evaluation
  eliminates ambiguities, it does not prohibit anything other than the
  compiler applying optimizations which would otherwise alter the meaning
  of the specified expression.





ARM and __attribute__ long_call error

2005-06-07 Thread Jani Monoses

Hello

trying to compile the following simple source file using arm-elf-gcc 4.0

void pig(void) __attribute__ ((long_call));
void pig(void)
{
}

yields:

error: conflicting types for 'pig'
error: previous declaration of 'pig' was here

The same with gcc3.3. With other function attributes (isr, section, naked etc.) 
it works fine.

Aren't attributes supposed to be part of the function declaration only? What am 
I missing?
For now I am passing -mlong-calls on the command line, but with that I cannot get the 
granularity I want as with __attribute__.


thanks
Jani



Re: Ada front-end depends on signed overflow

2005-06-07 Thread Florian Weimer
* Paul Schlie:

>> But the assertion is trivially true.  If you impose fewer constraints
>> on an implementation by leaving some cases undefined, it always has
>> got more choices when generating code, and some choices might yield
>> better code.  So code generation never gets worse.
>
> - yes, it certainly enables an implementation to generate more efficient
>   code which has no required behavior; so in effect it basically produces more
>   efficient programs which don't reliably do anything in particular, which
>   doesn't seem particularly useful?

The quality of an implementation can't be judged only based on its
conformance to the standard, but this does not mean that the
implementation gets better if you introduce additional constraints
which the standard doesn't impose.

Some people want faster code, others want better debugging
information.  A few people only want optimizations which do not change
anything which is practically observable but execution time (which is
a contradiction), and so on.

> - Essentially yes; as FP is an approximate, not absolute, representation
>   of a value, it therefore seems reasonable to accept optimizations which
>   may result in some least significant bits of ambiguity.

But the same is true for C's integers, they do not behave like the
real thing.  Actually, without this discrepancy, we wouldn't have to
worry about overflow semantics, which once was the topic of this
thread!
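
A concrete instance of that discrepancy, assuming a 32-bit int:

#include <limits.h>

void demo(void)
{
  int x = INT_MAX;   /* 2147483647 */
  int y = x + 1;     /* mathematically 2147483648, but the addition overflows:
                        undefined in C; on typical two's complement hardware
                        the stored value wraps to INT_MIN. */
  (void) y;
}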

>>>   The compiler should be able to statically determine if an
>>>   expression's operands are interdependent, by determining if any of
>>>   its operands' sub-expressions are themselves dependent on a variable
>>>   value potentially modifiable by any of the other operands' sub-
>>>   expressions.
>> 
>> Phrased this way, you make a lot of code illegal.  I doubt this is
>> feasible.
>
> - No, exactly the opposite, the definition of an order of evaluation
>   eliminates ambiguities, it does not prohibit anything other than the
>   compiler applying optimizations which would otherwise alter the meaning
>   of the specified expression.

Ah, so you want to prescribe the evaluation order and allow reordering
under the as-if rule.  This wasn't clear to me, sorry.

It shouldn't be too hard to implement this (especially if your order
matches the Java order), so you could create a switch to fit your
needs.  I don't think it should be enabled by default because it
encourages developers to write non-portable code which breaks when
compiled with older GCC versions, and it inevitably introduces a
performance regression on some targets.


Re: ARM and __attribute__ long_call error

2005-06-07 Thread Richard Earnshaw
On Tue, 2005-06-07 at 16:08, Jani Monoses wrote:
> Hello
> 
> trying to compile the following simple source file using arm-elf-gcc 4.0
> 
> void pig(void) __attribute__ ((long_call));
> void pig(void)
> {
> }
> 
> yields:
> 
> error: conflicting types for 'pig'
> error: previous declaration of 'pig' was here
> 

Yes, that's the way it's currently coded.

The problem, it seems to me, is that we want to fault:

void pig(void) __attribute__ ((long_call));
...
void pig(void);

and

void pig(void);
...
void pig(void) __attribute__((long_call));

both of which would be potentially problematical (which one do we believe?),
as distinct from the case that you have.  AFAICT there is nothing in the types
passed into the back-end to distinguish a declaration from a definition
in this context.

> The same with gcc3.3. With other function attributes (isr, section, naked 
> etc.) it works fine.
> 

Looking at the code suggests isr should be strictly enforced too.

R.


Re: Gcc 3.0 and unnamed struct: incorrect offsets

2005-06-07 Thread Joe Buck
On Tue, Jun 07, 2005 at 11:34:46AM +0530, Atul Talesara wrote:
> Hello folks,
>   This might have already been addressed, but I
> tried searching on GCC mailing list archives
> http://gcc.gnu.org/lists.html#searchbox
> and google before posting.
> ...

> My GCC version (cross-compiled to generate MIPS code):
> bash-2.05b$ mips-elf-gcc --version
> 3.0

Argh.  Stop right there.

You urgently need a newer version of GCC.


Re: Will Apple still support GCC development?

2005-06-07 Thread Toon Moene

Mirza Hadzic wrote:


A big endian system is indispensable if you are a compiler writer, 
because little endian hardware hides too many programmer errors



Can you show example(s) where little endian hides errors? Just curious...


Sorry, I was already asleep when your mail arrived ...

/* file 1 */
#include <stdio.h>

main()
{
long blah;

(void) foo(&blah);      /* no prototype for foo in scope, so no type checking */

printf("%ld\n", blah);
}

/*  different file  */

foo(int *bar)
{
*bar = 42;              /* stores only sizeof(int) bytes of the long */
}

Works on ILP32 mode machines (both endiannesses), "works" on I32LP64 
mode machines (little endian), fails miserably on I32LP64 mode machines 
(big endian) *as it should*.


--
Toon Moene - e-mail: [EMAIL PROTECTED] - phone: +31 346 214290
Saturnushof 14, 3738 XG  Maartensdijk, The Netherlands
A maintainer of GNU Fortran 95: http://gcc.gnu.org/fortran/
Looking for a job: Work from home or at a customer site; HPC, (GNU) 
Fortran & C


Re: Will Apple still support GCC development?

2005-06-07 Thread Toon Moene

Robert Dewar wrote:


Toon Moene wrote:


 The first
thing I did after receiving it is wiping out OS X and installing a 
real operating system, i.e., Debian.



Is it really necessary to post flame bait like this?  Hopefully people
will ignore it.


Perhaps the following little exchange rings a bell:

C: 'E's not pinin'! 'E's passed on! This customer is no more! He has 
ceased to be! 'E's expired and gone to meet 'is maker!


'E's a stiff! Bereft of life, 'e rests in peace! If you hadn't nailed 
'im to the perch 'e'd be pushing up the daisies!

'Is metabolic processes are now 'istory! 'E's off the twig!
'E's kicked the bucket, 'e's shuffled off 'is mortal coil, run down the 
curtain and joined the bleedin' choir invisibile!!


THIS IS AN EX-CUSTOMER!!

--
Toon Moene - e-mail: [EMAIL PROTECTED] - phone: +31 346 214290
Saturnushof 14, 3738 XG  Maartensdijk, The Netherlands
A maintainer of GNU Fortran 95: http://gcc.gnu.org/fortran/
Looking for a job: Work from home or at a customer site; HPC, (GNU) 
Fortran & C


Re: Subversion migration plans

2005-06-07 Thread Bernardo Innocenti
Andreas Schwab wrote:
> Bernardo Innocenti <[EMAIL PROTECTED]> writes:
> 
> 
>>I'm asking because I was planning to replace CVS in our
>>company, but first I wanted to see how good or badly
>>Subversion would perform on a largish project such as GCC.
> 
> You might want to ask the KDE project about their experience.  It's
> probably the biggest repository so far that has been converted to
> Subversion.

I update almost all of KDE's repository weekly, and with
SVN it feels a little faster than CVS, with just a few
minor inconveniences.

I don't have a developer account, so I can't tell what
it's like to use it for write operations.  OK, I'll ask
them.

-- 
  // Bernardo Innocenti - Develer S.r.l., R&D dept.
\X/  http://www.develer.com/



Re: Ada front-end depends on signed overflow

2005-06-07 Thread Robert Dewar

 * Paul Schlie:


 (With an exception being FP optimization, as FP is itself based
  only on the approximate not absolute representation of values.)


Floating-point arithmetic is not simply some inaccurate representation
of real arithmetic. It can be used this way by the programmer, but in
fact fpt operations have very well defined semantics, and compiler
writers have to be very careful not to interfere with these semantics
beyond the level permitted by the language. Certainly the above quoted
attitude would be pretty deadly if held by a compiler optimizer
writer!




Re: Ada front-end depends on signed overflow

2005-06-07 Thread Robert Dewar

Paul Schlie wrote:


- yes, it certainly enables an implementation to generate more efficient
  code which has no required behavior; so in effect it basically produces more
  efficient programs which don't reliably do anything in particular, which
  doesn't seem particularly useful?


You keep saying this over and over, but it does not make it true. Once
again, the whole idea of making certain constructs undefined is to
ensure that efficient code can be generated for well defined constructs.


- Essentially yes; as FP is an approximate, not absolute, representation
  of a value, it therefore seems reasonable to accept optimizations which
  may result in some least significant bits of ambiguity.


Rubbish, this shows a real misunderstanding of floating-point. FP values
are not "approximations", they are well defined values in a system of
arithmetic with precisely defined semantics, just as well defined as
integer operations. Any compiler that followed your misguided ideas
above would be a real menace and completely useless for any serious
fpt work.

As it is, the actual situation is that most serious fpt programmers
find themselves in the same position you are with integer arithmetic.
They often don't like the freedom given by the language to e.g. allow
extra precision (although they tend to be efficiency hungry, so one
doesn't know in practice that this is what they really want, since they
want it without paying for it, and they don't know how much they would
have to pay :-)


  Where integer operations are relied upon for state representations,
  these in general must remain precisely and deterministically
  calculated, as otherwise catastrophic semantic divergences may result.


Right, and please if you want to write integer operations, you must ensure
that you write only defined operations. If you write a+b and it overflows,
then you have written a junk C program, and you have no right to expect
anything whatsoever from the result. This is just another case of writing
a bug in your program, and consequently getting results you don't expect.

By the way, I know we are hammering this stuff (and Paul) a bit continuously
here, but these kind of misconceptions are very common among programmers who
do not understand as much as they should about language design and compilers.
I find I have to spend quite a bit of time in a graduate compiler course to
make sure everyone understands what "undefined" semantics are all about.


  (i.e. a single lsb divergence in an address calculation is not acceptable,
   although a similar divergence in an FP value is likely harmless.)


Nonsense, losing the last bit in an FP value can be fatal to many algorithms.
Indeed, some languages allow what seems to FP programmers to be too much
freedom, but not for a moment can a compiler writer contemplate doing an
optimization which is not allowed. For instance, in general replacing
(a+b)+c by a+(b+c) is an absolute no-no in most languages.
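
A hedged example of why, assuming IEEE double with round-to-nearest:

void demo(void)
{
  double big = 9007199254740992.0;   /* 2^53 */
  double x = (big + 1.0) + 1.0;      /* each + 1.0 rounds back to 2^53           */
  double y = big + (1.0 + 1.0);      /* 2^53 + 2, which is exactly representable */
  /* x != y, so the reassociation visibly changes the result. */
  (void) x; (void) y;
}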


- No, exactly the opposite, the definition of an order of evaluation
  eliminates ambiguities, it does not prohibit anything other than the
  compiler applying optimizations which would otherwise alter the meaning
  of the specified expression.


No, the optimizations do not alter the meaning of any C expression. If the
meaning is undefined, then

a) the programmer should not have written this rubbish

b) any optimization leaves the semantics undefined, and hence unchanged.

Furthermore, the optimizations are not about undefined expressions at all,
they are about generating efficient code for cases where the expression
has a well defined value, but the compiler cannot prove an as-if relation
true if the notion of undefined expressions is not present.





Re: Ada front-end depends on signed overflow

2005-06-07 Thread Joe Buck
On Tue, Jun 07, 2005 at 05:49:54PM -0400, Robert Dewar wrote:
>  * Paul Schlie:
> 
> > (With an exception being FP optimization, as FP is itself based
> >  only on the approximate not absolute representation of values.)
> 
> Floating-point arithmetic is not simply some inaccurate representation
> of real arithmetic. It can be used this way by the programmer, but in
> fact fpt operations have very well defined semantics, and compiler
> writers have to be very careful not to intefere with these semantics
> beyond the level permitted by the language. Certainly the above quoted
> attitude would be pretty deadly if held by a compiler optimizer
> writer!

Exactly.  I have written fixed-point packages as well as expression
manipulation packages that are based on the exact behavior of IEEE
floating point.  I have had running battles in the past (not recently)
with people who think that GCC should warn whenever == is applied to float
or double expressions.

There are some faults that we just have to live with (like the
uncontrolled extra precision on the x86, depending on whether a temporary
is spilled to memory or not), but programs can and do depend on the fact
that certain values and computations are represented precisely by floating
point arithmetic.
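
A small sketch (not from any particular package) of the kind of exactness
such code relies on:

void demo(void)
{
  double a = 0.5, b = 0.25;
  int exact = (a + b == 0.75);       /* true: sums of modest powers of two are
                                        represented exactly in binary FP, so ==
                                        is perfectly meaningful here            */
  int inexact = (0.1 + 0.2 == 0.3);  /* false: none of these decimals has an
                                        exact binary representation             */
  (void) exact; (void) inexact;
}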




Re: Proposed obsoletions

2005-06-07 Thread Nathanael Nerode
Paul Koning wrote:
>>"Nathanael" == Nathanael Nerode <[EMAIL PROTECTED]> writes:
> 
> 
>  Nathanael> * pdp11-*-* (generic only) Useless generic.
> 
> I believe this one generates DEC (as opposed to BSD) calling
> conventions, so I'd rather keep it around.  It also generates .s files
> that can (modulo a few bugfixes I need to get in) be assembled by gas.

Hmm, OK.  Could it be given a slightly more descriptive name, perhaps?
> 
>  paul
> 
> 
> 



GCC 4.0.1 RC1 bits will be spun RSN

2005-06-07 Thread Mark Mitchell
In http://gcc.gnu.org/ml/gcc/2005-06/msg00123.html, I mentioned 3 PRs as 
being very nice to fix for 4.0.1.  There were two others mentioned 
subsequently by others.  The complete list, then, was:


* 21528
* 21847
* 20928
* 19523
* 21828

As I was preparing this message, RTH fixed 21528.  Steven fixed 21847. 
Devang indicated a patch for 19523 would be forthcoming yesterday, but 
it did not appear, AFAICT.  There has been no action on 20928.  I made 
an attempt at fixing 21828.  However, Joseph and Zack both said that 
they were nervous about what side-effects my patch might have, and I am 
too.


So, I will be starting 4.0.1 RC1 bits as soon as I confirm that RTH's 
fix for 21528 is on the 4.0.1 branch.  If fixes for any of the other PRs 
pop up before Friday, we can include them, if they look safe.


I plan on testing 21828 more thoroughly, and committing it after 4.0.1 
is out, assuming all goes well.


Thanks,

--
Mark Mitchell
CodeSourcery, LLC
[EMAIL PROTECTED]
(916) 791-8304


gcc-3.4-20050607 is now available

2005-06-07 Thread gccadmin
Snapshot gcc-3.4-20050607 is now available on
  ftp://gcc.gnu.org/pub/gcc/snapshots/3.4-20050607/
and on various mirrors, see http://gcc.gnu.org/mirrors.html for details.

This snapshot has been generated from the GCC 3.4 CVS branch
with the following options: -rgcc-ss-3_4-20050607 

You'll find:

gcc-3.4-20050607.tar.bz2  Complete GCC (includes all of below)

gcc-core-3.4-20050607.tar.bz2 C front end and core compiler

gcc-ada-3.4-20050607.tar.bz2  Ada front end and runtime

gcc-g++-3.4-20050607.tar.bz2  C++ front end and runtime

gcc-g77-3.4-20050607.tar.bz2  Fortran 77 front end and runtime

gcc-java-3.4-20050607.tar.bz2 Java front end and runtime

gcc-objc-3.4-20050607.tar.bz2 Objective-C front end and runtime

gcc-testsuite-3.4-20050607.tar.bz2  The GCC testsuite

Diffs from 3.4-20050531 are available in the diffs/ subdirectory.

When a particular snapshot is ready for public consumption the LATEST-3.4
link is updated and a message is sent to the gcc list.  Please do not use
a snapshot before it has been announced that way.


Re: Andreas Schwab m68k Maintainer

2005-06-07 Thread Bernardo Innocenti
Joel Sherrill <[EMAIL PROTECTED]> wrote:
> 
> I'm happy to announce Andreas Schwab <[EMAIL PROTECTED]>
> as the new m68k port maintainer.

Kudos!

-- 
  // Bernardo Innocenti - Develer S.r.l., R&D dept.
\X/  http://www.develer.com/