Why is GCC 4.1.2 casting "short int" to "short unsigned int"?

2009-04-14 Thread 梁�
GCC 4.1.2 on an Intel Xeon CPU
The test program is test.c:

extern int bar(short);

int foo(short arg1, short arg2)
{
  short res;
  res = arg1 + arg2;
  return bar(res);
}

Compiled with: gcc -fdump-tree-all -S test.c
The resulting test.c.t02.original is:
;; Function foo (foo)
;; enabled by -tree-original

{
  short int res;

short int res;
  res = (short int) ((short unsigned int) arg1 + (short unsigned int) arg2);
  return bar ((int) res);
}

So my question is: why did gcc cast "short" to "short unsigned int"
before the addition and cast back afterwards?
--
   Best regards,

梁��


Re: Why is GCC 4.1.2 casting "short int" to "short unsigned int"?

2009-04-14 Thread Richard Guenther
2009/4/14 梁�� :
> GCC4.1.2 on a Intel Xeon CPU
> The test program is test.c:
>
> extern int bar(short);
>
> int foo(short arg1, short arg2)
> {
>  short res;
>  res = arg1 + arg2;
>  return bar(res);
> }
>
> Compiled with : gcc -fdump-tree-all -S test.c
> The resulting test.c.t02.original is :
> ;; Function foo (foo)
> ;; enabled by -tree-original
>
> {
>  short int res;
>
>short int res;
>  res = (short int) ((short unsigned int) arg1 + (short unsigned int) arg2);
>  return bar ((int) res);
> }
>
> So, my question is : why gcc casted "short" to "short unsigned int"
> before addition and casted back after?

This is a question for gcc-help.  The addition is carried out in
integer type and converted back to short.  The result you see
is the result of optimizing this.

Richard.

> Best regards,
>
>梁��
>


Re: [gnat] reuse of ASTs already constructed

2009-04-14 Thread Oliver Kellogg
Interim note:

Apparently, calling compile_file() more than once has not been done
before?
I am seeing many global variables in cgraphunit.c that need to
be reinitialized in this case.
Also, some static variables are defined locally in functions, e.g.
'first_analyzed' and 'first_analyzed_var' in cgraph_analyze_functions().
Those need to be pulled outside for reinitialization.

I am making the necessary changes and extending init_cgraph() as needed.

I hope my changes stand a chance of being integrated ;)

Oliver


On 2009-04-12 at 19:29 +0200, Oliver Kellogg wrote:
> Picking up an old thread,
> http://gcc.gnu.org/ml/gcc/2003-03/msg00281.html
> 
> 
> On Tue, 4 Mar 2003, Geert Bosch  wrote:
> > [...]
> > Best would be to first post a design overview,
> > before doing a lot of work in order to prevent spending time
> > on implementing something that may turn out to have fundamental
> > problems.
> 
> I've done a little experimenting to get a feel for this.
> 
> I've looked at the work done toward the GCC compile server but
> decided that I want to concentrate on GNAT trees (whereas the
> compile server targets the GNU trees.)
> 
> Also I am aiming somewhat lower - not making a separate compile
> server process but rather extending gnat1 to handle multiple
> files in a single invocation.
> 
> The current GNAT code makes a strong assumption that there be
> only one main unit, and this Main_Unit be located at index 0 of
> Lib.Units.Table (see procedure Lib.Load.Load_Main_Source).
> 
> I am currently looking at having each main unit supplied on
> the gnat1 command line overwrite the Main_Unit in the Units table.
> 
> What do you think of this approach?
> 
> The attached patch sets the stage for passing multiple source
> files to a single gnat1 invocation. (Beware, this is a rough cut.
> Best use "svn diff --diff-cmd diff -x -uw" after applying as
> there are many changes that only affect indentation.)
> 
> Thanks,
> 
> Oliver
> 
> 



GCC 4.4.0-rc1 available

2009-04-14 Thread Jakub Jelinek
GCC 4.4.0 release candidate 1 is now available at:

ftp://gcc.gnu.org/pub/gcc/snapshots/4.4.0-RC-20090414/

Please test the tarballs there and report any problems to Bugzilla.  CC me
on the bugs if you believe they are regressions from previous releases
severe enough that they should block the 4.4.0 release.


Parma Polyhedra Library 0.10.1

2009-04-14 Thread Roberto Bagnara


We are pleased to announce the availability of PPL 0.10.1, a new release
of the Parma Polyhedra Library.

This release includes several important improvements to PPL 0.10,
among which is better portability (including the support for
cross-compilation), increased robustness, better packaging and several
bug fixes.  The precise list of user-visible changes is available at
http://www.cs.unipr.it/ppl/Download/ftp/releases/0.10.1/NEWS .
For more information, please come and visit the PPL web site at

http://www.cs.unipr.it/ppl/

On behalf of all the past and present contributors listed at
http://www.cs.unipr.it/ppl/Credits/ and in the file CREDITS,

Roberto Bagnara  
Patricia M. Hill 
Enea Zaffanella  

--
Prof. Roberto Bagnara
Computer Science Group
Department of Mathematics, University of Parma, Italy
http://www.cs.unipr.it/~bagnara/
mailto:bagn...@cs.unipr.it




GCC 4.4.0 Status Report (2009-04-14)

2009-04-14 Thread Jakub Jelinek
Status
======

Release Candidate 1 has been released today.  The branch remains
open under the usual release branch rules; it is open for regression
and documentation fixes only, but please be very conservative at this
point in deciding what changes are needed before the 4.4.0 release
and what can wait until after that release.

The licensing changes have already been backported to the branch, and
we currently have 78 serious regressions (below 100) and zero P1
regressions, so there are no further obstacles to releasing GCC 4.4.0
soon.  Given that the wait for the license changes has been long, I
hope people have had plenty of time to test 4.4, so the Release
Candidate testing period can be relatively short.
If nothing serious is reported, I'd like to release 4.4.0 next week.

Quality Data
============

Priority   #    Change from Last Report
--------  ---   -----------------------
P1          0   +- 0
P2         77   -  2
P3          1   -  2
--------  ---   -----------------------
Total      78   -  4

Previous Report
===============

http://gcc.gnu.org/ml/gcc/2009-03/msg00397.html

The next status report will be sent by Joseph.


libiberty configuration for DJGPP

2009-04-14 Thread Eli Zaretskii
The current libiberty misconfigures itself for native DJGPP builds,
because it tries to avoid compiling and linking test programs, for the
sake of cross-compilation.  But the bits that tell the configure
script which functions are available in the DJGPP library are in the
wrong place: a section that also runs during a native build, where
there is no reason to avoid linking.  The result is that the native
DJGPP build of GDB is unnecessarily broken, because GDB needs several
functions from libiberty that are unavailable in the DJGPP library but
are not auto-detected.

The suggested patch below fixes that by moving the explicit list of
known functions to a section that only runs when cross-compiling.
While at it, I also added to that section the functions needed by GDB
that must be provided by libiberty.  The native build will now
auto-detect all the required functions.

2009-04-14  Eli Zaretskii  

* configure.ac  (setobjs, msdosdjgpp): Move a-priori setting of
existing and required library functions to with_target_subdir
section, so that the native build does detect them at configure
time.

--- configure.a~0   2009-04-08 22:42:57.0 +0300
+++ configure.ac2009-04-14 15:14:46.0 +0300
@@ -469,6 +469,28 @@
 setobjs=yes
 ;;
 
+  *-*-msdosdjgpp)
+AC_LIBOBJ([vasprintf])
+AC_LIBOBJ([vsnprintf])
+AC_LIBOBJ([snprintf])
+AC_LIBOBJ([asprintf])
+
+for f in atexit basename bcmp bcopy bsearch bzero calloc clock ffs \
+ getcwd getpagesize getrusage gettimeofday \
+ index insque memchr memcmp memcpy memmove memset psignal \
+ putenv random rename rindex sbrk setenv stpcpy strcasecmp \
+ strchr strdup strerror strncasecmp strrchr strstr strtod \
+ strtol strtoul sysconf times tmpnam vfprintf vprintf \
+ vsprintf waitpid
+do
+  n=HAVE_`echo $f | tr 'abcdefghijklmnopqrstuvwxyz' 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'`
+  AC_DEFINE_UNQUOTED($n)
+done
+
+
+setobjs=yes
+;;
+
   esac
 
   # We may wish to install the target headers somewhere.
@@ -548,23 +570,6 @@
 setobjs=yes
 ;;
 
-  *-*-msdosdjgpp)
-for f in atexit basename bcmp bcopy bsearch bzero calloc clock ffs \
- getcwd getpagesize getrusage gettimeofday \
- index insque memchr memcmp memcpy memmove memset psignal \
- putenv random rename rindex sbrk setenv stpcpy strcasecmp \
- strchr strdup strerror strncasecmp strrchr strstr strtod \
- strtol strtoul sysconf times tmpnam vfprintf vprintf \
- vsprintf waitpid
-do
-  n=HAVE_`echo $f | tr 'abcdefghijklmnopqrstuvwxyz' 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'`
-  AC_DEFINE_UNQUOTED($n)
-done
-
-
-setobjs=yes
-;;
-
   esac
 fi
 


needed-list fails in libiberty

2009-04-14 Thread Eli Zaretskii
The following snippet from libiberty/Makefile.in:

# needed-list is used by libstdc++.  NEEDED is the list of functions
# to include there.  Do not add anything LGPL to this list; libstdc++
# can't use anything encumbering.
NEEDED = atexit calloc memchr memcmp memcpy memmove memset rename strchr \
 strerror strncmp strrchr strstr strtol strtoul tmpnam vfprintf vprintf \
 vfork waitpid bcmp bcopy bzero
needed-list: Makefile
rm -f needed-list; touch needed-list; \
for f in $(NEEDED); do \
  for g in $(LIBOBJS) $(EXTRA_OFILES); do \
case "$$g" in \
  *$$f*) echo $$g >> needed-list ;; \
esac; \
  done; \
done

assumes that either $(LIBOBJS) or $(EXTRA_OFILES), or both, will be
non-empty.  If that assumption is wrong, building libiberty fails:

rm -f needed-list; touch needed-list; \
for f in atexit calloc memchr memcmp memcpy memmove memset rename strchr 
strerror strncmp strrchr strstr strtol strtoul tmpnam vfprintf vprintf vfork 
waitpid bcmp bcopy bzero; do \
  for g in  ; do \
case "$g" in \
  *$f*) echo $g >> needed-list ;; \
esac; \
  done; \
done
d:\usr\tmp/dj85: line 1: syntax error near unexpected token `;'

This was in a DJGPP build of GDB, and the root cause of $(LIBOBJS)
being empty was another bug (see my other mail today about "libiberty
configuration for DJGPP").  But in principle, some build that has all
of the required functions in the system library can potentially fail
in the same way, no?

So how about the following patch?

2009-04-14  Eli Zaretskii  

* Makefile.in (needed-list): Add "notused" to the list of the
inner `for', to avoid failure when both $(LIBOBJS) and
$(EXTRA_OFILES) are empty.


--- libiberty/Makefile.i~0  2009-03-28 05:07:29.0 +0300
+++ libiberty/Makefile.in   2009-04-11 12:27:37.28400 +0300
@@ -383,10 +383,12 @@
 NEEDED = atexit calloc memchr memcmp memcpy memmove memset rename strchr \
 strerror strncmp strrchr strstr strtol strtoul tmpnam vfprintf vprintf \
 vfork waitpid bcmp bcopy bzero
+# The notused gork is for when both LIBOBJS and EXTRA_OFILES end up
+# empty: the for loop will then barf.
 needed-list: Makefile
rm -f needed-list; touch needed-list; \
for f in $(NEEDED); do \
- for g in $(LIBOBJS) $(EXTRA_OFILES); do \
+ for g in $(LIBOBJS) $(EXTRA_OFILES) notused; do \
case "$$g" in \
  *$$f*) echo $$g >> needed-list ;; \
esac; \


Re: needed-list fails in libiberty

2009-04-14 Thread Joseph S. Myers
On Tue, 14 Apr 2009, Eli Zaretskii wrote:

> The following snippet from libiberty/Makefile.in:
> 
> # needed-list is used by libstdc++.  NEEDED is the list of functions
> # to include there.  Do not add anything LGPL to this list; libstdc++
> # can't use anything encumbering.

Since this comment relates to libstdc++ v2 and GCC 3.0 and later do not 
use this, removing all the obsolete code (and maybe everything relating to 
building a target libiberty) would be the obvious fix for any problems 
with it.

(As far as I can tell, the only use of any libiberty code for the target 
is libstdc++-v3 using a demangler source file; libiberty built for the 
target doesn't seem to be used at all.)

-- 
Joseph S. Myers
jos...@codesourcery.com


Re: libiberty configuration for DJGPP

2009-04-14 Thread Ian Lance Taylor
Eli Zaretskii  writes:

> 2009-04-14  Eli Zaretskii  
>
>   * configure.ac  (setobjs, msdosdjgpp): Move a-priori setting of
>   existing and required library functions to with_target_subdir
>   section, so that the native build does detect them at configure
>   time.

This is OK.

Thanks.

In the future, please send patches only to gcc-patches@, not to both
gcc-patches@ and g...@.  Thanks.

Ian


Question about creating stdint.h on systems that don't have it

2009-04-14 Thread Steve Ellcey
I am working on the C99 stdint.h support for HP-UX.  On HP-UX 11.23 and
11.31 where stdint.h exists I am setting use_gcc_stdint to "wrap" and
adding some hacks to inclhack.def and that seems to be working.

On HP-UX 11.11 there is no stdint.h, but I think we want to provide
one.  I tried setting use_gcc_stdint to "provide", but that doesn't
work because HP-UX 11.11 already has typedefs in other header files
for some of the things the GCC-provided stdint.h wants to typedef,
like int_fast8_t.

I believe I want to create my own stdint.h header file, one that looks
more like the HP-UX 11.23 one than the GCC provided one, but I am not
sure how to do that.  I don't see any examples in inclhack.def of
providing a header file that doesn't exist, only cases of changing or
completely replacing existing header files.

Is there a standard way of having GCC provide a new header file
for a given platform?

Steve Ellcey
s...@cup.hp.com


Query gcc support on sigma design board

2009-04-14 Thread manjunatha srinivasan
Hi

Does GCC support Sigma Designs board processors?  If so, which SMP
processor series is supported?  If not, how can support for Sigma
Designs boards be enabled in the GCC sources?

Regards
Manjunatha  Srinivasan N


Re: messages

2009-04-14 Thread Arthur Schwarz


> 
>   So I guess, yes, I'm asking Arthur to suggest rules
> of relevance that would
> enable the compiler to decide what kind of user error is
> implied by a given
> syntax error.
> 
>     cheers,
>       DaveK
> 
You're asking for a lot. I've never been accused of being smart (the quip 
being, 'I've always been admired for it'.)

In one of my previous e-mails I mentioned a criterion for selection: do nothing. 
The compiler already provides adequate analysis. What is missing is an analysis 
of that analysis to produce more 'user friendly' diagnostic messages. What I 
think we get is a compiler writer's version of a suitable diagnostic. The 
messages seem to contain what a compiler writer needs in order to determine what 
is at fault, perhaps with sufficient detail to determine that the message itself 
is at fault. What I suggested is that a separate subsystem be created to 
extract 'meaning' from the compiler writer's message, and present a user 
version of the message. I do not believe that the compiler writer's messaging 
should be removed - it is useful for compiler writers - just that it be 
optioned-out for the user.

Taking the approach that the existing messages are satisfactory, the dialog on 
relevance moves a tad to the right. The issue becomes one of comparing the 
detailed messaging to the textual fault - in the original case, of pointing 
out that an argument is in error because it doesn't match any known overload 
(or template, I suppose). That way, the user is told which argument 
is at fault rather than having to discover that fact.

The error analysis subsystem, in the case of not finding a suitable overloaded 
function, is one of checking the user-provided arguments against each potential 
overload. Essentially something like:

function()      overload()
  arg<1>          arg<1>
  arg<2>          arg<2>      o o o
  o o o           o o o
  arg<n>          arg<n>

and finding which arguments fail. There may be more than one answer, and some 
heuristic is needed to determine which, or how many, messages to provide for a 
given failure. To throw an answer on relevancy onto the mud-pile, suppose we 
choose to message the cases with the fewest failures. Under this heuristic, 
overloads with only one failing argument take precedence over overloads with 
two argument failures, and so on. This is a mud-pile heuristic, sufficient to 
start a discussion but probably not suitable for ending one.

In most cases I would find it a daunting task to provide an algorithm to decide 
which error, among many, is the 'most important' and should be addressed while 
the others are left unaddressed. The easy answer is to provide an option which 
details the 'depth' of fault diagnostics to provide - avoiding the problem of 
finding the best diagnostic but, again, introducing the notion of fault 
ranking. I also think that for some cases it might be possible to actually 
establish the 'best' wrong answer. Again, ideas are cheap, work is hard.

In one of my previous e-mails I suggested that it might be possible to 
characterize semantic errors and to standardize  existing diagnostic messaging 
to fit the characterization. The process for many 'user friendly' messages then 
becomes one of analysis of the cause of faults against the detailed messaging 
to extract some user oriented messaging. I said that the pragmatics of the 
approach may be hard. 

Now I know that this is all hand-waving and random bits of wisdom. There is a 
wide area for investigation, discussion, and debate. 

Now that I've ended up confusing myself

art


Re: Question about creating stdint.h on systems that don't have it

2009-04-14 Thread Joseph S. Myers
On Tue, 14 Apr 2009, Steve Ellcey wrote:

> On HP-UX 11.11 there is no stdint.h but I think we want to provide one. 
> I tried setting use_gcc_stdint to "provide" but that doesn't work
> because HP-UX 11.11 already has typedefs in other header files for some
> of the things the GCC provided stdint.h wants to typedef like
> int_fast8_t.

GCC allows duplicate typedefs in system headers (there's also a proposal 
for C1x to allow them more generally, as C++ does).  So as long as the 
types are consistent with those in other headers (is this a system with 
inttypes.h but not stdint.h?) there should be no problem with using GCC's 
copy.

> Is there a standard way of having GCC provide a new header file
> for a given platform?

You'd create an alternative setting to "provide", "wrap" and "none", say 
"hpux", and add code to handle it.  But you shouldn't need to do so simply 
because some other headers define some of the types.

-- 
Joseph S. Myers
jos...@codesourcery.com


Re: Parma Polyhedra Library 0.10.1

2009-04-14 Thread Richard Guenther
On Tue, Apr 14, 2009 at 3:02 PM, Roberto Bagnara  wrote:
>
> We are pleased to announce the availability of PPL 0.10.1, a new release
> of the Parma Polyhedra Library.
>
> This release includes several important improvements to PPL 0.10,
> among which is better portability (including the support for
> cross-compilation), increased robustness, better packaging and several
> bug fixes.  The precise list of user-visible changes is available at
> http://www.cs.unipr.it/ppl/Download/ftp/releases/0.10.1/NEWS .
> For more information, please come and visit the PPL web site at
>
>    http://www.cs.unipr.it/ppl/
>
> On behalf of all the past and present contributors listed at
> http://www.cs.unipr.it/ppl/Credits/ and in the file CREDITS,
>
>        Roberto Bagnara  
>        Patricia M. Hill 
>        Enea Zaffanella  

It seems to build and test ok on {i586,ia64,ppc,ppc64,s390,x86_64}-linux
but I get

PASS: nnc_writepolyhedron1
/bin/sh: line 4: 29952 Segmentation fault  ${dir}$tst
FAIL: memory1
==
1 of 191 tests failed
Please report to ppl-de...@cs.unipr.it
==

on s390x-linux.  Does the testsuite stop after the first error?  If not,
what is memory1 testing?

Thanks,
Richard.


Re: Question about creating stdint.h on systems that don't have it

2009-04-14 Thread Steve Ellcey
On Tue, 2009-04-14 at 15:50 +, Joseph S. Myers wrote:

> > Is there a standard way of having GCC provide a new header file
> > for a given platform?
> 
> You'd create an alternative setting to "provide", "wrap" and "none", say 
> "hpux", and add code to handle it.  But you shouldn't need to do so simply 
> because some other headers define some of the types.

OK, when looking at the preprocessed output I see:

typedef signed char int_least8_t;
typedef __INT_LEAST8_TYPE__ int_least8_t;

So the problem seems to be that __INT_LEAST8_TYPE__ isn't defined.
On HP-UX 11.23 where I use "wrap" I created a hpux-stdint.h file
(like glibc-stdint.h and newlib-stdint.h) to use.  I guess I need
to use this on HP-UX 11.11 too, even though I am using "provide"
instead of "wrap".

Steve Ellcey
s...@cup.hp.com




Re: Question about creating stdint.h on systems that don't have it

2009-04-14 Thread John David Anglin
I have a patch to provide stdint.h on HP-UX that I have been testing.
I have resolved the consistency issues with inttypes.h, although there
are some slightly weird aspects.  For example, I found "signed char"
and "char" are inconsistent, yet characters are signed.  The
specification of some types differs from that in inttypes.h, but gcc
doesn't object.  For example, I specify "long long int" instead of
just "long long".

I have to define __STDC_EXT__ under all circumstances to get consistency
of the long long types in the 32-bit runtime.  Previously, gcc for hpux
tried to mirror the behavior of the HP C compiler wrt long long types.

I think the patch should work for most HP-UX versions except perhaps
11.31 and later, which may provide stdint.h.  I don't have access to a
machine with 11.31 or later.

Dave
-- 
J. David Anglin  dave.ang...@nrc-cnrc.gc.ca
National Research Council of Canada  (613) 990-0752 (FAX: 952-6602)


Re: messaging

2009-04-14 Thread aschwarz1309

Thanks Kai. I do have what I hope is a more specific subjective reason for 
saying that I think the existing diagnostics should be changed. Fundamentally, 
what is provided in the messaging is not an indication of what is wrong, but an 
indication of what is required to repair the fault. My objections then become:
1: As an old man full of wisdom, most developers can't distinguish a
   'primary-expression' from a washing machine. Further, to determine
   what correction might be needed most users would have to research
   the C++ standard (or other standardized document supported by the
   g++ development community) to find out exactly what constitutes a
   'primary-expression'. 
2: It puts an obligation on the g++ development community to ensure
   that the messaging is consistent with documentation and that if the 
   term 'primary-expression' changes then g++ will change the messaging
   to conform to the new term. 
3: The cause of the error is more specific than its solution. The cause
   of the fault is that the user (in this case me) provided something that
   was wrong. It wasn't the lack of a 'primary-expression' but the
   existence of the illegal constructs. My conjecture is that if the
   message says "you did this wrong" then the user would have an easy
   time finding a fix.

I won't argue the details of my wording. My intent is not to show that I 
am a better wordsmith but that the existing diagnostic messages are not 
specific enough. From Item 1 above, in order for the average user to fix the 
error, the user must research the terms used, then compare the syntax given with 
the actual fault, and then fix the error. If the message says "this is the 
fault", the research goes the way of the woolly mammoth.

The paradigm is that the message should provide the minimum amount of 
information required to identify the syntax/semantics which caused the failure.

art

--- On Mon, 4/13/09, Kai Henningsen  wrote:

> From: Kai Henningsen 
> Subject: Re: messaging
> To: "Arthur Schwarz" 
> Cc: gcc@gcc.gnu.org
> Date: Monday, April 13, 2009, 11:12 PM
> Arthur Schwarz schrieb:
> > In the following code fragment:
> > 
> > # include 
> > # include 
> > # include 
> > 
> > using namespace std;
> > void CommandLine(int argc, char** argv);
> > int main(int argc, char** argv) {
> >    CommandLine(argc, argv[]);
> >    ifstream x.open(argv[1], ios:in);
> >    ofstream y.open(argv[1], ios::in);
> >       return 0;
> > };
> > 
> > g++-4 messaging is:
> >>> g++-4 x.cpp
> > x.cpp: In function 'int main(int, char**)':
> > x.cpp:8: error: expected primary-expression before ']'
> token
> > x.cpp:10: error: expected primary-expression before
> ':' token
> > 
> > A recommendation and reason for change is:
> > 1: x.cpp:8 error: illegal to pass an array without subscript value
> >    as an argument
> >    The given message is accurate but non-expressive of the reason
> >    for failure.
> 
> Actually, in this case I'd say that the original message is
> perfectly fine, and your suggestion is rather confusing.
> However, what one could say here is something like "[] is
> only allowed in declarations".
> 
> 
> > 3: cpp:10 error: illegal scope resolution operator ':'
> >    From memory, there are three uses of ':' in C++
> >    ':'   label terminator, :
> >    ':'   case in a switch statement, case :
> >    ':'   scope resolution operator, "::"
> >    The given diagnostic message is deceptive.
> 
> Could perhaps say "':' is not a scope resolution operator",
> unless someone comes up with a use case where it is ...
>



Re: Question about creating stdint.h on systems that don't have it

2009-04-14 Thread Andrew Pinski
On Tue, Apr 14, 2009 at 9:18 AM, John David Anglin
 wrote:
> I have a patch to provide stdint.h on HP-UX that I have been testing.
> I have resolved the consistency issues with inttypes.h although there
> are some slightly wierd aspects.  For example, I found "signed char" and
> "char" are inconsistent, yet characters are signed.  The specification
> of some types differs from that in inttypes.h for some types but gcc
> doesn't object.  For example, I specify "long long int" instead of just
> "long long".

There are three distinct character types in C and C++: signed char,
unsigned char and char.  Even if char is signed by default, char and
signed char are still incompatible types.  This is different from long
long int and long long, which are the same type.

Thanks,
Andrew Pinski


Re: messages

2009-04-14 Thread Joe Buck
On Mon, Apr 13, 2009 at 05:10:23PM -0700, Dave Korn wrote:
> Joe Buck wrote:
> 
> > And this, of course, means we have to define relevance.  There are two
> > cases: the first is when we fail to choose an overload because of
> > ambiguity; there we can just report all of the choices that are tied for
> > "equally good".  The other case is where no overload matches.  There
> > we could try to produce a heuristic that would "score" each alternative.
> > Matching some but not all of the arguments would contribute some points,
> > likewise if the addition or removing a const qualifier would cause a
> > match, that would score points.  It would take some tweaking to produce
> > a meaningful result.
> 
>   Hmm, I'm not a language lawyer, but isn't there already a well-ordered
> definition of more or less closely-matching in the whole C++ name resolution
> thing?  It could be confusing if our warnings operated a significantly
> different standard for what's close and what's not than that defined in the
> language spec, but in terms of doing what the user meant, we'd probably want
> to treat certain mismatches as more significant than others for diagnostic
> purposes (e.g. the common char[] vs char * when passing a const string 
> problem).

Standards compliance is satisfied when GCC accepts, or fails to accept,
the code.  If it is not accepted, then the purpose of what we print is
to help the user diagnose and correct the problem.

So if there are 17 possible overloads (which is common for operator<<
bugs), GCC's current behavior of printing all 17 is ridiculous, and
some means needs to be chosen to create a shorter list.



Re: Question about creating stdint.h on systems that don't have it

2009-04-14 Thread John David Anglin
Attached is my change as it currently stands.

Dave
-- 
J. David Anglin  dave.ang...@nrc-cnrc.gc.ca
National Research Council of Canada  (613) 990-0752 (FAX: 952-6602)
Index: config.gcc
===
--- config.gcc  (revision 145670)
+++ config.gcc  (working copy)
@@ -937,6 +937,7 @@
tmake_file="$tmake_file pa/t-slibgcc-dwarf-ver"
fi
use_collect2=yes
+   use_gcc_stdint=provide
gas=yes
;;
 hppa*64*-*-hpux11*)
@@ -974,6 +975,7 @@
thread_file=posix
;;
esac
+   use_gcc_stdint=provide
gas=yes
;;
 hppa[12]*-*-hpux11*)
@@ -1003,6 +1005,7 @@
thread_file=posix
;;
esac
+   use_gcc_stdint=provide
use_collect2=yes
gas=yes
;;
Index: config/pa/pa64-hpux.h
===
--- config/pa/pa64-hpux.h   (revision 145670)
+++ config/pa/pa64-hpux.h   (working copy)
@@ -19,6 +19,20 @@
 along with GCC; see the file COPYING3.  If not see
 .  */
 
+/* C99 stdint.h types.  */
+#undef INT64_TYPE
+#define INT64_TYPE "long int"
+#undef UINT64_TYPE
+#define UINT64_TYPE "long unsigned int"
+#undef INT_LEAST64_TYPE
+#define INT_LEAST64_TYPE "long int"
+#undef UINT_LEAST64_TYPE
+#define UINT_LEAST64_TYPE "long unsigned int"
+#undef INT_FAST64_TYPE
+#define INT_FAST64_TYPE "long int"
+#undef UINT_FAST64_TYPE
+#define UINT_FAST64_TYPE "long unsigned int"
+
 /* We can debug dynamically linked executables on hpux11; we also
want dereferencing of a NULL pointer to cause a SEGV.  Do not move
the "+Accept TypeMismatch" switch.  We check for it in collect2
Index: config/pa/pa-hpux.h
===
--- config/pa/pa-hpux.h (revision 145670)
+++ config/pa/pa-hpux.h (working copy)
@@ -1,5 +1,5 @@
 /* Definitions of target machine for GNU compiler, for HP-UX.
-   Copyright (C) 1991, 1995, 1996, 2002, 2003, 2004, 2007, 2008
+   Copyright (C) 1991, 1995, 1996, 2002, 2003, 2004, 2007, 2008, 2009
Free Software Foundation, Inc.
 
 This file is part of GCC.
@@ -32,6 +32,39 @@
 #define SIZE_TYPE "unsigned int"
 #define PTRDIFF_TYPE "int"
 
+/* C99 stdint.h types.  */
+#define INT8_TYPE "char"
+#define INT16_TYPE "short int"
+#define INT32_TYPE "int"
+#define INT64_TYPE "long long int"
+#define UINT8_TYPE "unsigned char"
+#define UINT16_TYPE "short unsigned int"
+#define UINT32_TYPE "unsigned int"
+#define UINT64_TYPE "long long unsigned int"
+
+#define INT_LEAST8_TYPE "char"
+#define INT_LEAST16_TYPE "short int"
+#define INT_LEAST32_TYPE "int"
+#define INT_LEAST64_TYPE "long long int"
+#define UINT_LEAST8_TYPE "unsigned char"
+#define UINT_LEAST16_TYPE "short unsigned int"
+#define UINT_LEAST32_TYPE "unsigned int"
+#define UINT_LEAST64_TYPE "long long unsigned int"
+
+#define INT_FAST8_TYPE "int"
+#define INT_FAST16_TYPE "int"
+#define INT_FAST32_TYPE "int"
+#define INT_FAST64_TYPE "long long int"
+#define UINT_FAST8_TYPE "unsigned int"
+#define UINT_FAST16_TYPE "unsigned int"
+#define UINT_FAST32_TYPE "unsigned int"
+#define UINT_FAST64_TYPE "long long unsigned int"
+
+#define INTPTR_TYPE "long int"
+#define UINTPTR_TYPE "long unsigned int"
+
+#define SIG_ATOMIC_TYPE "unsigned int"
+
 #define LONG_DOUBLE_TYPE_SIZE 128
 #define HPUX_LONG_DOUBLE_LIBRARY
 #define FLOAT_LIB_COMPARE_RETURNS_BOOL(MODE, COMPARISON) ((MODE) == TFmode)
@@ -56,11 +89,11 @@
builtin_define ("__hpux__");\
builtin_define ("__unix");  \
builtin_define ("__unix__");\
+   builtin_define ("__STDC_EXT__");\
if (c_dialect_cxx ())   \
  { \
builtin_define ("_HPUX_SOURCE");\
builtin_define ("_INCLUDE_LONGLONG");   \
-   builtin_define ("__STDC_EXT__");\
  } \
else if (!flag_iso) \
  { \
@@ -76,8 +109,6 @@
builtin_define ("_PWB");\
builtin_define ("PWB"); \
  } \
-   else\
- builtin_define ("__STDC_EXT__");  \
  } \
if (TARGET_SIO) \
  builtin_define ("_SIO");  \
Index: config/pa/pa-hpux10.h
=

Re: Question about creating stdint.h on systems that don't have it

2009-04-14 Thread Steve Ellcey
On Tue, 2009-04-14 at 12:18 -0400, John David Anglin wrote:
> I have a patch to provide stdint.h on HP-UX that I have been testing.
> I have resolved the consistency issues with inttypes.h although there
> are some slightly weird aspects.  For example, I found "signed char" and
> "char" are inconsistent, yet characters are signed.  The specification
> of some types differs from that in inttypes.h for some types but gcc
> doesn't object.  For example, I specify "long long int" instead of just
> "long long".
> 
> I have to define __STDC_EXT__ under all circumstances to get consistency
> of the long long types in the 32-bit runtime.  Previously, gcc for hpux
> tried to mirror the behavior of the HP C compiler wrt long long types.

Rather than define __STDC_EXT__ all the time I was looking at defining
__LL_MODE__.  I created this inclhack.def entry:


+fix = {
+hackname  = hpux_longlong;
+mach  = "*-hp-hpux11.[12]*";
+files = sys/_inttypes.h;
+select= "#endif.*__LP64__.*";
+c_fix = format;
+c_fix_arg = "%0\n#if !defined(__STDC_EXT__) && !defined(__LP64__) && defined(__STDC__) && ((__STDC_VERSION__-1+1) >= 199901L)\n#define __LL_MODE__\n#undef __STDC_32_MODE__\n#endif\n";
+test_text = "#include <sys/_inttypes.h>";
+};


> I think the patch chould work for most HP-UX versions except perhaps
> 11.31 and later.  These systems may provide stdint.h.  I don't have
> access to a machine with 11.31 or later.

I think all 11.23 systems should have stdint.h too. It is only 11.11 and
older systems that do not have stdint.h.  I don't have any 11.00 systems
anymore but I am currently looking at 11.11.

I got your patch and will compare that with what I have and see if I can
merge the two.

Steve Ellcey
s...@cup.hp.com



Re: Parma Polyhedra Library 0.10.1

2009-04-14 Thread Roberto Bagnara

Richard Guenther wrote:

On Tue, Apr 14, 2009 at 3:02 PM, Roberto Bagnara  wrote:

We are pleased to announce the availability of PPL 0.10.1, a new release
of the Parma Polyhedra Library.


It seems to build and test ok on {i586,ia64,ppc,ppc64,s390,x86_64}-linux
but I get

PASS: nnc_writepolyhedron1
/bin/sh: line 4: 29952 Segmentation fault  ${dir}$tst
FAIL: memory1
==
1 of 191 tests failed
Please report to ppl-de...@cs.unipr.it
==

on s390x-linux.  Does the testsuite stop after the first error?


Hi Richard.

The testsuite does not proceed after the first directory that gives
an error.  In your case, the `tests/Polyhedron' directory produced that
error and the `tests/Grid' directory is the only subdirectory of `tests'
that has not been tested because of that error.


If not,
what is memory1 testing?


It tests the PPL features that allow recovery after an out-of-memory
error, i.e., when std::bad_alloc is thrown.  It does so by limiting
the amount of memory available to the process, attempting some
expensive computation, catching std::bad_alloc, and restarting.
The key function is this one:

bool
guarded_compute_open_hypercube_generators(dimension_type dimension,
  unsigned long max_memory_in_bytes) {
  try {
limit_memory(max_memory_in_bytes);
compute_open_hypercube_generators(dimension);
return true;
  }
  catch (const std::bad_alloc&) {
nout << "out of virtual memory" << endl;
return false;
  }
  catch (...) {
exit(1);
  }
  // Should never get here.
  exit(1);
}

From the fact that you observe this failure, I gather that the configure
script found a version of GMP compiled with -fexceptions.  Unfortunately,
this is not always enough.  For instance, on the Itanium the test fails
because of the libunwind bug reported in

   http://lists.gnu.org/archive/html/libunwind-devel/2008-09/msg1.html

Hence the test is disabled if defined(__ia64).  I don't know what the
problem could be on s390x-linux.  Do you know if there is an s390x-linux
machine we can obtain access to for the purpose of debugging?
Cheers,

   Roberto

--
Prof. Roberto Bagnara
Computer Science Group
Department of Mathematics, University of Parma, Italy
http://www.cs.unipr.it/~bagnara/
mailto:bagn...@cs.unipr.it


Re: Query gcc support on sigma design board

2009-04-14 Thread Ian Lance Taylor
manjunatha srinivasan  writes:

> Is  GCC is supporting sigma design board processors? If so which SMP
> processor series is supported. If GCC doesn't support how to enable
> the support for sigma design boards in GCC sources.

This sort of question should be sent to gcc-h...@gcc.gnu.org, not to
g...@gcc.gnu.org.  Please take any follow-ups to gcc-help.  Thanks.

What matters to gcc is the CPU architecture.  It seems to me that Sigma
Design Boards use a MIPS processor.  So, yes, gcc supports it.  Note
that gcc is only a compiler, and does not include a C library or a board
support package.

Ian


Re: messaging

2009-04-14 Thread Kai Henningsen

aschwarz1...@verizon.net wrote:

Thanks Kai. I do have what I hope is a more specific subjective reason for 
saying that I think the existing diagnostics should be changed. Fundamentally, 
what is provided in the messaging is not an indication of what is wrong, but an 
indication of what is required to repair the fault. My objections then become:
1: As an old man full of wisdom, most developers can't distinguish a
   'primary-expression' from a washing machine. Further, to determine


Well, here I think that such people should perhaps put down the keyboard 
and back away from using the compiler slowly. That's about the same as 
driving a car without knowing what a stop sign means.


At least if they're unable to infer that a primary expression is a kind 
of expression, and there's no expression between [ and ].



   what correction might be needed most users would have to research
   the C++ standard (or other standardized document supported by the
   g++ development community) to find out exactly what constitutes a
   'primary-expression'. 


Let me put it like this: in this particular case, either it isn't 
particularly hard for the programmer to realise that he left out the 
index expression he wanted to write, or if that wasn't the mistake it 
demonstrates a rather fundamental misunderstanding of the language and 
he desperately needs to consult *something* to learn what he's been 
missing - no possible compiler message could close this hole in education.



2: It puts an obligation on the g++ development community to ensure
   that the messaging is consistent with documentation and that if the 
   term 'primary-expression' changes then g++ will change the messaging
   to conform to the new term. 


It's directly from the language standard. If the standard changes 
(presumably for more reason than not liking the term), that place in the 
compiler needs changing anyway.


And really, I *like* it when the compiler uses terms directly from the 
language standard, instead of inventing some other terms. I can search 
for those terms in the standard, and most other people talking about the 
standard will use the same terms.



3: The cause of the error is more specific than its solution. The cause
   of the fault is the user (in this case me) provided something that
   was wrong. It wasn't the lack of a 'primary-expression' but the
   existence of the illegal constructs. My conjecture is that if the
   message says "you did this wrong" then the user would have an easy
   time of finding a fix.


It could well have been a typo for all the compiler knows, where you 
inadvertently left out the index.



I don't argue with the details of my wording. My intent is not to show that I am a better 
wordsmith but that the existing diagnostic messages are not specific enough. From Item 1: 
above, in order for the average user to fix the error the user must research the terms 
used, then compare the syntax given with the actual fault, and then fix the error. If the 
message say "this is the fault", the research goes the way of the 
woolly-mammoth.

The paradigm is that the message should provide the minimum amount of 
information required to identify the syntax/semantics which caused the failure.


And in this case, I believe that the original message does just that, 
whereas your proposal doesn't.




Re: Parma Polyhedra Library 0.10.1

2009-04-14 Thread Ralf Wildenhues
Hello,

* Roberto Bagnara wrote on Tue, Apr 14, 2009 at 06:58:01PM CEST:
> Richard Guenther wrote:
>> It seems to build and test ok on {i586,ia64,ppc,ppc64,s390,x86_64}-linux
>> but I get
>>
>> PASS: nnc_writepolyhedron1
>> /bin/sh: line 4: 29952 Segmentation fault  ${dir}$tst
>> FAIL: memory1
>> ==
>> 1 of 191 tests failed
>> Please report to ppl-de...@cs.unipr.it
>> ==
>>
>> on s390x-linux.  Does the testsuite stop after the first error?

Are you saying that there were no 190 PASSes before that FAIL?  If yes,
that would be weird, and I'd be interested in the output of
  make check SHELL="/bin/sh -x"

and the make version used.

> The testsuite does not proceed after the first directory that gives
> an error.

Why not recommend
  make -k check

then?

Cheers,
Ralf


Re: messaging

2009-04-14 Thread James Dennett
On Tue, Apr 14, 2009 at 9:21 AM,   wrote:
>
> Thanks Kai. I do have what I hope is a more specific subjective reason for 
> saying that I think the existing diagnostics should be changed. 
> Fundamentally, what is provided in the messaging is not an indication of what 
> is wrong, but an indication of what is required to repair the fault. My 
> objections then become:
> 1: As an old man full of wisdom, most developers can't distinguish a
>    'primary-expression' from a washing machine. Further, to determine
>    what correction might be needed most users would have to research
>    the C++ standard (or other standardized document supported by the
>    g++ development community) to find out exactly what constitutes a
>    'primary-expression'.

I believe that most developers manage to understand the message
sufficiently without needing (or caring) to know exactly what a
"primary-expression" is -- it's clearly some kind of expression, and
they already know what expressions are.  The additional information
provided by saying "primary-expression" is useful to those who do care
about it.  (And will motivate a few to become interested in the more
precise terminology, maybe.)

> 2: It puts an obligation on the g++ development community to ensure
>    that the messaging is consistent with documentation and that if the
>   term 'primary-expression' changes then g++ will change the messaging
>   to conform to the new term.

Being consistent with the terminology used by the C++ Standard is one
of the best ways to protect against changing terminology.  The
terminology in the standard does evolve, but generally very, very
slowly.

> 3: The cause of the error is more specific than its solution. The cause
>    of the fault is the user (in this case me) provided something that
>    was wrong. It wasn't the lack of a 'primary-expression' but the
>    existence of the illegal constructs. My conjecture is that if the
>    message says "you did this wrong" then the user would have an easy
>    time of finding a fix.

I'm fairly sure that most g++ implementors are very happy when they
can, with reasonable confidence, suggest how to fix a problem.  The
difficulty is in doing so.  The correct fix is usually not obvious
based only on information available to the compiler, though in various
special cases it may be.  There are often many ways to eliminate an
error, one of more or which might have the correct semantics for a
given program.  Giving recommendations for how to *fix* the problem
can be counterproductive -- many programmers will happily do the first
thing they think of to make the warning/error go away, just as when
they blindly add casts to eliminate diagnostics about type errors.
Providing diagnostics that are simplistic is counterproductive.

I certainly agree that there is a lot of room for improvement in g++'s
diagnostics.  It's not a glamorous project, and it's far from easy,
but it would be valuable.  There may also be other ways to help: once
each diagnostic has a unique identifier, online documentation can
offer further advice on how to resolve issues, for example.

-- James


Re: Question about creating stdint.h on systems that don't have it

2009-04-14 Thread John David Anglin
> > I have to define __STDC_EXT__ under all circumstances to get consistency
> > of the long long types in the 32-bit runtime.  Previously, gcc for hpux
> > tried to mirror the behavior of the HP C compiler wrt long long types.
> 
> Rather than define __STDC_EXT__ all the time I was looking at defining
> __LL_MODE__.  I created this inclhack.def entry:

I guess the main issue is whether large file support should always
be enabled or not.

> +fix = {
> +hackname  = hpux_longlong;
> +mach  = "*-hp-hpux11.[12]*";

> I got your patch and will compare that with what I have and see if I can
> merge the two.

Thanks.

Dave
-- 
J. David Anglin  dave.ang...@nrc-cnrc.gc.ca
National Research Council of Canada  (613) 990-0752 (FAX: 952-6602)


Re: messages

2009-04-14 Thread Jonathan Wakely
2009/4/14 Arthur Schwarz:
> --- On Mon, 4/13/09, Joe Buck wrote:
>
>  them all.
>>
>> Consider
>>
>> #include <iostream>
>> struct Foo { int bar;};
>> int main() {
>>   std::cerr << Foo();
>> }
>>
>> Try it, the result is ugly, and I often encounter this one
>
>  (Personal opinion - not to be construed as wisdom).
>  The issue with the result is:
>  1: There is no end-of-line between candidates (or anywhere).

Do you mean there is no blank line, or no newline characters at all?
I certainly get a newline after each candidate.  Something's wrong if
you don't.

>  2: The candidate template is a large, untamed, and unruly beast.

The bigger problem is there isn't a single candidate, but several.  As
suggested elsewhere, stlfilt helps make them more readable.

>  3: The diagnostic message is not clear. I think it should say
>     that the compiler can't find something because of something.

The 'because of something' is far from simple.   How can the compiler
tell you why there's no match for calling the operator with those
argument types?

It could be because of a typo, and one of the following was meant
instead of Foo():

typedef int Food;
int foo();

Or it could be that a header wasn't included, so the relevant operator
hasn't been declared.  Or it could be that there's an operator in
scope for wide-character streams and the user meant to write to
std::wcout. I don't see how the compiler can determine which of those,
or other reasons, to point out.  Giving the wrong suggestion could
make things even worse, by misleading you and distracting you from the
real cause.

>  4: Providing a full template for each candidate is (indeed)
>     something of an overkill.

Other than doing what stlfilt does, how could you show less than the
full templates?  The return type could be suppressed without loss of
information, but Joe's suggestion of not showing all the matches
strikes me as more useful than showing less of each match.

Jonathan


Re: messaging

2009-04-14 Thread Jonathan Wakely
2009/4/14 Kai Henningsen:
> aschwarz1...@verizon.net wrote:
>>
>> Thanks Kai. I do have what I hope is a more specific subjective reason for
>> saying that I think the existing diagnostics should be changed.
>> Fundamentally, what is provided in the messaging is not an indication of
>> what is wrong, but an indication of what is required to repair the fault. My
>> objections then become:
>> 1: As an old man full of wisdom, most developers can't distinguish a
>>   'primary-expression' from a washing machine. Further, to determine
>
> Well, here I think that such people should perhaps put down the keyboard and
> back away from using the compiler slowly. That's about the same as driving a
> car without knowing what a stop sign means.
>
> At least if they're unable to infer that a primary expression is a kind of
> expression, and there's no expression between [ and ].

Even if they are utterly flummoxed by the term, the message points
pretty clearly to the exact spot within the line that is wrong, as
does the ':' message.  In those cases it should be good enough to
point to the position that causes a problem, and a
reasonably-proficient c++ developer will spot the typo (not every time
- we can all be blind to simple typos sometimes, but that's not the
compiler's problem.)  I don't think  the compiler can be expected to
help if the developer doesn't know the language well enough to tell
that the syntax is invalid once pointed to the location of the error.

There are cases where the location in the diagnostic is (seemingly)
unrelated to the cause, but neither of your examples is in that
category.

...

> And really, I *like* it when the compiler uses terms directly from the
> language standard, instead of inventing some other terms.

Agreed.  I'd want an 'expert mode' with precise terminology if those
diagnostics were changed.

Jonathan


Re: messaging

2009-04-14 Thread Arthur Schwarz

The issues grow ever more complex. Suppose that we're dealing with macros, 
some similarly named, and there's a typo. Suppose several layers of template 
expansion are involved and nested deep within one there is some error. Suppose, 
suppose ... .

The motivation is not to expand the problem domain to the point where even 
stating the problem is a problem, but to creep up carefully and gradually on 
some consensus option as to what to do and then to go forward. All the points 
made are valid. At a certain time either the diagnostic message can not 
perceive nor report on the original cause of error, or the report is convoluted 
enough to be unreadable by all but the most diligent.

Let me address some general principles, which, of course, are both mine and may 
be wrong.
1: The purpose of compiler diagnostics is to present faults to a user
   in the most economic way possible.
2: The presentation should be as pointed as possible as to the detected
   fault.
3: The user should be required to understand as little as possible in
   order to understand the faulting message.

The details of specific messaging are not as important as the guidelines.

What I have seen in this thread and in a companion thread, messages, are these 
viewpoints.
1: The user should have some minimal capability to understand the 
   diagnostic messages as they are now.
2: The user is being overwhelmed with information and has difficulty
   sifting through it.
3: The messages show the fix but not the problem.

Clearly I am biased to 2: and 3:. But let me turn to 1: for a moment. 

In order to develop software in most languages, C++ being only one, it is not 
necessary to read nor understand the syntax equations for the language. The 
notion that developers should be compelled to acquire a knowledge of syntax 
equations won't work in practice. There is no authority to compell this 
knowledge nor to deny employment (or hobby work) for someone who doesn't have 
it. It might be nice but ... . So we are left with compiler users with minimal 
or none of the assumed pre-requisite knowledge.

The notion that these unknowledgeable users should be abandoned by the provided 
diagnostic messages eventually translates into the compiler being abandoned by 
the users. In the small, probably fine. In the large I would think it 
unacceptable.

And there are competitive compilers. Some with better messaging and better 
messaging resources at the very point where g++ is weakest. You might argue 
that they are 'better in what way?', but I think the real argument is in what 
ways can these other products be a model for g++ to improve itself. Unless the 
notion is that g++ needs no improvement.

A reasoned attitude (I think) is to address each item without prejudice and see 
if there is some common ground, and then to proceed to see what is possible in 
general and what edge cases can't be simply solved. 

I think that there is a way to creep upon a general consensus which may not 
give everyone everything, but will give most something. And I believe the 
solution is not a 'camel by committee' but a more usable product. 

art


Re: messaging

2009-04-14 Thread Manuel López-Ibáñez
2009/4/14 Arthur Schwarz :
>
> And there are competitive compilers. Some with better messaging and better 
> messaging resources at the very point where g++ is weakest. You might argue 
> that they are 'better in what way?', but I think the real argument is in what 
> ways can these other products be a model for g++ to improve itself. Unless 
> the notion is that g++ needs no improvement.
>

Then, you should mention what kind of error messages are given by
other compilers (or C++ front-ends). In my experience that helps a lot
to get your point across. Then, maintainers (who are the ones that at
the end decide what is accepted and what is not) can assess if the
alternative message is better or worse. But to do that, maintainers
will probably prefer a patch implementing the message.

GCC diagnostics need a lot of improvement. There are many open PRs
that are just waiting for someone to work on them. Sometimes someone
works on one of them, produces a patch, the patch gets accepted, the
PR gets fixed and GCC diagnostics are slightly improved.

> A reasoned attitude (I think) is to address each item without prejudice and 
> see if there is some common ground, and then to proceed to see what is 
> possible in general and what edge cases can't be simply solved.

And yet, you are arguing about "gcc messaging" in general. Without any
working knowledge of gcc capabilities or internals, this is just a pointless
exercise. You seem to assume that "someone" will implement your
proposals. That may happen, but in my experience, it is very, very
unlikely. What is more likely is that (a) some people will not
understand your verbal description of an implementation, (b) some
people are happy with things as they are and will resist change, (c)
some people will agree with you and do nothing else. In any case, the
result would be that nothing is done.

> I think that there is a way to creep upon a general consensus which may not 
> give everyone everything, but will give most something. And I believe the 
> solution is not a 'camel by committee' but a more usable product.
>

You do not need any consensus. You just need to put forward a patch
that implements your proposal and give enough reasons to the relevant
maintainers to accept your patch. And you'll need a lot of patience
and be willing to compromise. Otherwise, this thread has a 99% chance
of being completely futile (reporting precise, well-argued and
detailed PRs will lower that chance).

Cheers,

Manuel.


gcc-4.4-20090414 is now available

2009-04-14 Thread gccadmin
Snapshot gcc-4.4-20090414 is now available on
  ftp://gcc.gnu.org/pub/gcc/snapshots/4.4-20090414/
and on various mirrors, see http://gcc.gnu.org/mirrors.html for details.

This snapshot has been generated from the GCC 4.4 SVN branch
with the following options: svn://gcc.gnu.org/svn/gcc/branches/gcc-4_4-branch 
revision 146067

You'll find:

gcc-4.4-20090414.tar.bz2  Complete GCC (includes all of below)

gcc-core-4.4-20090414.tar.bz2 C front end and core compiler

gcc-ada-4.4-20090414.tar.bz2  Ada front end and runtime

gcc-fortran-4.4-20090414.tar.bz2  Fortran front end and runtime

gcc-g++-4.4-20090414.tar.bz2  C++ front end and runtime

gcc-java-4.4-20090414.tar.bz2 Java front end and runtime

gcc-objc-4.4-20090414.tar.bz2 Objective-C front end and runtime

gcc-testsuite-4.4-20090414.tar.bz2  The GCC testsuite

Diffs from 4.4-20090407 are available in the diffs/ subdirectory.

When a particular snapshot is ready for public consumption the LATEST-4.4
link is updated and a message is sent to the gcc list.  Please do not use
a snapshot before it has been announced that way.


Error messages

2009-04-14 Thread Brian O'Mahoney
Dave makes a very important point: who to believe?

Certainly the system header prototypes; but then show the file, line number,
and prototype that the implementation contradicts, whenever possible.

mfg, Brian


[plugins] Merge with post-plugins trunk @146060

2009-04-14 Thread Diego Novillo
This merge brings over all the changes I made when I committed
the plugin patches to mainline.

There are a few patches that still need to be moved from plugins, but
they are not too big.  I will get to those later this week.

Tested on x86_64.


2009-04-14  Diego Novillo  

Merge with mainline @146060.

* configure.ac (ACX_PKGVERSION): Update.
* configure: Regenerate.

2009-04-14  Diego Novillo  

* gcc-plugin.h (enum plugin_event): Rename
from PLUGIN_FINISH_STRUCT.
(plugin_event_name): Declare.
* plugin.c: Only include dlfcn.h if ENABLE_PLUGIN is set.
Do not include errors.h.
Include timevar.h.
Do not prefix strings in calls to error() with G_.
(plugin_event_name): Define.
(str_plugin_init_func_name): Only declare if ENABLE_PLUGIN is
set.
(invoke_plugin_callbacks): Call timevar_push/timevar_pop.
(try_init_one_plugin, init_one_plugin): Protect with
ENABLE_PLUGIN macro.
(initialize_plugins): Call timevar_push/timevar_pop.
(plugins_active_p): New.
(dump_active_plugins): New.
(debug_active_plugins): New.
* plugin.h: Tidy declarations.
(plugins_active_p): Declare.
(dump_active_plugins): Declare.
(debug_active_plugins): Declare.

testsuite/ChangeLog.plugins:

2009-04-14  Diego Novillo  

* gcc.dg/plugin/plugin.exp: Check for ENABLE_PLUGIN.
* g++.dg/plugin/plugin.exp: Likewise.
* g++.dg/plugin/dumb_plugin.c: Register PLUGIN_FINISH_TYPE.


Index: gcc-plugin.h
===
--- gcc-plugin.h(revision 146060)
+++ gcc-plugin.h(working copy)
@@ -20,10 +20,11 @@ along with GCC; see the file COPYING3.
 #ifndef GCC_PLUGIN_H
 #define GCC_PLUGIN_H

+/* Event names.  Keep in sync with plugin_event_name[].  */
 enum plugin_event
 {
   PLUGIN_PASS_MANAGER_SETUP,/* To hook into pass manager.  */
-  PLUGIN_FINISH_STRUCT, /* After finishing parsing a struct/class.  */
+  PLUGIN_FINISH_TYPE,   /* After finishing parsing a type.  */
   PLUGIN_FINISH_UNIT,   /* Useful for summary processing.  */
   PLUGIN_CXX_CP_PRE_GENERICIZE, /* Allows to see low level AST in C++ FE.  */
   PLUGIN_FINISH,/* Called before GCC exits.  */
@@ -32,6 +33,8 @@ enum plugin_event
array.  */
 };

+extern const char *plugin_event_name[];
+
 struct plugin_argument
 {
   char *key;/* key of the argument.  */
Index: plugin.c
===
--- plugin.c(revision 146060)
+++ plugin.c(working copy)
@@ -18,21 +18,38 @@ along with GCC; see the file COPYING3.
 <http://www.gnu.org/licenses/>.  */

 /* This file contains the support for GCC plugin mechanism based on the
-   APIs described in the following wiki page:
-
-   http://gcc.gnu.org/wiki/GCC_PluginAPI  */
+   APIs described in doc/plugin.texi.  */

-#include 
-#include 
 #include "config.h"
 #include "system.h"
+
+/* If plugin support is not enabled, do not try to execute any code
+   that may reference libdl.  The generic code is still compiled in to
+   avoid including too many conditional compilation paths in the rest
+   of the compiler.  */
+#ifdef ENABLE_PLUGIN
+#include 
+#endif
+
 #include "coretypes.h"
-#include "errors.h"
 #include "toplev.h"
 #include "tree.h"
 #include "tree-pass.h"
 #include "intl.h"
 #include "plugin.h"
+#include "timevar.h"
+
+/* Event names as strings.  Keep in sync with enum plugin_event.  */
+const char *plugin_event_name[] =
+{
+  "PLUGIN_PASS_MANAGER_SETUP",
+  "PLUGIN_FINISH_TYPE",
+  "PLUGIN_FINISH_UNIT",
+  "PLUGIN_CXX_CP_PRE_GENERICIZE",
+  "PLUGIN_FINISH",
+  "PLUGIN_INFO",
+  "PLUGIN_EVENT_LAST"
+};

 /* Object that keeps track of the plugin name and its arguments
when parsing the command-line options -fplugin=/path/to/NAME.so and
@@ -78,10 +95,11 @@ struct pass_list_node
 static struct pass_list_node *added_pass_nodes = NULL;
 static struct pass_list_node *prev_added_pass_node;

+#ifdef ENABLE_PLUGIN
 /* Each plugin should define an initialization function with exactly
this name.  */
 static const char *str_plugin_init_func_name = "plugin_init";
-
+#endif

 /* Helper function for the hash table that compares the base_name of the
existing entry (S1) with the given string (S2).  */
@@ -93,6 +111,7 @@ htab_str_eq (const void *s1, const void
   return !strcmp (plugin->base_name, (const char *) s2);
 }

+
 /* Given a plugin's full-path name FULL_NAME, e.g. /pass/to/NAME.so,
return NAME.  */

@@ -108,6 +127,7 @@ get_plugin_base_name (const char *full_n
   return base_name;
 }

+
 /* Create a plugin_name_args object for the give plugin and insert it to
the hash table. This function is called when -fplugin=/path/to/NAME.so
option is processed.  */
@@ -134,7 +154,7 @@ add_new_plugin (const char* plugin_name)
 {
   plugin 

[Ann] Test

2009-04-14 Thread Mass Mailer

Test Only no reply


