gcc trunk 20070110 build failure (--enable-targets=all i486-linux-gnu)

2007-01-10 Thread Matthias Klose
trunk configured on i486 (on x86_64 hardware) with

  --enable-targets=all i486-linux-gnu

fails to configure the first 64bit library (libiberty), not finding
the correct libgcc.

/scratch/packages/gcc/snap/gcc-snapshot-20070110/build/i486-linux-gnu/64/libiberty$
 /scratch/packages/gcc/snap/gcc-snapshot-20070110/build/./gcc/xgcc 
-B/scratch/packages/gcc/snap/gcc-snapshot-20070110/build/./gcc/ 
-B/usr/lib/gcc-snapshot/i486-linux-gnu/bin/ 
-B/usr/lib/gcc-snapshot/i486-linux-gnu/lib/ -isystem 
/usr/lib/gcc-snapshot/i486-linux-gnu/include -isystem 
/usr/lib/gcc-snapshot/i486-linux-gnu/sys-include -m64 -o conftest -O2 -g -O2 
foo.c   
/usr/bin/ld: skipping incompatible 
/scratch/packages/gcc/snap/gcc-snapshot-20070110/build/./gcc/libgcc.a when 
searching for -lgcc
/usr/bin/ld: cannot find -lgcc
collect2: ld returned 1 exit status

$ ls build/i486-linux-gnu/64
libiberty

$ ls build/gcc/64
ls: build/gcc/64: No such file or directory

so the 64bit libgcc doesn't seem to be built and the multilib directory is not
created.
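
A quick way to check which libgcc the just-built driver finds for -m64 (a
sketch, reusing the -B option from the log above; -print-libgcc-file-name is a
standard driver option):

  $ ./gcc/xgcc -B./gcc/ -m64 -print-libgcc-file-name

With a complete multilib build this should point at a libgcc.a below the
gcc/64/ multilib directory, which the ls output above shows was never created.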


Re: build broken

2007-03-06 Thread Matthias Klose

Mike Stump schrieb:

It appears that one of these:
+ '[' -s .bad_compare ']'
+ exit 1

I have a feeling sed -i isn't our friend.


fixed.

  Matthias

2007-03-06  Matthias Klose  <[EMAIL PROTECTED]>

* doc/Makefile.am (gkeytool.pod): Don't use sed -i.
* doc/Makefile.in: Regenerate.

Index: doc/Makefile.am
===
--- doc/Makefile.am (Revision 122631)
+++ doc/Makefile.am (Arbeitskopie)
@@ -67,10 +67,10 @@
 
 # hack around the cross references and the enumeration
 gkeytool.pod: $(srcdir)/cp-tools.texinfo
-   -$(TEXI2POD) -D gkeytool < $< > $@
-   sed -i -e 's/^For more details.*/See I for more details./' \
+   -$(TEXI2POD) -D gkeytool < $< \
+ | sed -e 's/^For more details.*/See I for more details./' \
-e 's/1\.<\([^>]*\)>/- \1/' \
-   $@
+   > $@
 
 gnative2ascii.pod: $(srcdir)/cp-tools.texinfo
-$(TEXI2POD) -D gnative2ascii < $< > $@


Re: Building mainline and 4.2 on Debian/amd64

2007-03-19 Thread Matthias Klose
Florian Weimer writes:
> Is there a convenient switch to make GCC bootstrap on Debian/amd64
> without patching the build infrastructure?  Apparently, GCC tries to
> build 32-bit variants of all libraries (using -m32), but the new
> compiler uses the 64-bit libc instead of the 32-bit libc, hence
> building them fails.
> 
> I don't need the 32-bit libraries, so disabling their compilation
> would be fine. --enable-targets at configure time might do the trick,
> but I don't know what arguments are accepted.

Others already mentioned --disable-multilib.

If you do want to build a multilib build, make sure that the
libc6-dev-i386 package is installed, and the appended patch is applied
(Maybe a patch using some Makefile conditionals could be applied
upstream). For historical reasons lib64 is a symlink to lib, and the
32bit libs are installed into lib32. Building the java gtk/x peers
doesn't work, because the gtk/gnome libs are not available for 32bit.
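
To put the two options into commands (a sketch; anything beyond the flags
discussed here is illustrative):

  # 64bit-only build, skipping the -m32 multilibs entirely:
  ../src/configure --disable-multilib ...

  # multilib build on Debian/amd64: install the 32bit glibc first and
  # apply the MULTILIB_OSDIRNAMES patch below:
  # apt-get install libc6-dev-i386
  ../src/configure ...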

  Matthias


--- gcc/config/i386/t-linux64~  2002-11-28 14:47:02.0 +
+++ gcc/config/i386/t-linux64   2004-06-02 16:07:30.533131301 +
@@ -6,7 +6,7 @@
 
 MULTILIB_OPTIONS = m64/m32
 MULTILIB_DIRNAMES = 64 32 
-MULTILIB_OSDIRNAMES = ../lib64 ../lib
+MULTILIB_OSDIRNAMES = ../lib ../lib32
 
 LIBGCC = stmp-multilib
 INSTALL_LIBGCC = install-multilib



Re: 4.3 bootstrap broken on i386-linux

2007-03-26 Thread Matthias Klose
FX Coudert writes:
> Hi all,
> 
> My nightly bootstrap of mainline on i386-linux failed tonight, on  
> revision 123192, with:
> 
> /home/fxcoudert/gfortran_nightbuild/trunk/libgcc/../libdecnumber/ 
> decLibrary.c: In function 'isinfd64':
> /home/fxcoudert/gfortran_nightbuild/trunk/libgcc/../libdecnumber/ 
> decLibrary.c:65: error: unrecognizable insn:
> (insn 11 10 12 3 /home/fxcoudert/gfortran_nightbuild/trunk/libgcc/../ 
> libdecnumber/decLibrary.c:62 (set (reg/f:SI 61)
>  (pre_dec:SI (reg/f:SI 7 sp))) -1 (nil)
>  (nil))
> /home/fxcoudert/gfortran_nightbuild/trunk/libgcc/../libdecnumber/ 
> decLibrary.c:65: internal compiler error: in extract_insn, at recog.c: 
> 2119
> 
> The last revision known to compile OK on that particular setup was:  
> 123178. I filed it as PR 31344 in bugzilla. The compilation fails for  
> -mtune=i[345]86 while it doesn't ICE for -mtune=i686.

I see another bootstrap failure with a compiler configured for
i486-linux-gnu.

configure --enable-languages=c,c++,java,fortran,objc,obj-c++,ada,treelang 
--prefix=/usr/lib/gcc-snapshot --enable-shared --with-system-zlib  
--disable-nls --enable-__cxa_atexit --enable-clocale=gnu 
--enable-libstdcxx-debug --enable-java-maintainer-mode --enable-java-awt=gtk 
--enable-gtk-cairo --enable-plugin --with-java-home=/usr/lib/gcc-snapshot/jre 
--with-ecj-jar=/usr/share/java/ecj.jar --enable-mpfr --disable-werror 
--build=i486-linux-gnu --host=i486-linux-gnu --target=i486-linux-gnu
[...]
/scratch/packages/gcc/snap/gcc-snapshot-20070326/build/./gcc/xgcc 
-B/scratch/packages/gcc/snap/gcc-snapshot-20070326/build/./gcc/ 
-B/usr/lib/gcc-snapshot/i486-linux-gnu/bin/ 
-B/usr/lib/gcc-snapshot/i486-linux-gnu/lib/ -isystem 
/usr/lib/gcc-snapshot/i486-linux-gnu/include -isystem 
/usr/lib/gcc-snapshot/i486-linux-gnu/sys-include -g -fkeep-inline-functions -O2 
 -O2 -g -O2  -DIN_GCC -W -Wall -Wwrite-strings -Wstrict-prototypes 
-Wmissing-prototypes -Wold-style-definition  -isystem ./include  -fPIC -g 
-DHAVE_GTHR_DEFAULT -DIN_LIBGCC2 -D__GCC_FLOAT_NOT_NEEDED   -I. -I. 
-I../.././gcc -I../../../src/libgcc -I../../../src/libgcc/. 
-I../../../src/libgcc/../gcc -I../../../src/libgcc/../include 
-I../../../src/libgcc/../libdecnumber/no -I../../../src/libgcc/../libdecnumber 
-I../../libdecnumber -o decLibrary.o -MT decLibrary.o -MD -MP -MF 
decLibrary.dep -c ../../../src/libgcc/../libdecnumber/decLibrary.c
../../../src/libgcc/../libdecnumber/decLibrary.c:32:24: error: decimal128.h: No 
such file or directory
../../../src/libgcc/../libdecnumber/decLibrary.c:33:23: error: decimal64.h: No 
such file or directory
../../../src/libgcc/../libdecnumber/decLibrary.c:34:23: error: decimal32.h: No 
such file or directory
../../../src/libgcc/../libdecnumber/decLibrary.c:36: error: expected 
declaration specifiers or '...' before 'decimal32'
../../../src/libgcc/../libdecnumber/decLibrary.c:37: error: expected 
declaration specifiers or '...' before 'decimal64'
../../../src/libgcc/../libdecnumber/decLibrary.c:38: error: expected 
declaration specifiers or '...' before 'decimal128'
../../../src/libgcc/../libdecnumber/decLibrary.c: In function 'isinfd32':
../../../src/libgcc/../libdecnumber/decLibrary.c:48: error: 'decNumber' 
undeclared (first use in this function)
../../../src/libgcc/../libdecnumber/decLibrary.c:48: error: (Each undeclared 
identifier is reported only once
../../../src/libgcc/../libdecnumber/decLibrary.c:48: error: for each function 
it appears in.)
../../../src/libgcc/../libdecnumber/decLibrary.c:48: error: expected ';' before 
'dn'
../../../src/libgcc/../libdecnumber/decLibrary.c:49: error: 'decimal32' 
undeclared (first use in this function)
../../../src/libgcc/../libdecnumber/decLibrary.c:49: error: expected ';' before 
'd32'
../../../src/libgcc/../libdecnumber/decLibrary.c:51: error: 'd32' undeclared 
(first use in this function)
../../../src/libgcc/../libdecnumber/decLibrary.c:51: error: too many arguments 
to function '__host_to_ieee_32'
../../../src/libgcc/../libdecnumber/decLibrary.c:52: warning: implicit 
declaration of function 'decimal32ToNumber'
../../../src/libgcc/../libdecnumber/decLibrary.c:52: error: 'dn' undeclared 
(first use in this function)
../../../src/libgcc/../libdecnumber/decLibrary.c:53: warning: implicit 
declaration of function 'decNumberIsInfinite'
../../../src/libgcc/../libdecnumber/decLibrary.c: In function 'isinfd64':
../../../src/libgcc/../libdecnumber/decLibrary.c:59: error: 'decNumber' 
undeclared (first use in this function)
../../../src/libgcc/../libdecnumber/decLibrary.c:59: error: expected ';' before 
'dn'
../../../src/libgcc/../libdecnumber/decLibrary.c:60: error: 'decimal64' 
undeclared (first use in this function)
../../../src/libgcc/../libdecnumber/decLibrary.c:60: error: expected ';' before 
'd64'
../../../src/libgcc/../libdecnumber/decLibrary.c:62: error: 'd64' undeclared 
(first use in this function)
../../../src/libgcc/../libdecnumber/decLibrary.c:62: error: too many arguments 
to function '__host_to_ie

GCC 4.0 Ada Status Report (2005-04-09)

2005-04-09 Thread Matthias Klose
Laurent GUERBY writes:
 > Hi, from gcc-testresults here is where we stand on 4.0/Ada after
 > the tree-sra Ada patch. I'm looking for results for platforms where I
 > believe Ada could work:
 > 
 > powerpc-linux

http://gcc.gnu.org/ml/gcc-testresults/2005-03/msg01875.html


Re: sparc-linux results for 4.0.1 RC3

2005-07-06 Thread Matthias Klose
Paolo Carlini writes:
> Eric Botcazou wrote:
> 
> >>hmm, I get a few libstdc++ testsuite failures
> >>
> >>http://gcc.gnu.org/ml/gcc-testresults/2005-07/msg00304.html
> >>
> >>other than that, looks pretty fine.
> >>
> >>
> >Did you get them with 4.0.0 too?  If no, the libstdc++ folks will have to 
> >say 
> >whether they are really regressions (the testsuite harness has changed).
> >
> Yes, I would definitely encourage a little more analysis. I'm rather
> puzzled. We have got very nice testsuites on sparc-solaris and on
> *-linux, in general, and those failures certainly are not expected.
> However, missing additional details, it's very difficult to guess: can
> be almost anything, from a weirdness in the installed localedata to a
> defect of the testsuite harness, to a code generation bug, to a latent
> bug in the generic code of the library exposed only by that target, and
> only now.

I don't see these regressions on Debian unstable, not exactly built
from the snapshot, but from CVS at the same date. Test results at

http://gcc.gnu.org/ml/gcc-testresults/2005-07/msg00280.html

However, there are 8 gfortran regressions, compared to 4.0.0:

FAIL: gfortran.dg/f2c_2.f90  -O0  execution test
FAIL: gfortran.dg/f2c_2.f90  -O1  execution test
FAIL: gfortran.dg/f2c_2.f90  -O2  execution test
FAIL: gfortran.dg/f2c_2.f90  -O3 -fomit-frame-pointer  execution test
FAIL: gfortran.dg/f2c_2.f90  -O3 -fomit-frame-pointer -funroll-loops
execution test
FAIL: gfortran.dg/f2c_2.f90  -O3 -fomit-frame-pointer
-funroll-all-loops -finline-functions  execution test
FAIL: gfortran.dg/f2c_2.f90  -O3 -g  execution test
FAIL: gfortran.dg/f2c_2.f90  -Os  execution test


[rfc] libstdc++ include directories for biarch builds

2005-07-19 Thread Matthias Klose
currently the C++ include directories for biarch builds
(i.e. i486-linux-gnu, x86_64-linux-gnu) are not set properly for the
non-default target. At least the c++config.h header differs. On
i486-linux-gnu an x86_64-linux dir is installed but is not used; for
powerpc-linux-gnu, no powerpc64-linux-gnu directory is installed,
although c++config.h differs as well.

Looking at the FC builds, c++config.h is replaced by a wrapper file to
include the proper c++config.h. I experimented with the standard
include directories to include one of the two directories depending on
the current mode; currently there seems to be no way to deduce the
names of the directories from the build infrastructure, i.e. something
like MULTILIB_DIRS is missing. The following hack (for the 4.0 branch)
does work for me, but is not suited for upstream inclusion.

  Matthias


--- gcc/cppdefault.h~   2004-11-03 04:23:49.0 +0100
+++ gcc/cppdefault.h2005-07-08 20:58:14.016437112 +0200
@@ -43,6 +43,7 @@
   C++.  */
   const char add_sysroot;  /* FNAME should be prefixed by
   cpp_SYSROOT.  */
+  const char biarch;/* 32/64 bit biarch include */
 };
 
 extern const struct default_include cpp_include_defaults[];
--- gcc/c-incpath.c~2005-01-23 16:05:27.0 +0100
+++ gcc/c-incpath.c 2005-07-08 21:09:40.572064792 +0200
@@ -139,6 +139,13 @@
 now.  */
  if (sysroot && p->add_sysroot)
continue;
+ if (p->biarch)
+   {
+ if (p->biarch == 64 && !(target_flags & MASK_64BIT))
+   continue;
+ if (p->biarch == 32 && (target_flags & MASK_64BIT))
+   continue;
+   }
  if (!strncmp (p->fname, cpp_GCC_INCLUDE_DIR, len))
{
  char *str = concat (iprefix, p->fname + len, NULL);
@@ -150,6 +157,14 @@
 
   for (p = cpp_include_defaults; p->fname; p++)
 {
+  if (p->biarch)
+   {
+ if (p->biarch == 64 && !(target_flags & MASK_64BIT))
+   continue;
+ if (p->biarch == 32 && (target_flags & MASK_64BIT))
+   continue;
+   }
+
   if (!p->cplusplus || cxx_stdinc)
{
  char *str;
--- gcc/Makefile.in~2005-04-04 21:45:13.0 +0200
+++ gcc/Makefile.in 2005-07-08 21:04:29.808308064 +0200
@@ -2680,6 +2680,8 @@
   -DLOCAL_INCLUDE_DIR=\"$(local_includedir)\" \
   -DCROSS_INCLUDE_DIR=\"$(CROSS_SYSTEM_HEADER_DIR)\" \
   -DTOOL_INCLUDE_DIR=\"$(gcc_tooldir)/include\" \
+  -DTARGET32_MACHINE=\"i486-linux-gnu\" \
+  -DTARGET64_MACHINE=\"x86_64-linux-gnu\" \
   @TARGET_SYSTEM_ROOT_DEFINE@
 
 cppdefault.o: cppdefault.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) \

--- gcc/cppdefault.c~   2004-11-03 03:23:49.0 +
+++ gcc/cppdefault.c2005-07-09 10:19:46.762899104 +
@@ -50,9 +70,15 @@
 /* Pick up GNU C++ generic include files.  */
 { GPLUSPLUS_INCLUDE_DIR, "G++", 1, 1, 0 },
 #endif
+#if defined (CROSS_COMPILE)
+/* Pick up GNU C++ target-dependent include files.  */
+{ GPLUSPLUS_TOOL_INCLUDE_DIR, "G++", 1, 1, 0, 32 },
+#else
 #ifdef GPLUSPLUS_TOOL_INCLUDE_DIR
 /* Pick up GNU C++ target-dependent include files.  */
-{ GPLUSPLUS_TOOL_INCLUDE_DIR, "G++", 1, 1, 0 },
+{ GPLUSPLUS_INCLUDE_DIR "/" TARGET32_MACHINE, "G++", 1, 1, 0, 32 },
+{ GPLUSPLUS_INCLUDE_DIR "/" TARGET64_MACHINE, "G++", 1, 1, 0, 64 },
+#endif
 #endif
 #ifdef GPLUSPLUS_BACKWARD_INCLUDE_DIR
 /* Pick up GNU C++ backward and deprecated include files.  */


Re: GCC 4.0.2 Released

2005-09-30 Thread Matthias Klose
Mark Mitchell writes:
> Daniel Jacobowitz wrote:
> 
> >>My inclination is to do nothing (other than correct the target
> >>milestones on these bugs in bugzilla) and move on.  The Solaris problem
> >>is bad, and I beat up on Benjamin to get it fixed, but I'm not sure it's
> >>a crisis meriting another release cycle.  The C++ change fixed a
> >>regression relative to 3.4.x, but not 4.0.x.  Andreas' change is only
> >>known to affect m68k.
> > 
> > ... but IIRC it cripples GCC for m68k; Debian turned up hundreds of
> > build failures because of this bug and it set builds back several
> > weeks.
> 
> Was this a regression from 4.0.0 or 4.0.1?

I don't know. We noticed it when we switched the compiler from 3.3 to
4.0.  IMO, we (Debian) can make sure that the patch is applied in the
compiler.

  Matthias


Re: Multilibs in stage-1

2020-05-07 Thread Matthias Klose
On 5/7/20 9:02 AM, Jonathan Wakely wrote:
> On Thu, 7 May 2020 at 07:28, Uros Bizjak via Gcc  wrote:
>>
>> On Thu, May 7, 2020 at 8:16 AM Richard Biener
>>  wrote:
>>>
>>> On May 6, 2020 11:15:08 PM GMT+02:00, Uros Bizjak via Gcc  
>>> wrote:
 Hello!

 I wonder, if the build process really needs to build all multilibs in
 stage-1 bootstrap build. IIRC, stage-1 uses system compiler to build
 stage-1 gcc, so there is no need for multilibs, apart from library
 that will be used by stage-1 gcc during compilation of stage-2
 compiler.
>>>
>>> Correct. Only stage3 needs those. But IIRC we already avoid building them? 
>>> Likewise we avoid building libsanitizer and friends in stage 1/2 unless 
>>> ubsan bootstrap is enabled.
>>
>> Looking at:
>>
>> [gcc-build]$ ls stage1-x86_64-pc-linux-gnu/32/
>> libgcc  libgomp  libstdc++-v3
>>
>> it seems that 32bit multilibs are built anyway, also in stage2:
>>
>> [gcc-build]$ ls prev-x86_64-pc-linux-gnu/32/
>> libgcc  libgomp  libstdc++-v3
> 
> Debian have a local patch to skip those:
> https://gcc.gnu.org/legacy-ml/libstdc++/2015-11/msg00164.html

Sorry, I never looked at Richi's suggestion there:

"""I think the more natural way would be to bootstrap the host libstdc++ and
only build the target libstdc++ for all multilibs."""

I assume you would need to do that for all of libgcc, libgomp, libatomic and the
sanitizer libs.  For non-multilib builds that would mean an extra build for those
libs, building the final target libs?

Matthias


install location of math-vector-fortran.h

2020-06-08 Thread Matthias Klose
[not subscribed to the libc-alpha list]

GCC and glibc need to agree on the install location for math-vector-fortran.h.
Currently it is installed into

  /usr/include/finclude/math-vector-fortran.h

However the file is architecture specific, currently only having variants for
x86_64-*-gnu, x86_64-*-gnux32, and a generic variant.  This creates problems
when the file is contained in a Debian package which is marked as Multi-Arch:
same; it should also create problems installing the i386 and amd64 RPMs on
the same system.  How to solve this issue?

 - The header file currently seems to be completely conditionalized.
   Is it safe to assume that the x86 variant is still considered empty
   for any other architecture?  Will it stay this way?  In this case
   this variant could be installed everywhere, or better, glibc could
   stop maintaining the variant at all.

 - Move the file to an architecture specific location.  For multiarch that
   could be

 /usr/include//finclude/math-vector-fortran.h

   GCC would need patching to look at this location as an alternative.
   Do other tools need patching as well?  What would be the
   solution for co-installable i386/amd64 packages?

Thanks, Matthias


Re: install location of math-vector-fortran.h

2020-06-08 Thread Matthias Klose
On 6/8/20 1:03 PM, Florian Weimer via Gcc wrote:
> * Matthias Klose:
> 
>> [not subscribed to the libc-alpha list]
>>
>> GCC and glibc need to agree on the install location for 
>> math-vector-fortran.h.
>> Currently it is installed into
>>
>>   /usr/include/finclude/math-vector-fortran.h
>>
>> However the file is architecture specific, currently only having
>> variants for x86_64-*-gnu, x86_64-*-gnux32, and a generic variant.
>> This creates problems when the file is contained in a Debian package
>> which is marked as Multi-Arch: same; it should also create problems
>> installing the i386 and amd64 RPMs on the same system.  How to
>> solve this issue?
> 
> Uhm.  If you want an upstream solution, you need to upstream your
> multi-arch patches.

they are in GCC.  Please note that there is no patch yet for this header file,
so I can't upstream anything yet.

>>  - The header file currently seems to be completely conditionalized.
>>Is it safe to assume that the x86 variant is still considered empty
>>for any other architecture?  Will it stay this way?  In this case
>>this variant could be installed everywhere, or better, glibc could
>>stop maintaining the variant at all.
> 
> I do not understand these questions.  The Fortran header is the
> equivalent of the  C header.  Its contents depends
> on what is necessary to describe the libmvec ABI.  We will only know the
> ABI for libmvec once there is a port for other architectures, so future
> evolution of the header is impossible to predict.

this file currently only has lines like:

!GCC$ builtin (cos) attributes simd (notinbranch) if('x86_64')

so it shouldn't have any effect on other architectures?  Or are the conditionals
explicitly done to exclude the 32bit x86 configuration?

Matthias


documentation of powerpc64{,le}-linux-gnu as primary platform

2020-07-09 Thread Matthias Klose
https://gcc.gnu.org/gcc-8/criteria.html lists the little endian platform first
as a primary target, however it's not mentioned for GCC 9 and GCC 10. Just an
omission?

https://gcc.gnu.org/legacy-ml/gcc-patches/2018-07/msg00854.html suggests that
the little endian platform should be mentioned, and maybe the big endian
platform should be dropped?

Jakub suggested to fix that for GCC 9 and GCC 10, and get a consensus for GCC 
11.

Matthias


Re: documentation of powerpc64{,le}-linux-gnu as primary platform

2020-07-09 Thread Matthias Klose
On 7/9/20 1:58 PM, David Edelsohn via Gcc wrote:
> On Thu, Jul 9, 2020 at 7:03 AM Matthias Klose  wrote:
>>
>> https://gcc.gnu.org/gcc-8/criteria.html lists the little endian platform 
>> first
>> as a primary target, however it's not mentioned for GCC 9 and GCC 10. Just an
>> omission?
>>
>> https://gcc.gnu.org/legacy-ml/gcc-patches/2018-07/msg00854.html suggests that
>> the little endian platform should be mentioned, and maybe the big endian
>> platform should be dropped?
>>
>> Jakub suggested to fix that for GCC 9 and GCC 10, and get a consensus for 
>> GCC 11.
> 
> Why are you so insistent to drop big endian?  No.  Please leave this alone.

No, I don't leave this alone.  The little endian target is dropped in GCC 9 and
GCC 10.  Is this really what you intended to do?

Matthias



Re: GCC 10.2 Release Candidate available from gcc.gnu.org

2020-07-17 Thread Matthias Klose
On 7/17/20 9:19 AM, Romain Naour wrote:
> Hello,
> 
> Le 15/07/2020 à 13:50, Richard Biener a écrit :
>>
>> The first release candidate for GCC 10.2 is available from
>>
>>  https://gcc.gnu.org/pub/gcc/snapshots/10.2.0-RC-20200715/
>>  ftp://gcc.gnu.org/pub/gcc/snapshots/10.2.0-RC-20200715/
>>
>> and shortly its mirrors.  It has been generated from git commit
>> 932e9140d3268cf2033c1c3e93219541c53fcd29.
>>
>> I have so far bootstrapped and tested the release candidate on
>> x86_64-linux.  Please test it and report any issues to bugzilla.
>>
>> If all goes well, I'd like to release 10.2 on Thursday, July 23th.
>>
> 
> GCC 10 and 9 build may fail to build due a missing build dependency, see
> 
> https://gcc.gnu.org/pipermail/gcc-patches/2020-May/546248.html
> 
> We need to backport this patch from master:
> 
> https://gcc.gnu.org/git/?p=gcc.git;a=commitdiff;h=b19d8aac15649f31a7588b2634411a1922906ea8

thanks for tracking this down!  I sometimes see these even without using ccache
on both the Debian and Ubuntu buildds, which then usually go away when
retrying the builds.

Matthias


Re: Pytest usage in DejaGNU?

2020-12-14 Thread Matthias Klose
On 12/14/20 10:21 PM, Joseph Myers wrote:
> On Mon, 14 Dec 2020, Martin Liška wrote:
> 
>> +spawn -noecho pytest -rA -s --tb=no $script
> 
> "pytest" might not be the right command everywhere.  If I install 
> python3-pytest on Ubuntu 20.04 I only get /usr/bin/pytest-3 not 
> /usr/bin/pytest.

 -m pytest should work everywhere.
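
For illustration, the quoted spawn line with that change applied (a sketch;
"python3" as the interpreter name is my assumption, any interpreter that has
pytest installed works):

  spawn -noecho python3 -m pytest -rA -s --tb=no $script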


Re: GCC 10.3 Release Candidate available from gcc.gnu.org

2021-04-02 Thread Matthias Klose
On 4/1/21 2:35 PM, Richard Biener wrote:
> 
> The first release candidate for GCC 10.3 is available from
> 
>  https://gcc.gnu.org/pub/gcc/snapshots/10.3.0-RC-20210401/
>  ftp://gcc.gnu.org/pub/gcc/snapshots/10.3.0-RC-20210401/
> 
> and shortly its mirrors.  It has been generated from git commit
> 892024d4af83b258801ff7484bf28f0cf1a1a999.
> 
> I have so far bootstrapped and tested the release candidate on
> x86_64-linux.  Please test it and report any issues to bugzilla.

with the backport of PR95842, the plugin header install is now broken.

Needs also backporting of 9a83366b62e585cce5577309013a832f895ccdbf
"Fix up plugin header install".

Matthias


Re: Remove RMS from the GCC Steering Committee

2021-04-06 Thread Matthias Klose
On 4/6/21 12:27 PM, Richard Biener via Gcc wrote:
> On Thu, Apr 1, 2021 at 9:21 PM Ian Lance Taylor via Gcc  
> wrote:
>>
>> On Thu, Apr 1, 2021 at 10:08 AM Nathan Sidwell  wrote:
>>>
>>> Richard Biener pointed out dysfunction in the SC.  The case of the
>>> missing question I asked in 2019 also points to that.  This response
>>> gives me no confidence that things will materially change.  I call for
>>> the dissolution of the SC, replacing it with a more open, functional and
>>> inclusive body (which includes, nothing).
>>
>> I'm fine with that in principle.  But it's like everything else with
>> GCC, and with free software in general: someone has to do the work.
>> We can't literally replace the SC with nothing, at least not unless we
>> do a much bigger overhaul of the GCC development process: someone has
>> to decide who is going to have maintainership rights and
>> responsibilities for different parts of the compiler.
> 
> Seeing the word "dysfunction" I don't remember using I want to clarify
> the non-openess which I intended to criticize.  The SC is not "open" because:
> - it appoints itself (new members, that is) - in fact in theory it
> should be appointed
>   by the FSF because the SC is the GNU maintainer of GCC
> - all requests and discussions are _private_ - the SC does not report to the
>   GCC project (it might report to the FSF which it is formally a delegate of)
> - you can reach the SC only indirectly (unless you know the secret mailing 
> list
>   it operates on) - CC an SC member and hope a request is forwarded
> 
> now I understand the SC sees itself as buffer between GCC and the FSF (RMS
> in particular) and it thinks we need to be protected from direct engagement.  
> I
> think this is wrong.  I can very well say NO to RMS myself.
> 
> I'm actually curious how many of the 13 SC members actively contribute or
> whether the "SC show" is a one or two persons game and the "13" is just
> to make the SC appear as a big representative group of people.
> 
> Thus I request an archive of the SC mailing list be made publically available
> and the SC discussion from now on take place in an open forum (you can
> choose to moderate everybody so the discussion while carried out in open
> is still amongst SC members only).

Not sure if a completely open SC list would help, seeing other SC's or tech
boards having a private communication channel as well.  But +1 on a public point
of contact, with a ML archive behind.  Issues are involuntarily dropped, or not
communicated, like last year's gm2 contribution which stayed silent for quite a
while and the SC thought that a resolution had been communicated.

Matthias


Re: 33 unknowns left

2015-08-27 Thread Matthias Klose
On 08/26/2015 09:41 PM, Jeff Law wrote:
> On 08/26/2015 01:31 PM, Eric S. Raymond wrote:
>>> mib = mib 
Michael Bushnell.  Again, not active in forever. m...@geech.gnu.ai.mit.edu
> probably doesn't work anymore.
> 
>> miles = miles 
> Miles Bader.  mi...@gnu.ai.mit.edu
> 
>> mkoch = mkoch 
> Michael Koch?  konque...@gmx.de/

yes, mostly worked on classpath

Matthias



gold on trunk breaks aarch64- and arm32-linux (Re: [gold][PATCH] PR gold/19119: Gold accepts bogus target emulation)

2015-10-25 Thread Matthias Klose

On 15.10.2015 17:57, Cary Coutant wrote:

 PR gold/19119
 * options.h (General_options): Remove "obsolete" from -m.


I'm a little reluctant to remove "obsolete" from the description --
maybe "deprecated" instead?


 * parameters.cc (set_parameters_target): Check if input target
 is compatible with output emulation set by "-m emulation".


This is OK. Thanks!


hmm, this breaks any released gcc on aarch64-linux-gnu and arm-linux-gnueabi*

$ gcc -fuse-ld=gold foo.c
/usr/bin/ld.gold: error: unrecognised output emulation: aarch64linux
collect2: error: ld returned 1 exit status

$ gcc -fuse-ld=gold foo.c
/usr/bin/ld.gold: error: unrecognised output emulation: armelf_linux_eabi
collect2: error: ld returned 1 exit status

Matthias



Re: gold on trunk breaks powerpc-, aarch64- and arm32-linux (Re: [gold][PATCH] PR gold/19119: Gold accepts bogus target emulation)

2015-10-25 Thread Matthias Klose

On 25.10.2015 18:40, H.J. Lu wrote:

On Sun, Oct 25, 2015 at 10:37 AM, Matthias Klose  wrote:

On 15.10.2015 17:57, Cary Coutant wrote:


  PR gold/19119
  * options.h (General_options): Remove "obsolete" from -m.



I'm a little reluctant to remove "obsolete" from the description --
maybe "deprecated" instead?


  * parameters.cc (set_parameters_target): Check if input target
  is compatible with output emulation set by "-m emulation".



This is OK. Thanks!



hmm, this breaks any released gcc on aarch64-linux-gnu and
arm-linux-gnueabi*

$ gcc -fuse-ld=gold foo.c
/usr/bin/ld.gold: error: unrecognised output emulation: aarch64linux
collect2: error: ld returned 1 exit status

$ gcc -fuse-ld=gold foo.c
/usr/bin/ld.gold: error: unrecognised output emulation: armelf_linux_eabi
collect2: error: ld returned 1 exit status



What do ld.bfd -V and ld.gold -V report?  They should support the
same set of emulations.



powerpc-linux-gnu as well:
/usr/bin/ld.gold: error: unrecognised output emulation: elf32ppclinux


aarch64-linux-gnu

gold: supported emulations: aarch64_elf64_le_vec aarch64_elf64_be_vec 
aarch64_elf32_le_vec aarch64_elf32_be_vec elf64-tradlittlemips 
elf32-tradlittlemips-nacl elf64-tradbigmips elf32-tradlittlemips-nacl 
elf32-tradlittlemips elf32-tradlittlemips-nacl elf32-tradbigmips 
elf32-tradlittlemips-nacl elf32tilegx_be elf64tilegx_be elf32tilegx elf64tilegx 
armelfb armelfb_nacl armelf armelf_nacl elf64lppc elf64ppc elf32lppc elf32ppc 
elf64_sparc elf32_sparc elf32_x86_64 elf32_x86_64_nacl elf_x86_64 
elf_x86_64_nacl elf_iamcu elf_i386 elf_i386_nacl


$ ld -V
GNU ld (GNU Binutils for Ubuntu) 2.25.51.20151022
  Supported emulations:
   aarch64linux
   aarch64elf
   aarch64elf32
   aarch64elf32b
   aarch64elfb
   armelf
   armelfb
   aarch64linuxb
   aarch64linux32
   aarch64linux32b
   armelfb_linux_eabi
   armelf_linux_eabi


arm-linux-gnueabihf

gold: supported emulations: aarch64_elf64_le_vec aarch64_elf64_be_vec 
aarch64_elf32_le_vec aarch64_elf32_be_vec elf64-tradlittlemips 
elf32-tradlittlemips-nacl elf64-tradbigmips elf32-tradlittlemips-nacl 
elf32-tradlittlemips elf32-tradlittlemips-nacl elf32-tradbigmips 
elf32-tradlittlemips-nacl elf32tilegx_be elf64tilegx_be elf32tilegx elf64tilegx 
armelfb armelfb_nacl armelf armelf_nacl elf64lppc elf64ppc elf32lppc elf32ppc 
elf64_sparc elf32_sparc elf32_x86_64 elf32_x86_64_nacl elf_x86_64 
elf_x86_64_nacl elf_iamcu elf_i386 elf_i386_nacl


$ ld -V
GNU ld (GNU Binutils for Ubuntu) 2.25.51.20151022
  Supported emulations:
   armelf_linux_eabi
   armelfb_linux_eabi


powerpc-linux-gnu

supported emulations: elf64lppc elf64ppc elf32lppc elf32ppc

$ ld -V
GNU ld (GNU Binutils for Ubuntu) 2.25.1
  Supported emulations:
   elf32ppclinux
   elf32ppc
   elf32ppcsim
   elf64ppc



Re: gold on trunk breaks powerpc-, aarch64- and arm32-linux (Re: [gold][PATCH] PR gold/19119: Gold accepts bogus target emulation)

2015-10-25 Thread Matthias Klose

On 26.10.2015 01:14, H.J. Lu wrote:


Please open a gold bug with missing emulations.


PR gold/19172



Re: Solaris vtv port breaks x32 build

2015-11-30 Thread Matthias Klose

On 01.12.2015 03:58, Ulrich Drepper wrote:

On Mon, Nov 30, 2015 at 9:14 PM, Jeff Law  wrote:

Right, but isn't AC_COMPILE_IFELSE a compile test, not a run test?



The problem macro is _AC_COMPILER_EXEEXT_WORKS.  The message is at the end.

This macro *should* work for cross-compiling but somehow it doesn't
work.  In libvtv/configure $cross_compiling is not defined
appropriately.  I'm configuring with the following which definitely
indicates that cross-compiling is selected.


that might be another instance of
https://gcc.gnu.org/ml/gcc-patches/2015-01/msg02064.html
Does something like this help?

Index: libvtv/configure.ac
===
--- libvtv/configure.ac (revision 231050)
+++ libvtv/configure.ac (working copy)
@@ -6,6 +6,8 @@
 #AC_INIT(package-unused, version-unused, libvtv)
 AC_CONFIG_SRCDIR([vtv_rts.h])

+AM_ENABLE_MULTILIB(, ..)
+
 # ---
 # Options
 # ---
@@ -73,7 +75,6 @@
 AM_CONDITIONAL(ENABLE_VTABLE_VERIFY, test $use_vtable_verify = yes)

 AM_INIT_AUTOMAKE(foreign no-dist)
-AM_ENABLE_MULTILIB(, ..)
 AM_MAINTAINER_MODE

 LIBVTV_CONFIGURE



Re: Solaris vtv port breaks x32 build

2015-12-02 Thread Matthias Klose

On 02.12.2015 13:29, Rainer Orth wrote:

Exactly: moving AM_ENABLE_MULTILIB up as Matthias suggested sets
cross_compiling=maybe for non-default multilibs early, which should
achieve the desired behaviour.  All other libraries that invoke both
macros already do so in this order.


now committed.

2015-12-02  Matthias Klose  

* configure.ac: Move AM_ENABLE_MULTILIB before
GCC_LIBSTDCXX_RAW_CXX_FLAGS.
* configure: Regenerate.



distro test rebuild using GCC 6

2016-01-13 Thread Matthias Klose
Here are some first results from a distro test rebuild using GCC 6. A snapshot 
of the current Ubuntu development series was taken on 20151218 for all 
architectures (amd64, arm64, armhf, i386/i686, powerpc, ppc64el, s390x), and 
rebuilt unmodified using the current GCC 5 branch, and using GCC 6 20160101 
(then updated to 20160109).


The build logs for package builds regressing with GCC 6 can be found at
http://people.canonical.com/~doko/tmp/gcc6-regr/ (918 packages, compared to 
around 500 regressions seen in GCC 5)


extracted from
http://people.ubuntuwire.org/~wgrant/rebuild-ftbfs-test/test-rebuild-20151218.1-gcc6-xenial.html
http://people.ubuntuwire.org/~wgrant/rebuild-ftbfs-test/test-rebuild-20151218.1-xenial-baseline-xenial.html

The GCC 6 packages can be found at
https://launchpad.net/~ubuntu-toolchain-r/+archive/ubuntu/test/
GCC 6 packages for Debian are in Debian/experimental.

Bug reports for all ICEs were submitted to the GCC bug tracker, excluding some 
where cc1/cc1plus was killed by the OS (haskell-src-exts, octomap, 
plasma-desktop, seqan (all arm64), freeorion (ppc64el)).


I haven't yet looked into the build failures except for the ICEs.  If somebody 
wants to help please let me know so that work isn't duplicated.


I'm planning to do a second test rebuild for Debian/unstable (amd64 only) in 
early Feb.


Matthias


Re: Status of GCC 6 on x86_64 (Debian)

2016-01-21 Thread Matthias Klose

On 22.01.2016 06:09, Martin Michlmayr wrote:

In terms of build failures, I reported 520 bugs to Debian.  Most of them
were new GCC errors or warnings (some packages use -Werror and many
-Werror=format-security).

Here are some of the most frequent errors seen:


[...]
Martin tagged these issues; https://wiki.debian.org/GCC6 has links with these 
bug searches.




Re: Status of GCC 6 on x86_64 (Debian)

2016-02-02 Thread Matthias Klose

On 22.01.2016 08:27, Matthias Klose wrote:

On 22.01.2016 06:09, Martin Michlmayr wrote:

In terms of build failures, I reported 520 bugs to Debian.  Most of them
were new GCC errors or warnings (some packages use -Werror and many
-Werror=format-security).

Here are some of the most frequent errors seen:


[...]
Martin tagged these issues; https://wiki.debian.org/GCC6 has links with these
bug searches.


Now added the issues with the gcc-6-unknown tag, including packages with build 
failures in running the test suites, which might point out wrong-code issues.


see
http://bugs.debian.org/cgi-bin/pkgreport.cgi?tag=gcc-6-unknown;users=debian-...@lists.debian.org



libffi maintenance within GCC?

2016-10-27 Thread Matthias Klose
With the removal of libgcj, the only user of libffi in GCC is libgo, however
there is no longer a maintainer listed for libffi in the MAINTAINERS file,
and the libffi subdir is a bit outdated compared to the libffi upstream
repository (got aware of this by libffi issue #197).  Who would be responsible
now to update / review libffi patches, just the global reviewers, or should
libffi be maintained by the libgo maintainers?

Matthias


boehm-gc maintenance within GCC ?

2016-10-27 Thread Matthias Klose
With the removal of GCJ, boehm-gc is now only used in libobjc to build an
additional variant of libobjc.  In the GCJ removal thread I proposed to remove
boehm-gc and build the libobjc_gc variant using an external boehm-gc, however
that didn't find everybody's approval.  Assuming that boehm-gc should be kept,
who will update and maintain it, the libobjc maintainers?

Matthias


Debian/Ubuntu test rebuilds using GCC 7 (r243559)

2016-12-16 Thread Matthias Klose
Here are the results of a first test rebuild of the Debian (amd64) and Ubuntu
(all architectures) archives.  The test was started with a GCC trunk around
20161202, and then build failures were retried later with r243559. I filed
around 10-15 issues for ICEs, most of them already fixed on the trunk.

I'll be in vacation mode until January, so won't be able to do a further
analysis until next year, then hopefully with another rebuild using a recent
snapshot.

The build logs for the failing builds (but succeeding with GCC 6) can be found
at http://people.canonical.com/~doko/tmp/regressions-gcc7/

These are the regressions identified from these two test rebuilds:
http://qa.ubuntuwire.org/ftbfs/rebuilds/test-rebuild-20161202-zesty.html
http://qa.ubuntuwire.org/ftbfs/rebuilds/test-rebuild-20161202-gcc7-zesty.html

The build logs for the Debian amd64 test rebuild (done by Lucas Nussbaum) can be
found at

http://aws-logs.debian.net/2016/12/04.gcc7/00diff-results.txt
http://aws-logs.debian.net/2016/12/04.gcc7/00all-results.txt
http://aws-logs.debian.net/2016/12/04.gcc7/

Grep the first file for "OK Failed" to get the packages that succeeded
in unstable but failed with gcc7.

Scanning these build logs, there are a number of failures unrelated to GCC 7, like

 - gnat not yet updated, GCC 6/7 mix is failing

 - packages recording GCC versions in public include files, and failing
   the build on mismatch (annoying ...)

If people want to have a look ...

 - the GCC 7 packages for Debian can be found in the experimental distribution

 - for Ubuntu use the PPA (zesty 17.04)
   https://launchpad.net/~ubuntu-toolchain-r/+archive/ubuntu/test/

Matthias



Re: .../lib/gcc//7.1.1/ vs. .../lib/gcc//7/

2017-01-07 Thread Matthias Klose
On 06.01.2017 15:13, Szabolcs Nagy wrote:
> On 06/01/17 13:11, Jakub Jelinek wrote:
>> On Fri, Jan 06, 2017 at 01:07:23PM +, Szabolcs Nagy wrote:
>>> On 06/01/17 12:48, Jakub Jelinek wrote:
 SUSE and some other distros use a hack that omits the minor and patchlevel
 versions from the directory layout, just uses the major number, it is very
>>>
>>> what is the benefit?
>>
>> Various packages use the paths to gcc libraries/includes etc. in various
>> places (e.g. libtool, *.la files, etc.).  So any time you upgrade gcc
> 
> it is a bug that gcc installs libtool la files,
> because a normal cross toolchain is relocatable
> but the la files have abs path in them.
> 
> that would be nice to fix, so build scripts don't
> have to manually delete the bogus la files.
> 
>> (say from 6.1.0 to 6.2.0 or 6.2.0 to 6.2.1), everything that has those paths
>> needs to be rebuilt.  By having only the major number in the paths (which is
>> pretty much all that matters), you only have to rebuild when the major
>> version of gcc changes (at which time one usually want to mass rebuild
>> everything anyway).
> 
> i thought only the gcc driver needs to know
> these paths because there are no shared libs
> there that are linked into binaries so no binary
> references those paths so nothing have to be
> rebuilt.

You also end up with dependencies of the form
/usr/lib/gcc///../../.././include which then break when you
update to a new branch version.



Re: .../lib/gcc//7.1.1/ vs. .../lib/gcc//7/

2017-01-07 Thread Matthias Klose
On 06.01.2017 13:48, Jakub Jelinek wrote:
> Hi!
> 
> SUSE and some other distros use a hack that omits the minor and patchlevel
> versions from the directory layout, just uses the major number, it is very
> uncommon to have more than one compiler for the same major number installed
> in the same prefix now that major bumps every year and the distinction
> between minor and patchlevel is just the amount of bugfixes it got after
> the initial release.
> 
> Dunno if the following is the latest version.

Looking at the variable naming it looks like these are taken from the
Debian/Ubuntu packages.  The latest version is
https://anonscm.debian.org/viewvc/gcccvs/branches/sid/gcc-7/debian/patches/gcc-base-version.diff?view=markup

> The question is, do we want something like this upstream too, and
> unconditionally or based on a configure option (--enable-major-version-only
> ?) and in the latter case what the default should be.
> 
> I must say I don't understand the cppbuiltin.c part in the patch,
> CFLAGS-cppbuiltin.o += $(PREPROCESSOR_DEFINES) -DBASEVER=$(FULLVER_s)
> cppbuiltin.o: $(FULLVER)
> should already provide it with the full version.  And libjava bit is
> obviously no longer needed.

I didn't want to change the preprocessor defines. Maybe it's clearer to
s/BASEVER/FULLVER/ in cppbuiltin.c and just pass FULLVER to the build.

> If we apply the patch as is (sans those last two files?), the change would
> be unconditional, and we'd have to adjust maintainer scripts etc. so that
> if there is FULL-VER file, the full version is in there and needs to be
> bumped and BASE-VER is then just the major from that.  The patch doesn't
> seem to be complete though, e.g. gcc/configure.ac uses gcc_BASEVER
> var for plugins and expects it to be the full version.  Or do we want
> GCCPLUGIN_VERSION to be also solely the major version?

The patch predates the plugin, I should update it for the gccplugin as well.

> Another possibility for still unconditional change would be to sed
> the major out from BASE-VER in all the places that read it from BASE-VER
> file.  Files to look at are:

Some configure files use sed, some use gcc -dumpversion to construct gcc libdir.
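
For illustration, the two patterns meant here look roughly like this (a
sketch, not quoted from any particular configure script):

  gcc_version=`$CC -dumpversion`
  gcc_libdir=$libdir/gcc/$target_alias/$gcc_version
  # or, stripping the major version out of BASE-VER:
  gcc_major=`sed -e 's/\..*$//' $srcdir/gcc/BASE-VER`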

Matthias



powerpc64le ada binaries

2014-04-30 Thread Matthias Klose
Ada binaries for powerpc64le-linux-gnu can be found at [1]; these should be good
for bootstrapping (or install gnat-4.8 in Ubuntu 14.04 LTS).  Ada should then be
buildable from the 4.9.0 release. Install the deb, or use ar(1) on the deb file
to extract the files.  Thanks to Ulrich Weigand helping a lot with the initial
bootstrap.

  Matthias

[1] https://launchpad.net/ubuntu/+source/gcc-snapshot/20140405-0ubuntu1


Re: lib{atomic, itm}/configure.tgt uses -mcpu=v9 as default for sparc

2014-06-03 Thread Matthias Klose
Am 02.06.2014 22:30, schrieb Eric Botcazou:
>> I have successfully built without the switch, but I am not sure of the
>> effects at runtime.
> 
> For sure libitm cannot work, there is a 'flushw' in config/sparc/sjlj.S.
> 
>> If V9 is indeed required, is there a way to build without those libs? Or
>> has pre V9 support been dropped at some point?
> 
> No, V8 is still supported, but nobody has ported the libraries to it.
> 
>> IMHO an efficiency enhancement should not prevent running less
>> efficiently on a supported architecture. If target triple is
>> sparcv9-*-*, the next case will match and will add the "-mcpu=v9" to
>> XCFLAGS, but adding it for non-v9 sparc-*-* targets is at least weird.
> 
> Well, V9 is about 20 years old now so defaulting to it is not unreasonable, 
> especially for all the native OSes.  But patches are of course welcome.

V9 is currently bound to 64bit, you can't build a sparc-linux-gnu compiler
defaulting to V9 without patches.  In any case, Debian dropped the sparc port
two months ago.

  Matthias




missing symbols in libstdc++.so.6 built from the 4.9 branch

2014-07-01 Thread Matthias Klose
on some linux architectures there are some symbols missing in libstdc++.so.6
built from the 4.9 branch.  I didn't notice before due to a packaging bug.
affected are ARM32, HPPA, SPARC.

 - ARM32 (build log [1], both soft and hard float) are missing
 __aeabi_atexit@CXXABI_ARM_1.3.3
 __aeabi_vec_*

   Can these be ignored?

 - HPPA (build log [2]), is missing all the future_base symbols and
   exception_ptr13exception symbols, current_exception and
   rethrow_exception.

 - SPARC (build log [3]) configured for sparc64-linux-gnu is missing
   symbols in the 32bit multilib build, although these are present
   in a sparc-linux-gnu build. Missing are the same ones as in the HPPA
   build, long double 128 related symbols, numeric_limits, and some
   math symbols.

   Looks like more than one issue is involved; I remember that the
   math symbols were already dropped in earlier versions for other
   architectures. The build is configured --with-long-double-128.

Matthias

[1]
https://buildd.debian.org/status/fetch.php?pkg=gcc-4.9&arch=armhf&ver=4.9.0-8&stamp=1403809654
[2]
http://buildd.debian-ports.org/status/fetch.php?pkg=gcc-4.9&arch=hppa&ver=4.9.0-9&stamp=1404018503
[3]
http://buildd.debian-ports.org/status/fetch.php?pkg=gcc-4.9&arch=sparc64&ver=4.9.0-9&stamp=1404033854



Re: missing symbols in libstdc++.so.6 built from the 4.9 branch

2014-07-01 Thread Matthias Klose
Am 01.07.2014 11:32, schrieb Jonathan Wakely:
> On 1 July 2014 09:40, Matthias Klose wrote:
>>  - HPPA (build log [2]), is missing all the future_base symbols and
>>exception_ptr13exception symbols, current_exception and
>>rethrow_exception.
> 
> This implies ATOMIC_INT_LOCK_FREE <= 1 for that target. Our future and
> exception_ptr implementations rely on usable atomics.

thanks for the reminder. Then the same missing symbols for sparc come down to a
missing --with-cpu-32=ultrasparc.
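
For reference, a sketch of the configure option meant here (all other options
of the real sparc64-linux-gnu build are omitted):

  ../src/configure ... --with-cpu-32=ultrasparc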

  Matthias



Re: Towards GNU11

2014-10-09 Thread Matthias Klose
Am 08.10.2014 um 09:16 schrieb Richard Biener:
> On Tue, 7 Oct 2014, Marek Polacek wrote:
> I think it makes sense to do this (and I expect C++ will follow
> with defaulting to -std=c++11 once the ABI stuff has settled).
> 
> Of course it would be nice to look at the actual fallout in
> a whole-distribution rebuild...

I can certainly do that, once stage1 is finished, hopefully for more than x86
architectures.

What happened to the plans to stabilize the libstdc++ c++11 ABI?  Is this still
a target for GCC 5?

  Matthias



please document requirements on sphinx

2015-03-03 Thread Matthias Klose
Both gccjit and gnat now use sphinx to build the documentation.  While not a
direct part of the build process, it would be nice to document the requirements
on sphinx, and agree on a common version used to generate that documentation.

Coming from a distro background where I have to "build from source", I know that
sphinx is a bit less stable than say doxygen and texinfo.  So some kind of
version information, about not using sphinx plugins, etc. would be appreciated.

thanks, Matthias


how to use the uninstalled C & C++ compilers?

2015-03-31 Thread Matthias Klose
In the past (at least it worked for me in 4.9) it was possible to use the
uninstalled C & C++ compilers to build another compiler, using the just built
compilers.  Useful if you want to build e.g. libgccjit in a second step without
bootstrapping again.  This doesn't seem to work anymore with 5. At least I can't
get it working by removing the system c++ and g++ after the build of the first
compiler and then building as

CC=$(builddir)/gcc/xgcc -B$(builddir)/gcc/
CXX=$(builddir)/gcc/xg++ -B$(builddir)/gcc/ \
-I$(builddir)/$(TARGET_ALIAS)/libstdc++-v3/include \
-I$(builddir)/$(TARGET_ALIAS)/libstdc++-v3/include/$(TARGET_ALIAS)

and the next build (without bootstrapping) with these values. Did somebody get
this working?  Sometimes it looks like gcc is built using the C compiler,
sometimes it doesn't find the "new" header, and probably won't find the
libstdc++ library later. Not setting CXX seems to silently use the g++ system
compiler if it exists.

Matthias


Re: how to use the uninstalled C & C++ compilers?

2015-03-31 Thread Matthias Klose
On 03/31/2015 01:09 PM, Matthias Klose wrote:
> In the past (at least it worked for me in 4.9) it was possible to use the
> uninstalled C & C++ compilers to build another compiler, using the just built
> compilers.  Useful if you want to build e.g. libgccjit in a second step 
> without
> bootstrapping again.  This doesn't seem to work anymore with 5. At least I 
> can't
> get it working by removing the system c++ and g++ after the build of the first
> compiler and then building as
> 
> CC=$(builddir)/gcc/xgcc -B$(builddir)/gcc/
> CXX=$(builddir)/gcc/xg++ -B$(builddir)/gcc/ \
> -I$(builddir)/$(TARGET_ALIAS)/libstdc++-v3/include \
> -I$(builddir)/$(TARGET_ALIAS)/libstdc++-v3/include/$(TARGET_ALIAS)
> 
> and the next build (without bootstrapping) with these values. Did somebody get
> this working?  Sometimes it looks like gcc is built using the C compiler,
> sometimes it doesn't find the "new" header, and probably won't find the
> libstdc++ library later. Not setting CXX seems to silently use the g++ system
> compiler if it exists.

this is what is now working for me.

  CC = $(builddir)/gcc/xgcc -B$(builddir)/gcc/
  CXX = $(builddir)/gcc/xg++ -B$(builddir)/gcc/ \
-B$(builddir)/$(TARGET_ALIAS)/libstdc++-v3/src/.libs \
-B$(builddir)/$(TARGET_ALIAS)/libstdc++-v3/libsupc++/.libs \
-I$(builddir)/$(TARGET_ALIAS)/libstdc++-v3/include \
-I$(builddir)/$(TARGET_ALIAS)/libstdc++-v3/include/$(TARGET_ALIAS) \
-I$(srcdir)/libstdc++-v3/libsupc++ \
-L$(builddir)/$(TARGET_ALIAS)/libstdc++-v3/src/.libs \
-L$(builddir)/$(TARGET_ALIAS)/libstdc++-v3/libsupc++/.libs
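
The second, non-bootstrapped tree can then be configured with these values; a
sketch, with the configure options only illustrative for a libgccjit build and
not taken from the mail:

  ../src/configure --disable-bootstrap --enable-host-shared \
      --enable-languages=jit CC="$(CC)" CXX="$(CXX)"
  make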




C++ compat symbol not emitted anymore in GCC 8

2018-10-16 Thread Matthias Klose
This is seen in a distro upgrade with a shared library built using GCC 6, which
now fails to dynamically link when the library is rebuilt using GCC 8.

Details in https://bugs.debian.org/911090

Jonathan pointed me to PR71712, fixing the C++ mangling.

$ cat > foo.C
#include <string>
struct foo {
operator std::string();
};

foo::operator std::string() { return "Hi"; }

$ g++-8 -shared -fPIC -o libfoo.so foo.C && nm -D libfoo.so | grep foo
1136 T 
_ZN3foocvNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcv

$ g++-7 -shared -fPIC -o libfoo.so foo.C && nm -D libfoo.so | grep foo
115a T
_ZN3foocvNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEB5cxx11Ev
115a T 
_ZN3foocvNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcv

$ g++-8 -fabi-version=10 -shared -fPIC -o libfoo.so foo.C && nm -D libfoo.so |
grep foo
1136 T
_ZN3foocvNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEB5cxx11Ev
1136 T 
_ZN3foocvNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcv


GCC 7 emits the old/compat symbol, and GCC 8 emits it when explicitly built with
-fabi-version=10.  This ABI change results in silent breakage, maybe in more
libraries than that one.  Is there a reason that this compat symbol isn't
emitted anymore in GCC 8?

Matthias


LTO+profiled enabled builds

2019-07-04 Thread Matthias Klose
I'm running into some issues building LTO+profile enabled configurations in a
constrained build environment called buildds, with four cores and 16GB of
RAM.

configured for all frontends (maximum number of LTO links) and configured with

  --enable-bootstrap \
  --with-build-config=bootstrap-lto-lean \
  --enable-link-mutex

and building the make profiledbootstrap-lean target.

Most builds time out after 150 minutes.

A typical LTO link runs for around one minute on this hardware, however a LTO
link with -fprofile-use runs for up to three hours.

So gcc/lock-and-run.sh runs the first LTO link, the other links wait 300 seconds,
then remove the "stale" locks and run everything in parallel ... which
surprisingly goes well, because -flto=jobserver is in effect, so I don't see any
memory constraints yet.

The machine then starts building all front-ends, but apparently is not
overloaded, as -flto=jobserver is in effect.  However there is no output, and
that triggers the timeout. Richi mentioned on IRC that the LTO links only have
buffered output (unless you run in debug mode), and that is only emitted once
the link finishes.  However even with unbuffered output, there could be times
when nothing is happening, no warnings?

I'm currently experimenting with a modified lock-and-run.sh, which basically
sets the delay for releasing the "stale" locks to 30min instead of 5 min, runs
the LTO link in the background and checks for the status of the background job,
emitting some "running ..." messages while not finished.  Still adjusting some
parameters, but at least that succeeds on some of my configurations.
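
The heartbeat part of that modification boils down to something like this (a
sketch of the idea only, not the attached script):

  "$@" &                                # start the actual LTO link in the background
  pid=$!
  while kill -0 $pid 2>/dev/null; do
    echo "lto link still running ..."   # keep the build log alive
    sleep 60
  done
  wait $pid                             # propagate the link's exit status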

The locking mechanism was introduced in 2013,
https://gcc.gnu.org/ml/gcc-patches/2013-05/msg1.html

lock-and-run.sh should probably be modified not to release the "stale" locks based
on a fixed timeout value. How?

While the "no-output" problem can be fixed in the lock script as well
(attached), this doesn't apply to third party apps.  Having unbuffered output
and/or an option to print progress would be beneficial.

Matthias





[attachment: lock-and-run.sh (application/shellscript)]


Re: state of play/strategy for including Modula-2 into the trunk (licence queries)

2019-10-01 Thread Matthias Klose

On 30.09.19 18:46, Gaius Mulley wrote:

again is this sensible?  Are there [obvious] issues I've missed?


does the profiled LTO build now work? I didn't check recently myself.

Matthias


Re: GCC 7.5 Release Candidate available from gcc.gnu.org

2019-11-07 Thread Matthias Klose

On 05.11.19 13:45, Richard Biener wrote:


The first release candidate for GCC 7.5 is available from

  https://gcc.gnu.org/pub/gcc/snapshots/7.5.0-RC-20191105/

and shortly its mirrors.  It has been generated from SVN revision 277823.

I have so far bootstrapped and tested the release candidate on
{x86_64,i586,ppc64le,s390x,aarch64}-linux.  Please test it
and report any issues to bugzilla.

If all goes well, I'd like to release 7.5 on Thursday, November 14th.


With a distribution build (Ubuntu) on amd64, i386, armhf, arm64, ppc64el and 
s390x, I don't see any regressions in the GCC testsuite (compared to 7.4.0), 
except for two issues on ppc64el:


FAIL: gcc.target/powerpc/pr87532.c (test for excess errors)
Excess errors:
/build/gcc-7-8odB_r/gcc-7-7.4.0/src/gcc/testsuite/gcc.target/powerpc/pr87532.c:45:27: 
warning: format '%d' expects argument of type 'int', but argument 2 has type 
'size_t {aka long unsigned int}' [-Wformat=]


is a new test, and only caused by default hardening settings.

PASS: gcc.dg/vect/slp-perm-4.c execution test
FAIL: gcc.dg/vect/slp-perm-4.c scan-tree-dump-times vect "vectorized 1 loops" 1
PASS: gcc.dg/vect/slp-perm-4.c scan-tree-dump-times vect "gaps requires scalar 
epilogue loop" 0
FAIL: gcc.dg/vect/slp-perm-4.c scan-tree-dump-times vect "vectorizing stmts 
using SLP" 1


Matthias


libsanitizer in GCC 10 is dropping symbols without bumping the soversions

2019-11-29 Thread Matthias Klose
libsanitizer on trunk only bumps the soversion for asan, but the other libraries
drop some symbols without bumping the soname.  Are these changes intended, and
should the soversions be bumped?
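
A quick way to spot such drops is to compare the dynamic symbol tables of the
old and new libraries (a sketch; the diffs below come from the Debian .symbols
files, not from this command):

  nm -D --defined-only libubsan.so.1 | awk '{ print $3 }' | sort > new.syms
  diff old.syms new.syms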

Matthias


diff --git a/debian/liblsan0.symbols b/debian/liblsan0.symbols
index f318d9a..5aa23a6 100644
--- a/debian/liblsan0.symbols
+++ b/debian/liblsan0.symbols
@@ -1,5 +1,4 @@
 liblsan.so.0 liblsan0 #MINVER#
- OnPrint@Base 8
  _ZdaPv@Base 4.9
  _ZdaPvRKSt9nothrow_t@Base 4.9
  _ZdaPvSt11align_val_t@Base 8
@@ -138,7 +140,6 @@ liblsan.so.0 liblsan0 #MINVER#
  calloc@Base 4.9
  cfree@Base 4.9
  free@Base 4.9
- (arch=base-any-any-amd64 any-mips any-mipsel)internal_sigreturn@Base 7
  mallinfo@Base 4.9
  malloc@Base 4.9
  malloc_usable_size@Base 4.9

diff --git a/debian/libtsan0.symbols b/debian/libtsan0.symbols
index 827bb58..6a282a4 100644
--- a/debian/libtsan0.symbols
+++ b/debian/libtsan0.symbols
@@ -35,7 +35,6 @@ libtsan.so.0 libtsan0 #MINVER#
  AnnotateThreadName@Base 4.9
  AnnotateTraceMemory@Base 4.9
  AnnotateUnpublishMemoryRange@Base 4.9
- OnPrint@Base 8
  RunningOnValgrind@Base 4.9
  ThreadSanitizerQuery@Base 4.9
  ValgrindSlowdown@Base 4.9
@@ -1637,7 +1712,7 @@ libtsan.so.0 libtsan0 #MINVER#
  initgroups@Base 4.9
  inotify_init1@Base 4.9
  inotify_init@Base 4.9
- (arch=base-any-any-amd64 any-mips any-mipsel)internal_sigreturn@Base 7
+#MISSING: 10# (arch=base-any-any-amd64 any-mips any-mipsel)internal_sigreturn@Base 7
  ioctl@Base 4.9
  kill@Base 4.9
  lgamma@Base 4.9

diff --git a/debian/libubsan1.symbols b/debian/libubsan1.symbols
index b829376..731d0db 100644
--- a/debian/libubsan1.symbols
+++ b/debian/libubsan1.symbols
@@ -1,5 +1,4 @@
 libubsan.so.1 libubsan1 #MINVER#
- OnPrint@Base 8
  __asan_backtrace_alloc@Base 4.9
  __asan_backtrace_close@Base 4.9
@@ -91,8 +93,8 @@ libubsan.so.1 libubsan1 #MINVER#
  __ubsan_handle_dynamic_type_cache_miss_abort@Base 4.9
  __ubsan_handle_float_cast_overflow@Base 4.9
  __ubsan_handle_float_cast_overflow_abort@Base 4.9
- __ubsan_handle_function_type_mismatch@Base 4.9
- __ubsan_handle_function_type_mismatch_abort@Base 4.9
+ __ubsan_handle_function_type_mismatch_v1@Base 4.9
+ __ubsan_handle_function_type_mismatch_v1_abort@Base 4.9
  __ubsan_handle_implicit_conversion@Base 9
  __ubsan_handle_implicit_conversion_abort@Base 9
  __ubsan_handle_invalid_builtin@Base 8
@@ -126,4 +128,3 @@ libubsan.so.1 libubsan1 #MINVER#
  __ubsan_handle_vla_bound_not_positive_abort@Base 4.9
  __ubsan_on_report@Base 9
  __ubsan_vptr_type_cache@Base 4.9
- (arch=base-any-any-amd64 any-mips any-mipsel)internal_sigreturn@Base 7


Re: multiple definition of symbols" when linking executables on ARM32 and AArch64

2020-01-06 Thread Matthias Klose
On 06.01.20 11:03, Andrew Pinski wrote:
> +GCC
> 
> On Mon, Jan 6, 2020 at 1:52 AM Matthias Klose  wrote:
>>
>> In an archive test rebuild with binutils and GCC trunk, I see a lot of build
>> failures on both aarch64-linux-gnu and arm-linux-gnueabihf failing with
>> "multiple definition of symbols" when linking executables, e.g.
> 
> THIS IS NOT A BINUTILS OR GCC BUG.
> GCC changed the default to -fno-common.
> It seems like for some reason, your non-aarch64/arm builds had changed
> the default back to being with -fcommon turned on.

what would that be?  I'm not aware of any active change doing that.  Packages
build on x86, ppc64el and s390x at least.
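
For context, a minimal sketch of the pattern behind these link errors
(hypothetical header, not taken from any of the failing packages):

  /* common.h -- the pattern that breaks once -fno-common is the default.
     An uninitialized file-scope variable is only a *tentative* definition,
     so every .c file including this header emits its own definition.       */
  int counter;        /* with -fcommon the linker merged these duplicates;
                         with -fno-common, linking two includers a.o and b.o
                         fails with "multiple definition of `counter'"      */

  /* Proper fix: 'extern int counter;' in the header and one definition in
     a single .c file.  Stopgap for unfixed packages: build with -fcommon.  */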


Re: multiple definition of symbols" when linking executables on ARM32 and AArch64

2020-01-06 Thread Matthias Klose
On 06.01.20 13:30, Wilco Dijkstra wrote:
> On 06.01.20 11:03, Andrew Pinski wrote:
>> +GCC
>>
>> On Mon, Jan 6, 2020 at 1:52 AM Matthias Klose  wrote:
>>>
>>> In an archive test rebuild with binutils and GCC trunk, I see a lot of build
>>> failures on both aarch64-linux-gnu and arm-linux-gnueabihf failing with
>>> "multiple definition of symbols" when linking executables, e.g.
>>
>> THIS IS NOT A BINUTILS OR GCC BUG.
>> GCC changed the default to -fno-common.
>> It seems like for some reason, your non-aarch64/arm builds had changed
>> the default back to being with -fcommon turned on.
> 
>> what would that be?  I'm not aware of any active change doing that.  Packages
>> build on x86, ppc64el and s390x at least.
> 
> Well if you want to build old archived code using latest GCC then you may 
> need to
> force -fcommon just like you need to add many warning disables. Maybe you were
> using an older GCC for the other targets? As Andrew notes, this isn't 
> Arm-specific.

Found out why: I started the test rebuild with trunk 20191219, then gave back all
build failures yesterday with trunk 20200104, and I saw most of the armhf/arm64
FTBFS when I retriggered the failing builds.  To get consistent results I should
finish that test rebuild with the -fno-common change reverted.

However, this is an undocumented change in the current NEWS, and seeing
literally hundreds of package failures, I doubt that's the right thing to do, at
least without any deprecation warning first.  Could that be handled by deprecating
in GCC 10 first, and then changing it for GCC 11?

Matthias


Re: multiple definition of symbols" when linking executables on ARM32 and AArch64

2020-01-06 Thread Matthias Klose
On 06.01.20 14:29, Matthias Klose wrote:
> On 06.01.20 13:30, Wilco Dijkstra wrote:
>> On 06.01.20 11:03, Andrew Pinski wrote:
>>> +GCC
>>>
>>> On Mon, Jan 6, 2020 at 1:52 AM Matthias Klose  wrote:
>>>>
>>>> In an archive test rebuild with binutils and GCC trunk, I see a lot of 
>>>> build
>>>> failures on both aarch64-linux-gnu and arm-linux-gnueabihf failing with
>>>> "multiple definition of symbols" when linking executables, e.g.
>>>
>>> THIS IS NOT A BINUTILS OR GCC BUG.
>>> GCC changed the default to -fno-common.
>>> It seems like for some reason, your non-aarch64/arm builds had changed
>>> the default back to being with -fcommon turned on.
>>
>>> what would that be?  I'm not aware of any active change doing that.  
>>> Packages
>>> build on x86, ppc64el and s390x at least.
>>
>> Well if you want to build old archived code using latest GCC then you may 
>> need to
>> force -fcommon just like you need to add many warning disables. Maybe you 
>> were
>> using an older GCC for the other targets? As Andrew notes, this isn't 
>> Arm-specific.
> 
> found out about why. Started the test rebuild with trunk 20191219, then gave
> back all build failures yesterday with trunk 20200104.

Hmm, no: that change was made on November 20, not December 20 (r278509).  So why
do I see these only on ARM32 and AArch64?

> And I saw most of the
> armhf/arm64 ftbfs when I retriggered failing builds.  To get consistent 
> results
> I should finish that test rebuild with the -fno-common change reverted.
> 
> However, this is an undocumented change in the current NEWS, and seeing
> literally hundreds of package failures, I doubt that's the right thing to do, 
> at
> least without any deprecation warning first.  Could that be handled, 
> deprecating
> in GCC 10 first, and the changing that for GCC 11?
> 
> Matthias
> 



Re: multiple definition of symbols" when linking executables on ARM32 and AArch64

2020-01-06 Thread Matthias Klose
On 06.01.20 15:02, Wilco Dijkstra wrote:
>> However, this is an undocumented change in the current NEWS, and seeing
>>> literally hundreds of package failures, I doubt that's the right thing to 
>>> do, at
>>> least without any deprecation warning first.  Could that be handled, 
>>> deprecating
>>> in GCC 10 first, and the changing that for GCC 11?
> 
> This change was first proposed for GCC8, and rejected because of failures in 
> the
> distros. Two years have passed, and there are still failures... Would this 
> change if
> we postpone it even longer? My feeling is that nobody is going to actively 
> fix their
> code if the default isn't changed first.

I think we were missing a proper announcement for GCC 8, and maybe we were not
aware of the amount of code changes needed in third-party packages.  So maybe we
should have had in GCC 8 a configure option to make that the default behavior,
announce it in the release notes, ask for testing using that option, and make the
default change in GCC 9.  We could do that for GCC 10/11, or just bite the bullet
and fix all package builds now.

Matthias


Re: Cygwin + zlib

2017-02-24 Thread Matthias Klose
On 24.02.2017 18:55, NightStrike wrote:
> Currently, to build natively on cygwin, this patch is required to zlib:
> 
> https://github.com/Alexpux/MSYS2-packages/blob/master/zlib/1.2.11-cygwin-no-widechar.patch
> 
> Otherwise, this error occurs:
> 
> ./../zlib/libz.a(libz_a-gzlib.o):gzlib.c:(.text+0x646): undefined
> reference to `_wopen'
> ./../zlib/libz.a(libz_a-gzlib.o):gzlib.c:(.text+0x646): relocation
> truncated to fit: R_X86_64_PC32 against undefined symbol `_wopen'
> 
> 
> Given that cygwin is a supported system, and given that zlib is
> in-tree (and was recently updated), does it make sense to include this
> patch in the gcc in-tree version of zlib?

please could you forward this issue upstream as well?



Re: dejagnu version update?

2017-05-16 Thread Matthias Klose
On 16.05.2017 05:35, Bernhard Reutner-Fischer wrote:
> On 16 May 2017 at 14:16, Jonathan Wakely  wrote:
>> On 16 May 2017 at 13:13, Bernhard Reutner-Fischer wrote:
>>> 1.5.0 wouldn't buy us anything as the "libdirs" handling is only in 1.5.2 
>>> and later.
>>
>> Ah I missed that in the earlier discussion.
>>
>> The change I care about in 1.5.3 is
>> http://git.savannah.gnu.org/gitweb/?p=dejagnu.git;a=commit;h=5256bd82343000c76bc0e48139003f90b6184347
> 
> the libdirs handling is
> http://git.savannah.gnu.org/gitweb/?p=dejagnu.git;a=commit;h=5481f29161477520c691d525653323b82fa47ad7
> and applies cleanly to everything 1.5.x-ish. Didn't try if it applies to 
> 1.4.4.

this patch is part of dejagnu in Ubuntu 14.04 LTS.



Re: Steering committee, please, consider using lzip instead of xz

2017-06-07 Thread Matthias Klose
On 07.06.2017 13:25, Antonio Diaz Diaz wrote:
> Dear GCC steering committee,
> 
> This has been recently asked in this list[1], but in case you have missed it
> because of a subject line not explicit enough, I would like to appeal to you
> directly.
> 
> [1] http://gcc.gnu.org/ml/gcc/2017-06/msg9.html
> 
> Since 2017-05-24 weekly snapshots use xz compression instead of bzip2. I 
> suppose
> this means that release tarballs will also use xz at some point.
> 
> If this is the case, I politely request you to consider using lzip instead of
> xz. I have spent a lot of time during the last 9 years developing lzip and
> studying the xz format, and based on this experience I consider that lzip is a
> better choice than xz, now and in the long term.
> 
> I have been developing software since the early 80s, and I am a GNU maintainer
> since 2003. You are all experienced developers. All I ask is that you read
> carefully the following references and then consider lzip and xz based on 
> their
> technical merits.
> 
> http://www.nongnu.org/lzip/xz_inadequate.html
> http://www.nongnu.org/lzip/lzip_benchmark.html#xz1
> 
> Also note that 'lzip -9' produces a tarball a 1% smaller than xz, in spite of
> lzip using half the RAM to compress and requiring half the RAM to decompress
> than xz.
> 
> -rw-r--r-- 1 58765134 2017-06-07 09:13 gcc-8-20170604.tar.lz
> -rw-r--r-- 1 59367680 2017-06-07 09:13 gcc-8-20170604.tar.xz

I proposed and implemented the change to use xz instead of bzip2 because of the
space savings compared to bzip2.  I'm not commenting on the claimed inadequacy of
xz, but it might help lzip more to address some project issues and to promote it
as an alternative, rather than appealing to the GCC steering committee.

 - lzip is not a GNU project (afaics), same as for xz.
 - lzip doesn't have a public VCS.
 - lzip doesn't have a documented API, doesn't build as a library,
   and I can't find language bindings for lzip.
 - lzip isn't (yet) used for software distributions, while xz is (and afaics
   xz is used for GNU projects in addition to gz).

Matthias


Re: GCC 8.0.0 Status Report (2018-01-15), Trunk in Regression and Documentation fixes only mode

2018-02-06 Thread Matthias Klose
On 17.01.2018 09:19, Richard Biener wrote:
> On Tue, Jan 16, 2018 at 8:20 PM, Andrew Roberts  
> wrote:
>> Boot strap on Darwin x86_64 with llvm now seems broken as of last 8.0.0
>> snapshot, it still is working fine with 7.2.0.
>> I've added bug: 83903
>>
>> x86_64, armv6, armv7, aarch64 all seem fine on linux. I've been building
>> with latest gmp (6.1.2), mpfr (4.0.0) and mpc (1.1.) across all my systems.
>>
>> I observe isl was updated to 0.18 in the download_prerequisites script. Are
>> the other's going to get updated before the 8.0.0 release?
> 
> Now that mpc 1.1.0 was released we could update the versions if we get
> sufficient
> "positives" from people testing with newer releases.

I have seen some issues with mpfr 4.0.0 on 32bit platforms, however not in GCC
itself yet.  These are all fixed in 4.0.1 rc2, so maybe document 4.0.1 instead
of 4.0.0 once it is released.

Matthias


Re: [RFC] Adding Python as a possible language and it's usage

2018-07-18 Thread Matthias Klose
On 18.07.2018 14:49, Joel Sherrill wrote:
> On Wed, Jul 18, 2018, 7:15 AM Jonathan Wakely  wrote:
> 
>> On Wed, 18 Jul 2018 at 13:06, Eric S. Raymond wrote:
>>>
>>> Jonathan Wakely :
 On Wed, 18 Jul 2018 at 11:56, David Malcolm wrote:
> Python 2.6 onwards is broadly compatible with Python 3.*. and is
>> about
> to be 10 years old.  (IIRC it was the system python implementation in
> RHEL 6).

 It is indeed. Without some regular testing with Python 2.6 it could be
 easy to introduce code that doesn't actually work on that old version.
 I did that recently, see PR 86112.

 This isn't an objection to using Python (I like it, and anyway I don't
 touch the parts of GCC that you're talking about using it for). Just a
 caution that trying to restrict yourself to a portable subset isn't
 always easy for casual users of a language (also a problem with C++98
 vs C++11 vs C++14 as I'm sure many GCC devs are aware).
>>>
>>> It's not very difficult to write "polyglot" Python that is indifferent
>>> to which version it runs under.  I had to solve this problem for
>>> reposurgeon; techniques documented here...
>>
>> I don't see any mention of avoiding dict comprehensions (not supported
>> until 2.7, so unusable on RHEL6/CentOS6 and SLES 11).
>>
>> I maintain it's easy to unwittingly use a feature (such as dict
>> comprehensions) which works fine on your machine, but aren't supported
>> by all versions you intend to support. Regular testing with the oldest
>> version is needed to prevent that (which was the point I was making).
>>
> 
> I think the RTEMS Community may be a good precedence here. RTEMS is always
> cross compiled and we are as host agnostic as possible. We use as close to
> the latest release of GCC, binutils, gdb, and newlib as possible. Our host
> side tools are in a combination of Python and C++. We use Sphinx for
> documentation.
> 
> We are careful to use the Python on RHEL 6 as a baseline. You can build an
> RTEMS environment there. But at least one of the Sphinx pieces requires a
> Python of at least RHEL 7 vintage.
> 
> We have a lot of what I will politely call institutional and large
> organization users who have to adhere to strict IT policies. I think RHEL 7
> is common but can't swear there is no RHEL 6 out there and because of that,
> we set the Python 2.x as a minimum.
> 
> Yes these are old. And for native new distribution use, it doesn't matter.
> But for cross and local upgrades, old distributions matter. Particularly
> those targeting enterprise users. And those are glacially slow.
> 
> As an aside, it was not being able to build the RTEMS documentation that
> pushed me off RHEL 6 as my primary personal environment last year. I wanted
> to be using the oldest distribution I thought was in use in our community.

Doesn't RHEL 6 have overlays for that very reason, to install a newer Python3?

Please don't start with Python2 anymore.  It will be discontinued in less than two
years, and then you'll have distributions without Python2 at all.  If you don't
have a recent Python3, you can probably build it for your platform yourself.

Python3 is also cross-buildable, and much easier to cross-build than guile or 
perl.

Matthias


Re: [RFC] Adding Python as a possible language and it's usage

2018-07-18 Thread Matthias Klose
On 18.07.2018 19:29, Paul Koning wrote:
> 
> 
>> On Jul 18, 2018, at 1:22 PM, Boris Kolpackov  wrote:
>>
>> Paul Koning  writes:
>>
 On Jul 18, 2018, at 11:13 AM, Boris Kolpackov  
 wrote:

 I wonder what will be the expected way to obtain a suitable version of
 Python if one is not available on the build machine? With awk I can
 build it from source pretty much anywhere. Is building newer versions
 of Python on older targets a similarly straightforward process (somehow
 I doubt it)? What about Windows?
>>>
>>> It's the same sort of thing: untar the sources, configure, make, make
>>> install.

Windows and MacOSX binaries are available from upstream.  The build process on
*ix targets is autoconf-based and as easy as for awk/gawk.

>> Will this also install all the Python packages one might plausible want
>> to use in GCC?

Some extension modules depend on external libraries, but even if those don't
exist, the build succeeds without building these extension modules.  The sources
come with embedded libs for zlib, libmpdec and libexpat.  They don't include
libffi (only in 3.7), libsqlite, libgdbm, libbluetooth, libdb.  I suppose the
usage of such modules should be banned by policy.  The only additional thing
needed is either libdb (Berkeley/Sleepycat) or gdbm to build the anydbm module,
which might be necessary.

> It installs the entire standard Python library (corresponding to the 1800+ 
> pages of the library manual).  I expect that will easily cover anything GCC 
> might want to do.

The current usage of awk and perl doesn't include any third-party libraries.
That's where the usage of Python should start as well.

Matthias


Re: [RFC] Adding Python as a possible language and it's usage

2018-07-20 Thread Matthias Klose
On 19.07.2018 22:20, Karsten Merker wrote:
> David Malcolm wrote:
>> On Tue, 2018-07-17 at 14:49 +0200, Martin Liška wrote:
>>> I've recently touched AWK option generate machinery and it's
>>> quite unpleasant to make any adjustments.  My question is
>>> simple: can we starting using a scripting language like Python
>>> and replace usage of the AWK scripts?  It's probably question
>>> for Steering committee, but I would like to see feedback from
>>> community.
>>
>> As you know, I'm a fan of Python.  As I noted elsewhere in this
>> thread, one issue is Python 2 vs Python 3 (and minimum
>> versions).  Within Python 2.*, Python 2.6 onwards is broadly
>> compatible with Python 3.*, and there's a well-known common
>> subset that works in both languages.
>>
>> To what extent would this complicate bootstrap?  (I don't think
>> so, in that it would appear to be just an external build-time
>> dependency on the build machine).
>>
>> Would this make it harder for people to build GCC?  It's one
>> more dependency, but CPython is widely available and relatively
>> easy to build.  (I don't have experience of doing bring-up of a
>> new architecture, though).
> 
> Hello,
> 
> I have recently been working on bringing up a new Debian port for
> the riscv64 architecture from scratch, so I would like to add
> some of my personal experiences here.
> 
> Adding a dependency on python for building gcc would make life
> for distribution porters quite a bit harder.  There are a bunch
> of packages that are more or less essential for a modern Linux
> distribution but at the same time extremely difficult to properly
> cross-build.  For a distribution porter trying to bootstrap a new
> architecture, this means that one has to resort to native
> building sooner or later, i.e. one has to build native toolchain
> packages and then work forward from there.  During the bootstrap
> process it is often necessary to break dependency cycles and
> natively rebuild toolchain packages with different build-profiles
> enabled, or to build newer versions of the same toolchain packages
> with bugfixes for the new architecture.
> 
> A dependency on python would mean that to be able to do a native
> rebuild of the toolchain one would need a native python.  The
> problem here is that python has an enormous number of transitive
> build-dependencies and not all of them are easily cross-buildable,
> i.e. one needs a native compiler to build some of them in a
> bootstrap scenario.  This can lead to a catch-22-style situation
> where one would need a native python package and its dependencies
> for natively building the gcc package and a native gcc package
> for building (some of) the dependencies of the python package.
> 
> With awk we don't have this problem as in contrast to python awk
> doesn't pull in any dependencies that aren't required by gcc
> anyway.  From a distro porter's point of view I would therefore
> appreciate very much if it would be possible to avoid adding a
> python dependency to the gcc build process.

I don't see that as an issue.  As said in another reply in this thread, you can
do a staged python build, which has the same build dependencies as awk (maybe
except the db/gdbm module).  And if you need to, you can cross-build Python more
easily than, for example, perl or guile.

Matthias


Re: [RFC] Adding Python as a possible language and it's usage

2018-07-20 Thread Matthias Klose
On 20.07.2018 20:53, Konovalov, Vadim wrote:
>> From: Segher Boessenkool
>> On Fri, Jul 20, 2018 at 12:54:36PM -0400, Paul Koning wrote:
> Fully agree with that. Coming up with a new scripts written in python2 
> really
> makes no sense.

 Then python cannot be a build requirement for GCC, since some of our
 primary targets do not ship python3.
>>>
>>> Is it required that GCC must build with only the stock
>>> support elements on the primary target platforms?
>>
>> Not that I know.  But why
>> should we make it hugely harder for essentially
>> no benefit?
>>
>> All the arguments
>> against awk were arguments against *the current scripts*.
>>
>> And yes, we can (and
>> perhaps should) rewrite those build scripts as C code,
>> just like all the other
>> gen* we have.
> 
> +1 
> 
>>> Or is it allowed to require installing prerequisites?  Yes,
>>> some platforms are so far behind they still don't ship Python 3, but 
>>> installing
> 
> Sometimes those are not behind, those could have no python for other reasons 
> - 
> maybe those are too forward? They just don't have python yet?
> 
>>> it is straightforward.
>>
>> Installing it is not straightforward at all.
> 
> I also agree with this;

all == "Installing it is not straightforward" ?

I do question this. I mentioned elsewhere what is needed.

> Please consider that both Python - 2 and 3 - they both do not 
> support build chain on Windows with GCC
> 
> for me, it is a showstopper

This seems to be a different issue.  However, I have to say that I'm not booting
Windows on a regular basis.  Does the build chain on Windows mean Cygwin?  If yes,
Python is surely available prebuilt there.

Matthias


Re: gcc-gnat for Linux/MIPS-32bit-be, and HPPA2

2018-07-22 Thread Matthias Klose
On 22.07.2018 03:24, Carlo Pisani wrote:
> hi guys
> got some deb files from an old Debian's archive(1), converted .deb

Debian stretch (9) comes with GCC 6 and gnat cross compilers available, same
for Ubuntu 18.04 LTS (GCC 7).  It may be better to start with more recent
versions (packages are gnat-6-hppa-linux-gnu, gnat-6-mips-linux-gnu, and
corresponding GCC 7 versions).


Re: [RFC] Adding Python as a possible language and it's usage

2018-07-27 Thread Matthias Klose
On 27.07.2018 16:31, Michael Matz wrote:
> Hi,
> 
> On Fri, 27 Jul 2018, Michael Matz wrote:
> 
>> Using any python scripts as part of generally building GCC (i.e. where 
>> the generated files aren't prepackaged) will introduce a python 
>> dependency for distro packages.  And for those distros that bootstrap a 
>> core cycle of packages (e.g. *SUSE) this will include python (and all 
>> its dependencies) into that bootstrap cycle.
>>
>> That will be terrible.
> 
> Oh, and of course, I haven't read any really convincing arguments for 
> why python would be so much better than awk to counter the disadvantages.
> 
> Building a compiler (especially one that regards itself as a 
> multi-target/host one) should have extremely few prerequisites (ideally 
> only a compiler and runtime for the language its written in), and I 
> wouldn't call a full python distro that (no matter how much people claim 
> that getting the necessary subset of python is mostly trivial.  compiling 
> any random awk is trivial, especially given a compiler you already need 
> anyway; python is not).

that very much depends on your bootstrap system supporting staged builds.  You
already have to do that for glibc/gcc anyway.  But yes, if you think that adding
a staged python build is more complicated ...

> Hell, if anything I'd say we should rewrite the awk scripts into POSIX sh 
> (!).  I'll concede that for text processing AWK is nicer ;-)
> 
> So, if it's only for a minor convenience of writing some text 
> processing scripts, no, that's not a good reason to complicate our 
> prerequisites.  (The helper scripts in contrib/ as long as they aren't 
> used during GCC build can use any fancy language they want)



Re: GCC 4.5.3 Release Candidate available from gcc.gnu.org

2011-04-28 Thread Matthias Klose

On 04/21/2011 12:40 PM, Richard Guenther wrote:


A first release candidate for GCC 4.5.3 is available from

   ftp://gcc.gnu.org/pub/gcc/snapshots/4.5.3-RC-20110421/

and shortly its mirrors.  It has been generated from SVN revision 172803.

I have sofar bootstrapped and tested the release candidate on
x86_64-unknown-linux-gnu.  Please test it and report any issues to
bugzilla.

If all goes well the final GCC 4.5.3 release will happen late next week.


I didn't see regressions in the testsuite compared to 4.5.2, for the
architectures built by Debian.


  Matthias


Re: Bugzilla components for target libraries

2011-11-14 Thread Matthias Klose
On 11/10/2011 06:30 PM, Joseph S. Myers wrote:
> On Thu, 10 Nov 2011, Rainer Orth wrote:
> 
>> I've recently noticed that several of our target libraries are not
>> properly (if at all) represented as bugzilla components.  The following
>> table shows the current situation:
>>
>>   directory  component
> 
> You omitted boehm-gc and zlib, both used in target libraries (libgcjgc, 
> libzgcj) though not intended for direct use as such by GCC users (anyone 
> wanting to use them directly should use the upstream releases).

boehm-gc is used for the GC-enabled libobjc as well.


Re: Troubleshooting with gcc 4.6

2011-11-14 Thread Matthias Klose
On 11/09/2011 07:50 PM, Ian Lance Taylor wrote:
> santi  writes:
> 
>> I recently updated my Ubuntu 10.10 to 11.10 and since then I have been
>> having problems with my compiler. I have seen that this new Ubuntu
>> distribution uses gcc 4.6 whilest my old 10.10 used gcc 4.4.5 or
>> 4.4.6.
>>
>> The main problem I have nowadays is with the math.h library when I
>> need to use functions as sqrt() or pow() that I used to use without
>> any problem in the old distribution (well, I had to write the -lm
>> option when I tried to compile my source files but it did run
>> perfectly). Today I'm getting and unresolve refernce to 'sqrt' when I
>> comile my files even though I'm using the -lm option.

This is caused by passing --as-needed by default to the linker.  Make sure to
pass libraries on the command line after the objects (the symbol needs to be
referenced before the definition is found).  You'll likely find this issue on
openSUSE releases as well (it may be enabled for package builds only).
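
A minimal sketch of the ordering issue (hypothetical file name):

  /* use_sqrt.c -- with --as-needed, a shared library is only kept if it
     satisfies a reference from something *earlier* on the command line:
         gcc -lm use_sqrt.c     ->  undefined reference to `sqrt'
         gcc use_sqrt.c -lm     ->  links fine
     The volatile keeps GCC from folding the sqrt call away at compile time. */
  #include <math.h>
  #include <stdio.h>

  int main(void)
  {
    volatile double x = 2.0;
    printf("%f\n", sqrt(x));
    return 0;
  }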

> This question is not appropriate for the mailing list gcc@gcc.gnu.org.
> It would be appropriate for gcc-h...@gcc.gnu.org.  Please take any
> followups to gcc-help.  Thanks.
> 
> When asking a question of this sort, it helps a lot if you show us
> precisely what you did and precisely what happened.  Without seeing
> that, I am going to guess that you are running into multiarch libraries.
> Debian, and therefore Ubuntu, decided to move the system libraries from
> the locations where all GNU/Linux distros have put them for many years.
> They have updated their own versions of gcc, but the mainstream gcc
> releases have not been updated.
> 
> This is going to be an ongoing problem for many years for people who use
> Debian or Ubuntu.  I do not know how to resolve it.

This is not a multiarch issue. Passing --as-needed by default to the linker was
enabled in the Ubuntu 11.10 release, which is one month old [1].

Even multiarch is only seven months old (first appeared in Ubuntu 11.04), so I
honestly can't see any justification for your "many years" statement.

Yes, I do need to re-submit the updated multiarch patch.

  Matthias

[1]
https://wiki.ubuntu.com/OneiricOcelot/ReleaseNotes?action=show&redirect=OneiricOcelot%2FTechnicalOverview#GCC_4.6_Toolchain


Re: GCC 4.3.4 release candidate available

2009-08-03 Thread Matthias Klose

On 27.07.2009 18:12, Richard Guenther wrote:


A release candidate for the GCC 4.3.4 is now available at

ftp://gcc.gnu.org/pub/gcc/snapshots/4.3.4-RC-20090727

I plan to roll out the final release at the beginning of next week
if there are no major problems reported.


testsuite doesn't show regressions compared to 4.3.3 on various Debian 
architectures (the compiler tested was the Debian gcc-4.3 package, not exactly 
the snapshot provided).


alpha   http://gcc.gnu.org/ml/gcc-testresults/2009-08/msg00134.html
arm http://gcc.gnu.org/ml/gcc-testresults/2009-08/msg00135.html
hppahttp://gcc.gnu.org/ml/gcc-testresults/2009-08/msg00249.html
i486http://gcc.gnu.org/ml/gcc-testresults/2009-07/msg03148.html
ia64http://gcc.gnu.org/ml/gcc-testresults/2009-07/msg03233.html
mipshttp://gcc.gnu.org/ml/gcc-testresults/2009-08/msg00136.html
mipsel  http://gcc.gnu.org/ml/gcc-testresults/2009-07/msg03328.html
powerpc http://gcc.gnu.org/ml/gcc-testresults/2009-07/msg03234.html
s390http://gcc.gnu.org/ml/gcc-testresults/2009-07/msg03330.html
sparc   http://gcc.gnu.org/ml/gcc-testresults/2009-07/msg03329.html
x86_64  http://gcc.gnu.org/ml/gcc-testresults/2009-07/msg03232.html

i486-kfreebsd http://gcc.gnu.org/ml/gcc-testresults/2009-07/msg03235.html

  Matthias


Re: Anyone else run ACATS on ARM?

2009-08-12 Thread Matthias Klose

On 12.08.2009 23:07, Martin Guy wrote:

On 8/12/09, Joel Sherrill  wrote:

  So any ACATS results from any other ARM target would be
  appreciated.


I looked into gnat-arm for the new Debian port and the conclusion was
that it has never been bootstrapped onto ARM. The closest I have seen
is Adacore's GNATPro x86->xscale cross-compiler hosted on Windows and
targetting Nucleus OS (gak!)

The community feeling was that it would "just go" given a prodigal
burst of cross-compiling, but I never got achieved sufficiently high
blood pressure to try it...


Is there any arm-linux-gnueabi gnat binary that could be used to bootstrap an
initial gnat-4.4 package for Debian?


  Matthias


Re: Anyone else run ACATS on ARM?

2009-08-20 Thread Matthias Klose

On 17.08.2009 12:00, Mikael Pettersson wrote:

On Wed, 12 Aug 2009 23:08:00 +0200, Matthias Klose  wrote:

On 12.08.2009 23:07, Martin Guy wrote:

On 8/12/09, Joel Sherrill   wrote:

   So any ACATS results from any other ARM target would be
   appreciated.


I looked into gnat-arm for the new Debian port and the conclusion was
that it has never been bootstrapped onto ARM. The closest I have seen
is Adacore's GNATPro x86->xscale cross-compiler hosted on Windows and
targetting Nucleus OS (gak!)

The community feeling was that it would "just go" given a prodigal
burst of cross-compiling, but I never got achieved sufficiently high
blood pressure to try it...


is there any arm-linx-gnueabi gnat binary that could be used to bootstrap an
initial gnat-4.4 package for debian?

  >
  > Matthias

Yes, see<http://user.it.uu.se/~mikpe/linux/arm-eabi-ada/>.


I used these binaries to build a .deb package [1], test results at [2], now 
trying to build trunk with this as a bootstrap package. The Debian gnat-4.4 
package doesn't build yet as it requires zero cost exception support and 
probably more.


  Matthias

[1] http://people.debian.org/~doko/gcc/gcc-snapshot_20090722-0ubuntu1_armel.deb
[2] http://gcc.gnu.org/ml/gcc-testresults/2009-08/msg02018.html


Re: Anyone else run ACATS on ARM?

2009-08-22 Thread Matthias Klose

On 20.08.2009 10:16, Matthias Klose wrote:

On 17.08.2009 12:00, Mikael Pettersson wrote:

On Wed, 12 Aug 2009 23:08:00 +0200, Matthias Klose
wrote:

On 12.08.2009 23:07, Martin Guy wrote:

On 8/12/09, Joel Sherrill wrote:

So any ACATS results from any other ARM target would be
appreciated.


I looked into gnat-arm for the new Debian port and the conclusion was
that it has never been bootstrapped onto ARM. The closest I have seen
is Adacore's GNATPro x86->xscale cross-compiler hosted on Windows and
targetting Nucleus OS (gak!)

The community feeling was that it would "just go" given a prodigal
burst of cross-compiling, but I never got achieved sufficiently high
blood pressure to try it...


is there any arm-linx-gnueabi gnat binary that could be used to
bootstrap an
initial gnat-4.4 package for debian?

>
> Matthias

Yes, see<http://user.it.uu.se/~mikpe/linux/arm-eabi-ada/>.


I used these binaries to build a .deb package [1], test results at [2],
now trying to build trunk with this as a bootstrap package. The Debian
gnat-4.4 package doesn't build yet as it requires zero cost exception
support and probably more.

Matthias

[1]
http://people.debian.org/~doko/gcc/gcc-snapshot_20090722-0ubuntu1_armel.deb
[2] http://gcc.gnu.org/ml/gcc-testresults/2009-08/msg02018.html


Mikael's patch applies to the trunk as well; test results are about the same as 
for the 4.4 branch [1]. Package can be found at [2]. As long as Ada builds on 
the trunk, the ada test results will be included in my emails to gcc-testresults.


  Matthias

[1] http://gcc.gnu.org/ml/gcc-testresults/2009-08/msg02343.html
[2] 
http://people.canonical.com/~doko/tmp/gcc-snapshot_20090821-1ubuntu1_armel.deb


Re: Build with graphite (cloog, ppl) as installed on Debian testing (20090823) fails with -Wc++-compat error.

2009-08-24 Thread Matthias Klose

On 24.08.2009 15:57, Toon Moene wrote:

Tobias Grosser wrote:

The problem was in the cloog-ppl headers and was fixed in CLooG-ppl

0.15.4 (I think).

We should add a check for ClooG revision to make configure fail on
outdated cloog 0.15 revisions.


I think that's the best option. I was waiting for Debian to include a
current cloog/ppl package, so that I wouldn't have to build it myself (I
use mpc, gmp and mpfr from the standard Debian testing install).


Updated; it should migrate to testing within ten days.  I didn't realize that it's
not merged/updated upstream.  Btw, the source tarball is called cloog-ppl on the
ftp site, but the shared library is still called libcloog.  Debian did rename the
library to libcloog_ppl as well.


  Matthias


Re: hppa testsuite stalls?

2009-09-10 Thread Matthias Klose

On 09.09.2009 03:07, John David Anglin wrote:





the testsuite on the hppa machine (gcc61 on the compile farm) has

always hanged for me from time to time.  However, lately (at least
since I returned from vacation last Monday) it hangs every time.


This is likely a kernel problem.  There are long standing problems
with testsuite timeouts and occassional hangs on linux.  The frequency
of these is kernel and hardware dependent.


Is this a know problem?  How should I investigate such problems?  It
makes proper testing on that platform rather impossible for me.


It's very difficult as there's little relationship between cause
and symptoms.  If you come across a reproducible testcase, please
report it to the parisc-linux list.


If this is a Debian parisc-linux system, you might want to install expect-tcl8.3
(which removes the expect package) to avoid the timeouts.  At least that works
around the timeouts on the Debian buildds.


  Matthias


--enable-plugin option overloaded

2009-10-18 Thread Matthias Klose
--enable-plugin is used by classpath (part of libjava) and now by GCC itself.
Disabling the build of the gcjwebplugin now disables plugin support in GCC as
well.  Please could the option for enabling GCC plugin support be renamed to
something like --enable-plugins, --enable-gcc-plugin or --enable-gcc-plugins?  The
only reason for not renaming the existing libjava option is that it was there
first, and that it is part of an imported tree.


  Matthias


Re: --enable-plugin option overloaded

2009-10-22 Thread Matthias Klose

On 19.10.2009 19:42, Andrew Haley wrote:

Ian Lance Taylor wrote:

Andrew Haley  writes:


Matthias Klose wrote:

--enable-plugin is used by classpath (part of libjava) and now by GCC
itself. disabling the build of the gcjwebplugin now disables plugin
support in GCC as well. Please could the option for enabling GCC plugin
support be renamed to something like --enable-plugins,
--enable-gcc-plugin, --enable-gcc-plugins ? The only reason for not
renaming the existing libjava option is that it was there first, and
that it is part of an imported tree.

That doesn't seem like a good enough reason to me.  We should rename
the libjava option --enable-web-plugin or --enable-browser-plugin .


We could rename in the top leve configure/Makefile if we don't want to
touch the classpath sources.


That sounds like a nice solution.


It's not necessary to do this at the top level; changing libjava is ok.  Tested
with --enable-plugin --disable-browser-plugin, and with --disable-plugin
--enable-browser-plugin.


Ok for the trunk?

  Matthias
gcc/

2009-10-22  Matthias Klose  

* doc/install.texi: Document --enable-browser-plugin.

libjava/

2009-10-22  Matthias Klose  

* configure.ac: Rename --enable-plugin to --enable-browser-plugin,
pass --{en,dis}able-plugin to the classpath configure.
* configure: Regenerate.


Index: gcc/doc/install.texi
===
--- gcc/doc/install.texi(revision 153445)
+++ gcc/doc/install.texi(working copy)
@@ -1882,6 +1884,9 @@
 @item --enable-aot-compile-rpm
 Adds aot-compile-rpm to the list of installed scripts.
 
+@item --enable-browser-plugin
+Build the gcjwebplugin web browser plugin.
+
 @table @code
 @item ansi
 Use the single-byte @code{char} and the Win32 A functions natively,

Index: libjava/configure.ac
===
--- libjava/configure.ac(revision 153445)
+++ libjava/configure.ac(working copy)
@@ -55,15 +55,15 @@
 [version_specific_libs=no]
 )
 
-AC_ARG_ENABLE(plugin,
-  AS_HELP_STRING([--enable-plugin],
+AC_ARG_ENABLE(browser-plugin,
+  AS_HELP_STRING([--enable-browser-plugin],
  [build gcjwebplugin web browser plugin]),
 [case "$enableval" in
-  yes) plugin_enabled=yes ;;
-  no)  plugin_enabled=no ;;
-  *)   AC_MSG_ERROR([Unknown argument to enable/disable plugin]);;
+  yes) browser_plugin_enabled=yes ;;
+  no)  browser_plugin_enabled=no ;;
+  *)   AC_MSG_ERROR([Unknown argument to enable/disable browser plugin]);;
  esac],
-[plugin_enabled=no]
+[browser_plugin_enabled=no]
 )
 
 AC_ARG_ENABLE(gconf-peer,
@@ -491,8 +491,10 @@
 dnl FIXME?
 ac_configure_args="$ac_configure_args --disable-examples"
 ac_configure_args="$ac_configure_args --with-glibj=build"
-if test "$plugin_enabled" != yes; then
+if test "$browser_plugin_enabled" != yes; then
   ac_configure_args="$ac_configure_args --disable-plugin"
+else
+  ac_configure_args="$ac_configure_args --enable-plugin"
 fi
 if test "$gconf_enabled" != yes; then
   ac_configure_args="$ac_configure_args --disable-gconf-peer"


Re: PATCH: Support --enable-gold=both --with-linker=[bfd|gold]

2010-01-05 Thread Matthias Klose

On 05.01.2010 23:29, Ian Lance Taylor wrote:

"H.J. Lu"  writes:


On Tue, Jan 5, 2010 at 1:35 PM, Ian Lance Taylor  wrote:

Roland McGrath  writes:


I'm still not entirely convinced that this is the way to go.  It seems
to me that ideally one wants to be able to select the linker at
runtime.  I don't see how this patch supports that.  What am I
missing?


It covers the first step by letting you run "ld.bfd" or "ld.gold" to
choose.  Having the two binaries installed by those names is a good start
and seems likely to be part of how any fancier plan would work, so why not
start there?


Mainly because an alternative is to install them in subdirectories
with the name ld.  Then gcc can run them directly using a -B option.
I don't know which approach is best.



Plugin only works with gold. So I configured my gcc with

-with-plugin-ld=ld.gold

If both linkers have the same name, it will be harder to
use ld by default and use gold only for plugin.


The issue can be addressed with symlinks.

Of course, if we have a way to tell gcc the linker to use, by name, at
runtime, that will also work.


Symlinks are only a solution for a globally configured default.  When building a
package which requires a specific linker, you'll have to work with explicit
build-depends/build-conflicts, which need package installation/removal for
building a single package.  This might be feasible for a machine used by a single
developer, but not for a machine where you don't have root access and still want
to be able to use both ld versions.  For this kind of setup an option interpreted
by the gcc driver like --ld= would be useful.  Even managing this symlink with
alternatives or diversions gives you the flexibility.


  Matthias


Re: PATCH: Support --enable-gold=both --with-linker=[bfd|gold]

2010-01-05 Thread Matthias Klose

On 05.01.2010 23:59, Roland McGrath wrote:
>> I'm still not entirely convinced that this is the way to go.  It seems
>> to me that ideally one wants to be able to select the linker at
>> runtime.  I don't see how this patch supports that.  What am I
>> missing?
>
> It covers the first step by letting you run "ld.bfd" or "ld.gold" to
> choose.  Having the two binaries installed by those names is a good start
> and seems likely to be part of how any fancier plan would work, so why not
> start there?

agreed on this.


Mainly because an alternative is to install them in subdirectories
with the name ld.  Then gcc can run them directly using a -B option.
I don't know which approach is best.


I think it keeps things simplest for humans to understand if the actual
binaries are available as ld.bfd and ld.gold.  If you then want some
obscure directory names containing an "ld" for gcc's use, then make those
symlinks.  Personally, I think -Wl,--gold (via $(bindir)/ld being a wrapper
script) is nicer than -B/usr/libexec/binutils/gold/ or whatnot.


Why not make this more explicit by adding an option --ld which is directly
understood by the gcc driver?


  Matthias


rebuild test of Debian packages with GCC trunk 20100107

2010-01-11 Thread Matthias Klose
A rebuild test of the current Debian unstable distribution on x86_64-linux-gnu 
was done, one rebuild test with the current gcc-4.4 from the branch, and another 
one with GCC trunk 20100107. The latter did show about 200 additional build 
failures, which are listed in [1] (minus some already known failures due to 
misconfiguration for the rebuild with gcc-4.5). The GCC test packages used for 
the test rebuild are available for i386 and amd64 at [2].


Lucas did do the test rebuild and a quick analysis of the results, repeated 
here:

 - some symbols problems (sqlxx, strigi)
 - tests failure (incorrect code generation?)
 - many new gcc errors (compiler becoming stricter, as usual?)
 - some ICEs
 - several builds where g++ hung (e.g. fwbuilder, but there are others).
   the timeout for the builds was set to 60min.

I'll go through the list of build failures this week, trying to file appropriate 
bug reports for GCC or the distribution.


  Matthias

[1] https://wiki.ubuntu.com/GCC/Rebuild20100107
[2] deb http://people.debian.org/~doko/archive/ experimental/


Re: PR 42485 Delete both -b and -V options

2010-02-23 Thread Matthias Klose

On 23.02.2010 12:10, Manuel López-Ibáñez wrote:

On 23 February 2010 12:00, Richard Guenther  wrote:

On Tue, 23 Feb 2010, Manuel López-Ibáñez wrote:


Bootstrapped and regression tested (it seems nothing was testing these
options) on x86_64-unknown-linux-gnu.

OK?


This is ok if nobody has serious objections and at the same time
is willing to either fix these options for 4.5 or at least show
that they do work when special care is taken at configure and
install time and is willing to document those restrictions.

Please wait 48h to give people a chance and wait until an
eventual discussion dies down.


Actually, I haven't tested that -b does not work. I don't have any
cross-compiler or want to build one. Could any one test that?

I see that several Debian packages use (or have used) -b  but perhaps
those are fixed already.

http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=318750


The given example uses this upstream, not just in the package.  I didn't see many
uses of this.  However, distributions tend to patch around it.

As a plus, removing -V could allow for keeping paths unchanged in an installation
across subminor releases.


  Matthias


Re: packaging GCC plugins using gengtype (e.g. MELT)?

2010-03-14 Thread Matthias Klose

On 14.03.2010 13:15, Basile Starynkevitch wrote:

Basile Starynkevitch wrote in
http://lists.debian.org/debian-gcc/2010/03/msg00047.html


Now, one of the issues about MELT & Debian packaging is the fact that
melt-runtime.c (the source of melt.so plugin) uses GTY
http://gcc.gnu.org/onlinedocs/gccint/Type-Information.html#Type-Information
& register GGC roots thru PLUGIN_REGISTER_GGC_ROOTS ... Hence, it
needs gengtype (from GCC 4.5 build tree) to generate gt-melt-runtime.h
[#include-ed from melt-runtime.c] so the entire GCC 4.5 source & build
trees are needed to build melt.so (or any other gengtyp-ing GCC plugin).


There was another request to add the gengtype binary to the package, required by
the dragonegg plugin; details at:

http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=562882#13

Is the binary all that would need to be added?

  Matthias


[rfc] common location for plugins

2010-03-23 Thread Matthias Klose
For packages of GCC I would like to see a common location where plugins can be
installed; currently a path to the plugin has to be given on the command line,
which is likely to be different for different installations.  What about
-fplugin= (without the .so) meaning to search for the plugin in a default
location like /plugins? -fplugin=.so could also be used, but maybe would be
ambiguous, looking in the current dir or the plugin dir.


  Matthias


Re: GCC 4.3.3 Release Candidate available from gcc.gnu.org

2009-01-22 Thread Matthias Klose
Richard Guenther schrieb:
> A release candidate for GCC 4.3.3 is available from
> 
> ftp://gcc.gnu.org/pub/gcc/snapshots/4.3.3-RC-20090117/
> 
> and shortly its mirrors.  It has been generated from SVN revision 143460.

Lucas did do two test rebuilds of the current Debian lenny/testing archive for
i486 using the Debian gcc-4.3 packages, one based on gcc-4_3-branch from
20080905 and one based on gcc-4_3-branch from 20090117 including the proposed
patch for PR38905.

progressions (in terms of packages):

 - ed, testsuite runs successfully
 - freefem3d: fixed build failure
 - gambit: fixed build failure
 - ilmbase: fixed build failure
 - gmsh: fixed link failure
 - speech-tools: fixed build failure
 - texmacs: fixed build failure
 - xapian-core: testsuite runs successfully

regression:

 - debtags, filed PR38902

the builds for btanks, dcmtk, gnat-gps, gnome-games, insighttoolkit,
latex-cjk-chinese-arphic, linux-2.6, lyx, openjdk-6, openoffice.org were not
finished in time.

  Matthias


Re: GCC for mipsel-unknown-linux-gnu state on 4.3 and 4.4?

2009-02-01 Thread Matthias Klose
Laurent GUERBY schrieb:
> Before my testresult I could find only trunk 140564
> in september 2008 with a patch by David Daney then no more testresults:
> 
> http://gcc.gnu.org/ml/gcc-testresults/2008-09/msg02009.html
> 
> I could'nt find any 4.3 result on mipsel posted on gcc-testresults (?).
> 
> GCC 4.2.4 doesn't seem to have those pch FAIL:
> http://gcc.gnu.org/ml/gcc-testresults/2009-01/msg01914.html
> 
> Same for 4.1.3:
> http://gcc.gnu.org/ml/gcc-testresults/2009-01/msg01913.html
> 
> And 3.4.6: 
> http://gcc.gnu.org/ml/gcc-testresults/2008-10/msg02151.html

Sorry, I didn't notice this.  The mips*-linux build logs were not uploaded
because of their size (all 64bit tests failing).  You can find them now at
http://people.debian.org/~doko/tmp/gcc-mips/
Arthur Loiret wanted to have a look at why these fail (the Debian build is
patched to build a triarch compiler).

  Matthias


exception propagation support not enabled in libstdc++ 4.4 on {armeabi,hppa,sparc}-linux

2009-05-02 Thread Matthias Klose
Noticed that some symbols introduced for the exception propagation support are
missing in libstdc++.so.6 on arm-linux-gnueabi, hppa-linux-gnu and
sparc-linux-gnu (no results for mips*-linux yet). The libstdc++ configure check
GLIBCXX_ENABLE_ATOMIC_BUILTINS fails because three of the five __sync_*
functions are still seen in the asm code (not seen: __sync_lock_release and
__sync_synchronize).

libgcc.a has all of the __sync_* functions defined, and the configure (link)
tests in libgomp and libgfortran do succeed. Unsure what I'm doing wrong with
the libstdc++ configury. Is this seen on other linux builds for these targets as
well?
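
For illustration, a probe in the spirit of that configure check (a hand-written
sketch, not the actual GLIBCXX_ENABLE_ATOMIC_BUILTINS macro, and the exact set of
builtins it tests may differ): compile with -S and grep the assembly for calls to
__sync_*; if the builtins are expanded inline the check passes, if library calls
remain it falls back.

  /* sync_probe.c -- exercises five representative __sync_* builtins. */
  int probe(int *p)
  {
    __sync_synchronize();                  /* full memory barrier        */
    __sync_fetch_and_add(p, 1);            /* atomic increment           */
    __sync_val_compare_and_swap(p, 1, 2);  /* atomic compare-and-swap    */
    __sync_lock_test_and_set(p, 3);        /* atomic exchange (acquire)  */
    __sync_lock_release(p);                /* release store of zero      */
    return *p;
  }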

  Matthias


Re: exception propagation support not enabled in libstdc++ 4.4 on {armeabi,hppa,sparc}-linux

2009-05-05 Thread Matthias Klose
Paolo Carlini schrieb:
> Paolo Carlini wrote:
>> Ok, thanks. Then, I think I'll implement this, for now. Seems in any
>> case conservative to have a link type test identical to the one used in
>> libgomp and libgfortran and a fall back to the .s file (as currently used).
>>   
> I committed the below to mainline. Assuming no issues are noticed with
> it, I mean to apply it to 4_4-branch too in a few days.

with this patch applied to the 4.4 branch, the tests now succeed as expected on
the 4_4-branch on {armeabi,hppa,sparc}-linux.

thanks, Matthias


Re: exception propagation support not enabled in libstdc++ 4.4 on {armeabi,hppa,sparc}-linux

2009-05-06 Thread Matthias Klose
Paolo Carlini schrieb:
> Matthias Klose wrote:
>> Paolo Carlini schrieb:
>>   
>>> Paolo Carlini wrote:
>>> 
>>>> Ok, thanks. Then, I think I'll implement this, for now. Seems in any
>>>> case conservative to have a link type test identical to the one used in
>>>> libgomp and libgfortran and a fall back to the .s file (as currently used).
>>>>   
>>>>   
>>> I committed the below to mainline. Assuming no issues are noticed with
>>> it, I mean to apply it to 4_4-branch too in a few days.
>>> 
>> with this patch applied to the 4.4 branch, the tests now succeed as expected 
>> on
>> the 4_4-branch on {armeabi,hppa,sparc}-linux.
>>   
> Good. I have now backported the patch to 4_4-branch too. Please double
> check that the regression tests are also fine, thanks in advance.

Jakub pointed out on irc that sparc-linux (32bit, v8) doesn't have the atomic
support functions. Building a 32bit v9 compiler as part of a biarch build is
currently not supported (64bit and v9 are tightly coupled).

No regressions on hppa-linux.

On arm-linux-gnueabi there are regressions of the form

/usr/bin/ld: ./atomic-1.exe: hidden symbol `__sync_val_compare_and_swap_4' in
/home/doko/gcc/4.4/gcc-4.4-4.4.0/build/gcc/libgcc.a(linux-atomic.o) is
referenced by DSO
/usr/bin/ld: final link failed: Nonrepresentable section on output

in the libgomp, libstdc++ and g++ testsuites. Linking the shared libstdc++ with
both -lgcc_s and -lgcc does fix these (testsuites currently running), however
I'm not sure how to do this properly, as libtool removes any `-lgcc' argument.

  Matthias


Re: exception propagation support not enabled in libstdc++ 4.4 on {armeabi,hppa,sparc}-linux

2009-05-07 Thread Matthias Klose
Ralf Wildenhues schrieb:
> * Matthias Klose wrote on Wed, May 06, 2009 at 09:44:07AM CEST:
>> On arm-linux-gnueabi there are regressions of the form
>>
>> /usr/bin/ld: ./atomic-1.exe: hidden symbol `__sync_val_compare_and_swap_4' in
>> /home/doko/gcc/4.4/gcc-4.4-4.4.0/build/gcc/libgcc.a(linux-atomic.o) is
>> referenced by DSO
>> /usr/bin/ld: final link failed: Nonrepresentable section on output
>>
>> in the libgomp, libstdc++ and g++ testsuites. Linking the shared libstdc++ 
>> with
>> both -lgcc_s and -lgcc does fix these (testsuites currently running), however
>> I'm not sure how to do this properly, as libtool removes any `-lgcc' 
>> argument.
> 
> In order to be able to anayze this (and either confirm or refute a
> possible libtool bug), I would like to see the './libtool --mode=link'
> command that fails, plus all of its output with --debug added as first
> argument; further the output of
>   ./libtool --tag=TAG --config
> 
> with TAG replaced by the tag used for the link.

Log attached.  The link command doesn't fail, but it differs in which files are
included from libgcc.a: -lm -lgcc_s -lgcc gets reordered to -lm -lgcc -lc
-lgcc_s, which resolves the __aeabi_* symbols as well.  Just appending -lgcc
to libtool's postdeps only resolves the __sync_* symbols (only found in
libgcc.a).

  Matthias


log.gz
Description: GNU Zip compressed data


Re: GFDL/GPL issues

2010-05-26 Thread Matthias Klose

On 26.05.2010 02:44, Mark Mitchell wrote:

In a biweekly call with the other GCC Release Managers, I was asked
today on the status of the SC/FSF discussions re. GFDL/GPL issues.  In
particular, the question of whether or not we can use "literate
programming" techniques to extract documentation from code and take bits
of what is currently in GCC manuals and put that into comments in code
and so forth and so on.


there is another issue with the manual pages.  Debian considers GFDL licensed 
files with invariant sections and/or cover texts as non-free.  You may agree or 
disagree with this, but the outcome is that Debian has to ship the gcc 
documentation and the manual pages in its non-free section.  The issue was 
raised with the FSF some years ago, but issues with the GFDL seem to be low 
priority within the FSF (Mako may correct me).  It would be nice to know if the 
files used to generate the manual pages (gcc/doc/invoke.texi, 
gcc/fortran/gfortran.texi, gcc/java/gcj.texi) could be dual-licensed as well, so 
that it is possible to provide basic documentation in Debian as well.


  Matthias


Re: GFDL/GPL Issue

2010-06-02 Thread Matthias Klose

On 02.06.2010 01:31, Mark Mitchell wrote:

I will state explicitly up front a few topics I am not raising, because
I do not think they are either necessary, or likely to be productive:

* Whether or not the GFDL is a "free" license, or whether it's a good
license, or anything else about its merits or lack thereof


Maybe, but GFDL is not GFDL.  Please could you clarify if such documentation is 
licensed under a GFDL with frontcover/backcover texts and/or invariant sections? 
 Would any of these clauses even make sense for this kind of documentation?


  Matthias


Re: Source for current ECJ not available on sourceware.org

2010-06-28 Thread Matthias Klose

On 28.06.2010 23:25, David Daney wrote:

On 06/28/2010 01:11 PM, Brett Neumeier wrote:

The GCC build process uses ecj, which is obtained from sourceware.org
using contrib/download_ecj. The current latest version of ecj, used
for the GCC build, is ecj 4.5.

The previous version of ecj was 4.3, the source for which can be found
at the same location on sourceware.org. But the FTP site doesn't
contain the source for ecj 4.5.

Are there any plans to publish the source code along with the binary
jar file? In the meantime, where can I find the source code for the
current ecj, as needed by gcc? Is there a source repository I can get
to?



The source is available somewhere, I have seen it.

j...@gcc.gnu.org is the best place to ask java questions. Let's see what
they say over there.


sourceware.org:/pub/java holds both ecj-4.3 and ecj-4.5



Re: installing with minimal sudo

2010-06-30 Thread Matthias Klose

On 30.06.2010 23:18, Basile Starynkevitch wrote:

Practical advices welcome.

Cheers.

PS. On Debian, the make-kpkg command has a --rootcmd=sudo option. I am
trying to imagine the equivalent for GCC.  Of course on my machine sudo
don't ask any password.


Unsure if I understand this correctly, but you could install with DESTDIR set to
a temporary installation location and then copy the files later to the final
location.


  Matthias


Re: forcing the linker to be a particular one (i.e. gold vs bfd)

2010-10-10 Thread Matthias Klose

On 10.10.2010 22:02, Kalle Olavi Niemitalo wrote:

Basile Starynkevitch  writes:


Of course, one can always force ld to be a particular linker (i.e. the
BFD one on a system where the default is GOLD, or vice versa) with ugly
$PATH and symlink tricks. But that is ugly.


You mentioned you use Debian; their gcc-4.4 4.4.3-4 and later
have a "gold-and-ld" patch to support gcc -fuse-ld=bfd and
-fuse-ld=gold.  This patch is not in the FSF's GCC 4.4.5,
4.5.1, nor trunk.


Sorry to say so, but that's a bad story.  This is a patch which is in current
binutils trunk and was submitted for review for gcc trunk; it finally got reviewed
by Mark Mitchell after some months, but there was no further feedback from the
original submitter.  With binutils 2.21 getting close to release, we'll have a
mismatch between gcc and binutils.  Any idea how to make some progress?


  Matthias




Re: PATCH RFA: Do not build java by default

2010-11-01 Thread Matthias Klose

On 31.10.2010 20:09, Ian Lance Taylor wrote:

Currently we build the Java frontend and libjava by default.  At the GCC
Summit we raised the question of whether should turn this off, thus only
building it when java is explicitly selected at configure time with
--enable-languages.  Among the people at the summit, there was general
support for this, and nobody was opposed to it.

Here is a patch which implements that.  I'm sending this to the mailing
lists gcc@ and java@, as well as the relevant -patches@ lists, because
it does deserve some broader discussion.

This is not a proposal to remove the Java frontend nor is it leading up
to that.  It is a proposal to not build the frontend by default, putting
Java in the same category as Ada and Objective C++.  The main argument
in favor of this proposal is twofold: 1) building libjava is a large
component of gcc bootstrap time, and thus a large component in the
amount of time it takes to test changes; 2) it is in practice very
unusual for middle-end or back-end changes to cause problems with Java
without also causing problems for C/C++, thus building and testing
libjava does not in practice help ensure the stability of the compiler.
A supporting argument is since Sun has released their Java tools under
the GPL, community interest seems to have shifted toward the Sun tools;
gcc's Java frontend is in maintenance mode, with little new development
currently planned.


please note that gcj is still required for a bootstrap of openjdk on platforms 
which don't yet have a working openjdk. At least for this purpose it is still 
maintained.



This patch should not of course change whether or not distros choose to
package the Java compiler; undoubtedly they would continue to do so,
just as they package the Ada compiler today.

Comments?  Approvals?


if build speed is the only issue, why not

 - disable the static libgcj build, if not explicitly enabled?

 - disable the biarch build for libgcj? Most distributions don't
   have all of the required libraries available for biarch builds.

Matthias


Re: Patch policy for branches

2006-02-19 Thread Matthias Klose
Mark Mitchell writes:
> and the 3.4.x branch is official dead at this point.

No, see http://gcc.gnu.org/ml/gcc/2005-12/msg00189.html

  Matthias


[FYI] Building the whole Debian archive with GCC 4.1: a summary

2006-03-26 Thread Matthias Klose
Summary:

GCC 4.1 itself appears to be very stable, both on MIPS and AMD64.
There are, however, a large number of packages using code (especially
C++) which GCC 4.1 treats as errors.  Fortunately, most of them are
trivial to fix.  By compiling about 6200 packages, over 500 new
bugs have been discovered and submitted, 280 of which are specific
to the increased strictness of GCC 4.1.  Patches for 2/3 of those
GCC 4.1 specific bugs have been submitted.

Complete email at http://lists.debian.org/debian-gcc/2006/03/msg00405.html


Re: hppa libiberty configure failure: Link tests are not allowed after GCC_NO_EXECUTABLES.

2006-06-01 Thread Matthias Klose
Martin Michlmayr writes:
> I get the following failure while building gcc 4.2 on hppa:
> 
> checking for pid_t... no
> checking for library containing strerror... configure: error: Link tests are 
> not allowed after GCC_NO_EXECUTABLES.
> make[3]: *** [configure-target-libiberty] Error 1
> make[3]: Leaving directory
> `/build/buildd/gcc-snapshot-20060530/build-hppa64'
> make[2]: *** [all] Error 2

you try to build some target library for hppa64-linux, which won't
work. most likely some --disable-gomp is missing when configuring the
cross compiler.

  Matthias


Re: sorry, unimplemented: 64-bit mode not compiled in - ?!

2006-08-11 Thread Matthias Klose
Andrew Pinski writes:
> 
> On Jul 28, 2006, at 4:47 AM, Richard Guenther wrote:
> 
> >
> > The memory requirement for PR12245 will nearly double.
> 
> Saying it will double is not prove, please provide the memory usage
> dumps.  If it does double then you should not be using x86 to optimize
> the memory usage and instead using powerpc-linux-gnu to do that work.

appended are the timing results for i486, powerpc and sparc, no memory
usage reports yet.

Daniel Jacobowitz writes:
> On Fri, Jul 28, 2006 at 11:44:12AM +0200, Jan Hubicka wrote:
> > Interesting, the major reason for disabling -m64 by default for 32bit
> > compilers was the fact that it enforces HOST_WIDE_INT to be 64bit
> > slowing down the whole compiler considerably.  Are Debian's folks happy
> > to wait longer for compilation or has this overhead changed?
> 
> They haven't done any measuring I know of, but we needed 64-bit
> compilers for a variety of reasons and this was much less disruptive
> than packaging a second copy of GCC.

"They"?, thought you still join the party ;)  the s390, sparc,
powerpc architectures are all 32bit userland, but need at least a
64bit compiler building the kernel. Another reason is to have some
applications to run in 64bit userland.


The timing results are user time (using the time command). sparc
biarch currently doesn't build on the trunk, so no results yet.


2006-07-29  2006-07-21
4.1bi   4.1 4.2bi   4.2


PR8361 -O2  i48619,69   18,8295,58% 44,24   45,00   101,72%
PR8361 -O0  i486 5,805,1188,10%  7,738,06   104,27%

 
 2006-06-27  
PR8361 -O2  powerpc 32,48   32,84   101,11% 76,10   75,7899,58%
PR8361 -O0  powerpc  9,839,8099,69% 13,69   13,4898,47%

PR8361 -O2  sparc   46,20   45,4998,46%   n/a  129,38   n/a
PR8361 -O0  sparc   11,33   11,2299,03%   n/a   18,10   n/a


  Matthias


Re: 4.1 status?

2006-09-10 Thread Matthias Klose
Richard Guenther writes:
> On 9/9/06, Mark Mitchell <[EMAIL PROTECTED]> wrote:
> > Kenny Simpson wrote:
> > > What is the status of the 4.1 branch?  Any word on 4.1.2?
> >
> > My current plan is to do a 4.1.2 along with 4.2.0.  My concern has been
> > that with 4.2.0 moving slowly, trying to organize another release might
> > just distract the developer community.
> >
> > However, I realize that's a pretty wide gap between 4.1.1 and 4.1.2.  We
> > could also do 4.1.2 sooner, and then do 4.1.3 along with 4.2.0.  (I want
> > to do a 4.1.x release along with 4.2.0 so as to avoid the problems we
> > have in past with quality "going backwards" between releases from
> > different branches.)
> >
> > I'm sure that, a priori, people would prefer a 4.1.2 release, but it
> > does take effort.  On the other hand, many 4.1 bugs are also in 4.2.
> >
> > Any thoughts?
> 
> With my vendor hat on I'd prefer a 4.1.2 release sooner than I expect
> 4.2.0 - which would make the time we branch for 4.2.0 a good candidate.
> >From a pure GCC development side I do not care very much about
> 4.1.2 (or even 4.0.4 which I don't expect at all).
> 
> I guess a release of 4.1.2 together with branching for 4.2.0 might encourage
> to backport regression fixes from 4.2 to 4.1, as with stage1 starting, 4.1.2
> will get even less attention than 4.2.0.

+1 for a 4.1.2 release when/after the 4.2.0 branch is created.  I'll
propose this release to the Debian release managers to be included
into the upcoming Debian 4.0 release.

  Matthias


-flto and -Werror

2021-05-04 Thread Matthias Klose
Using -flto exposes some new warnings in code, as seen in the both build logs
below, for upstream elfutils and systemd.  I have seen others.  These upstreams
enable -Werror by default, but probably don't see these warnings turning to
errors themself, because the LTO flags are usually injected by the packaging 
tools.

e.g.
https://launchpadlibrarian.net/536740411/buildlog_ubuntu-hirsute-ppc64el.systemd_248.666.gd4d7127d94+21.04.20210503043716_BUILDING.txt.gz
e.g.
https://launchpadlibrarian.net/536683989/buildlog_ubuntu-hirsute-amd64.elfutils_0.183.43.g92980edc+21.04.20210502190301_BUILDING.txt.gz

showing:

../src/shared/efi-loader.c: In function ‘efi_get_reboot_to_firmware’:
../src/shared/efi-loader.c:168:16: error: ‘b’ may be used uninitialized in this
function [-Werror=maybe-uninitialized]

i386_lex.c: In function ‘i386_restart’:
i386_lex.c:1816:25: error: potential null pointer dereference
[-Werror=null-dereference]
 1816 | b->yy_bs_column = 0;

A coworker worked out by review that these warnings are false positives.  Now
the first option already has the *maybe* in it's name, the second option gives
this hint in the message (*potentially*).  Now getting the complaint that
-Werror isn't usable with -flto anymore.

Would it make sense to mark warnings with a high potential of false positives,
which are not turned into errors with -Werror? And only turn these into errors
with a new option, e.g. -Wall-errors?

Matthias


ARM32 configury changes, with no FPU as a default

2021-09-17 Thread Matthias Klose
Starting with GCC 8, the configury allows to encode extra features into the
architecture string. Debian and Ubuntu's armhf (hard float) architecture is
configured with

  --with-arch=armv7-a --with-fpu=vfpv3-d16

and now should be configured with

  --with-arch=armv7-a+fp

The --with-fpu configure option is deprecated.  The problem with this approach
is that there is no default for the fpu setting, while old compilers silently
pick up the -mfpu from the configured compiler.

This breaks software which explicitly configures things like -march=armv7-a, or
where the architecture string is embedded in the source as an attribute.  So
going from one place in the compiler about configuring the ABI for a distro
arch, this config now moves to some dozen places in different packages.  Not the
thing I would expect.

Richard tells me that the --with-fpu option goes away in some future GCC
version, just asking here how others solved this issue, or are we the only ones
still building for ARM 32bit?

Thanks, Matthias


Re: [RISCV] RISC-V GNU Toolchain Biweekly Sync-up call (Jan 27, 2022)

2022-01-27 Thread Matthias Klose
On 1/26/22 14:04, jia...@iscas.ac.cn wrote:
> Hi all,
> 
> There is an agenda for tomorrow's meeting. If you have topics to
> discuss or share, please let me know and I can add them to the agenda.
> 
> Agenda:
> 
> 
> 
> 
> 
> - Bump GCC default ISA spec and got bug report[1] for that.

Tried to join the meeting, but it ended early apparently.

Using in Debian and Ubuntu binutils 2.38, I see then with GCC 11.2 warnings for
every link:

/usr/bin/ld: warning: /usr/lib/gcc/riscv64-linux-gnu/10/crtn.o: mis-matched ISA
version 2.0 for 'i' extension, the output version is 2.1
/usr/bin/ld: warning: /usr/lib/gcc/riscv64-linux-gnu/10/crtn.o: mis-matched ISA
version 2.0 for 'a' extension, the output version is 2.1
/usr/bin/ld: warning: /usr/lib/gcc/riscv64-linux-gnu/10/crtn.o: mis-matched ISA
version 2.0 for 'f' extension, the output version is 2.2
/usr/bin/ld: warning: /usr/lib/gcc/riscv64-linux-gnu/10/crtn.o: mis-matched ISA
version 2.0 for 'd' extension, the output version is 2.2

Are there any plans to backport the support for ISA 2.1/2.2 to GCC 11? Or do we
need to configure binutils to use 2.0 until the compiler is changed to GCC 12?

There's also no documentation about that change, at least in binutils. Should
that be mentioned in the release notes?

Thanks, Matthias


Re: [RISCV] RISC-V GNU Toolchain Biweekly Sync-up call (Jan 27, 2022)

2022-01-28 Thread Matthias Klose
On 1/28/22 11:06, David Abdurachmanov wrote:
> On Thu, Jan 27, 2022 at 6:21 PM Matthias Klose  wrote:
> 
>> On 1/26/22 14:04, jia...@iscas.ac.cn wrote:
>>> Hi all,
>>>
>>> There is an agenda for tomorrow's meeting. If you have topics to
>>> discuss or share, please let me know and I can add them to the agenda.
>>>
>>> Agenda:
>>>
>>>
>>>
>>>
>>>
>>> - Bump GCC default ISA spec and got bug report[1] for that.
>>
>> Tried to join the meeting, but it ended early apparently.
>>
>> Using in Debian and Ubuntu binutils 2.38, I see then with GCC 11.2
>> warnings for
>> every link:
>>
>> /usr/bin/ld: warning: /usr/lib/gcc/riscv64-linux-gnu/10/crtn.o:
>> mis-matched ISA
>> version 2.0 for 'i' extension, the output version is 2.1
>> /usr/bin/ld: warning: /usr/lib/gcc/riscv64-linux-gnu/10/crtn.o:
>> mis-matched ISA
>> version 2.0 for 'a' extension, the output version is 2.1
>> /usr/bin/ld: warning: /usr/lib/gcc/riscv64-linux-gnu/10/crtn.o:
>> mis-matched ISA
>> version 2.0 for 'f' extension, the output version is 2.2
>> /usr/bin/ld: warning: /usr/lib/gcc/riscv64-linux-gnu/10/crtn.o:
>> mis-matched ISA
>> version 2.0 for 'd' extension, the output version is 2.2
>>
>> Are there any plans to backport the support for ISA 2.1/2.2 to GCC 11? Or
>> do we
>> need to configure binutils to use 2.0 until the compiler is changed to GCC
>> 12?
>>
> 
> There are a few packages that will be broken (because they set -march
> assuming the old standard). Things like kernel (patch proposed), opensbi
> (patch proposed), u-boot, grub and similar. Kudos to Aurelien Jarno for
> sending these patches already.

seems to be a bit more ...
we are seeing a lot of build failures building parts of the KDE stack.
https://launchpadlibrarian.net/582482377/buildlog_ubuntu-jammy-riscv64.ktexteditor_5.90.0-0ubuntu2_BUILDING.txt.gz

while you see only warnings, the linker errors out.

Also GCC 11 fails to build with binutils 2.38:

libtool: link: `sstream-inst.lo' is not a valid libtool object
make[8]: *** [Makefile:633: libc++11convenience.la] Error 1


Is there a plan, or procedure to follow doing this ISA change for a set of
packages, e.g. building packages in a particular order?

> The warnings can be ignored here IIRC. The patch was proposed back in
> December:
> [PATCH 1/2] RISC-V: Don't report mismatch warnings when versions are larger
> than 1.0.
> https://sourceware.org/pipermail/binutils/2021-December/119060.html
> 
> But it wasn't committed yet. The patch was part of the series to change
> default ISA to 20191213.
> 
> IIUC all combinations:
> - GCC 12 + binutils 2.38
> - GCC 11 + binutils 2.37
> - GCC 12 + binutils 2.37
> are valid, but you might want to use --with-isa-spec=2.2 for now until some
> of the packages are patched.

so this leaves "GCC 11 + binutils 2.38" as invalid.  binutils 2.38 will be
released before GCC 12, that is what people will be using. IMO the binutils 2.38
release should default to the old ISA.

Matthias


Re: Porting the Docs to Sphinx - project status

2022-02-04 Thread Matthias Klose
On 1/31/22 15:06, Martin Liška wrote:
> Hello.
> 
> It's about 5 months since the last project status update:
> https://gcc.gnu.org/pipermail/gcc-patches/2021-August/577108.html
> Now it's pretty clear that it won't be merged before GCC 12.1 gets released.
> 
> So where we are? I contacted documentation maintainers (Gerald, Sandra and
> Joseph) at the
> end of the year in a private email, where I pinged the patches. My take away 
> is
> that both
> Gerald and Joseph are fine with the porting, while Sandra has some concerns.
> Based on her
> feedback, I was able to improve the PDF generated output significantly and I'm
> pleased by the
> provided feedback. That led to the following 2 Sphinx pulls requests that need
> to be merged
> before we can migrate the documentation: [1], [2].
> 
> Since the last time I also made one more round of proofreading and the layout
> was improved
> (mainly for PDF part). Current version of the documentation can be seen here:
> https://splichal.eu/scripts/sphinx/
> 
> I would like to finish the transition once GCC 12.1 gets released in May/June
> this year.
> There are still some minor regressions, but overall the Sphinx-based
> documentation should
> be a significant improvement over what we've got right now.
> 
> Please take this email as urgent call for a feedback!

Please take care about the copyrights.  I only checked the D frontend manual,
and this one suddenly has a copyright with invariant sections, compared to the
current gdc.texi which has a copyright *without* the invariant sections.  Debian
doesn't allow me to ship documentation with invariant sections ...

I didn't look how much you reorganized the sources, but it would nice to split
the files into those documenting command line options (used to generate the man
pages) and other documentation.  This is already done for gcc/doc, but not for
other frontends.  It would allow having manual pages with a copyright requiring
front and back cover texts in the manual pages.

It would also be nice to require the latest sphinx version (and probably some
plugins), so that distros can build the docs with older sphinx versions as well.

Matthias


Re: GCC 4.8.0 Release Candidate available from gcc.gnu.org

2013-03-17 Thread Matthias Klose
Am 16.03.2013 04:57, schrieb Jakub Jelinek:> GCC 4.8.0 Release Candidate
available from gcc.gnu.org
>
> The first release candidate for GCC 4.8.0 is available from
>
>  ftp://gcc.gnu.org/pub/gcc/snapshots/4.8.0-RC-20130316
>
> and shortly its mirrors.  It has been generated from SVN revision 196699.
>
> I have so far bootstrapped and tested the release candidate on
> x86_64-linux and i686-linux.  Please test it and report any issues to
> bugzilla.

looks good on KFreebsd, the Hurd, and Linux on x86_64, i586, x32, ia64, powerpc,
ppc64, s390, s390x, other architectures still building (sparc, mips* still
building however mips did fail to bootstrap in the past). Both ARM hard and soft
float fail to build for me (PR56640).

  Matthias



Re: If you had a month to improve gcc build parallelization, where would you begin?

2013-04-03 Thread Matthias Klose
Am 03.04.2013 17:27, schrieb Simon Baldwin:
> Suppose you had a month in which to reorganise gcc so that it builds
> its 3-stage bootstrap and runtime libraries in some massively parallel
> fashion, without hardware or resource constraints(*).  How might you
> approach this?
> 
> I'm looking for ideas on improving the build time of gcc itself.  So
> far I have identified two problem areas:
> 
> - Structural.  In a 3-stage bootstrap, each stage depends on the
> output of the one before.  No matter what does the actual compilation
> (make, jam, and so on), this constrains the options.
> - Mechanical.  Configure scripts cause bottlenecks in the build
> process.  Even if compilation is offloaded onto something like distcc,
> configures run locally and randomly throughout the complete build,
> rather than (say) all at once upfront.  Source code compilation
> blocks until configure is completed.

- the configuration and build of the runtime libraries (libgcc,
  libgomp, libstdc++) during the bootstrap is mostly serial.

- multilibs are built for stage2 and stage3, but are not needed.
  multilib builds for the libararies are only needed for the final
  build of the target libraries.

  the simple approach to disable the multilib build in the toplevel
  makefile   doesn't work, as the same target apparently used for the
  final build.

- libjava should not be built for multilibs at all unless configured
  otherwise. and maybe the static build should be disabled by default too.

Matthias



plugin.exp testsuite dependencies on prev- paths

2013-05-27 Thread Matthias Klose
I see all the plugin tests fail when trying to build the tests; this looks a bit
like PR41569, and the tests fail with

  make -k -C /gcc check RUNTESTFLAGS="plugin.exp --debug"


In file included from /gcc/testsuite/../../gcc/gcc-plugin.h:28:0,
   from /gcc/testsuite/g++.dg/plugin/attribute_plugin.c:3:
# include 
   ^
compilation terminated.

because the compilation is called like:

Executing on build: /./prev-gcc/xg++ -B/./prev-gcc/ ... \
  -nostdinc++ \
  -I/prev-x86_64-linux-gnu/libstdc++-v3/include

However for a staged build, both prev-gcc and prev-x86_64-linux-gnu don't exist
anymore after a successful build. Re-creating these symlinks for the test run
works around the issue.

I can't see these failures for other submitted test results, so any hint what
I'm doing wrong here?

  Matthias


  1   2   >