Re: Dw2 CIE no longer contains personality routine augmentation?
On Thu, Oct 1, 2009 at 2:47 AM, Dave Korn wrote:
>
> Hi everyone,
>
> I'm using g++.old-deja/g++.brendan/new3.C as a testcase to investigate a
> problem with dllimport at the moment, and noticed something a bit unusual:
>
> Here is the CIE data from new3.C as compiled with gcc-4.3.4:
>
>>        .section        .eh_frame,"w"
>> Lframe1:
>>        .long   LECIE1-LSCIE1
>> LSCIE1:
>>        .long   0x0
>>        .byte   0x1
>>        .def    ___gxx_personality_v0;  .scl    2;      .type   32;     .endef
>>        .ascii "zP\0"
>>        .uleb128 0x1
>>        .sleb128 -4
>>        .byte   0x8
>>        .uleb128 0x5
>>        .byte   0x0
>>        .long   ___gxx_personality_v0
>>        .byte   0xc
>>        .uleb128 0x4
>>        .uleb128 0x4
>>        .byte   0x88
>>        .uleb128 0x1
>>        .align 4
>> LECIE1:
>
> And now with gcc tr...@152230, I see that the generated CIE no longer has
> any augmentation; in particular it doesn't point to the personality routine
> any more:
>
>> LFE21:
>>        .section        .eh_frame,"w"
>> Lframe1:
>>        .long   LECIE1-LSCIE1
>> LSCIE1:
>>        .long   0x0
>>        .byte   0x1
>>        .ascii "\0"
>>        .uleb128 0x1
>>        .sleb128 -4
>>        .byte   0x8
>>        .byte   0xc
>>        .uleb128 0x4
>>        .uleb128 0x4
>>        .byte   0x88
>>        .uleb128 0x1
>>        .align 4
>> LECIE1:
>
> Is this intentional?

Yes.  If it doesn't need one it doesn't get one.

Richard.

> cheers,
>  DaveK
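[Editorial note: the `.uleb128` directives in the CIE dumps above are DWARF's variable-length unsigned integers. A minimal decoder for that encoding can be sketched in C; this is illustrative only, not GCC's or GDB's actual reader, and `read_uleb128` is a made-up name.]

```c
#include <stdint.h>
#include <stddef.h>
#include <assert.h>
#include <string.h>

/* Decode one unsigned LEB128 value, as emitted by the .uleb128
   directive: each byte carries 7 payload bits, least-significant
   group first; the high bit of each byte marks continuation.
   Sketch only, not GCC's own implementation.  */
static uint64_t
read_uleb128 (const uint8_t *p, size_t *consumed)
{
  uint64_t result = 0;
  unsigned int shift = 0;
  size_t i = 0;
  uint8_t byte;

  do
    {
      byte = p[i++];
      result |= (uint64_t) (byte & 0x7f) << shift;
      shift += 7;
    }
  while (byte & 0x80);

  if (consumed)
    *consumed = i;
  return result;
}
```

Small values such as the 0x1 code alignment factor above fit in a single byte; the multi-byte case only kicks in past 127.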
[gnu.org #263454] Re: Headsup: Rogue or hacked account spamming via RT? re: [gnu.org #263454]
Hi Dave.

It was me who, by mistake (I wanted to delete them), marked those tickets as "Opened". Sorry for the inconvenience.

--
Jose E. Marchesi
GNU Project
http://www.gnu.org
Re: Dw2 CIE no longer contains personality routine augmentation?
Richard Guenther wrote:
> On Thu, Oct 1, 2009 at 2:47 AM, Dave Korn wrote:
>> I'm using g++.old-deja/g++.brendan/new3.C as a testcase to investigate a
>> problem with dllimport at the moment, and noticed something a bit unusual:
>>
>> Here is the CIE data from new3.C as compiled with gcc-4.3.4
>>
>> And now with gcc tr...@152230, I see that the generated CIE no longer has
>> any augmentation, particularly it doesn't point to the personality routine
>> any more:
>>
>> Is this intentional?
>
> Yes.  If it doesn't need one it doesn't get one.

Augh!  That was actually so useful to me in making sure that my shared libstdc++ dll got linked into the executable even when all other references from the exe to the library got shunted aside by --wrap.

Would it be reasonable to disable the optimisation on a target-specific basis? Either that or I'm going to have to invent a modified version of --wrap, or just shove some other dummy reference to the library into object files unconditionally. (Actually that might turn out to be as simple as adding a -u option to the linker command line, so maybe it would even be better. Haven't tested that yet though.)

cheers,
DaveK
arm-elf multilib issues
Hi,

I ran an experiment with arm-elf. I took every CPU model documented in the gcc.info file and passed it in via -mcpu=XXX. The following CPU models linked successfully:

arm2 arm250 arm3 arm6 arm60 arm600 arm610 arm620 arm7 arm7m arm7d arm7dm
arm7di arm7dmi arm70 arm700 arm700i arm710 arm710c arm7100 arm720
arm7500 arm7500fe arm7tdmi arm7tdmi-s arm710t arm720t arm740t strongarm
strongarm110 strongarm1100 strongarm1110 arm8 arm810 arm9 arm920 arm920t
arm922t arm940t arm9tdmi arm9e arm946e-s arm966e-s arm968e-s arm926ej-s
arm10tdmi arm1020t arm1026ej-s arm10e arm1020e arm1022e arm1136j-s
arm1136jf-s mpcore mpcorenovfp arm1156t2-s arm1176jz-s arm1176jzf-s
cortex-a8 cortex-a9 cortex-r4 cortex-r4f cortex-m3 cortex-m1 xscale
iwmmxt iwmmxt2 ep9312

The linker errors were similar for the failures:

../arm-elf/lib/libc.a(lib_a-fclose.o) does not use Maverick instructions, whereas a.out does
../arm-elf/lib/libc.a(lib_a-fclose.o) uses FPA instructions, whereas a.out does not
../arm-elf/lib/libc.a(lib_a-fclose.o) uses hardware FP, whereas a.out uses software FP

I notice that a lot of multilib options are commented out in t-arm-elf. Do we want to enable more multilibs in arm-elf? I am guessing based upon the messages that it will require at least 3 more variants. Other architectures build more variants already so it isn't that big a deal.

Thanks.

--
Joel Sherrill, Ph.D.             Director of Research & Development
joel.sherr...@oarcorp.com        On-Line Applications Research
Ask me about RTEMS: a free RTOS  Huntsville AL 35805
Support Available                (256) 722-9985
Re: i370 port - constructing compile script
>> 2. If the normal way to do things is to parse the make -n output
>> with perl etc, that's fine, I'll do it that way. I was just wondering
>> if the proper way was to incorporate the logic into a Makefile
>> rule and get that rule repeatedly executed rather than just
>> having a simple "echo". It seems to me that having a generic
>> rule to execute an external script would be neater???
>
> I'm not sure what you are suggesting here, but I do know that it
> wouldn't make sense for us to change the gcc Makefile to use a rule
> which executes an external script.

I didn't mean use it by default.

> The "normal way to do things" is to use GNU make. I think you are the
> first person trying to build gcc without it.

It's also the first native MVS port.

Anyway, since then I had another idea. I should be able to achieve the same thing by just changing the C compiler to be "echo" or the external script replacement. Then all I need is a consolidated stage 1 target.

But today I spent my time fighting a different battle. I tried to get configure to use my provided minimal (ie all of C90, but no extensions) header files, using the --with-root and --with-build-root options. But it seemed to ignore those and use the ones on my Linux box, insisting that sys/types existed etc. Maybe I need to change my INCLUDE_PATH or something instead.

> Not the first - BSDs have been known to import GCC sources into their
> repositories and write their own build system using BSD make. No doubt
> this is a lot of work that needs repeating for each new version
> imported - that's the price you pay if you don't want to use the
> normal GCC build system. (And GCC didn't always require GNU make - but
> the BSDs replacing the build system are a much closer analogy here
> than ordinary builds of old versions with other make implementations
> before GNU make was required.)

Yeah, make isn't available (environment variables aren't available in batch either), and even if it was, that's not what people want. People want SMP/E in fact.

But I don't know SMP/E. I only know JCL, which is the normal (and much, much simpler) rival for SMP. I don't think that doing a glorified "make -n" is a radical change to the existing methodology. Nor is a make target that just lists all the stage 1 object files. I think it would be a neat addition (even if it remains a patch forever).

BFN. Paul.
Re: arm-elf multilib issues
> Do we want to enable more multilibs in arm-elf?

Almost certainly not. As far as I'm concerned arm-elf is obsolete, and in maintenance only mode. You should be using arm-eabi.

IMHO building lots of multilibs by default significantly increases toolchain size and build time for little actual benefit. ARM CPUs are generally backwards compatible and we only have one important ABI variant, so very few multilibs are required for a functional toolchain. Anybody who cares about optimized runtime libraries probably wants tuning for their exact setup, rather than whatever arbitrary selections you're going to choose.

Paul
Re: i370 port - constructing compile script
> I am happy to construct all of this on a Unix system with the various
> tools (m4 etc) available.
>
> But from the Unix system, I need to be able to generate the
> above very simple compile script, which is a precursor to creating
> very simple JCL steps (trust me, you don't want to see what
> ST2CMP looks like). Note that the JCL has the filenames
> truncated to 8 characters, listed twice, uppercased, and '-'
> and '_' converted to '@'.

Have you considered the obvious solution: don't do that? I.e. use a cross compiler from a sane system. If you really want a native compiler then I still suggest building it as a canadian cross. My guess is that getting gcc hosted in a bizarre environment is much easier than getting the gcc build system working. Trying to bootstrap gcc there seems like a lot of pain for no real benefit.

Paul
Re: "Defaulted and deleting functions" with template members
On 09/15/2009 02:22 PM, Mark Zweers wrote:
> While experimenting with "Defaulted and deleting functions" on my
> brand-newly downloaded gcc-4.5 compiler, I've noticed the following:
> the order of '=default' and '=delete' matters with template member
> functions.

I'm about to check in a fix for this, but in future please use http://gcc.gnu.org/bugzilla for bug reports.

Jason
vta: Scheduler breaks var_location info (S/390 bootstrap failure)
Hi,

on s390x I see broken debug info generated for the attached C++ testcase (compiled with -O2 -g -fPIC). The debug info contains a symbol reference with a @GOTENT modifier, which should not happen (and is not accepted by gas):

.LLST3:
        .8byte  .LVL3-.Ltext0
        .8byte  .LVL4-.Ltext0
        .2byte  0xa
        .byte   0x9e
        .uleb128 0x8
        .8byte  _ztist12future_er...@gotent
        .8byte  0x0
        .8byte  0x0

The problem is that the sched2 pass breaks the variable location information by moving an insn setting r1 over a var_location debug insn describing a variable location as being r1.

In 202.split4:

29: var_location r10
33: var_location r13 + 8
34: var_location r10
30: r1 = &(A got entry)
31: r1 = [r1]
83: [r13] = r2
32: r1 = [r1]
35: var_location A => r1   <-- problematic location information
36: [r13 + 8] = r10
37: [r13 + 16] = r1
79: r1 = &(B got entry)
41: r3 = [r1]

In 203.sched2:

...
32: r1 = [r1]
37: [r13 + 16] = r1
79: r1 = &(B got entry)    <-- insn moved over 35
83: [r13] = r2
29: var_location r10
33: var_location r13 + 8
34: var_location r10
35: var_location r1        !!! the variable location gets corrupted since insn 79 has been moved over it
36: [r13 + 8] = r10
41: r3 = [r1]

The variable locations are intended to stay right after the insn which does the relevant assignment, by generating an ANTI dep between them, but we also create deps between unrelated insns (sched-deps.c:2790):

  if (prev && NONDEBUG_INSN_P (prev))
    add_dependence (insn, prev, REG_DEP_ANTI);

This code creates a dependency between 83 and 29 (although the assignment is unrelated). This, together with the fact that debug insns are always kept from being moved over each other, makes all the debug insns get stuck after insn 83, although in order to keep the information correct insn 35 has to stay after 32.
Bye,

-Andreas-

class exception
{
  virtual const char *what () const throw ();
};

namespace std
{
  template <typename _Alloc> class allocator;
  template <class _CharT> struct char_traits;
  template <typename _CharT, typename _Traits = char_traits<_CharT>,
            typename _Alloc = allocator<_CharT> >
  class basic_string;
  typedef basic_string<char> string;

  template <typename _Tp> class allocator { };

  template <typename _CharT, typename _Traits, typename _Alloc>
  class basic_string
  {
  public:
    basic_string ();
    basic_string (const _CharT *__s, const _Alloc &__a = _Alloc ());
  };

  class logic_error : public exception
  {
  public:
    logic_error (const string &__arg);
  };

  class error_category { };

  struct error_code
  {
    error_code (int __v, const error_category &__cat)
      : _M_value (__v), _M_cat (&__cat) { }
  private:
    int _M_value;
    const error_category *_M_cat;
  };

  enum future_errc { };

  extern const error_category *const future_category;

  inline error_code
  make_error_code (future_errc __errc)
  {
    return error_code (static_cast<int> (__errc), *future_category);
  }

  class future_error : public logic_error
  {
    error_code _M_code;
  public:
    future_error (future_errc __ec)
      : logic_error ("std::future_error"), _M_code (make_error_code (__ec)) { }
  };

  void
  __throw_future_error (int __i)
  {
    throw future_error (future_errc (__i));
  }
}
Re: i370 port - constructing compile script
>> I am happy to construct all of this on a Unix system with the various
>> tools (m4 etc) available.
>>
>> But from the Unix system, I need to be able to generate the
>> above very simple compile script, which is a precursor to creating
>> very simple JCL steps (trust me, you don't want to see what
>> ST2CMP looks like). Note that the JCL has the filenames
>> truncated to 8 characters, listed twice, uppercased, and '-'
>> and '_' converted to '@'.
>
> Have you considered the obvious solution: don't do that? I.e. use a
> cross compiler from a sane system. If you really want a native
> compiler then I still suggest building it as a canadian cross.

That's what this is. Or at least, replace ST2CMP with ST1CMP and it is the Canadian Cross. ST1CMP will run the assemblies using HLASM. Almost identical JCL will run a compile, then an assemble with HLASM.

> My guess is that getting gcc hosted in a bizarre environment is much
> easier than getting

Not so bizarre when so many of the Fortune 500 use it.

> the gcc build system working. Trying to bootstrap gcc there seems like
> a lot of pain for no real benefit.

The effort is mostly in the Canadian Cross. The changes to get it to bootstrap from that point are relatively small. The extra things required are:

1. header.gcc to remap includes.
2. scripts to rename includes.
3. 20 lines of JCL in the stage 2 procs, to do compiles.

Here's the first of those from 3.4.6, so you can see the scope of the work:

builtin-attrs.def    builtina.h
builtin-types.def    builtint.h
builtins.def         builtind.h
c-common.def         ccommond.h
diagnostic.def       diagndef.h
machmode.def         machmodd.h
params.def           paramsd.h
predict.def          predictd.h
rtl.def              rtld.h
stab.def             stabd.h
timevar.def          timevard.h
tree.def             treed.h
insn-constants.h     i-constants.h
langhooks-def.h      langhdef.h
hosthooks-def.h      hosthdef.h
gt-dwarf2asm.h       gt-dwasm.h
gcov-io.c            gcovioc.h

It's now very rare to have a problem on the MVS EBCDIC host that doesn't also occur on a Unix ASCII cross-compiler.

So for that extra bit of work, mainframers are able to modify the C compiler on their native platform instead of having to mess around with a Unix system they know nothing about. Part of open source is making the source available and usable on the native environment, I think. Otherwise, the job of providing a free, open source C compiler on the mainframe hasn't really been done, I think.

And I was dreaming of that way back in 1987 when I had a 3270 terminal plus a mainframe. Although admittedly I only wanted to use it, not build it. But the easier it is for a mainframer to access the code, the more likely it is that they will be inspired to add a PL/S or Cobol front-end to it.

BFN. Paul.
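[Editorial note: the mechanical part of the member-name constraint discussed in this thread (at most 8 characters, uppercased, '@' standing in for '-' and '_') can be sketched in C. `mvs_member_name` is a made-up helper for illustration; the real renames in the table above were chosen by hand, partly to resolve collisions that a naive truncation like this one does not.]

```c
#include <ctype.h>
#include <stddef.h>
#include <string.h>
#include <assert.h>

/* Rough sketch of the PDS member-name constraints described above:
   keep at most 8 characters of the base name, uppercase them, and map
   '-' and '_' (invalid in member names) to '@'.  Naive truncation
   only; the hand-picked renames in the table avoid collisions.  */
static void
mvs_member_name (const char *filename, char out[9])
{
  size_t n = 0;

  for (; *filename != '\0' && *filename != '.' && n < 8; filename++)
    {
      char c = *filename;
      if (c == '-' || c == '_')
        c = '@';
      out[n++] = (char) toupper ((unsigned char) c);
    }
  out[n] = '\0';
}
```

For example, this maps "builtin-attrs.def" to "BUILTIN@", whereas the hand-chosen rename in the table is builtina.h; the sketch captures the character rules, not the collision-avoiding choices.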
Re: vta: Scheduler breaks var_location info (S/390 bootstrap failure)
I've opened bz #41535 for this problem. Bye, -Andreas-
Re: i370 port - constructing compile script
"Paul Edwards" writes:
>> the gcc build system working. Trying to bootstrap gcc there seems
>> like a lot of pain for no real benefit.
>
> The effort is mostly in the Canadian Cross. The changes to get it to
> bootstrap from that point are relatively small.

I think you are underestimating the work involved. Many years ago now, when Steve Chamberlain started porting the GNU tools to bootstrap on Windows, he realized that the best approach was to write (what is now known as) cygwin. It may sound crazy now, but bringing the Unix environment to yours is doable, and it has many ancillary benefits.

Ian
Re: arm-elf multilib issues
Paul Brook wrote:
>> Do we want to enable more multilibs in arm-elf?
>
> Almost certainly not. As far as I'm concerned arm-elf is obsolete, and
> in maintenance only mode. You should be using arm-eabi.
>
> IMHO building lots of multilibs by default significantly increases
> toolchain size and build time for little actual benefit. ARM CPUs are
> generally backwards compatible and we only have one important ABI
> variant, so very few multilibs are required for a functional
> toolchain. Anybody who cares about optimized runtime libraries
> probably wants tuning for their exact setup, rather than whatever
> arbitrary selections you're going to choose.

We only want to provide one arm-rtems toolset binary and provide the minimal number of multilibs to support as many ARMs as possible. So ultimately this will go in arm/t-rtems, but I wanted to see a non-OS target produce hello worlds that would run with arm-XXX-run.

I will switch my testing to arm-eabi. But it uses the same t-arm-elf for variations. Since this will become the arm-rtems multilib set when submitted, which variants and matches need to be added so that we have the right libraries for all CPU model arguments? ld is complaining at least about libc.a mismatches for these variations:

does not use Maverick instructions, whereas a.out does
uses FPA instructions, whereas a.out does not
uses hardware FP, whereas a.out uses software FP

Suggestions welcomed. I promise this is for arm-rtems. :)

--joel
Re: i370 port - constructing compile script
On Thu, Oct 1, 2009 at 11:59 AM, Paul Edwards wrote:
>>> But from the Unix system, I need to be able to generate the
>>> above very simple compile script, which is a precursor to creating
>>> very simple JCL steps (trust me, you don't want to see what
>>> ST2CMP looks like). Note that the JCL has the filenames
>>> truncated to 8 characters, listed twice, uppercased, and '-'
>>> and '_' converted to '@'.

Paul,

Why are you not making use of z/OS Unix System Services? GNU Make and other GNU tools are available and already built for z/OS.

David
Re: Request for code review - (ZEE patch : Redundant Zero extension elimination)
Hi,

I moved implicit-zee.c to config/i386. Can you please take another look?

	* tree-pass.h (pass_implicit_zee): New pass.
	* testsuite/gcc.target/i386/zee.c: New test.
	* timevar.def (TV_ZEE): New.
	* common.opt (fzee): New flag.
	* config.gcc: Add implicit-zee.o for x86_64 target.
	* implicit-zee.c: New file, zero extension elimination pass.
	* config/i386/t-i386: Add rule for implicit-zee.o.
	* i386.c (optimization_options): Enable zee pass for x86_64 target.

Thanks,
-Sriraman.

On Thu, Sep 24, 2009 at 9:34 AM, Sriraman Tallam wrote:
> On Thu, Sep 24, 2009 at 1:36 AM, Richard Guenther wrote:
>> On Thu, Sep 24, 2009 at 8:25 AM, Paolo Bonzini wrote:
>>> On 09/24/2009 08:24 AM, Ian Lance Taylor wrote:
>>>> We already have the hooks, they have just been stuck in plugin.c
>>>> when they should really be in the generic backend. See
>>>> register_pass. (Sigh, every time I looked at this I said "the pass
>>>> control has to be generic" but it still wound up in plugin.c.)
>>>
>>> Then I'll rephrase and say only that the pass should be in config/i386/.
>>
>> It should also be on by default on -O[23s] I think (didn't check if
>> it already is). Otherwise it shortly will go the see lala-land.
>
> It is already on by default in O2 and higher.
>
>> Richard.
>
>>> Paolo

Index: tree-pass.h
===
--- tree-pass.h	(revision 152385)
+++ tree-pass.h	(working copy)
@@ -500,6 +500,7 @@
 extern struct rtl_opt_pass pass_stack_ptr_mod;
 extern struct rtl_opt_pass pass_initialize_regs;
 extern struct rtl_opt_pass pass_combine;
 extern struct rtl_opt_pass pass_if_after_combine;
+extern struct rtl_opt_pass pass_implicit_zee;
 extern struct rtl_opt_pass pass_partition_blocks;
 extern struct rtl_opt_pass pass_match_asm_constraints;
 extern struct rtl_opt_pass pass_regmove;
Index: testsuite/gcc.target/i386/zee.c
===
--- testsuite/gcc.target/i386/zee.c	(revision 0)
+++ testsuite/gcc.target/i386/zee.c	(revision 0)
@@ -0,0 +1,13 @@
+/* { dg-do compile } */
+/* { dg-require-effective-target lp64 } */
+/* { dg-options "-O2 -fzee -S" } */
+/* { dg-final { scan-assembler-not "mov\[\\t \]+\(%\[\^,\]+\),\[\\t \]*\\1" } } */
+int mask[100];
+int foo(unsigned x)
+{
+  if (x < 10)
+    x = x * 45;
+  else
+    x = x * 78;
+  return mask[x];
+}
Index: timevar.def
===
--- timevar.def	(revision 152385)
+++ timevar.def	(working copy)
@@ -182,6 +182,7 @@
 DEFTIMEVAR (TV_RELOAD                , "reload")
 DEFTIMEVAR (TV_RELOAD_CSE_REGS       , "reload CSE regs")
 DEFTIMEVAR (TV_SEQABSTR              , "sequence abstraction")
 DEFTIMEVAR (TV_GCSE_AFTER_RELOAD     , "load CSE after reload")
+DEFTIMEVAR (TV_ZEE                   , "zee")
 DEFTIMEVAR (TV_THREAD_PROLOGUE_AND_EPILOGUE, "thread pro- & epilogue")
 DEFTIMEVAR (TV_IFCVT2                , "if-conversion 2")
 DEFTIMEVAR (TV_COMBINE_STACK_ADJUST  , "combine stack adjustments")
Index: common.opt
===
--- common.opt	(revision 152385)
+++ common.opt	(working copy)
@@ -1099,6 +1099,10 @@
 fsee
 Common
 Does nothing. Preserved for backward compatibility.

+fzee
+Common Report Var(flag_zee) Init(0)
+Eliminate redundant zero extensions on targets that support implicit extensions.
+
 fshow-column
 Common C ObjC C++ ObjC++ Report Var(flag_show_column) Init(1)
 Show column numbers in diagnostics, when available.  Default on
Index: config.gcc
===
--- config.gcc	(revision 152385)
+++ config.gcc	(working copy)
@@ -2569,6 +2569,12 @@
 powerpc*-*-* | rs6000-*-*)
 	tm_file="${tm_file} rs6000/option-defaults.h"
 esac

+case ${target} in
+x86_64-*-*)
+	extra_objs="${extra_objs} implicit-zee.o"
+	;;
+esac
+
 # Support for --with-cpu and related options (and a few unrelated options,
 # too).
 case ${with_cpu} in
Index: config/i386/implicit-zee.c
===
--- config/i386/implicit-zee.c	(revision 0)
+++ config/i386/implicit-zee.c	(revision 0)
@@ -0,0 +1,1029 @@
+/* Redundant Zero-extension elimination for targets that implicitly
+   zero-extend writes to the lower 32-bit portion of 64-bit registers.
+   Copyright (C) 2009 Free Software Foundation, Inc.
+   Contributed by Sriraman Tallam (tmsri...@google.com) and
+   Silvius Rus (r...@google.com)
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify it under
+the terms of the GNU General Public License as published by the Free
+Software Foundation; either version 3, or (at your option) any later
+version.
+
+GCC is distributed in the hope that it will be useful, but WITHOUT ANY
+WARRANTY; without even the implie
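[Editorial note: for readers unfamiliar with the pass, the x86-64 property it exploits can be shown with a tiny example; this snippet is illustrative only and not taken from the patch.]

```c
#include <stdint.h>

/* On x86-64, any write to a 32-bit register implicitly zeroes the
   upper 32 bits of the containing 64-bit register.  So when this
   widening conversion follows a 32-bit computation, the zero
   extension the C semantics require has in effect already happened,
   and an explicit zero-extension instruction is redundant -- which
   is exactly the redundancy the pass deletes.  */
static uint64_t
widen (uint32_t x)
{
  /* C guarantees zero-extension here; on x86-64 the 32-bit producer
     of x normally provides it for free.  */
  return (uint64_t) x;
}
```

The zee.c testcase in the patch checks the same thing at the assembly level: after the 32-bit multiplies, no same-register zero-extending move should remain.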
Re: i370 port - constructing compile script
David Edelsohn wrote:
> On Thu, Oct 1, 2009 at 11:59 AM, Paul Edwards wrote:
>>> But from the Unix system, I need to be able to generate the
>>> above very simple compile script, which is a precursor to creating
>>> very simple JCL steps (trust me, you don't want to see what
>>> ST2CMP looks like). Note that the JCL has the filenames
>>> truncated to 8 characters, listed twice, uppercased, and '-'
>>> and '_' converted to '@'.
>
> Paul,
>
> Why are you not making use of z/OS Unix System Services? GNU Make and
> other GNU tools are available and already built for z/OS.

Perhaps because he is a hacker in the good ol' sense of the word? Mjam, MVS, JCL, the possibility of COBOL, perhaps even PL/1 ...

[ Over a quarter of a century ago I worked at the computer center of the Dutch Postal Service. One of my colleagues managed the IBM system group. He had an assistant to write the JCL jobs he needed for him. ]

--
Toon Moene - e-mail: t...@moene.org - phone: +31 346 214290
Saturnushof 14, 3738 XG Maartensdijk, The Netherlands
At home: http://moene.org/~toon/
Progress of GNU Fortran: http://gcc.gnu.org/gcc-4.5/changes.html
Re: arm-elf multilib issues
On Thu, 1 Oct 2009, Paul Brook wrote:
>> Do we want to enable more multilibs in arm-elf?
>
> Almost certainly not. As far as I'm concerned arm-elf is obsolete, and
> in maintenance only mode. You should be using arm-eabi.

I'm possibly (probably?) wrong, but as far as I know, it forces alignment of 64-bit types (namely, doubles and long longs) to 8-byte boundaries, which does not make sense on small 32-bit cores with 32-bit buses and no caches (e.g. practically all ARM7TDMI based chips). Memory is a scarce resource on those, and wasting bytes on alignment with no performance benefit is something that makes arm-eabi less attractive.

Also, as far as I know, passing such data to functions might cause some headache due to 64-bit values being even-register aligned when passed to functions, effectively forcing arguments to be passed on the stack unnecessarily (memory access is rather expensive on a cache-less ARM7TDMI). If you have to write assembly routines that take long long or double arguments among other types, that forces you to shuffle registers and fetch data from the stack. You lose code space, data space and CPU cycles with absolutely nothing in return.

For resource constrained embedded systems built around one of those 32-bit cores, arm-elf is actually rather more attractive than arm-eabi.

Zoltan
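[Editorial note: the padding cost being described can be made concrete with a small struct. This is a sketch; the exact numbers depend on the ABI, and the figures in the comments assume an ABI that, like the AAPCS/EABI and most 64-bit hosts, gives long long 8-byte alignment.]

```c
#include <stddef.h>

/* Under an ABI that aligns long long to 8 bytes (as the EABI/AAPCS
   does, and as typical 64-bit hosts also do), the char below is
   followed by 7 bytes of padding and the struct occupies 16 bytes.
   Under the old ARM ABI's 4-byte alignment of long long, the padding
   would be 3 bytes and the struct 12 bytes -- the saving Zoltan is
   describing for memory-constrained ARM7TDMI systems.  */
struct sample
{
  char tag;         /* 1 byte, then alignment padding */
  long long value;  /* 8-byte aligned under EABI-style rules */
};
```

The same alignment rule is what makes 64-bit arguments start in an even register under the AAPCS, which is the calling-convention cost mentioned above.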
gcc-4.5-20091001 is now available
Snapshot gcc-4.5-20091001 is now available on

  ftp://gcc.gnu.org/pub/gcc/snapshots/4.5-20091001/

and on various mirrors, see http://gcc.gnu.org/mirrors.html for details.

This snapshot has been generated from the GCC 4.5 SVN branch with the following options: svn://gcc.gnu.org/svn/gcc/trunk revision 152385

You'll find:

gcc-4.5-20091001.tar.bz2            Complete GCC (includes all of below)
gcc-core-4.5-20091001.tar.bz2       C front end and core compiler
gcc-ada-4.5-20091001.tar.bz2        Ada front end and runtime
gcc-fortran-4.5-20091001.tar.bz2    Fortran front end and runtime
gcc-g++-4.5-20091001.tar.bz2        C++ front end and runtime
gcc-java-4.5-20091001.tar.bz2       Java front end and runtime
gcc-objc-4.5-20091001.tar.bz2       Objective-C front end and runtime
gcc-testsuite-4.5-20091001.tar.bz2  The GCC testsuite

Diffs from 4.5-20090924 are available in the diffs/ subdirectory.

When a particular snapshot is ready for public consumption the LATEST-4.5 link is updated and a message is sent to the gcc list. Please do not use a snapshot before it has been announced that way.
Re: arm-elf multilib issues
> Since this will be arm-rtems multilib's when submitted,
> which variants and matches need to be added so we have
> the right libraries for all CPU model arguments. ld is
> complaining at least about libc.a mismatches for these variations.
>
> does not use Maverick instructions, whereas a.out does
> uses FPA instructions, whereas a.out does not
> uses hardware FP, whereas a.out uses software FP

Either the errors are genuine, or your toolchain is misconfigured. It's common for old binutils to use different defaults to gcc.

Like I said before, switch to an EABI based toolchain and you shouldn't have any problems[1]. We have a proper object attribute system there, with corresponding directives for communication between compiler and assembler.

This can't be a new problem, and I have no sympathy for anyone inventing new code/targets that are not EABI based.

Paul

[1] Maverick support may be a bit busted because no one's bothered defining or implementing the appropriate EABI bits, but AFAICT the Maverick code has always been fairly busted so you're not really losing anything.
Request for trunk freeze for LTO merge
There should not be any more functional changes to the LTO merge needed for the final merge into mainline. I am waiting for the final round of testing and finishing up documentation changes requested by Joseph. I expect this to be done by tomorrow. I would like to freeze trunk on Sat 3-Oct and do the final merge. Any objections to this plan? Thanks. Diego.
Re: arm-elf multilib issues
>> Almost certainly not. As far as I'm concerned arm-elf is obsolete,
>> and in maintenance only mode. You should be using arm-eabi.
>
> I'm possibly (probably?) wrong, but as far as I know, it forces
> alignment of 64-bit datum (namely, doubles and long longs) to 8 byte
> boundaries, which does not make sense on small 32-bit cores with
> 32-bit buses and no caches (e.g. practically all ARM7TDMI based
> chips). Memory is a scarce resource on those and wasting bytes for
> alignment with no performance benefit is something that makes
> arm-eabi less attractive. Also, as far as I know passing such datums
> to functions might cause some headache due to the 64-bit datums being
> even-register aligned when passing them to functions, effectively
> forcing arguments to be passed on the stack unnecessarily (memory
> access is rather expensive on a cache-less ARM7TDMI). If you have to
> write assembly routines that take long long or double arguments among
> other types, that forces you to shuffle registers and fetch data from
> the stack. You lose code space, data space and CPU cycles with
> absolutely nothing in return.

Meh. Badly written code on antique hardware. I realise this sounds harsh, but in all seriousness if you take a bit of care (and common sense) you should get the alignment for free in pretty much all cases, and it can make a huge difference on ARMv5te cores.

If you're being really pedantic then old-abi targets tend to pad all structures to a word boundary. I'd expect this to have much more detrimental overall effect than alignment of doubleword quantities, which in my experience are pretty rare to start with.

Paul
Re: Request for trunk freeze for LTO merge
On Thu, 1 Oct 2009, Diego Novillo wrote:
> There should not be any more functional changes to the LTO merge
> needed for the final merge into mainline. I am waiting for the final
> round of testing and finishing up documentation changes requested by
> Joseph.
>
> I expect this to be done by tomorrow. I would like to freeze trunk on
> Sat 3-Oct and do the final merge.
>
> Any objections to this plan?

Works for me. I have fired another round of multi-arch testing and SPEC2006 compiles (the daily testers are also running, though with the speed of SPEC2006 they may lag behind a bit).

Richard.
Re: i370 port - constructing compile script
>>> the gcc build system working. Trying to bootstrap gcc there seems
>>> like a lot of pain for no real benefit.
>>
>> The effort is mostly in the Canadian Cross. The changes to get it to
>> bootstrap from that point are relatively small.
>
> I think you are underestimating the work involved.

?  Note that I have *already* ported both GCC 3.2.3 and GCC 3.4.6 such that they bootstrap on MVS, in a totally non-Unix environment, with nothing more than a C90 compiler (with zero (0) extensions).

The trouble is that in the process of doing that (ie mainly for the Canadian Cross part), I had to:

1. Construct the config.h/auto-host by hand (having the stack direction go the wrong way was a lot of fun for sure).
2. Gather the list of files to compile by hand.
3. Construct the compile JCL by hand.

I believe all of these things can be integrated into the existing build process on Unix with minimal fuss if you know where to put it. At the time I was doing the original porting, I didn't even have Unix, so it was easier to do it by hand. But now I'm interested in neat, minimal integration. With appropriate workarounds, GCC is very close to C90 already.

> Many years ago now, when Steve Chamberlain started porting the GNU
> tools to bootstrap on Windows, he realized that the best approach was
> to write (what is now known as) cygwin. It may sound crazy now, but
> bringing the Unix environment to yours is doable, and it has many
> ancillary benefits.

Adding POSIX to MVS 3.8j is certainly a worthwhile project. However, I consider bashing GCC into C90-shape to be a worthwhile project too. For whenever you're on a non-POSIX system, not just MVS. E.g. DOS/VS, or CMS, or MUSIC/SP, or TPF, or MVT, or MFT, or some of the others I have heard mentioned. :-)

I'm really only interested in DOS/VS, CMS, MUSIC/SP. Maybe TPF as well. But bringing POSIX to them is a separate exercise to writing the 1000 lines (literally) of assembler that is required to get them to work. GCC 3.4.6 is 850,000 lines of (generated) assembler code. But the way things are structured, it only requires 1000 lines for each of those different targets (to do I/O). MVS and CMS are already done. MUSIC/SP is half done (the person doing the port died). DOS/VS is not done, although a couple of people started an attempt.

Adding POSIX to all those environments may be done at some point in the future, but no-one has even started, and everyone wants native support regardless. Can you imagine if GCC was running on Unix with some sort of emulated-MVS I/O? You'd rip out that nonsense and replace it with native POSIX in an instant. CMS actually supports emulated MVS I/O too, and indeed, that's what the first port was. But someone has already spent the effort to replace it with native CMS I/O, which gets around various restrictions.

I think that we have now reached the point where two quite different cultures meet. :-) People have been freaking out a bit on the MVS side too. :-)

BFN. Paul.
Re: Prague GCC folks meeting summary report
Richard Guenther writes:

> The wish for more granular and thus smaller debug information (things like
> -gfunction-arguments which would properly show parameter values
> for backtraces) was brought up. We agree that this should be addressed at a
> tools level, like in strip, not in the compiler.

Is that really the right level? In my experience (very roughly) -g can
turn gcc from CPU bound to IO bound (especially considering distributed
compiling approaches), and dropping unnecessary information in external
tools would make the IO penalty even worse.

-Andi

-- a...@linux.intel.com -- Speaking for myself only.
Re: i370 port - constructing compile script
But from the Unix system, I need to be able to generate the above very
simple compile script, which is a precursor to creating very simple JCL
steps (trust me, you don't want to see what ST2CMP looks like). Note
that the JCL has the filenames truncated to 8 characters, listed twice,
uppercased, and '-' and '_' converted to '@'.

> Why are you not making use of z/OS Unix System Services? GNU Make and
> other GNU tools are available and already built for z/OS.

USS is not available for free, or even for a price, on MVS 3.8j, and it
is not native MVS; it is an expensive overhead. It's a bit like asking
"why don't you use a JCL emulator instead of make on Unix?". :-)

You know, even as a batch job with JCL, people then said to me that
reading the C source from a file instead of "standard input" (i.e.
stdin, i.e. //SYSIN DD) is really weird, and so I had to make a pretty
small mod to GCC to allow "-" as the filename, so that the JCL at least
looks like a normal MVS compiler.

> Perhaps because he is a hacker in the good ol' sense of the word?
> Yum: MVS, JCL, the possibility of COBOL, perhaps even PL/1 ...

Both Cobol and PL/1 front-ends are already supported to some extent ...

http://www.opencobol.org/
http://pl1gcc.sourceforge.net/

although we're not really at the stage of even attempting to get those
onto native MVS. Actually, PL/1 basically requires GCC 4.x, which is my
main interest in upgrading 3.4.6 to 4.x. :-) Someone else said he would
like to see PL/S, and maybe if PL/1 was available, the super-secret PL/S
language would start to be made available. But it all rests on getting
the HLASM-generator working on a more modern GCC. :-) And C90 is the
lingua franca.

[ Over a quarter of a century ago I worked at the computer center of the
Dutch Postal Service. One of my colleagues managed the IBM system group.
He had an assistant to write the JCL jobs he needed for him.
]

Maybe for old times' sake you'd like to load up:

http://mvs380.sourceforge.net

It comes with GCC, so you can now do what you always wanted to do back
then. :-) And of course, all perfectly usable on z/OS too. Natively. :-)

850,000 lines of assembler. Like wow, man. I wonder what GCC 4.4 will
clock in as? 3.2.3 was 700,000, so we're probably up to a million lines
of pure 370 assembler. :-)

BFN. Paul.
Re: Prague GCC folks meeting summary report
On Thu, Oct 01, 2009 at 05:00:10PM -0700, Andi Kleen wrote:
> Richard Guenther writes:
> >
> > The wish for more granular and thus smaller debug information (things like
> > -gfunction-arguments which would properly show parameter values
> > for backtraces) was brought up. We agree that this should be addressed at a
> > tools level, like in strip, not in the compiler.
>
> Is that really the right level? In my experience (very roughly) -g can
> turn gcc from CPU bound to IO bound (especially considering distributed
> compiling approaches), and dropping unnecessary information in external
> tools would make the IO penalty even worse.

Certainly life can suck when building large C++ apps with -g in an NFS
environment. Assuming we can generate tons of stuff and strip it later
might not be best.
Re: arm-elf multilib issues
On Thu, Oct 1, 2009 at 20:22, Paul Brook wrote:

> > > Almost certainly not. As far as I'm concerned arm-elf is obsolete, and in
> > > maintenance only mode. You should be using arm-eabi.

Is it now possible to build a 100% functional arm-eabi toolchain
(including, e.g., newlib) in multilib mode straight from GNU sources?
Last time I tried, the only possibility was the CodeSourcery toolchain,
which receives no frequent (public) updates. Has something changed since
then?

> > I'm possibly (probably?) wrong, but as far as I know, it forces alignment
> > of 64-bit data (namely, doubles and long longs) to 8 byte boundaries,
> > which does not make sense on small 32-bit cores with 32-bit buses and no
> > caches (e.g. practically all ARM7TDMI based chips). Memory is a scarce
> > resource on those, and wasting bytes for alignment with no performance
> > benefit is something that makes arm-eabi less attractive. Also, as far as
> > I know, passing such data to functions might cause some headache due to
> > the 64-bit quantities being even-register aligned when passed to
> > functions, effectively forcing arguments to be passed on the stack
> > unnecessarily (memory access is rather expensive on a cache-less
> > ARM7TDMI). If you have to write assembly routines that take long long or
> > double arguments among other types, that forces you to shuffle registers
> > and fetch data from the stack. You lose code space, data space and CPU
> > cycles with absolutely nothing in return.
>
> Meh. Badly written code on antique hardware.
> I realise this sounds harsh, but in all seriousness if you take a bit of care
> (and common sense) you should get the alignment for free in pretty much all
> cases, and it can make a huge difference on ARMv5te cores.

I have to agree with Paul, at least partially. Except for a few bytes
wasted now and then due to the parameter-stacking behaviour described
above, the other issues never bit me. Not even once. But I always had at
least 2k of free RAM to spare; to me that's a lot :-)
Re: arm-elf multilib issues
> Meh. Badly written code on antique hardware.
> I realise this sounds harsh, but in all seriousness if you take a bit of care
> (and common sense) you should get the alignment for free in pretty much all
> cases, and it can make a huge difference on ARMv5te cores.

Yes, I think it does sound harsh, considering that, I believe, at least
as many chips are sold with an ARM7TDMI core as the nice fat chips with
MMU, caches, and 64- and 128-bit buses.

> If you're being really pedantic then old-abi targets tend to pad all
> structures to a word boundary. I'd expect this to have much more
> detrimental overall effect than alignment of doubleword quantities,
> which in my experience are pretty rare to start with.

Well, I have to agree with the above.

Zoltan