Toolchain WG - 2010-07-30 minutes
The minutes from last Friday's standup call are at:
https://wiki.linaro.org/WorkingGroups/ToolChain/Meetings/2010-07-30

-- Michael

___
linaro-toolchain mailing list
linaro-toolchain@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/linaro-toolchain
Re: Multiarch paths and toolchain implications
On 31/07/10 19:01, Ulrich Weigand wrote:
> I've finally completed a first draft of the write-up of toolchain
> implications of multiarch paths that we discussed in Prague. Sorry it took
> a while, but it got a lot longer than I expected :-/
>
> I'd appreciate any feedback and comments!

Thanks Ulrich, that's an excellent document. :)

You didn't mention anything about the HWCAP stuff, though? I think we need to capture the discussion we had about "multiarch" == "ABI", and "multiarch" != "hardware features".

Andrew
Re: Multiarch paths and toolchain implications
Andrew Stubbs wrote on 08/02/2010 01:35:01 PM:
> On 31/07/10 19:01, Ulrich Weigand wrote:
> > I've finally completed a first draft of the write-up of toolchain
> > implications of multiarch paths that we discussed in Prague. Sorry it took
> > a while, but it got a lot longer than I expected :-/
> >
> > I'd appreciate any feedback and comments!
>
> Thanks Ulrich, that's an excellent document. :)
>
> You didn't mention anything about the HWCAP stuff, though? I think we
> need to capture the discussion we had about "multiarch" == "ABI", and
> "multiarch" != "hardware features".

The second half of the section "Loading/running an executable" is about the HWCAP stuff (look for "capability suffix"). In the summary I have this point:

* If capability-optimized ISA/ABI-compatible library variants are desired,
  they can be built just as today, only under the (same) multiarch
  suffix. They could be packaged either within a single package,
  or else using multiple packages (of the same multiarch type).

If you feel this could be made clearer, I'd appreciate any suggestions :-)

Mit freundlichen Gruessen / Best Regards

Ulrich Weigand

--
Dr. Ulrich Weigand | Phone: +49-7031/16-3727
STSM, GNU compiler and toolchain for Linux on System z and Cell/B.E.
IBM Deutschland Research & Development GmbH
Vorsitzender des Aufsichtsrats: Martin Jetter | Geschäftsführung: Dirk Wittkopp
Sitz der Gesellschaft: Böblingen | Registergericht: Amtsgericht Stuttgart, HRB 243294
Re: Multiarch paths and toolchain implications
Loïc Minier wrote:
> Awesome analysis!

Thanks!

> So I think you analyzed the upstream toolchain behavior

Yes, that's true.

> and I think
> Debian/Ubuntu toolchains cheat in some areas; for some directories
> which would use $(version) we use $(major).$(minor) instead, and we
> have a $(version) -> $(major).$(minor) symlink. This doesn't really
> relate to the multiarch topic, but it reminds me that we ought to fix
> the distro divergences so that it's easier to swap an upstream
> toolchain with a Debian/Ubuntu one and vice-versa.

Agreed. Not sure what this particular divergence helps ...

> > Executables built with the old ELF interpreter will not run on a
> > system that *only* provides the multiarch install location. This
> > is clearly *not* OK. To provide backwards compatibility, even a
> > multiarch-capable system will need to install ELF interpreters
> > at the old locations as well, possibly via symlinks. (Note that
> > any given system can only be compatible in this way with *one*
> > architecture, except for lucky circumstances.)
>
> I see two ways around this; we could patch the kernel to add a dynamic
> prefix before the runtime-linker path depending on the executable
> contents (typically depending on the arch),

This seems awkward. The ELF interpreter location is encoded as a full path, which is not interpreted in any way by the kernel. We'd either have to encode particular filesystem layout knowledge into the kernel here, or else add a prefix at the very beginning (or end?), which doesn't correspond to the scheme suggested for multiarch.

If we go down that route, it might be easier to use tricks like bind-mounting the correct ld.so for this architecture at the default location during early startup, or something similar ... However, I'd have thought the whole point of the multiarch scheme was to *avoid* having to play filename remapping tricks, and instead make all filenames explicit.

> or more elegantly we could
> have a generic loader which checks the architecture of the target ELF
> file before calling the arch-specific loader. This loader would be
> linked to from all the old locations.

Well, but then what architecture would that generic loader be in? In the end, it has to be *something* the kernel understands to load natively.

> The reason I'm thinking of patching the kernel is because binfmt_misc
> is already out there and allows special behavior when encountering
> binary files from other architectures (or any binary pattern really).

But binfmt_misc only works because in the end it falls back to the built-in native ELF loader. (You can install arbitrary handlers, but the handlers themselves must in the end be something the kernel already knows how to load.)

> > Option C: ld.so searches both the -rpath as is, and also with
> > multiarch target string appended.
>
> This is a risk for cross-builds; the native version might be picked up.
> While this doesn't seem much of a risk for cross-compilation to an
> entirely different architecture (e.g. x86 to ARM), consider
> cross-builds from x86-64 to x86, or from EABI + hard-float to EABI +
> soft-float.

That's one of the fundamental design questions: do we want to make sure only multiarch libraries for the correct arch can ever be found, or do we rather want to make sure that, on a default install, libraries that have not yet been converted to multiarch can also be found (even taking the chance that they might turn out to be of the wrong architecture / variant) ...

> BTW, the CodeSourcery patchset contains a "directory poisoning" feature
> which seems quite useful to detect these cases early.

Yes, but that's at compile time. I understand the reason for this is more to catch bad include paths manually specified in packages. Not sure the same concerns apply at load time.

Mit freundlichen Gruessen / Best Regards

Ulrich Weigand
Re: Multiarch paths and toolchain implications
On 02/08/10 12:46, Ulrich Weigand wrote:
> The second half of the section "Loading/running an executable" is about
> the HWCAP stuff (look for "capability suffix"). In the summary I have
> this point:
>
> * If capability-optimized ISA/ABI-compatible library variants are desired,
>   they can be built just as today, only under the (same) multiarch
>   suffix. They could be packaged either within a single package,
>   or else using multiple packages (of the same multiarch type).
>
> If you feel this could be made clearer, I'd appreciate any suggestions :-)

OK, I'm clearly blind and incapable of performing a text search competently (I swear I did one)! It is buried a little deep, but it is there.

I guess I'd like to see a flow of how a binary loads libraries:

1. User launches binary.
2. Kernel selects a suitable execution environment (native/qemu).
3. Kernel reads .interp and loads the multiarch dynamic linker: /lib/${multiarch}/ld.so.
4. Dynamic linker uses HWCAP to find the most appropriate libc.so.

Anyway, that's just my personal taste. The information is all there, if I read it any time other than Monday morning, so I think the document is good.

We should post it on the Linaro wiki, probably.

Andrew
Re: Multiarch paths and toolchain implications
Andrew Stubbs wrote:
> It is buried a little deep, but it is there. I guess I'd like to see a
> flow of how a binary loads libraries:
>
> 1. User launches binary.
> 2. Kernel selects a suitable execution environment (native/qemu).
> 3. Kernel reads .interp and loads the multiarch dynamic linker:
>    /lib/${multiarch}/ld.so.
> 4. Dynamic linker uses HWCAP to find the most appropriate libc.so.

I thought that's basically the flow of the "Loading/running an executable" sections ... I've added sub-section headers to maybe make it a bit clearer.

> We should post it on the Linaro wiki, probably.

It's now on:
https://wiki.linaro.org/WorkingGroups/ToolChain/MultiarchPaths

Mit freundlichen Gruessen / Best Regards

Ulrich Weigand
Re: -fremove-local-statics optimization
On 29/07/10 17:23, Andrew Stubbs wrote:
> So basically we're left with this patch that does something we want, but
> not in a way that can go upstream. :(
>
> The question is, should I merge this to Linaro, or not? Loic and I
> agreed to hold off until I'd done a bit more research and/or tried to
> upstream it again, but now I think we need to think again.

Ping? Does anybody have any thoughts on this? We need to make a decision.

Andrew
GCC PRE patch; apply or not?
CS has this patch in SG++:
http://gcc.gnu.org/ml/gcc-patches/2008-12/msg00199.html

This patch improves code size in a useful, target-independent way, but was not committed upstream. It's not clear why. Since the patch does not belong to CodeSourcery, we can't upstream it ourselves either.

Is that patch a suitable candidate for Linaro GCC?

It is not upstreamable due to copyright issues, but we have a policy that we can keep such patches, if we wish. The principle of not letting Linaro and SG++ diverge too far also suggests keeping it.

Any thoughts? If nobody objects soon I shall merge it in.

Andrew
Re: -fremove-local-statics optimization
Andrew Stubbs wrote:
> Some discussion later, they decided it would be better to implement the
> optimization using inter-procedural dead store analysis:
> http://gcc.gnu.org/ml/gcc-patches/2008-07/msg01602.html

I agree that this would be a much nicer way ...

> This doesn't seem to have actually been done. Not yet, anyway.

Maybe this is something we should be working on then?

> So basically we're left with this patch that does something we want, but
> not in a way that can go upstream. :(
>
> The question is, should I merge this to Linaro, or not? Loic and I
> agreed to hold off until I'd done a bit more research and/or tried to
> upstream it again, but now I think we need to think again.

The one concern I have is that the patch introduces a user-visible construct: the -fremove-local-statics command line option. If we add this now, and users add the flag to their Makefiles, and then it goes away later on, users' builds could break. On the other hand, we already have the flag in 4.4 anyway, so that risk is there in either case ...

Mit freundlichen Gruessen / Best Regards

Ulrich Weigand
Re: GCC PRE patch; apply or not?
Andrew Stubbs wrote:
> CS has this patch in SG++:
> http://gcc.gnu.org/ml/gcc-patches/2008-12/msg00199.html
>
> This patch improves code size in a useful, target independent way, but
> was not committed upstream. It's not clear why. Since the patch does not
> belong to CodeSourcery, we can't upstream it ourselves either.

Steven asked: "What do you folks think of this patch?" and the only answer was by Richi: "I think it's reasonable."

So I'm not sure why it didn't go in. Maybe we should just ping Steven?

> Is that patch a suitable candidate for Linaro GCC?
>
> It is not upstreamable due to copyright issues, but we have a policy
> that we can keep such patches, if we wish.

What copyright issues? Steven ought to have an assignment in place ...

> The principle of not letting Linaro and SG++ diverge too far also
> suggests keeping it.
>
> Any thoughts? If nobody objects soon I shall merge it in.

Since there don't appear to be any fundamental objections to inclusion of the patch into mainline, and since the patch doesn't introduce any user-visible feature (like syntax extensions or command line options) that could cause build breaks for users if it were to go away after all, I would say this patch is fine for us ...

Mit freundlichen Gruessen / Best Regards

Ulrich Weigand
Re: [gnu-linaro-tools] Re: GCC PRE patch; apply or not?
Daniel Jacobowitz wrote:
> On Mon, Aug 02, 2010 at 04:31:16PM +0200, Ulrich Weigand wrote:
> > Steven asked: "What do you folks think of this patch?"
> > and the only answer was by Richi: "I think it's reasonable."
> >
> > So I'm not sure why it didn't go in. Maybe we should just ping Steven?
>
> It's been a while, but I think discussion elsewhere (a bug log maybe?)
> indicated that Steven wasn't happy with it.

Hmm, right, that's probably:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=38785

Latest status seems to be that the original patch no longer applies, but Joern did an updated (and somewhat modified) version. This seems to have triggered a more general discussion on how to correctly estimate the effect of adding PHI nodes on code size ...

Mit freundlichen Gruessen / Best Regards

Ulrich Weigand
Re: Multiarch paths and toolchain implications
On Mon, Aug 02, 2010, Ulrich Weigand wrote:
> Agreed. Not sure what this particular divergence helps ...

So Matthias mentioned that gnat and gcc are not always using the same minor version, and so it helps to bootstrap them to have the common bits installable without overly strict dependencies. I wonder whether we could do that properly upstream. We should chat with Matthias on the next occasion and write down a plan.

> > I see two ways around this; we could patch the kernel to add a dynamic
> > prefix before the runtime-linker path depending on the executable
> > contents (typically depending on the arch),
>
> This seems awkward.

Agreed

> > or more elegantly we could
> > have a generic loader which checks the architecture of the target ELF
> > file before calling the arch-specific loader. This loader would be
> > linked to from all the old locations.
>
> Well, but then what architecture would that generic loader be in? In the
> end, it has to be *something* the kernel understands to load natively.

Currently with binfmt_misc, when the kernel loads a binary it will check whether it's the native architecture, and if it is, load the ELF dynamic linker referenced in the binary; if it matches one of the regexps from binfmt_misc, such as the binary pattern for ARM ELF binaries, it will call the binfmt interpreter instead, e.g. qemu-arm, and in this case qemu-arm will load the ELF runtime linker of the target binary to run the binary inside the CPU emulation.

So I think this should just work; the kernel will call the native ELF loader of the current arch for binaries for the current arch, and will load QEMU, which will load and emulate the ELF loader for the emulated arch, in the other cases. Perhaps I should work with Steve on prototyping this to make sure it works.

> > The reason I'm thinking of patching the kernel is because binfmt_misc
> > is already out there and allows special behavior when encountering
> > binary files from other architectures (or any binary pattern really).
>
> But binfmt_misc only works because in the end it falls back to the built-in
> native ELF loader. (You can install arbitrary handlers, but the
> handlers themselves must in the end be something the kernel already
> knows how to load.)

Is your point that we should disable the qemu loader for the native architecture? I certainly agree we need to!

> Yes, that's during compile time. I understand the reason for this is more
> to catch bad include paths manually specified in packages. Not sure if
> during load time the same concerns apply.

Ok; I kind of agree that runtime is a different story

--
Loïc Minier
Re: Multiarch paths and toolchain implications
Loïc Minier wrote on 08/02/2010 05:30:05 PM:
> > > or more elegantly we could
> > > have a generic loader which checks the architecture of the target ELF
> > > file before calling the arch-specific loader. This loader would be
> > > linked to from all the old locations.
> >
> > Well, but then what architecture would that generic loader be in? In the
> > end, it has to be *something* the kernel understands to load natively.
>
> Currently with binfmt_misc when the kernel loads a binary it will check
> whether it's the native architecture and if it is load the ELF dynamic
> linker referenced in the binary; if it matches one of the regexps from
> binfmt_misc, such as the binary pattern for ARM ELF binaries, it will
> call the binfmt interpreter instead, e.g. qemu-arm, and in this case
> qemu-arm will load the ELF runtime linker of the target binary to run
> the binary inside the CPU emulation.

Well, my point is that *qemu-arm* is itself an ELF binary, and the kernel must already know how to handle that. We can have user-space handlers to load secondary architectures that way -- but we cannot have a user-space handler required to load the *primary* architecture; how would that handler itself get loaded?

> So I think this should just work; the kernel will call the native ELF
> loader of the current arch for binaries for the current arch, and will
> load QEMU which will load and emulate the ELF loader for the emulated
> arch in the other cases.

Maybe I misunderstood something else about your point then, so let's try and take a step back. Today, the location of the ELF loader is embedded into the executable itself, using a full pathname like /lib/ld.so.1. In a multiarch world, this pathname would violate packaging rules, because there are multiple different per-architecture versions of this file.

Thus I assumed the straightforward multiarch solution would be to move this file to multiarch locations like /lib/$(multiarch)/ld.so.1, which would require this new location to be embedded into all binaries.

I understood you to propose an alternative solution that would keep the old ELF interpreter name (/lib/ld.so.1) embedded in executables, and keep them working by installing some "common" loader at this location. This caused me to wonder what that "common" loader was supposed to be, given that the kernel (for *any* architecture) would be required to be able to load that loader itself natively ...

Mit freundlichen Gruessen / Best Regards

Ulrich Weigand
Re: Multiarch paths and toolchain implications
On Mon, Aug 02, 2010, Ulrich Weigand wrote:
> Maybe I misunderstood something else about your point then, so let's try
> and take a step back. Today, the location of the ELF loader is embedded
> into the executable itself, using a full pathname like /lib/ld.so.1.
> In a multiarch world, this pathname would violate packaging rules, because
> there are multiple different per-architecture versions of this file.
>
> Thus I assumed the straightforward multiarch solution would be to move
> this file to multiarch locations like /lib/$(multiarch)/ld.so.1, which
> would require this new location to be embedded into all binaries.

Yes, I agree with this plan

> I understood you to propose an alternative solution that would keep the
> old ELF interpreter name (/lib/ld.so.1) embedded in executables, and
> keep them working by installing some "common" loader at this location.

Ah no, I intended us to move to /lib/$(multiarch)/ld.so.1, but for compatibility with executables from other distros and the pre-multiarch world, we need to provide /lib/ld* loaders. And since the current /lib/ld* names clash across architectures, I was proposing to replace /lib/ld* with a clever wrapper that calls the proper /lib/$(multiarch)/ld.so.1 depending on the architecture of the ELF file to load.

--
Loïc Minier
Re: Multiarch paths and toolchain implications
Loïc Minier wrote:
> > I understood you to propose an alternative solution that would keep the
> > old ELF interpreter name (/lib/ld.so.1) embedded in executables, and
> > keep them working by installing some "common" loader at this location.
>
> Ah no, I intended us to move to /lib/$(multiarch)/ld.so.1, but for
> compatibility with executables from other distros and pre-multiarch
> world, we need to provide /lib/ld* loaders.

OK, I see.

> And since the current
> /lib/ld* names clash across architectures, I was proposing to replace
> /lib/ld* with a clever wrapper that calls the proper
> /lib/$(multiarch)/ld.so.1 depending on the architecture of the ELF file
> to load.

So now we get back to my original question: what file type would that "clever wrapper" be? The kernel can only load an ELF interpreter that is itself an ELF file of the native architecture, so the wrapper would itself have to be exactly that. However, this means that we've once again violated multiarch rules ...

If we have to install different native versions of the clever wrapper, we might just as well install the original native ELF interpreters -- that's neither better nor worse from a multiarch rules perspective.

Mit freundlichen Gruessen / Best Regards

Ulrich Weigand
Re: Multiarch paths and toolchain implications
On 02.08.2010 14:00, Ulrich Weigand wrote:
> Loïc Minier wrote:
> > and I think
> > Debian/Ubuntu toolchains cheat in some areas; for some directories
> > which would use $(version) we use $(major).$(minor) instead, and we
> > have a $(version) -> $(major).$(minor) symlink. This doesn't really
> > relate to the multiarch topic, but it reminds me that we ought to fix
> > the distro divergences so that it's easier to swap an upstream
> > toolchain with a Debian/Ubuntu one and vice-versa.
>
> Agreed. Not sure what this particular divergence helps ...

This is no "cheating". It makes the packages robust. Remember that some frontends are built from different source packages, and that a gnat-4.4 (4.4.4) still needs to be buildable with a gnat-4.4 (4.4.3) and an already updated gcc-4.4 (4.4.4). The directory cannot just be changed, because the name/version is still exposed with the -V option. There was some discussion about dropping this option altogether; then something like a version_alias corresponding to the target_alias could be introduced.

Of course Linaro could build all frontends from one source, but then the two following issues have to be addressed:

- gcj/libjava has to be built in ARM mode even if gcc defaults to Thumb mode.
- gnat has to be built from the Linaro sources (this may be a problem with the bootstrap compiler; didn't investigate yet).

Matthias
Re: [gnu-linaro-tools] Re: GCC PRE patch; apply or not?
On 02/08/10 16:03, Ulrich Weigand wrote:
> Latest status seems to be that the original patch no longer applies,
> but Joern did an updated (and somewhat modified) version. This seems
> to have triggered a more general discussion on how to correctly
> estimate the effect of adding PHI nodes on code size ...

Do you mean this one?

http://gcc.gnu.org/ml/gcc-patches/2009-03/msg00250.html

We have this patch applied also, so now there are two PRE patches to consider.

Andrew
Re: [gnu-linaro-tools] Re: GCC PRE patch; apply or not?
Andrew Stubbs wrote on 08/02/2010 06:47:36 PM:
> On 02/08/10 16:03, Ulrich Weigand wrote:
> > Latest status seems to be that the original patch no longer applies,
> > but Joern did an updated (and somewhat modified) version. This seems
> > to have triggered a more general discussion on how to correctly
> > estimate the effect of adding PHI nodes on code size ...
>
> Do you mean this one?
>
> http://gcc.gnu.org/ml/gcc-patches/2009-03/msg00250.html

Yes, that's the one.

Mit freundlichen Gruessen / Best Regards

Ulrich Weigand
Re: Multiarch paths and toolchain implications
On Mon, Aug 02, 2010, Ulrich Weigand wrote:
> So now we get back to my original question: what file type would that
> "clever wrapper" be? The kernel can only load an ELF interpreter that is
> itself an ELF file of the native architecture, so that wrapper would
> have to be that. However, this means that we've once again violated
> multiarch rules ...

Oh absolutely, it would be native to the current architecture of the kernel, and would be installed in a multiarch directory too.

> If we have to install different native versions of the clever wrapper,
> we might just as well install the original native ELF interpreters --
> that's neither better nor worse from a multiarch rules perspective.

Hmm right; doesn't give us anything more

--
Loïc Minier
Re: Multiarch paths and toolchain implications
Matthias Klose wrote on 08/02/2010 06:25:58 PM:
> this is no "cheating". It makes the packages robust. Remember that some
> frontends are built from different source packages and that a
> gnat-4.4 (4.4.4) still needs to be buildable with a gnat-4.4 (4.4.3)
> and an already updated gcc-4.4 (4.4.4).

So the problem is that you want to support a setup where a "gcc" driver installed from a 4.4.4 build can still call and run a "gnat1" binary installed from a 4.4.3 build? That will most likely work.

But it still seems a bit fragile to me; in general, there's no guarantee that if you intermix 4.4.4 and 4.4.3 components in that way, everything actually works (that's why they use different directories in the first place).

If you want to have separate packages, a cleaner way would appear to be to make them fully self-contained, e.g. have them each provide their own driver that can be called separately.

> Of course linaro could build all frontends from one source, but then the two
> following issues have to be addressed:
>
> - gcj/libjava has to be built in arm mode even if gcc defaults
>   to thumb mode.
>
> - build gnat from the linaro sources (this may be a problem with the
>   bootstrap compiler, didn't investigate yet).

These sound like problems that ought to be addressed in any case ...

Mit freundlichen Gruessen / Best Regards

Ulrich Weigand
Re: Multiarch paths and toolchain implications
Loïc Minier wrote:
> > If we have to install different native versions of the clever wrapper,
> > we might just as well install the original native ELF interpreters --
> > that's neither better nor worse from a multiarch rules perspective.
>
> Hmm right; doesn't give us anything more

OK, then we're all in agreement again :-)

Now this point is where the suggestion to use something like a bind mount on startup comes in. That way, there would be no violation of the multiarch rules, because /lib/ld.so.1 would not be part of any package, and in fact not even part of any file system on disk, but simply be present in the in-memory mount table.

Mit freundlichen Gruessen / Best Regards

Ulrich Weigand
Re: Multiarch paths and toolchain implications
On 02.08.2010 21:12, Ulrich Weigand wrote:
> Matthias Klose wrote on 08/02/2010 06:25:58 PM:
>
> > this is no "cheating". It makes the packages robust. Remember that some
> > frontends are built from different source packages and that a
> > gnat-4.4 (4.4.4) still needs to be buildable with a gnat-4.4 (4.4.3)
> > and an already updated gcc-4.4 (4.4.4).
>
> So the problem is that you want to support a setup where a "gcc" driver
> installed from a 4.4.4 build can still call and run a "gnat1" binary
> installed from a 4.4.3 build? That will most likely work.

No, gnat (4.4.3) still has to work if gcc (4.4.4) is already installed.

> But it still seems a bit fragile to me; in general, there's no guarantee
> that if you intermix 4.4.4 and 4.4.3 components in that way, everything
> actually works (that's why they use different directories in the first
> place).

Then I would need to change this internal path with every source change. I don't see this as fragile, as long as it is ensured that we ship the different frontends built from the same patchsets/sources. Note that further restrictions are made by package dependencies.

> If you want to have separate packages, a cleaner way would appear to be to
> make them fully self-contained, e.g. have them each provide their own
> driver that can be called separately.

I don't understand that. I don't have a problem with the driver, but with the compiler (gnat1). Having the packages self-contained creates another problem, in that you get file conflicts for files like collect2, various .o files etc.

Matthias
Re: [gnu-linaro-tools] Re: GCC PRE patch; apply or not?
On Mon, Aug 02, 2010 at 04:31:16PM +0200, Ulrich Weigand wrote: > Steven asked: "What do you folks think of this patch?" > and the only answer was by Richi: "I think it's reasonable." > > So I'm not sure why it didn't go in. Maybe we should just ping Steven? It's been a while, but I think discussion elsewhere (a bug log maybe?) indicated that Steven wasn't happy with it. -- Daniel Jacobowitz CodeSourcery ___ linaro-toolchain mailing list linaro-toolchain@lists.linaro.org http://lists.linaro.org/mailman/listinfo/linaro-toolchain
Re: [gnu-linaro-tools] Re: -fremove-local-statics optimization
Ulrich Weigand wrote: > Andrew Stubbs wrote: > >> Some discussion later, they decided it would be better to implement the >> optimization using inter-procedural dead store analysis: >>http://gcc.gnu.org/ml/gcc-patches/2008-07/msg01602.html > > I agree that this would be a much nicer way ... > Maybe this is something we should be working on then? > >> So basically we're left with this patch that does something we want, but >> not in a way that can go upstream. :( For avoidance of doubt, we (CodeSourcery) aren't attached to the current implementation. If someone wants us to go do a better one, that would be fun. But, on the other hand, the current implementation works, and delivers a significant improvement on EEMBC. > The one concern I have is that the patch introduces a user-visible > construct: the -fremove-local-statics command line option. If we > add this now, and users add the flag to their Makefiles, and then > it goes away later on, users' builds could break. I think it's reasonable for the option to be named that, no matter what implementation it has. At worst, we could always have a Linaro-specific hack to accept the option and ignore it; certainly, CodeSourcery will support this command-line option in our tools going forward "forever". Thanks, -- Mark Mitchell CodeSourcery m...@codesourcery.com (650) 331-3385 x713 ___ linaro-toolchain mailing list linaro-toolchain@lists.linaro.org http://lists.linaro.org/mailman/listinfo/linaro-toolchain
Re: Multiarch paths and toolchain implications
+++ Ulrich Weigand [2010-08-02 16:13 +0200]: > > We should post it on the Linaro wiki, probably. > > It's now on: > https://wiki.linaro.org/WorkingGroups/ToolChain/MultiarchPaths As this is actually a much wider issue than just Linaro, (and because it is better presented to other interested parties from the more neutral ground of Debian), I've moved it to http://wiki.debian.org/Multiarch/Spec (and deleted the original so we don't get two diverging versions). Hope that's not considered rude, Ulrich. (great bit of work capturing all that good stuff - thank you). Wookey -- Principal hats: Linaro, Emdebian, Wookware, Balloonboard, ARM http://wookware.org/ ___ linaro-toolchain mailing list linaro-toolchain@lists.linaro.org http://lists.linaro.org/mailman/listinfo/linaro-toolchain
Toolchain WG - 2010-08-02 minutes
Minutes from the Toolchain WG weekly meeting are available at: https://wiki.linaro.org/WorkingGroups/ToolChain/Meetings/2010-08-02 A copy, along with activity reports, follow. -- Michael = Monday 2nd August 2010 = == This month's meetings == <> == Attendees == ||Name ||Email ||IRC Nick || || Andrew Stubbs || andrew.stu...@linaro.org || ams || || Julian Brown || jul...@codesourcery.com || jbrown || || Michael Hope || michael.h...@linaro.org || michaelh || || Richard Earnshaw || richard.earns...@arm.com || rearnshaw || || Ulrich Weigand || ulrich.weig...@linaro.org || uweigand || || Yao Qi || yao...@linaro.org || yao || == Agenda == * Review action items from last meeting * New public call-in number: code 263 441 7169 * Hardware availability * Next milestone * Outstanding changes that could be merged * GCC testsuite failures and what to do about them * http://ex.seabright.co.nz/helpers/testlog/gcc-linaro-4.4-93543/logs/armv7l-maverick-cbuild1-pavo1/gcc-test.txt ||Blueprint ||Assignee || || [[https://blueprints.launchpad.net/gcc-linaro/+spec/initial-4.4|Initial delivery of Linaro GCC 4.4]] || ams || || [[https://blueprints.launchpad.net/ubuntu/+spec/arm-m-cross-compilers|Cross Compiler Packages]] || hrw || == Action Items from this Meeting == * ACTION: Michael to organise and spread next Monday's call in number * ACTION: Richard to ask the GCC developers on IRC what the status of 4.4.5 is * ACTION: [[LP:500524]] Ulrich to backport to Linaro 4.4 * ACTION: Test failures: Michael to build FSF 4.4.4 as a baseline * ACTION: Test failures: Michael to compare FSF and Linaro GCC failures * ACTION: Test failures: All Linaro GCC failures to be ticketed * ACTION: LTO check on ARM: Michael to reproduce * ACTION: Ulrich to write up known GDB work * ACTION: [[LP:598147]]: Michael to reproduce the test failures * ACTION: [[LP:398403]]: Andrew to reproduce on his IGEP board (has more RAM) * ACTION: Michael to make sure merge requests are present for work that has been done == Action 
Items from Previous Meeting == * ACTION: Michael to organise and spread next Monday's call in number * DONE: Michael to re-check release dates and suggest something * Set to the second Tuesday of each month. Lines up with other Linaro releases quite well. * DONE: Michael to rename the intermediate milestone * Set to 2010.08-0. Created 2010.09-0 for the next. * DONE: Loic to find the Chrome OS contact name for the records * DONE: Ulrich to confirm PowerPC approach with Matthias * Confirmed that reverting the PowerPC changes is OK == Minutes == * Reviewed the action items from the previous meeting * Reviewed the high-priority tickets * Merging 4.4.5 * Michael is unhappy with merging for next week's release unless 4.4.5 comes out in the next few days * Richard went through the announcements. Last was for 4.4.4 on April 30. * 4.4.5 is due around now * Expect that the GCC team are focused on 4.5.1 instead * ACTION: Richard to ask the GCC developers on IRC what the status of 4.4.5 is * [[LP:500524]] * Ulrich has done upstream in 4.4.5 * A small change, unlikely to cause correctness regressions, may cause performance regressions * ACTION: Ulrich to backport to Linaro 4.4 * GCC test suite * Michael asked what level of failures are acceptable * Richard said that the FSF GCC has never been completely clean on ARM * Failures may exist upstream * In the future, should investigate all and fix * For now, investigate regressions * ACTION: Michael to build FSF 4.4.4 as a baseline * ACTION: Michael to compare FSF and Linaro GCC failures * ACTION: All Linaro GCC failures to be ticketed * Regressions marked as 'test regression' with Medium severity * Existing failures to be marked as 'test failure' with Low severity * Wiki page [[Toolchain/UbuntuRegressions]] has triage from earlier * Ulrich noted that fixing [[LP:500524]] in Linaro will overlap with Ubuntu toolchain maintainer's current 4.4.4+svn patch * Michael said that it is the responsibility of the Ubuntu toolchain maintainer to
manage that * Linaro will notify them of the change * Discussed LTO * Michael sees LTO tests fail on gcc-linaro-4.5 on ARM with --enable-languages=c,c++ * Assumed it was binutils related * Ulrich: shouldn't be as GCC does most of the work * ACTION: Michael to reproduce * Future work * Focused on GCC at the moment * Need medium term plan * Focused on the core toolchain * GCC * Binutils * GDB * eglibc * New hire specialises in qemu and will focus on that * Multiarch is written up, unsure if we've agreed to do the work * ACTION: Ulrich to write up known GDB work * Medium severity tickets * [[LP:598147]] * May be three issues in one ticket * Issues due to hardening, now fixed * Stack crash due to invalid usage * Extra test failures * ACTION: Michael to reproduce
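The "compare FSF and Linaro GCC failures" action above essentially means diffing the FAIL lines of two DejaGnu `.sum` files. A minimal sketch, using fabricated result files (GCC's `contrib/compare_tests` script does a more thorough job on real logs):

```shell
#!/bin/sh
# Sketch: list regressions, i.e. tests failing in the candidate run
# but not in the baseline.  Both .sum files here are made-up examples.
printf 'PASS: gcc.dg/a.c\nFAIL: gcc.dg/b.c\n' > baseline.sum
printf 'FAIL: gcc.dg/a.c\nFAIL: gcc.dg/b.c\n' > candidate.sum

grep '^FAIL: ' baseline.sum  | sort > baseline.fails
grep '^FAIL: ' candidate.sum | sort > candidate.fails

# comm -13 keeps lines unique to the second file: the new failures.
comm -13 baseline.fails candidate.fails
```

Failures present in both files would then be ticketed as 'test failure' (Low), while lines unique to the candidate run are the 'test regression' (Medium) candidates.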
Re: Multiarch paths and toolchain implications
On Mon, Aug 02, 2010, Wookey wrote: > > https://wiki.linaro.org/WorkingGroups/ToolChain/MultiarchPaths > http://wiki.debian.org/Multiarch/Spec > (and deleted the original so we don't get two diverging versions). I've added a redirect so that people reading the list archives can follow the link -- Loïc Minier ___ linaro-toolchain mailing list linaro-toolchain@lists.linaro.org http://lists.linaro.org/mailman/listinfo/linaro-toolchain