Re: Patching the GCC build system to build MPICH and OpenCoarrays
On Sun, Apr 8, 2018 at 8:56 PM, Damian Rouson wrote:
> On April 4, 2018 at 1:12:25 AM, Richard Biener (richard.guent...@gmail.com) wrote:
>
>> In that case user programs compiled with -fcoarray=lib are, but gfortran
>> or libgfortran itself is not linked against OpenCoarrays?
>
> Yes. OpenCoarrays produces the parallel runtime library libcaf_mpi using MPI.
>
>> So if we consider OpenCoarrays part of the gfortran runtime then
>> it makes sense to build it in-tree...
>
> Yes. Many gfortran users and developers will be glad to see this, both
> because it enables the Fortran 2008/2018 parallel features and because it
> facilitates building related tests.
>
>> ... but building an mpi library in-tree might not?
>
> OpenCoarrays requires an underlying parallel programming model. MPI is the
> default model because it provides the broadest coverage of the required
> features. OpenCoarrays also offers alternatives to MPI, but those are
> experimental and support a more restricted subset of Fortran 2008/2018.
>
>> I'm still lacking an idea of what it takes to enable coarrays with gfortran
>
> We will mimic the OpenCoarrays build system, which installs "caf" and
> "cafrun" scripts analogous to MPI's "mpifort" and "mpirun," respectively.
> These are used to compile and launch parallel programs:
>
> $ cat hello.f90
> print *,"Hello from image ",this_image()," of ",num_images()
> end
> $ caf hello.f90
> $ cafrun -n 4 ./a.out
> Hello from image 2 of 4
> Hello from image 1 of 4
> Hello from image 4 of 4
> Hello from image 3 of 4

I see. So it's more like OpenCoarrays is in control of everything rather
than GCC...

>> since install.texi doesn't talk about this at all, neither in the
>> prerequisites section nor in a fortran/coarray specific section.
>
> If there are guidelines for modifying install.texi and invoke.texi, please
> send a link. It appears install.texi is written in raw TeX. I use LaTeX
> regularly but haven't touched TeX in decades. I'll give it a shot. I also
> don't understand how those files are used.

There are no guidelines - simply follow the surrounding code. Note that
this is Texinfo, not TeX. We generate man pages, HTML documentation and
PDF docs from these sources. See for example https://gcc.gnu.org/install/

>> In fact the only thing I find is in invoke.texi, which says
>>
>> @item -fcoarray=@var{}
>> @opindex @code{fcoarray}
>> ...
>> @item @samp{lib}
>> Library-based coarray parallelization; a suitable GNU Fortran coarray
>> library needs to be linked.
>> @end table
>>
>> which suggests linking to the coarray library doesn't happen automatically,
>> but the user is supposed to link a suitable library?
>
> caf invokes gfortran and links against libcaf_mpi and the MPI libraries.
> The OpenCoarrays build system customizes caf to ensure a consistent tool
> chain (e.g., ensuring the employed MPI was built by the employed compiler).
> This allows for reusing one gfortran installation with multiple parallel
> programming models.
>
>> I'd love to "enable" coarray support for openSUSE but as said I have a hard
>> time assessing what I'd need to do here.
>
> Until we figure out how to get the GCC build system to build MPICH and
> OpenCoarrays, enabling coarray support requires downloading and building
> MPICH and OpenCoarrays separately. Please see the instructions for GCC
> developers:
> https://github.com/sourceryinstitute/opencoarrays/blob/master/INSTALL

Thanks - I expected to find something in the GCC manual or on the GCC
wiki ;) I'll have a look when I have some spare cycles.

> Thanks for your feedback. I'm hopeful that your advice will be helpful for
> Daniel in figuring out how to modify the GCC build system.

As said, I'm still not convinced building those libraries in-tree is the
best way forward. I won't stand in the way of making that work, though.

Richard.

> Damian
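[Editorial note: a minimal sketch of what the "caf" wrapper discussed above
effectively does, based only on the description in this thread -- compile
with -fcoarray=lib and link libcaf_mpi via the MPI compiler wrapper. The
exact command composition is an assumption, not the real OpenCoarrays
script; this sketch only prints the command it would run.]

```shell
# Hypothetical sketch of the "caf" wrapper (illustrative, not the real
# script): compose the compile/link line for a coarray program.
caf_sketch() {
  src="$1"
  # mpifort already knows the MPI link flags; add the coarray runtime.
  echo "mpifort -fcoarray=lib ${src} -lcaf_mpi"
}

caf_sketch hello.f90
```

The real script additionally pins the toolchain so that the MPI library and
the compiler are guaranteed to match.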
Re: GCC changes for Fedora + riscv64
On 04/08/2018 08:22 AM, Jeff Law wrote:
> On 03/31/2018 12:27 PM, Richard W.M. Jones wrote:
>> I'd like to talk about what changes we (may) need to GCC in Fedora to
>> get it working on 64-bit RISC-V, and also (more importantly) to ask
>> your advice on things we don't fully understand yet. However, I don't
>> know even what venue you'd prefer to discuss this in.

A discussion here is fine with me. I know of a few issues.

I have a work-in-progress --with-multilib-list patch in PR 84797, but it
isn't quite right yet, and needs to work more like the patch in PR 85142,
which isn't OK to check in.

There is a problem with atomics. We only have builtins for the ones that
can be implemented with a single instruction. Adding -latomic
unconditionally might fix it, but won't work for gcc builds and the gcc
testsuite unless we also add paths pointing into the libatomic build dir.
I'm also concerned that this might cause build problems, if we end up
trying to link with libatomic before we have built it. The simplest
solution might be to just add expanders for all of the missing atomics,
even if they require multiple instructions, just like how all of the
mainstream Linux targets currently work.

There is a problem with the linker not searching the right set of dirs by
default. That is more a binutils problem than a gcc problem, but the
linker might need some help from gcc to fix it, as the linker doesn't
normally take -march and -mabi options.

There is a problem with libffi, which has RISC-V support upstream, but not
in the FSF GCC copy. This is needed for Go language support. There was
also a dispute about the Go architecture naming, as to whether it should
be riscv64 or riscv, with one person doing a port choosing the former and
another person doing another port choosing the latter.

Those are all of the Linux-specific ones I can remember at the moment. I
might have missed some.

Jim
Which compiler version should we use to compile Cadence Compilers
Hi everyone,

My goal is to discuss whether we should use the compiler available on the
build machine (for example, in RHEL 6.5 it is GCC v4.4.7) or the previous
version of Cadence GCC (for example, v4.8.3).

Currently, we use the latest officially supported Cadence GCC version.

But since we are using the configuration parameter "--enable-bootstrap",
whose steps are described below, I think we could use the compiler
available on the build machine:

* Build the tools necessary to build the compiler.
* Perform a 3-stage bootstrap of the compiler. This includes building the
  target tools for use by the compiler, such as binutils (bfd, binutils,
  gas, gprof, ld, and opcodes), three times as well, if they have been
  individually linked or moved into the top-level GCC source tree before
  configuring.
* Perform a comparison test of the stage2 and stage3 compilers.
* Build runtime libraries using the stage3 compiler from the previous step.

I want to hear more opinions about this topic before bringing it to the
next compilers meeting.

Regards,
--
Rogerio
Re: Which compiler version should we use to compile Cadence Compilers
On 9 April 2018 at 20:29, Rogerio de Souza Moraes wrote:
> Hi everyone,
>
> My goal is to discuss whether we should use the compiler available on the
> build machine (for example, in RHEL 6.5 it is GCC v4.4.7) or the previous
> version of Cadence GCC (for example, v4.8.3).
>
> Currently, we use the latest officially supported Cadence GCC version.
>
> But since we are using the configuration parameter "--enable-bootstrap",
> whose steps are described below, I think we could use the compiler
> available on the build machine.

When you do a bootstrap, the final result is built by the new GCC. It
doesn't matter which version you start with; the final result will be the
same.
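[Editorial note: the point above can be shown with a toy model (purely
illustrative; this is not how GCC's Makefiles work). Treat each compiler
binary as a function of the source version and the version of the compiler
that built it: the seed compiler only influences stage1, while stage2 and
stage3 are both built by the new compiler, so they must compare equal.]

```shell
# Toy model of the 3-stage bootstrap fixed point.
compile() { echo "gcc-new(built-by-$1)"; }   # $1 = builder version

stage1=$(compile "gcc-4.4.7-seed")  # any working seed compiler will do
stage2=$(compile "gcc-new")         # stage1 *is* the new compiler now
stage3=$(compile "gcc-new")         # stage2 is the new compiler too

# This mirrors the stage2/stage3 comparison the real bootstrap performs.
[ "$stage2" = "$stage3" ] && echo "stage2/stage3 comparison passes"
```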
Re: GCC changes for Fedora + riscv64
On 04/09/2018 12:04 PM, Jim Wilson wrote:
> On 04/08/2018 08:22 AM, Jeff Law wrote:
>> On 03/31/2018 12:27 PM, Richard W.M. Jones wrote:
>>> I'd like to talk about what changes we (may) need to GCC in Fedora to
>>> get it working on 64-bit RISC-V, and also (more importantly) to ask
>>> your advice on things we don't fully understand yet. However, I don't
>>> know even what venue you'd prefer to discuss this in.
>
> A discussion here is fine with me. I know of a few issues.
>
> I have a work-in-progress --with-multilib-list patch in PR 84797, but it
> isn't quite right yet, and needs to work more like the patch in PR 85142,
> which isn't OK to check in.
>
> There is a problem with atomics. We only have builtins for the ones that
> can be implemented with a single instruction. Adding -latomic
> unconditionally might fix it, but won't work for gcc builds and the gcc
> testsuite unless we also add paths pointing into the libatomic build dir.
> I'm also concerned that this might cause build problems, if we end up
> trying to link with libatomic before we have built it. The simplest
> solution might be to just add expanders for all of the missing atomics,
> even if they require multiple instructions, just like how all of the
> mainstream Linux targets currently work.
>
> There is a problem with the linker not searching the right set of dirs by
> default. That is more a binutils problem than a gcc problem, but the
> linker might need some help from gcc to fix it, as the linker doesn't
> normally take -march and -mabi options.
>
> There is a problem with libffi, which has RISC-V support upstream, but
> not in the FSF GCC copy. This is needed for Go language support. There
> was also a dispute about the Go architecture naming, as to whether it
> should be riscv64 or riscv, with one person doing a port choosing the
> former and another person doing another port choosing the latter.
>
> Those are all of the Linux-specific ones I can remember at the moment. I
> might have missed some.

Are you guys using qemu user mode emulation for testing purposes? When
I've set up a suitable riscv64 rootfs and try to do anything nontrivial in
it with qemu user mode emulation, it immediately complains that my kernel
is too old -- which is quite odd, as I've got a dozen or so of these kinds
of environments set up for testing which don't issue that complaint.

I'd like to avoid full system emulation, just from a cost standpoint.

Jeff
Re: GCC changes for Fedora + riscv64
On Mon, Apr 9, 2018 at 6:37 PM, Jeff Law wrote:
> On 04/09/2018 12:04 PM, Jim Wilson wrote:
>> On 04/08/2018 08:22 AM, Jeff Law wrote:
>>> On 03/31/2018 12:27 PM, Richard W.M. Jones wrote:
>>>> I'd like to talk about what changes we (may) need to GCC in Fedora
>>>> to get it working on 64-bit RISC-V [snip]
>>
>> A discussion here is fine with me. I know of a few issues.
>>
>> [snip -- the multilib, atomics, linker, and libffi/Go naming issues]
>
> Are you guys using qemu user mode emulation for testing purposes? When
> I've set up a suitable riscv64 rootfs and try to do anything nontrivial
> in it with qemu user mode emulation, it immediately complains that my
> kernel is too old -- which is quite odd, as I've got a dozen or so of
> these kinds of environments set up for testing which don't issue that
> complaint.
>
> I'd like to avoid full system emulation, just from a cost standpoint.

That error is produced by glibc if uname() returns a kernel version older
than the oldest with arch support, which for riscv is 4.15.0. qemu-user
does not fake uname(), so qemu-user for riscv will only work if the host
is running 4.15.0 or newer.

-s
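[Editorial note: the version gate Stef describes can be mimicked with a
small shell helper, shown here for illustration only -- glibc performs this
check in C at process startup. The helper relies on GNU `sort -V` for
version ordering; 4.15.0 is the riscv64 minimum mentioned above.]

```shell
# Succeeds if kernel version $1 is at least the required minimum $2,
# mirroring glibc's "kernel too old" check.
kernel_at_least() {
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

kernel_at_least "4.15.0" "4.15.0" && echo "ok"
kernel_at_least "4.10.3" "4.15.0" || echo "FATAL: kernel too old"
```

This is why a 4.10-ish host fails under qemu-user: the guest glibc sees the
host's real uname() and refuses to start.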
Re: GCC changes for Fedora + riscv64
On 04/09/2018 04:47 PM, Stef O'Rear wrote:
> On Mon, Apr 9, 2018 at 6:37 PM, Jeff Law wrote:
>> [snip -- qemu user mode emulation complains "kernel too old" on the
>> test hosts; I'd like to avoid full system emulation]
>
> That error is produced by glibc if uname() returns a kernel version
> older than the oldest with arch support, which for riscv is 4.15.0.
> qemu-user does not fake uname(), so qemu-user for riscv will only work
> if the host is running 4.15.0 or newer.

Hmm, makes sense, since the hosts are running 4.10-ish.

I'll hack around it somehow. I'd really like to add riscv64 to the tester
and don't want to mess around with kernel updates.

Thanks,
Jeff
Re: GCC changes for Fedora + riscv64
Hi Jeff:

You can use the -r option (e.g. ./qemu-riscv64 -r 4.15) or set the
QEMU_UNAME environment variable to change the uname reported by qemu.

On Tue, Apr 10, 2018 at 6:50 AM, Jeff Law wrote:
> On 04/09/2018 04:47 PM, Stef O'Rear wrote:
>> [snip]
>>
>> That error is produced by glibc if uname() returns a kernel version
>> older than the oldest with arch support, which for riscv is 4.15.0.
>> qemu-user does not fake uname(), so qemu-user for riscv will only work
>> if the host is running 4.15.0 or newer.
>
> Hmm, makes sense, since the hosts are running 4.10-ish.
>
> I'll hack around it somehow. I'd really like to add riscv64 to the
> tester and don't want to mess around with kernel updates.
>
> Thanks,
> jeff
Re: GCC changes for Fedora + riscv64
On Mon, Apr 09, 2018 at 04:37:30PM -0600, Jeff Law wrote:
> Are you guys using qemu user mode emulation for testing purposes? When
> I've set up a suitable riscv64 rootfs and try to do anything nontrivial
> in it with qemu user mode emulation, it immediately complains that my
> kernel is too old -- which is quite odd, as I've got a dozen or so of
> these kinds of environments set up for testing which don't issue that
> complaint.
>
> I'd like to avoid full system emulation, just from a cost standpoint.

We're using full system emulation with upstream qemu. It's surprisingly
simple to set up, it supports multiple virtual CPUs up to '-smp 8', and
provided you have fast enough host hardware, it isn't too bad for
development. Of course, compiling GCC with bootstrapping is still going to
take a long time.

For instructions, see the readme here:
https://fedorapeople.org/groups/risc-v/disk-images/

Rich.

--
Richard Jones, Virtualization Group, Red Hat
http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
libguestfs lets you edit virtual machines. Supports shell scripting,
bindings from many languages. http://libguestfs.org