Dear Richard,

On 12.08.25 08:46, Richard Biener wrote:
> On Mon, 11 Aug 2025, Frank Scheiner wrote:
>> On 11.08.25 12:59, Richard Biener wrote:
>>> On Mon, 11 Aug 2025, Sam James wrote:
>>>> Frank Scheiner <frank.schei...@web.de> writes:
>>>>> On 11.08.25 09:49, Richard Biener wrote:
>>>>>> On Sun, 10 Aug 2025, Jeff Law wrote:
>>>>>>> On 8/10/25 3:24 PM, Andrew Pinski wrote:
>>>>>>>> I just looked and the last testsuite results for ia64 was back
>>>>>>>> in June 2024. There has been no movement since. Can we again
>>>>>>>> make ia64 obsolete?
>>>>>
>>>>> Is there any current issue that needs tackling with ia64? Or does
>>>>> it create extra work for others?
>>>>
>>>> * Incomplete C23 support (needs BitInt ABI defined and implemented,
>>>>   see PR117585)
>>>> * I think it's the only target requiring (?) selective scheduling
>>>>   and selective scheduling isn't in a great state (PR85099)
>>>> * A bunch of random ICEs and wrong-code bugs specific to ia64 that
>>>>   nobody has debugged or fixed (e.g. PR87281, PR105215, PR105445)
>>>>
>>>> There isn't any maintenance being done for the target, monitoring
>>>> of (new) bugs, etc.
>>>
>>> The most important thing is that there's no listed maintainer of the
>>> target.
>>
>> Well, we can't directly control that. But yeah, I had hoped for more,
>> also from me. Making time for that is an issue though on all sides.
>> But also it's not like nothing happened on our side. It's built, it's
>> used for kernel and userland builds and it works for most things
>> tried. Visibility of what has been done is of course limited. But
>> will it really make a difference if I spam the testresults list with
>> successful cross-compiler and kernel builds?
>
> Well, it's an indication the port is "alive".
Yeah, ok, I guess nobody is really subscribed to that list but rather
uses it for reference only.

>>>> I suspect that more issues would be found if someone was running
>>>> the testsuite regularly (and doing bootstraps w/ full checking
>>>> enabled) and looking at new failures.
>>>>
>>>>> (1) While I haven't produced any new testsuite results for ia64
>>>>> since last year*, I'm building and using an ia64 cross-compiler
>>>>> from GCC snapshots every week (most of the time) to build Linux
>>>>> mainline RCs or pre-RCs for testing on real machines and Ski
>>>>> during merge windows. See for example [1].
>>>>
>>>> Not having support in upstream glibc or the kernel also means
>>>> nobody can really test it.
>>
>> I don't understand that point. We have - granted, out-of-tree -
>> support for both Linux and glibc for ia64. And this is there since
>> 6.7 ([4]) and 2.39 ([5]) respectively, and taking this history into
>> account, the prospect is that it is not going away any time soon.
>> That no distribution is using it but T2 and EPIC Slack, that's indeed
>> sad. But if we can use it, it shouldn't be a problem for others, too
>> - if they want to test it. But I assume the __want__ is the problem.
>>
>> [4]: https://github.com/johnny-mnemonic/linux-ia64/
>>
>> [5]: https://github.com/linux-ia64/glibc-ia64/
>>
>>>> I think the requirements for a target that isn't freestanding and
>>>> is for a specific libc, yet that libc doesn't support it (and
>>>> there's no sign of such support (returning)) are stronger than an
>>>> upcoming port or one where it's freestanding.
>>>
>>> That's of course an issue, though a ia64-elf freestanding target
>>> could be tested with a cross compiler and using a simulator (I'm not
>>> sure how the state of simulators of IA64 is).
>>
>> Ski is actively maintained and gradually improved (see [6] and [7]).
>>
>> [6]: https://github.com/trofi/ski
>>
>> [7]: https://github.com/linux-ia64/ski
>>
>> Ski support through the hp-sim target is back in (out-of-tree) Linux
>> mainline since early 2024, see [8].
>>
>> [8]: http://epic-linux.org/#!/machines/hp-sim/index.md
>>
>> Performance is rather limited and userland emulation is missing too
>> many syscalls to be of use for more complex stuff. Networking is
>> possible in system mode (in the future also with tun/tap devices
>> thanks to old patches from Mikael Pettersson I'm currently
>> integrating), so it's possible to inject compiled programs at runtime
>> via NFS, for example.
>>
>>>>> [1]: http://epic-linux.org/#!testing-effort/log.md
>>>>>
>>>>> *) Frankly, I'm missing the time to also handle this in addition
>>>>> to maintaining Linux and glibc for ia64. Also, we're still missing
>>>>> a constantly running ia64 system to perform those ca. 10+ hour
>>>>> runs for building and running the testsuite regularly.
>>>>>
>>>>> (2) I've also set up an autobuilder which every night cross-builds
>>>>> the latest available glibc, binutils and GCC snapshots, see [2].
>>>>>
>>>>> [2]:
>>>>> https://github.com/johnny-mnemonic/toolchain-autobuilds/actions/runs/16870641887/job/47784774192
>>>>>
>>>>> (3) I'm natively building a selection of Slackware packages for my
>>>>> unofficial ia64 port (EPIC Slack ([3])) regularly. Though this
>>>>> uses the versions Slackware uses in -current, so 15.1 ATM.
>>>>
>>>> Are you running testsuites for packages and making sure there's not
>>>> wrong-code issues arising?
>>
>> I consider building with an always up-to-date environment both native
>> (EPIC Slack) and cross (T2) test enough for the state of the ia64
>> port with its current user base. If interest in the ia64 port grows
>> this can be professionalized more, sure. We don't have the resources
>> to target maximum goals. And that is also not needed to keep a port
>> alive.
>>
>> Nobody will complain to GCC devs if something doesn't work for ia64.
>
> Well. That's not our standards of quality. Or it shouldn't be.

That's fine and much appreciated by users of GCC (including me). I'm
sure that if we had the backing and countless systems running non-stop
free of charge, we could do some really great things, but we have to
work with what we have available (incl. manpower).

But to be sure, you mean the GCC testsuite(s) here, not the testsuites
of each and every userland part that offers one? AFAIK even my upstream
distro (Slackware) doesn't do the latter by default. Also, running the
testsuites of userland parts would primarily be a quality measure of the
respective distro, IMHO.

>>>>> [3]: http://epic-slack.org/
>>>>>
>>>>> If there were any hard issues during this continuous testing I'd
>>>>> have created a bug report to handle that.
>>>>
>>>> I see http://epic-slack.org/#!index.md#2025-07-11 mentions an ICE
>>>> with cryptsetup. Is there a bug for that? If one is filed, who is
>>>> going to take care of it?
>>
>> This specific issue is new and happens both natively (even with older
>> GCC versions) and cross. Personally I don't think cryptsetup is vital
>> enough for ia64, but when this is needed by users I'm sure they will
>> find a way to fix that issue.
>>
>> But exactly, why should we file a bug for ia64 if we can't provide
>> the solution, too.
>
> Bug reports also exist to get visibility on issues.

Well, as long as these are not used as an argument against ia64 in GCC,
that's fine. :-)

>>>>>>> Yes, please. Better to get it in place now so that nobody's
>>>>>>> surprised come next spring.
>>>>>>
>>>>>> Also let's discuss better documentation (or changed) requirements
>>>>>> on what we expect from targets to stay in GCC, be primary or
>>>>>> secondary targets.
>>>>>
>>>>> Yes, I'd really appreciate that. Because using and testing it for
>>>>> Linux builds seems not enough?
>>>>
>>>> (See my above remarks wrt requirements; do think there's more to be
>>>> discussed here but don't want to repeat the same parts here too.)
>>>
>>> I'd like to see build bots for each target. At least those we
>>> consider primary and secondary.
>>
>> I'd like those, too, but the reality is, we don't have them.
>> At least not for native building. For cross-bootstrap-builds see [2].
>>
>> But also ia64 is neither primary nor secondary, or?
>
> Right.

So until now, no constraints were actually violated.

>> So what would be adequate for a tertiary target?
>
> There are currently no official constraints for "other targets"
> (other than primary or secondary). Which is why I suggested to
> improve documentation and think about what's reasonable.

Ok, so [10] mentions the following:

```
Our release criteria for the secondary platforms are:

    The compiler bootstraps successfully, and the C++ runtime library
    builds.

    The DejaGNU testsuite has been run, and a substantial majority of
    the tests pass.
```

...for secondary platforms, and further above it limits this to "C,
C++, and C++ runtime library".
[10]: https://www.gnu.org/software/gcc/gcc-15/criteria.html

I believe ia64 will fulfill these release criteria, though the last test
results submitted ([11]) were from a pre-GCC-15.1 release.

[11]: https://gcc.gnu.org/pipermail/gcc-testresults/2024-June/817268.html

I know that GCC 15.1 bootstraps on EPIC Slack (for C, C++ and Fortran; I
only build these) and I'm currently bootstrapping 15.2, because
Slackware switched to this version on Saturday. Testsuite results are
missing and I didn't keep the build directory for 15.1. But the
testsuite results in [11] were made on T2 anyhow, so for comparison it
would be better to reuse that environment for the native bootstrap
builds and testsuite runs for 15.1 and 15.2.

Can I limit these builds and testsuite runs to C and C++ as per the
release criteria to speed things up on real hardware? (A minimal sketch
of what I have in mind is in the P.S. below.) I don't know exactly how
much time this would save, but I seem to remember that the Fortran build
and testsuite take quite some extra time, ObjC not so much IIRC.

>>> I think non-bootstrap builds are OK,
>>> but running the testsuite is important, so this might involve
>>> building a runtime. Testing with a simulator should be OK.
>>
>> I wouldn't dare to run the testsuite in Ski, as performance is
>> lousy ([9]). And if it already takes more than 5 hours just for the
>> testsuite on an rx2800 i2 with 1 x 9320 and 8 hardware threads, I'd
>> expect a full run to take weeks inside Ski. In [9] I found that a
>> single Madison 1.3 GHz 3M is about 21 times faster than Ski (running
>> on a 4 GHz Haswell) for a simple package build of the dash shell.
>
> Hmm. So I guess it's the lack of usermode emulation, with qemu
> if you have to do system emulation that's also 10x slower than
> with user emulation. The only possible "positive" is that x86
> and arm hosts scale to 100s of CPU cores, so massive parallelization
> of testing can still make a difference.

Yeah. Usermode emulation could at least scale to the number of hardware
threads on the host. But I have no idea how much work is needed to
implement the missing syscalls for Ski. Cross-bootstrap and executing
only the testsuite programs on real hardware could be closer - if
possible.

>> [9]: http://epic-linux.org/#!index.md#Ski_-_the_undiscovered_country
>>
>> But maybe there could be a solution found that involves both cross
>> compiling and only executes test programs on real ia64 hardware to
>> speed things up considerably. I assume this is possible, and I
>> believe Rene has looked into that. I just don't know where we are
>> with that right now. But it could be a solution to provide more
>> testing.
>
> I think for IA-64 a problem that you'll face sooner or later is that
> hardware will break down and it's impossible to get replacement
> parts or repairs. So robust (and scalable) emulation is important.
> But then, given the architecture is truly dead, I wonder whether,
> at such point, there's a point in keeping it alive? How far are
> we from this point?

Can't say for sure. But from my experience with my entire collection of
vintage hardware across architectures, and practically zero breakdowns
from age, I conclude there's much more life left in these machines than
anybody would expect from them. E.g. of my ia64 gear only the rx4640 is
sometimes flaky, which I attribute to one or multiple of its 16 (or 32 -
I don't remember currently) memory modules.
But when those boot errors happen, they are usually cleared by one or
more reboots, so there is not much reason to invest the time to go
through all of the modules with memtest in some x86 machine.

Apart from that, looking at the number of ia64 systems that are
available to me (see [12]) and to others in the community, it'll take a
long time until all of them are used up (incl. the extra hardware in
stock, like PSUs). And even then it will be possible to "operate" some
ia64 blade and continue with that for years. Especially when newer
systems like i4 or i6 become available, with the latter still listed in
HPE's webshop ([13]).

[12]: http://epic-linux.org/#!/machines/index.md

[13]: https://buy.hpe.com/de/de/compute/hpe-blade-servers/c/c001018

Cheers,
Frank
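
P.S. To make the "limit to C and C++" question above a bit more
concrete, here is roughly what I have in mind. This is only a minimal
sketch: the prefix, version and -j value are placeholders, and the
"ia64-remote" board name is hypothetical - such a DejaGnu board file
would have to be written first.

```
# Reduced native bootstrap and testsuite run, limited to C and C++ as
# per the secondary release criteria (paths/versions are placeholders):
../gcc-15.2.0/configure --prefix=/opt/gcc-15.2 --enable-languages=c,c++
make -j8 bootstrap
make -k check

# For the cross-compile-and-run-on-real-hardware idea, DejaGnu can be
# pointed at a remote machine via a board file; "ia64-remote" is a
# hypothetical board name, not something that exists today:
make -k check RUNTESTFLAGS="--target_board=ia64-remote"
```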