Hi Maxim,

Thank you very much for issuing these runs!
I was able to reproduce the failed tests, but the failures seem to be unrelated to my patch. If I rebase my patch on today's ToT, the tests pass.

Thank you again for your help, it's really much appreciated.

Cheers,
Ricardo

On 3/18/25 11:58, Maxim Kuvyrkov wrote:
> Hi Ricardo,
>
> Test results for the pull request are at [1]. Details on the failed tests are at [2].
>
> [1] https://ci.linaro.org/job/tcwg_flang_test--main-aarch64-Ofast-sve_vls-lto-lld-build/1756/artifact/artifacts/notify/mail-body.txt/*view*/
> [2] https://ci.linaro.org/job/tcwg_flang_test--main-aarch64-Ofast-sve_vls-lto-lld-build/1756/artifact/artifacts/00-sumfiles/llvm-test-suite.1.json.xz/*view*/
>
> --
> Maxim Kuvyrkov
> https://www.linaro.org/
>
>> On Mar 15, 2025, at 04:39, Ricardo Jesus <r...@nvidia.com> wrote:
>>
>> Hi Maxim,
>>
>> Thank you very much for your offer, and sorry for the delay getting back to you. I was waiting for the PR to be approved.
>>
>> The issue was a bug in the LLVM sources which my original PR exposed. It should be fixed now.
>>
>> The PR is here: https://github.com/llvm/llvm-project/pull/130625
>>
>> If you could give it a try and let me know if everything works as expected, that would be much appreciated. :)
>>
>> Thanks,
>> Ricardo
>>
>> On 3/10/25 23:05, Maxim Kuvyrkov wrote:
>>> Hi Ricardo,
>>>
>>> I can test your PR. Please send me the link.
>>>
>>> Thanks,
>>>
>>> --
>>> Maxim Kuvyrkov
>>> https://www.linaro.org/
>>>
>>>> On Mar 10, 2025, at 21:50, Ricardo Jesus <r...@nvidia.com> wrote:
>>>>
>>>> Hi Maxim,
>>>>
>>>> Thank you very much for confirming!
>>>>
>>>> I think I was able to identify the problem. I'm planning to open a PR to reapply the patch with the fix today.
>>>>
>>>> I've manually inspected some of the tests that were failing, and the new patch seems to fix them. However, I don't have access to hardware with 256-bit SVE, so I'm not able to test the whole suite.
>>>>
>>>> Is it possible to launch one-off runs of the bot against specific PRs to confirm that the whole suite passes before merging?
>>>>
>>>> Thanks again,
>>>> Ricardo
>>>>
>>>> On 3/9/25 23:00, Maxim Kuvyrkov wrote:
>>>>> Hi Ricardo,
>>>>>
>>>>> Thanks for looking into this!
>>>>>
>>>>> The tests are run on AWS Graviton3 hardware (m7g AWS instance), which seems to match -msve-vector-bits=256.
>>>>>
>>>>> Let me know if you need any help in reproducing the failures.
>>>>>
>>>>> Kind regards,
>>>>>
>>>>> --
>>>>> Maxim Kuvyrkov
>>>>> https://www.linaro.org/
>>>>>
>>>>>> On Mar 8, 2025, at 02:15, Ricardo Jesus <r...@nvidia.com> wrote:
>>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> Sorry for this issue; I've reverted it in:
>>>>>> https://github.com/llvm/llvm-project/commit/21610e3ecc8bc727f99047e544186b35b1291bcd
>>>>>>
>>>>>> Before I start digging into it further, could you please let me know what hardware is used to run these tests? I just thought I'd double-check that the requested -msve-vector-bits=256 does match the hardware SVE length.
>>>>>>
>>>>>> Thanks very much,
>>>>>> Ricardo
>>>>>>
>>>>>> On 3/7/25 09:03, ci_not...@linaro.org wrote:
>>>>>>>
>>>>>>> Dear contributor,
>>>>>>>
>>>>>>> Our automatic CI has detected problems related to your patch(es). Please find some details below.
>>>>>>>
>>>>>>> In tcwg_flang_test/main-aarch64-Ofast-sve_vls-lto-lld, after:
>>>>>>>  | commit llvmorg-21-init-4020-gf01e760c0836
>>>>>>>  | Author: Ricardo Jesus <r...@nvidia.com>
>>>>>>>  | Date:   Thu Mar 6 09:27:07 2025 +0000
>>>>>>>  |
>>>>>>>  |     [AArch64][SVE] Improve fixed-length addressing modes. (#129732)
>>>>>>>  |
>>>>>>>  |     When compiling VLS SVE, the compiler often replaces VL-based offsets
>>>>>>>  |     with immediate-based ones. This leads to a mismatch in the allowed
>>>>>>>  |     addressing modes due to SVE loads/stores generally expecting immediate
>>>>>>>  | ... 29 lines of the commit log omitted.
>>>>>>>
>>>>>>> Produces 6235 regressions:
>>>>>>>  |
>>>>>>>  | regressions.sum:
>>>>>>>  | Running test-suite:Fujitsu/C/0000 ...
>>>>>>>  | FAIL: test-suite :: Fujitsu/C/0000/Fujitsu-C-0000_0003.test
>>>>>>>  | FAIL: test-suite :: Fujitsu/C/0000/Fujitsu-C-0000_0004.test
>>>>>>>  | FAIL: test-suite :: Fujitsu/C/0000/Fujitsu-C-0000_0008.test
>>>>>>>  | FAIL: test-suite :: Fujitsu/C/0000/Fujitsu-C-0000_0009.test
>>>>>>>  | ... and 6513 more
>>>>>>>  | # "FAIL" means: the execution of the compiled binary failed / the output of the binary differs from the expected one
>>>>>>>
>>>>>>> Used configuration:
>>>>>>>  * Toolchain: cmake -G Ninja ../llvm/llvm "-DLLVM_ENABLE_PROJECTS=clang;lld;flang;openmp;clang-tools-extra" -DCMAKE_BUILD_TYPE=Release -DLLVM_ENABLE_ASSERTIONS=True -DCMAKE_INSTALL_PREFIX=../llvm-install "-DLLVM_TARGETS_TO_BUILD=AArch64" -DCLANG_DEFAULT_LINKER=lld
>>>>>>>  * Testsuite:
>>>>>>>    export LD_LIBRARY_PATH=$WORKSPACE/llvm-install/lib/aarch64-unknown-linux-gnu${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
>>>>>>>    cmake -GNinja -DCMAKE_C_COMPILER="$WORKSPACE/llvm-install/bin/clang" -DCMAKE_CXX_COMPILER="$WORKSPACE/llvm-install/bin/clang++" -DCMAKE_Fortran_COMPILER="$WORKSPACE/llvm-install/bin/flang-new" -DCMAKE_BUILD_TYPE=Release -DCMAKE_C_FLAGS= -DCMAKE_CXX_FLAGS= -DCMAKE_Fortran_FLAGS= -DCMAKE_C_FLAGS_RELEASE="-O3 -ffast-math -march=armv8.4-a+sve -msve-vector-bits=256 -mllvm -treat-scalable-fixed-error-as-warning=false -flto -fuse-ld=lld -DNDEBUG" -DCMAKE_CXX_FLAGS_RELEASE="-O3 -ffast-math -march=armv8.4-a+sve -msve-vector-bits=256 -mllvm -treat-scalable-fixed-error-as-warning=false -flto -fuse-ld=lld -DNDEBUG" -DCMAKE_Fortran_FLAGS_RELEASE="-O3 -ffast-math -march=armv8.4-a+sve -msve-vector-bits=256 -mllvm -treat-scalable-fixed-error-as-warning=false -flto -fuse-ld=lld -DNDEBUG" -DTEST_SUITE_FORTRAN=ON -DTEST_SUITE_SUBDIRS=Fujitsu "$WORKSPACE/test/test-suite"
>>>>>>>
>>>>>>> We track this bug report under https://linaro.atlassian.net/browse/LLVM-1592
>>>>>>> Please let us know if you have a fix.
>>>>>>>
>>>>>>> If you have any questions regarding this report, please ask on the linaro-toolchain@lists.linaro.org mailing list.
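As an aside on the -msve-vector-bits=256 question discussed above, here is a minimal sketch of how one could check that the runtime SVE vector length matches the value the flag requests. It assumes Clang (or GCC) with SVE support and the ACLE <arm_sve.h> header; the file name, the typedef, and the variable names are illustrative only, not part of the CI setup.

/* check_vl.c (illustrative): compare the hardware SVE vector length with
 * the -msve-vector-bits value the code was compiled with, e.g.:
 *   clang -O2 -march=armv8.4-a+sve -msve-vector-bits=256 check_vl.c -o check_vl
 */
#include <arm_sve.h>
#include <stdio.h>

#if defined(__ARM_FEATURE_SVE_BITS)
/* With -msve-vector-bits=N, fixed-length (VLS) SVE vector types like this
 * become available; these are the kinds of loads/stores the addressing-mode
 * change in the commit above applies to. */
typedef svfloat64_t vls_vec_t __attribute__((arm_sve_vector_bits(__ARM_FEATURE_SVE_BITS)));
#endif

int main(void) {
  /* svcntb() is the ACLE intrinsic returning the SVE vector length in bytes. */
  unsigned long hw_bits = (unsigned long)svcntb() * 8;
  printf("hardware SVE vector length: %lu bits\n", hw_bits);
#if defined(__ARM_FEATURE_SVE_BITS)
  printf("compiled with -msve-vector-bits=%d\n", __ARM_FEATURE_SVE_BITS);
  return hw_bits == __ARM_FEATURE_SVE_BITS ? 0 : 1;
#else
  return 0;
#endif
}

On a Graviton3 instance this should report 256 bits, matching the flags used by the bot.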
>>>>>>>
>>>>>>> -----------------8<--------------------------8<--------------------------8<--------------------------
>>>>>>>
>>>>>>> The information below contains the details of the failures and the ways to reproduce a debug environment:
>>>>>>>
>>>>>>> You can find the failure logs in *.log.1.xz files in
>>>>>>>  * https://ci.linaro.org/job/tcwg_flang_test--main-aarch64-Ofast-sve_vls-lto-lld-build/1699/artifact/artifacts/00-sumfiles/
>>>>>>> The full lists of regressions and improvements, as well as the configure and make commands, are in
>>>>>>>  * https://ci.linaro.org/job/tcwg_flang_test--main-aarch64-Ofast-sve_vls-lto-lld-build/1699/artifact/artifacts/notify/
>>>>>>> The list of [ignored] baseline and flaky failures is in
>>>>>>>  * https://ci.linaro.org/job/tcwg_flang_test--main-aarch64-Ofast-sve_vls-lto-lld-build/1699/artifact/artifacts/sumfiles/xfails.xfail
>>>>>>>
>>>>>>> Fujitsu testsuite: https://github.com/fujitsu/compiler-test-suite/
>>>>>>> Current build: https://ci.linaro.org/job/tcwg_flang_test--main-aarch64-Ofast-sve_vls-lto-lld-build/1699/artifact/artifacts
>>>>>>> Reference build: https://ci.linaro.org/job/tcwg_flang_test--main-aarch64-Ofast-sve_vls-lto-lld-build/1698/artifact/artifacts
>>>>>>>
>>>>>>> Instructions to reproduce the build:
>>>>>>> https://git-us.linaro.org/toolchain/ci/interesting-commits.git/plain/llvm/sha1/f01e760c08365426de95f02dc2c2dc670eb47352/tcwg_flang_test/main-aarch64-Ofast-sve_vls-lto-lld/reproduction_instructions.txt
>>>>>>>
>>>>>>> Full commit: https://github.com/llvm/llvm-project/commit/f01e760c08365426de95f02dc2c2dc670eb47352

_______________________________________________
linaro-toolchain mailing list -- linaro-toolchain@lists.linaro.org
To unsubscribe send an email to linaro-toolchain-le...@lists.linaro.org