https://gcc.gnu.org/bugzilla/show_bug.cgi?id=118891
--- Comment #28 from GCC Commits <cvs-commit at gcc dot gnu.org> ---
The trunk branch has been updated by Richard Sandiford <rsand...@gcc.gnu.org>:

https://gcc.gnu.org/g:ec54a14239b12d03c600c14f3ce9710e65cd33f1

commit r16-2052-gec54a14239b12d03c600c14f3ce9710e65cd33f1
Author: Richard Sandiford <richard.sandif...@arm.com>
Date:   Mon Jul 7 09:10:38 2025 +0100

    vect: Fix VEC_WIDEN_PLUS_HI/LO choice for big-endian [PR118891]

    In the tree codes and optabs, the "hi" in a vector hi/lo pair means
    "most significant" and the "lo" means "least significant", with
    significance following GCC's normal endian expectations.  Thus on
    big-endian targets, the hi part handles the first half of the elements
    in memory order and the lo part handles the second half.

    For tree codes, supportable_widening_operation first chooses hi/lo
    pairs based on little-endian order and then uses:

      if (BYTES_BIG_ENDIAN && c1 != VEC_WIDEN_MULT_EVEN_EXPR)
        std::swap (c1, c2);

    to adjust.  However, the handling for internal functions was missing
    an equivalent fixup.  This led to several execution failures in
    vect.exp on aarch64_be-elf.

    If the hi/lo code fails, the internal function handling goes on to
    try even/odd.  But I couldn't see anything obvious that would put the
    even/odd results back into the right order later, so there might be a
    latent bug there too.

    gcc/
            PR tree-optimization/118891
            * tree-vect-stmts.cc (supportable_widening_operation): Swap the
            hi and lo internal functions on big-endian targets.
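
For reference, a minimal standalone sketch of the idea (not the GCC source): the
internal-function path is made to mirror the tree-code path by swapping the "hi"
and "lo" halves on big-endian targets. The names widen_fn_pair,
choose_widen_plus_pair and the IFN_* enumerators below are illustrative
stand-ins, not the real GCC declarations.

    #include <utility>
    #include <cstdio>

    // Illustrative stand-ins for the two halves of a widening-plus pair.
    enum internal_fn { IFN_VEC_WIDEN_PLUS_LO, IFN_VEC_WIDEN_PLUS_HI };

    struct widen_fn_pair { internal_fn ifn1, ifn2; };

    // Pick the pair in little-endian order first, then adjust for
    // big-endian targets, analogous to the std::swap (c1, c2) that
    // supportable_widening_operation already does for tree codes.
    static widen_fn_pair
    choose_widen_plus_pair (bool bytes_big_endian)
    {
      widen_fn_pair p = { IFN_VEC_WIDEN_PLUS_LO, IFN_VEC_WIDEN_PLUS_HI };
      if (bytes_big_endian)
        std::swap (p.ifn1, p.ifn2);   /* the fixup the patch adds */
      return p;
    }

    int
    main ()
    {
      widen_fn_pair le = choose_widen_plus_pair (false);
      widen_fn_pair be = choose_widen_plus_pair (true);
      std::printf ("little-endian: first=%d second=%d\n", le.ifn1, le.ifn2);
      std::printf ("big-endian:    first=%d second=%d\n", be.ifn1, be.ifn2);
      return 0;
    }

On a big-endian target the first function of the pair then handles the most
significant (first-in-memory) half of the elements, matching the behaviour the
commit message describes for tree codes.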