https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101187
--- Comment #3 from rguenther at suse dot de <rguenther at suse dot de> ---
On Thu, 24 Jun 2021, jakub at gcc dot gnu.org wrote:

> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101187
>
> Jakub Jelinek <jakub at gcc dot gnu.org> changed:
>
>            What    |Removed                     |Added
> ----------------------------------------------------------------------------
>                  CC|                            |jakub at gcc dot gnu.org
>
> --- Comment #2 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
> Do we really want that for vectors with int or larger elements though?
> Shouldn't it be done for char/short elements only?
> For non-common targets where char and/or short could have the same
> precision as int, maybe best would be to do it only for elements with
> precision smaller than the precision of integer_type_node.
> The advantage of doing it only for the char/short cases is that we can
> catch it later in warnings, ubsan etc.
> We should verify what we diagnose with ubsan if we say char/short element
> vector shifts are well defined.
> Also, we should do that only if the shift count is smaller than the
> precision of integer_type_node, i.e. optimize vector char >> 8 to >> 31
> but not >> 32 and more.
> For signed vectors >> should be optimized to shift by element precision - 1.

But why restrict it?  CCP will optimize unsigned int >> 32 as well (but yes,
we diagnose that).  Unless there was this OpenCL compatibility thing which
leaves large shift semantics up to the target?
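
For illustration only (not a testcase from this PR), a minimal sketch using
GCC's generic vector extension of the shifts being discussed; the type and
function names below are made up:

  typedef unsigned char v16qu __attribute__ ((vector_size (16)));

  /* Element precision is 8, so a shift count of 8 is out of range for the
     element type but still below the precision of integer_type_node
     (32 on common targets); this is the case comment #2 proposes to
     keep folding.  */
  v16qu
  shift_uchar_vec (v16qu x)
  {
    return x >> 8;
  }

  /* Scalar comparison point from the reply: an unsigned int shifted by 32,
     the kind of shift CCP is said to optimize as well, even though it is
     undefined and diagnosed (-Wshift-count-overflow).  */
  unsigned int
  shift_uint (unsigned int x)
  {
    return x >> 32;
  }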