https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111794
--- Comment #5 from Robin Dapp <rdapp at gcc dot gnu.org> ---
Disregarding the reasons for the precision adjustment, for this case here we
seem to fail at:

  /* We do not handle bit-precision changes.  */
  if ((CONVERT_EXPR_CODE_P (code)
       || code == VIEW_CONVERT_EXPR)
      && ((INTEGRAL_TYPE_P (TREE_TYPE (scalar_dest))
           && !type_has_mode_precision_p (TREE_TYPE (scalar_dest)))
          || (INTEGRAL_TYPE_P (TREE_TYPE (op))
              && !type_has_mode_precision_p (TREE_TYPE (op))))
      /* But a conversion that does not change the bit-pattern is ok.  */
      && !(INTEGRAL_TYPE_P (TREE_TYPE (scalar_dest))
           && INTEGRAL_TYPE_P (TREE_TYPE (op))
           && (TYPE_PRECISION (TREE_TYPE (scalar_dest))
               > TYPE_PRECISION (TREE_TYPE (op)))
           && TYPE_UNSIGNED (TREE_TYPE (op))))
    {
      if (dump_enabled_p ())
        dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
                         "type conversion to/from bit-precision "
                         "unsupported.\n");
      return false;
    }

for the expression

  patt_156 = (<signed-boolean:1>) _2;

where _2 (op) is of type _Bool (i.e. TYPE_MODE QImode) and patt_156
(scalar_dest) is signed-boolean:1.  In that case the mode's precision (8) does
not match the type's precision (1) for both op and scalar_dest.

The second part of the condition I don't fully get.  When does a conversion
change the bit pattern?  When the source has higher precision than the
destination we would need to truncate, which we probably don't want.  When the
destination has higher precision, that's considered OK?  What about equality?
If both op and dest have precision 1 the padding could differ (or rather the 1
could be at a different position), but do we even support that?

In other words, could we relax the condition to

  TYPE_PRECISION (TREE_TYPE (scalar_dest)) >= TYPE_PRECISION (TREE_TYPE (op))

(>= instead of >)?

FWIW, bootstrap and testsuite are unchanged with >= instead of > on x86,
aarch64 and power10, but we might not have a proper test for that?