https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107617
--- Comment #3 from Richard Biener <rguenth at gcc dot gnu.org> ---
I suppose it's the

+  MEM <vector(4) integer(kind=4)> [(integer(kind=4) *)_13] = { -1, 1, -1, 1 };
...
+  .LEN_STORE (vectp.75_247, 64B, 11, { 255, 255, 255, 255, 0, 0, 0, 1, 255, 255, 255, 255, 0, 0, 0, 1 }, -1);
..
+  MEM <vector(2) integer(kind=8)> [(integer(kind=8) *)&a] = { -1, 1 };
+  MEM <vector(2) integer(kind=8)> [(integer(kind=8) *)&a + 16B] = { -1, 1 };
+  a[4] = 1;
+  a[5] = -1;
+  a[6] = 1;

you are talking about, where we elide the scalar loads from _13 stored into a[].
A GIMPLE testcase would be something like

typedef unsigned char v16qi __attribute__((vector_size(16)));
int a[4];

void __GIMPLE (ssa)
foo (void *p)
{
  int v;

  __BB(2):
  .LEN_STORE (p_1(D), _Literal (int *) 64, 11,
              _Literal (v16qi) { _Literal (unsigned char) 255,
                                 _Literal (unsigned char) 255,
                                 _Literal (unsigned char) 255,
                                 _Literal (unsigned char) 255,
                                 _Literal (unsigned char) 0,
                                 _Literal (unsigned char) 0,
                                 _Literal (unsigned char) 0,
                                 _Literal (unsigned char) 1,
                                 _Literal (unsigned char) 255,
                                 _Literal (unsigned char) 255,
                                 _Literal (unsigned char) 255,
                                 _Literal (unsigned char) 255,
                                 _Literal (unsigned char) 0,
                                 _Literal (unsigned char) 0,
                                 _Literal (unsigned char) 0,
                                 _Literal (unsigned char) 1 },
              _Literal (signed char) -1);
  v_2 = __MEM <int> ((int *)p_1(D));
  v_3 = __MEM <int> ((int *)p_1(D) + 4);
  v_4 = __MEM <int> ((int *)p_1(D) + 8);
  v_5 = __MEM <int> ((int *)p_1(D) + 12);
  a[0] = v_2;
  a[1] = v_3;
  a[2] = v_4;
  a[3] = v_5;
  return;
}

which produces

  a[0] = 1;
  a[1] = _Literal (int) -1;
  a[2] = 1;
  a[3] = v_5;

Changing the len to 15, and thus folding the .LEN_STORE to a full store,
changes that to

  a[0] = _Literal (int) -1;
  a[1] = 1;
  a[2] = _Literal (int) -1;
  a[3] = 1;

which I assume is correct?  I think we'd need to feed a negative pd.rhs_off
into native_encode_expr, but that's not supported there (and it treats -1
specially).  Still, .LEN_STORE covers bytes p + [0..11] here, correct?
But the stored value is interpreted wrongly here: the new rhs_off computation
assumes a little-endian byte-order adjustment.