https://gcc.gnu.org/bugzilla/show_bug.cgi?id=68835
--- Comment #9 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
Testcase that is miscompiled starting with r210113 at -O2:

__attribute__((noinline, noclone)) unsigned __int128
foo (void)
{
  unsigned __int128 x = (unsigned __int128) 0xffffffffffffffffULL;
  struct { unsigned __int128 a:65; } w;
  w.a = x;
  w.a += x;
  return w.a;
}

int
main ()
{
  unsigned __int128 x = foo ();
  if ((unsigned long long) x != 0xfffffffffffffffeULL
      || (unsigned long long) (x >> 64) != 1)
    __builtin_abort ();
  return 0;
}

The function foo returns 0xfffffffffffffffffffffffffffffffe instead of
0x1fffffffffffffffe.

So, how do we want to represent such large unsigned values?  Force a
get_len () > 1 representation in that case, or tweak all consumers
(tree-pretty-print.c and other spots) so that, when the representation is a
negative number and the type is unsigned with a non-power-of-two precision
> HOST_BITS_PER_WIDE_INT, negative values are treated differently?
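To make the two options concrete, here is a minimal standalone sketch (it does
not use GCC's actual wide-int API; the wide_sketch struct and decode helper
are hypothetical) that models a value as an array of 64-bit host words plus a
length, where a missing high word is implied by sign-extending the topmost
stored word.  With len == 1 the stored word 0xfffffffffffffffe has its top bit
set, so sign extension yields the wrong all-ones value seen in foo (); with an
explicit second word (len == 2) the correct 65-bit value survives:

#include <stdio.h>

typedef struct
{
  unsigned long long hwi_val[2]; /* stored host words, least significant first */
  int len;                       /* number of significant host words */
} wide_sketch;

/* Reconstruct the full 128-bit value of a 65-bit unsigned quantity,
   mimicking the compact sign-extending representation.  */
static unsigned __int128
decode (const wide_sketch *w)
{
  if (w->len == 1)
    /* The missing high word is implied by sign-extending the stored one.  */
    return (unsigned __int128) (__int128) (long long) w->hwi_val[0];
  return ((unsigned __int128) w->hwi_val[1] << 64) | w->hwi_val[0];
}

int
main (void)
{
  /* The correct result of w.a += x above: 2^65 - 2 = 0x1fffffffffffffffe.  */
  wide_sketch short_form = { { 0xfffffffffffffffeULL }, 1 };
  wide_sketch long_form = { { 0xfffffffffffffffeULL, 1ULL }, 2 };

  unsigned __int128 a = decode (&short_form);
  unsigned __int128 b = decode (&long_form);
  printf ("len=1: 0x%llx%016llx\n", (unsigned long long) (a >> 64),
          (unsigned long long) a);
  printf ("len=2: 0x%llx%016llx\n", (unsigned long long) (b >> 64),
          (unsigned long long) b);
  return 0;
}

The first line prints the bogus 0xffffffffffffffff_fffffffffffffffe, the
second the expected 0x1_fffffffffffffffe, which is the same choice the
question above poses: either force the longer form, or teach every consumer
to reinterpret the short, negative-looking form for unsigned types.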