https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108580

--- Comment #5 from Andrew Pinski <pinskia at gcc dot gnu.org> ---
You are still confused; the assembly code is correct. Let's start over here. Take:

  char* a = (char*)malloc(1 << bits);

1 << bits is done in the int type, because the literal 1 has type int (that is its definition without any suffix) and no promotion happens since 1 is already an int. So 1 << bits is computed in 32 bits (x86_64 is an LP64 [Linux/ELF] or LLP64 [Windows] target, both with 32-bit int, and x86 is an ILP32 target). The result is then converted to size_t because it is an argument to malloc. On x86_64, size_t is a 64-bit unsigned type (unsigned long on Linux, unsigned long long on Windows), so this conversion is a sign extension from 32 bits to 64 bits, as defined by the C standard and https://gcc.gnu.org/onlinedocs/gcc-12.2.0/gcc/Integers-implementation.html#Integers-implementation .

Also, since this is -O0, the expressions are compiled without optimizations, so you get extra loads and stores to the stack; bits is kept on the stack.

If you want the expression 1 << bits done in unsigned 64 bits, use either 1ul << bits or 1ull << bits (or even ((size_t)1) << bits or ((intptr_t)1) << bits).

There is still no bug in the compilers you tried; you missed that 1 is of type int.
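
For concreteness, here is a minimal sketch (not part of the original report; the value bits = 31 is an assumed example) showing the 32-bit shift being sign-extended when converted to a 64-bit size_t, and the suffixed literal that avoids it:

  #include <stdio.h>

  int main(void)
  {
      int bits = 31;

      /* The shift is evaluated in 32-bit int. With bits == 31 the result
         has the bit pattern 0x80000000, i.e. INT_MIN. Strictly, shifting
         a 1 into the sign bit is undefined in ISO C, but GCC at -O0
         produces this value, matching what the reporter observed.
         Converting it to the 64-bit unsigned size_t sign-extends it. */
      size_t wrong = (size_t)(1 << bits);

      /* 1ull has type unsigned long long, so the shift itself is done in
         64 bits and the converted value is the one intended. */
      size_t right = (size_t)(1ull << bits);

      printf("1    << 31 as size_t: %#zx\n", wrong);  /* 0xffffffff80000000 */
      printf("1ull << 31 as size_t: %#zx\n", right);  /* 0x80000000 */
      return 0;
  }

The same fix applies to the original call: malloc(1ull << bits) requests the intended size instead of the sign-extended one.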
--- Comment #5 from Andrew Pinski <pinskia at gcc dot gnu.org> --- You are still confused as the assembly code is correct. Let's start over here. Take: char* a = (char*)malloc(1 << bits); 1 << bits is done in int type as the literal 1 has the type of int (because that is the definition of it without any suffix) and there is no promption going on as 1 is already an int type. so 1 << bits is done in 32bits (as x86_64 is LP64I32 [linux/elf] Or LLP64IL32 [windows] target and x86 is a ILP32 target). And then it gets casted to size_t as it is an argument to malloc. This casting is a sign extend from 32bit to 64bit (on x86_64 as size_t is unsigned 64bit type, unsigned long on Linux and unsigned long long on Windows) as defined on by the C standard and https://gcc.gnu.org/onlinedocs/gcc-12.2.0/gcc/Integers-implementation.html#Integers-implementation . Also since this is -O0 the expressions are done without optimizations and you get extra load and stores to the stack so bits is on the stack. If you want the original expression 1<<bits done in unsigned 64bits, use either 1ul<<bits or 1ull<<bits (or even ((size_t)1)<<bits or ((intptr_t)1)<<bits ). There is still no bug with the compilers you tried, you missed that 1 is of type int.