https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94548

--- Comment #2 from fabrice salvaire <fabrice.salvaire at orange dot fr> ---
Yes, I missed this important point about the 8-bit architecture ...

This line also doesn't work, for some reason:

const unsigned long int f0 = (8*(10ULL)^(6ULL)) / (1000*256ULL);

but this one works

const unsigned long int f0 = (8*1000000ULL) / (1000*256ULL);
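For what it's worth, here is a small host-side sketch (not part of the original
report) showing what the two expressions actually evaluate to; in C the '^'
operator is bitwise exclusive-or rather than exponentiation, so the first
numerator is (8*10) ^ 6 = 86 instead of 8000000:

#include <stdio.h>

int main(void)
{
    /* '^' is bitwise XOR in C, not exponentiation, so the parenthesised
       numerator is (8*10ULL) ^ 6ULL = 80 ^ 6 = 86, and 86 / 256000 == 0. */
    const unsigned long int f_xor = (8*(10ULL)^(6ULL)) / (1000*256ULL);

    /* Here the numerator really is 8000000, and 8000000 / 256000 == 31. */
    const unsigned long int f_lit = (8*1000000ULL) / (1000*256ULL);

    printf("f_xor = %lu\n", f_xor);
    printf("f_lit = %lu\n", f_lit);
    return 0;
}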

It means that compile-time computation of constants is a bit error-prone on
such architectures compared to x64.  There is no way in C to say: compute the
right value with arbitrary precision, then store the result in a 16-bit
variable.
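As a hedged sketch only (the F_CPU_HZ and PRESCALER names are made up for
illustration, not taken from this report), the usual approximation in C is to
force the intermediate arithmetic into a wide type with a suffixed literal or a
cast, and only then narrow the result:

#include <stdint.h>

/* The ULL literal forces the whole expression to be evaluated in 64-bit
   arithmetic, regardless of the target's 16-bit int; only the final result
   is narrowed to 16 bits. */
#define F_CPU_HZ   8000000ULL
#define PRESCALER  (1000ULL * 256ULL)

static const uint16_t f0 = (uint16_t)(F_CPU_HZ / PRESCALER);   /* 31 */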

I am surprised that the compiler doesn't warn about the overflow.  Since I
found the warning option "-Wno-overflow: Do not warn about compile-time
overflow in constant expressions", I would expect to get a warning, wouldn't I?
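Independently of whether -Woverflow fires here, one hedged suggestion for
catching a wrong compile-time constant is a C11 _Static_assert on the
expression, which makes the build fail if the value is not what was computed
by hand:

/* Build fails if the compile-time arithmetic does not give the value
   expected by hand (8000000 / 256000 == 31). */
_Static_assert((8*1000000ULL) / (1000*256ULL) == 31,
               "unexpected compile-time value for f0");

const unsigned long int f0 = (8*1000000ULL) / (1000*256ULL);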
