http://gcc.gnu.org/bugzilla/show_bug.cgi?id=46619

--- Comment #9 from Eskil Steenberg <eskil at obsession dot se> 2010-11-23 20:20:51 UTC ---
Hi

> typedef unsigned short VBigDig;
>    uv = x[1 + j] * x[1 + i];
>    high = (uv & 0x80000000u) != 0;
>
> Is really
>    uv = (int)x[1 + j] * (int)x[1 + i];
>    high = (uv & 0x80000000u) != 0;
>
> So overflow is undefined.

Wait... From where does it get int?

I could imagine it doing:

   uv = (unsigned int)x[1 + j] * (unsigned int)x[1 + i];

since uv is unsigned int, or:

   uv = (unsigned int)((unsigned short)x[1 + j] * (unsigned short)x[1 + i]);

because x is a pointer to unsigned short. In that case I can understand
that uv could never get bigger than the maximum of unsigned short. But
the product is treated as going all the way up to the maximum of signed
int, and since the maximum of signed int is lower than 0x80000000u, I
can understand why it optimizes away the latter line.
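
To spell that reasoning out (a minimal sketch, assuming a 32-bit int;
the variable names are mine, not from the original report):

   #include <stdio.h>

   int main(void)
   {
      unsigned short a = 0xFFFF, b = 0xFFFF;
      /* Both operands are promoted before the multiply, so the math
         is done in a wider type.  If that type is signed int, a
         product that does not overflow is at most 0x7FFFFFFF, so the
         compiler may assume bit 31 is never set and fold the test
         below to 0, even though the unsigned product 0xFFFE0001 has
         bit 31 set.  (The multiplication itself then overflows int,
         which is the undefined behavior being discussed.) */
      unsigned int uv = a * b;
      printf("high = %d\n", (uv & 0x80000000u) != 0);
      return 0;
   }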

The line contains two types, yet the compiler decides to do the math in a third!

Both input and output are unsigned, so from where does the compiler get
the idea to do anything signed?
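
(A small test, assuming int is wider than short as it is on this
target, makes such a promotion visible; the snippet is mine, purely
for illustration:

   #include <stdio.h>

   int main(void)
   {
      unsigned short s = 0;
      /* If the subtraction were carried out in unsigned arithmetic,
         s - 1 would wrap to a huge positive value and the test would
         be false; because s is promoted to signed int first, the
         result is -1 and the branch is taken. */
      if (s - 1 < 0)
         puts("s was promoted to signed int");
      return 0;
   }

)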

Cheers

E
