On Aug 23, 2013, at 8:02 AM, Richard Sandiford <rdsandif...@googlemail.com> 
wrote:
>>  * When a constant that has an integer type is converted to a
>>    wide-int it comes in with precision 0.  For these constants the
>>    top bit does accurately reflect the sign of that constant; this
>>    is an exception to the normal rule that the signedness is not
>>    represented.  When used in a binary operation, the wide-int
>>    implementation properly extends these constants so that they
>>    properly match the other operand of the computation.  This allows
>>    you to write:
>> 
>>               tree t = ...
>>               wide_int x = t + 6;
>> 
>>    assuming t is an int_cst.
> 
> This seems dangerous.  Not all code that uses "unsigned HOST_WIDE_INT"
> actually wants it to be an unsigned value.  Some code uses it to avoid
> the undefinedness of signed overflow.  So these overloads could lead
> to us accidentally zero-extending what's conceptually a signed value
> without any obvious indication that that's happening.  Also, hex constants
> are unsigned int, but it doesn't seem safe to assume that 0x80000000 was
> meant to be zero-extended.
> 
> I realise the same thing can happen if you mix "unsigned int" with
> HOST_WIDE_INT, but the point is that you shouldn't really do that
> in general, whereas we're defining these overloads precisely so that
> a mixture can be used.
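
For reference, here is a minimal, self-contained C++ sketch of the
zero-extension hazard being described.  It uses only plain host arithmetic
(no wide-int involved) and assumes a host with a 32-bit int and a 64-bit
long long:

    #include <cstdio>

    int
    main (void)
    {
      /* 0x80000000 does not fit in a 32-bit int, so the literal has type
         unsigned int.  Widening it to a 64-bit type zero-extends.  */
      long long widened = 0x80000000;

      /* The value the writer may have meant: the same bits read as a
         signed 32-bit quantity (implementation-defined, but INT_MIN on
         the usual two's-complement hosts).  */
      long long intended = (int) 0x80000000;

      printf ("zero-extended: %lld\n", widened);   /* 2147483648 */
      printf ("sign-extended: %lld\n", intended);  /* -2147483648 */
      return 0;
    }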

So, I don't like penalizing all users because one user might write incorrect
code.  We have the simple operators so that users can retain some of the
simplicity and beauty of the underlying language.  Those semantics are well
known and reasonable, and we match them.  At the end of the day, we have to
be able to trust what the user writes.  A user who doesn't trust themselves
can elect not to use these operators; they aren't required to use them.
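
To make the quoted rule concrete, here is a toy sketch (emphatically not the
real wide-int code, just a model of the semantics described at the top) of a
precision-0 constant adopting the precision of the other operand before the
addition is carried out:

    #include <cstdio>

    /* Toy model: a value plus a precision, where precision 0 marks a bare
       C constant whose top bit is taken to reflect its sign.  */
    struct toy_wide
    {
      long long val;
      unsigned precision;   /* 0 means "untyped constant".  */
    };

    /* Give A and B a common precision: a precision-0 operand simply adopts
       the precision of the other operand (its value already carries the
       sign, so no bits change), then add.  */
    static toy_wide
    toy_add (toy_wide a, toy_wide b)
    {
      unsigned prec = a.precision ? a.precision : b.precision;
      a.precision = prec;
      b.precision = prec;
      toy_wide r = { a.val + b.val, prec };
      return r;
    }

    int
    main (void)
    {
      toy_wide t = { 40, 32 };         /* stand-in for a 32-bit integer constant */
      toy_wide six = { 6, 0 };         /* the bare constant 6, precision 0 */
      toy_wide x = toy_add (t, six);   /* plays the role of "t + 6" */
      printf ("%lld (precision %u)\n", x.val, x.precision);  /* 46 (precision 32) */
      return 0;
    }

Under this rule the constant behaves as it would in plain C arithmetic
against a value of t's width, which is the point of the overloads.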
