On Sun, Oct 7, 2012 at 3:11 PM, Kenneth Zadeck <[email protected]> wrote:
>
> On 10/07/2012 08:47 AM, Richard Guenther wrote:
>>>>
>>>> len seems redundant unless you want to optimize encoding.
>>>> len == (precision + HOST_BITS_PER_WIDE_INT - 1)
>>>>        / HOST_BITS_PER_WIDE_INT.
>>>
>>> That is exactly what we do. However, we are optimizing time, not
>>> space. The value is always stored in compressed form, i.e. the same
>>> representation as is used inside the CONST_WIDE_INTs and INT_CSTs.
>>> This makes the transformation between them very fast. And since we
>>> do this a lot, it needs to be fast. So the len is the number of HWIs
>>> needed to represent the value, which is typically less than what
>>> would be implied by the precision.
>>
>> But doesn't this require a sign? Otherwise how do you encode TImode
>> 0xffffffff? Would len be 1 here? (And talking about CONST_WIDE_INT
>> vs CONST_INT, wouldn't CONST_INT be used anyway for all ints we can
>> encode "small"? Or is it (hopefully) the goal to replace CONST_INT
>> with CONST_WIDE_INT everywhere?) Or do you say wide_int is always
>> "signed", thus TImode 0xffffffff needs len == 2 and an explicit zero
>> upper HWI word?
>
> The compression of this has len == 2, with the explicit upper HWI being 0.
>
>
>> Or do you say wide_int is always "unsigned", thus TImode -1 needs
>> len == 2? Noting that double-ints were supposed to be
>> two's-complement (thus, 'unsigned') numbers, having implicit
>> non-zero bits sounds error-prone.
>>
>> That said - I don't see any value in optimizing storage for short-lived
>> (as you say) wide-ints (apart from limiting it to the mode size). For
>> example mutating operations on wide-ints are not really possible
>> if you need to extend storage.
>
> The compression used is independent of whether the value is signed or
> unsigned, but the compression "looks" signed. That is, you do not
> represent the upper HWIs if they would just be a sign extension of the
> top HWI that is represented. However, this does not imply anything
> about the math that was used to set those bits that way, and you can
> always go back to the full representation independent of the sign.
>
> I do not have any long-term plans to merge CONST_INT into
> CONST_WIDE_INT. It would be a huge change in the ports and would be
> fairly space-inefficient. My guess is that in real code, less than 1%
> of real constants will have a length greater than 1, even on a 32-bit
> host. CONST_INT is very space-efficient. This could have been
> mentioned as part of Richard's response to your comment about the way
> we represent the CONST_WIDE_INTs. In practice, there are almost none
> of them anyway.
>
> In fact, you could argue that the tree level did it wrong (not that I
> am suggesting to change this). But it makes me wonder what was going
> on when the decision was made to make TYPE_PRECISION an INT_CST
> rather than just a HWI. There is an implication that it could never
> take more than a HWI, since no place in the code even checks
> TREE_INT_CST_HIGH for these.
Well - on the tree level we now always have two HWIs for all
INTEGER_CSTs. If we can, based on the size of the underlying mode,
reduce that to one HWI we already win something. If we add an explicit
length to allow a smaller encoding for larger modes (tree_base
conveniently has an available 'int' for this ...) then we'd win in more
cases. Thus, is CONST_INT really necessarily better than optimized
CONST_WIDE_INT storage?

Richard.
>>>> + enum Op {
>>>> + NONE,
>>>>
>>>> we don't have sth like this in double-int.h, why was the
>>>> double-int mechanism not viable?
>>>
>>> I have chosen to use enums for things rather than passing booleans.
>>
>> But it's bad to use an enum with 4 values for a thing that can only
>> possibly take two. You cannot optimize tests as easily. Please
>> retain the bool uns parameters instead.
>
> I am breaking it into two enums.