Richard Biener <richard.guent...@gmail.com> writes:
>> But that means that wide_int has to model a P-bit operation as a
>> "normal" len*HOST_WIDE_INT operation and then fix up the result
>> after the fact, which seems unnecessarily convoluted.
>
> It does that right now.  The operations are carried out in a loop
> over len HOST_WIDE_INT parts, the last HWI is then special-treated
> to account for precision/size.  (yes, 'len' is also used as an optimization - the
> fact that len ends up being mutable is another thing I dislike about
> wide-int.  If wide-ints are cheap then all ops should be non-mutating
> (at least to 'len')).
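
For concreteness, the scheme described above is roughly the following
hedged sketch (not the actual wide-int code; HOST_WIDE_INT and
HOST_BITS_PER_WIDE_INT are the usual GCC host-integer definitions, and
the fix-up is written out inline rather than using the real helpers):

  /* Add two values stored as LEN blocks of HOST_BITS_PER_WIDE_INT bits,
     then special-case the last block so that bits above PRECISION are a
     sign-extension of the value's top bit.  Assumes the last block is
     the one holding the precision boundary.  */
  void
  sketch_add (unsigned HOST_WIDE_INT *res, const unsigned HOST_WIDE_INT *a,
              const unsigned HOST_WIDE_INT *b, unsigned int len,
              unsigned int precision)
  {
    unsigned HOST_WIDE_INT carry = 0;
    for (unsigned int i = 0; i < len; i++)
      {
        unsigned HOST_WIDE_INT ai = a[i], bi = b[i];
        unsigned HOST_WIDE_INT sum = ai + bi + carry;
        carry = carry ? sum <= ai : sum < ai;
        res[i] = sum;
      }
    unsigned int small_prec = precision % HOST_BITS_PER_WIDE_INT;
    if (small_prec)
      {
        /* Clear the bits above the precision, then copy the value's
           sign bit (bit SMALL_PREC - 1) into them.  */
        unsigned HOST_WIDE_INT top = res[len - 1];
        unsigned HOST_WIDE_INT sign
          = (unsigned HOST_WIDE_INT) 1 << (small_prec - 1);
        top &= ((unsigned HOST_WIDE_INT) 1 << small_prec) - 1;
        res[len - 1] = (top ^ sign) - sign;
      }
  }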

But the point of having a mutating len is that things like zero and -1
are common even for OImode values.  So if you're doing something potentially
expensive like OImode multiplication, why do it over the full number of
HOST_WIDE_INTs needed for an OImode value when the value we're
processing has only one significant HOST_WIDE_INT?
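
To make the cost point concrete, here is a hedged sketch of the kind of
multiply I have in mind -- not the real wide-int routine, and it ignores
the sign-extension of short negative operands such as -1, though the
cost argument is the same for them -- showing the work scaling with the
operands' len rather than with the mode's full width:

  /* Hedged sketch: unsigned schoolbook multiply of two block vectors.
     The work is O (a_len * b_len), so if one operand has len == 1
     (e.g. zero), the cost is a single pass over the other operand's
     blocks rather than the 4 * 4 block product that OImode's 256 bits
     would imply.  Assumes the host compiler provides unsigned __int128
     for the 64x64->128 partial products.  */
  void
  sketch_mul (unsigned HOST_WIDE_INT *res, unsigned int res_blocks,
              const unsigned HOST_WIDE_INT *a, unsigned int a_len,
              const unsigned HOST_WIDE_INT *b, unsigned int b_len)
  {
    for (unsigned int i = 0; i < res_blocks; i++)
      res[i] = 0;
    for (unsigned int i = 0; i < a_len; i++)
      {
        unsigned __int128 carry = 0;
        for (unsigned int j = 0; j < b_len && i + j < res_blocks; j++)
          {
            unsigned __int128 t = (unsigned __int128) a[i] * b[j]
                                  + res[i + j] + carry;
            res[i + j] = (unsigned HOST_WIDE_INT) t;
            carry = t >> HOST_BITS_PER_WIDE_INT;
          }
        if (i + b_len < res_blocks)
          res[i + b_len] = (unsigned HOST_WIDE_INT) carry;
      }
    /* The caller would then recompute the result's len and extend the
       top block from the precision boundary, as in the addition
       sketch above.  */
  }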

>>  I still don't
>> see why a full-precision 2*HOST_WIDE_INT operation (or a full-precision
>> X*HOST_WIDE_INT operation for any X) has any special meaning.
>
> Well, the same reason as a HOST_WIDE_INT variable has a meaning.
> We use it to constrain what we (efficiently) want to work on.  For example
> CCP might iterate up to 2 * HOST_BITS_PER_WIDE_INT times when
> doing bit-constant-propagation in loops (for TImode integers on an x86_64
> host).

But what about targets with modes wider than TImode?  Would double_int
still be appropriate then?  If not, why does CCP have to use a templated
type with a fixed number of HWIs (and all arithmetic done on a fixed
number of HWIs) rather than one that can adapt to the runtime values,
like wide_int can?
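
Purely to illustrate the contrast (neither of these is a proposed
interface, and MAX_BLOCKS is a placeholder):

  /* double_int-style: the number of blocks is fixed at compile time,
     so a target mode wider than N * HOST_BITS_PER_WIDE_INT cannot be
     represented without bumping N everywhere and paying for the extra
     blocks on every operation.  */
  template <unsigned int N>
  struct fixed_hwi_int
  {
    HOST_WIDE_INT val[N];
  };

  /* Placeholder for whatever target-derived maximum the implementation
     would choose.  */
  const unsigned int MAX_BLOCKS = 8;

  /* wide_int-style: the precision comes from the target mode or type
     at runtime and LEN records how many blocks are significant, so the
     same type copes with whatever modes the target defines and
     arithmetic can stop after LEN blocks.  */
  struct runtime_hwi_int
  {
    unsigned int precision;       /* bits in the value's mode/type  */
    unsigned int len;             /* significant blocks in VAL  */
    HOST_WIDE_INT val[MAX_BLOCKS];
  };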

> Oh, and I don't necessary see a use of double_int in its current form
> but for an integer representation on the host that is efficient to manipulate
> integer constants of a target dependent size.  For example the target
> detail that we have partial integer modes with bitsize > precision and that
> the bits > precision apparently have a meaning when looking at the
> bit-representation of a constant should not be part of the base class
> of wide-int (I doubt it belongs to wide-int at all, but I guess you know more
> about the reason we track bitsize in addition to precision - I think it's
> an abstraction at the wrong level; the tree level does fine without knowing
> about bitsize).

TBH I'm uneasy about the bitsize thing too.  I think bitsize is only
tracked for shift truncation, and if so, I agree it makes sense
to do that separately.
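
If that's right, then as I understand the argument, the only place
bitsize would matter is something like the hedged sketch below (not
existing code): on a target that truncates shift counts, the count is
masked by the storage size of the operand, which for a partial-integer
mode differs from its precision.  That check can live with the shift
code rather than in the integer representation itself:

  /* Hedged sketch of shift-count truncation for a partial-integer
     mode whose bitsize (storage size in bits, e.g. 32) exceeds its
     precision (significant bits, e.g. 24).  The mask uses bitsize;
     everything else about the value only needs precision.  */
  unsigned int
  truncate_shift_count (unsigned int count, unsigned int bitsize)
  {
    /* Assumes bitsize is a power of two, as it is for the usual
       machine modes.  */
    return count & (bitsize - 1);
  }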

But anyway, this whole discussion seems to have reached a stalemate.
Or I suppose a de-facto rejection, since you're the only person in
a position to approve the thing :-)

Richard
