On Wed, 30 Oct 2013, Richard Sandiford wrote:

> Kenneth Zadeck <zad...@naturalbridge.com> writes:
> > On 10/30/2013 07:01 AM, Richard Sandiford wrote:
> >> Kenneth Zadeck <zad...@naturalbridge.com> writes:
> >>> On 10/29/2013 06:37 PM, Richard Sandiford wrote:
> >>>> This patch tries to update the main wide_int comment to reflect the
> >>>> current implementation.
> >>>>
> >>>> - bitsizetype is TImode on x86_64 and others, so I don't think it's
> >>>>     necessarily true that all offset_ints are signed.  (widest_ints
> >>>>     are, though.)
> >>> I am wondering if this is too conservative an interpretation.  I
> >>> believe that they are TImode because that is the next size after
> >>> DImode, and so they wanted to accommodate the 3 extra bits.  Certainly
> >>> there is no x86 that can address more than 64 bits.
> >> Right, but my point is that it's a different case from widest_int.
> >> It'd be just as valid to do bitsizetype arithmetic using wide_int
> >> rather than offset_int, and those wide_ints would have precision 128,
> >> just like the offset_ints.  And I wouldn't really say that those wide_ints
> >> were fundamentally signed in any way.  Although the tree layer might "know"
> >> that X upper bits of the bitsizetype are always signs, the tree-wide_int
> >> interface treats them in the same way as any other 128-bit type.
> >>
> >> Maybe I'm just being pedantic, but I think offset_int would only be like
> >> widest_int if bitsizetype had precision 67 or whatever.  Then we could
> >> say that both offset_int and widest_int must be wider than any inputs,
> >> meaning that there's at least one leading sign bit.
> > This was of course what Mike and I wanted, but we could not really
> > figure out how to pull it off.  In particular, we could not find any
> > existing reliable marker in the targets that says what the width of
> > the widest pointer on any implementation is.  We actually used the
> > number 68 rather than 67 because we assumed 64 bits for the widest
> > pointer on any existing platform, 3 bits for the bit position within
> > a byte, and 1 bit for the sign.
> 
> Ah yeah, 68 would be better for signed types.
> 
> Is the patch OK while we still have 128-bit bitsizetypes though?
> I agree the current comment would be right if we ever did switch
> to sub-128 bitsizes.

The issue with a sub-128-bit bitsizetype is code generation quality.
We do generate code for bitsizetype operations (at least from Ada),
so a power-of-two precision is required to avoid a lot of masking
operations.
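
To illustrate (a hypothetical sketch in plain C, not anything GCC
actually emits): with a 68-bit bitsizetype, every operation would need
an extra sign-extension back down to 68 bits, whereas a 128-bit
precision maps straight onto double-word arithmetic:

  /* Sketch: a 68-bit signed value carried in a 128-bit container has
     to be re-sign-extended after every operation.  */
  typedef __int128 ti;
  typedef unsigned __int128 uti;

  static ti
  sext68 (ti x)
  {
    int shift = 128 - 68;
    /* Shift the value to the top and arithmetically shift it back;
       GCC defines >> on signed integers as an arithmetic shift.  */
    return (ti) ((uti) x << shift) >> shift;
  }

  ti
  add68 (ti a, ti b)
  {
    return sext68 (a + b);   /* extra extension step per operation */
  }

  ti
  add128 (ti a, ti b)
  {
    return a + b;            /* power-of-two precision: no extra step */
  }

That pair of shifts is the kind of masking a power-of-two precision
avoids.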

Richard.
