On Thu, 17 Dec 2015, Jakub Jelinek wrote:

> On Thu, Dec 17, 2015 at 12:33:22PM +0100, Richard Biener wrote:
> > > @@ -1245,11 +1245,9 @@ static unsigned int
> > >  get_int_cst_ext_nunits (tree type, const wide_int &cst)
> > >  {
> > >    gcc_checking_assert (cst.get_precision () == TYPE_PRECISION (type));
> > > -  /* We need an extra zero HWI if CST is an unsigned integer with its
> > > -     upper bit set, and if CST occupies a whole number of HWIs.  */
> > > -  if (TYPE_UNSIGNED (type)
> > > -      && wi::neg_p (cst)
> > > -      && (cst.get_precision () % HOST_BITS_PER_WIDE_INT) == 0)
> > > +  /* We need extra HWIs if CST is an unsigned integer with its
> > > +     upper bit set.  */
> > > +  if (TYPE_UNSIGNED (type) && wi::neg_p (cst))
> > >      return cst.get_precision () / HOST_BITS_PER_WIDE_INT + 1;
> > >    return cst.get_len ();
> > >  }
> > 
> > I don't see why this is necessary - if there are enough bits to
> > always have a zero MSB for unsigned types then we don't need an
> > extra element.
> 
> But there aren't enough bits in that case.
> Let's consider how -2, cast to unsigned types with precision 64, 65, 127,
> 128 and 129, is encoded.
> cst.get_len () is in each case 1, the first element is -2.
> So, what get_int_cst_ext_nunits did before my change is return 2 and 3
> for precision 64 and 128 respectively, and 1 for all the other precisions
> (talking just about precisions up to 129).

Doh, yes.

> So, before my patch we have:
> precision  TREE_INT_CST_NUNITS  TREE_INT_CST_EXT_NUNITS  TREE_INT_CST_ELT
> 64         1                    2                        -2, 0
> 65         1                    1                        -2
> 127        1                    1                        -2
> 128        1                    3                        -2, -1, 0
> 129        1                    1                        -2
> and with the patch we have:
> 64         1                    2                        -2, 0
> 65         1                    2                        -2, 1
> 127        1                    2                        -2, -1ULL / 2
> 128        1                    3                        -2, -1, 0
> 129        1                    3                        -2, -1, 1
> 
> Thus, before the patch there was nothing in the ext elements that would
> tell users the value is actually a large unsigned one rather than a
> negative one.
> I really don't see what is so special about precisions divisible by
> HOST_BITS_PER_WIDE_INT.  The elements beyond TREE_INT_CST_NUNITS aren't
> guaranteed to be all 0s anyway; they are all -1s, and for precisions
> divisible by HOST_BITS_PER_WIDE_INT the last one is 0.  That is all that
> is special about those; with my patch, for the other precisions the last
> one is simply always some power of two - 1 (other than -1).
> 
> Now, if the value is not between the maximum and the maximum -
> 0x8000000000000000ULL, TREE_INT_CST_NUNITS will not be 1, but say 2 or
> more, and perhaps the len is again still smaller than
> precision / HOST_BITS_PER_WIDE_INT + 1; but if it is again negative, we
> want some way to add the ext elements (a series of -1s, followed by 0 or
> some other power of two - 1 other than -1).
> 
> > 
> > > @@ -1266,7 +1264,8 @@ build_new_int_cst (tree type, const wide
> > >    if (len < ext_len)
> > >      {
> > >        --ext_len;
> > > -      TREE_INT_CST_ELT (nt, ext_len) = 0;
> > > +      TREE_INT_CST_ELT (nt, ext_len)
> > > + = zext_hwi (-1, cst.get_precision () % HOST_BITS_PER_WIDE_INT);
> > 
> > isn't this enough?
> 
> No.  This is the len < ext_len case; without the first hunk, if
> len < ext_len then cst.get_precision () % HOST_BITS_PER_WIDE_INT is
> guaranteed to be 0, and thus zext_hwi will return 0.

Patch is ok.

Thanks,
Richard.
