Hi Kenny,

This is the first time I've looked at the implementation of wide-int.h
(rather than just looking at the rtl changes, which as you know I like
in general), so FWIW here are some comments on wide-int.h.  I expect
a lot of them overlap with Richard B.'s comments.

I also expect many of them are going to be annoying, sorry, but this
first one definitely will.  The coding conventions say that functions
should be defined outside the class:

    http://gcc.gnu.org/codingconventions.html

and that opening braces should be on their own line, so most of the file
needs to be reformatted.  I went through and made that change with the
patch below, in the process of reading through.  I also removed "SGN
must be SIGNED or UNSIGNED." because it seemed redundant when those are
the only two values available.  The patch fixes a few other coding standard
problems and typos, but I've not made any actual code changes (or at least,
I didn't mean to).

Does it look OK to install?

I'm still unsure about these "infinite" precision types, but I understand
the motivation and I have no objections.  However:

>     * Code that does widening conversions.  The canonical way that
>       this is performed is to sign or zero extend the input value to
>       the max width based on the sign of the type of the source and
>       then to truncate that value to the target type.  This is in
>       preference to using the sign of the target type to extend the
>       value directly (which gets the wrong value for the conversion
>       of large unsigned numbers to larger signed types).

I don't understand this particular reason.  Using the sign of the source
type is obviously right, but why does that mean we need "infinite" precision,
rather than just doubling the precision of the source?
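
E.g. with plain C++ types standing in for the conversion (a sketch,
assuming 32-bit int and 64-bit long long):

  unsigned int u = 0x80000000;

  /* Extend by the sign of the source type (unsigned, so zero-extend),
     then truncate to the target: 2147483648, the right answer.  */
  long long a = (long long) (unsigned long long) u;

  /* Extend by the sign of the target type (signed, so sign-extend):
     -2147483648, the wrong answer for this conversion.  */
  long long b = (long long) (int) u;

The intermediate value here only needs twice the precision of the
source, which is what makes me wonder about "infinite" precision.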

>   * When a constant that has an integer type is converted to a
>     wide-int it comes in with precision 0.  For these constants the
>     top bit does accurately reflect the sign of that constant; this
>     is an exception to the normal rule that the signedness is not
>     represented.  When used in a binary operation, the wide-int
>     implementation properly extends these constants so that they
>     properly match the other operand of the computation.  This allows
>     you to write:
>
>                tree t = ...
>                wide_int x = t + 6;
>
>     assuming t is an int_cst.

This seems dangerous.  Not all code that uses "unsigned HOST_WIDE_INT"
actually wants it to be an unsigned value.  Some code uses it to avoid
the undefinedness of signed overflow.  So these overloads could lead
to us accidentally zero-extending what's conceptually a signed value
without any obvious indication that that's happening.  Also, hex
constants like 0x80000000 have type unsigned int, but it doesn't seem
safe to assume that such a constant was meant to be zero-extended.
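
For example, with the posted overloads, something like:

  tree t = ...;
  wide_int x = t + 0x80000000;

would (if I've followed the conversions correctly) zero-extend the
constant and add 2147483648, even in code where INT_MIN was what was
meant.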

I realise the same thing can happen if you mix "unsigned int" with
HOST_WIDE_INT, but the point is that you shouldn't really do that
in general, whereas we're defining these overloads precisely so that
a mixture can be used.

I'd prefer some explicit indication of the sign, at least for anything
other than plain "int" (so that the compiler will complain about uses
of "unsigned int" and above).

>   Note that the bits above the precision are not defined and the
>   algorithms used here are careful not to depend on their value.  In
>   particular, values that come in from rtx constants may have random
>   bits.

I have a feeling I'm rehashing a past debate, sorry, but rtx constants can't
have random bits.  The upper bits must be a sign extension of the value.
There's exactly one valid rtx for each (value, mode) pair.  If you saw
something different then that sounds like a bug.  The rules should already
be fairly well enforced though, since something like (const_int 128) --
or (const_int 256) -- will not match a QImode operand.

This is probably the part of the representation that I disagree most with.
There seem to be two main ways we could handle the extension to whole HWIs:

(1) leave the stored upper bits undefined and extend them on read
(2) keep the stored upper bits in extended form

The patch goes for (1) but (2) seems better to me, for a few reasons:

* As above, constants coming from rtl are already in the right form,
  so if you create a wide_int from an rtx and only query it, no explicit
  extension is needed.

* Things like logical operations and right shifts naturally preserve
  the sign-extended form, so only a subset of write operations need
  to take special measures.

* You have a public interface that exposes the underlying HWIs
  (which is fine with me FWIW), so it seems better to expose a fully-defined
  HWI rather than only a partially-defined HWI.

E.g. zero_p is:

  HOST_WIDE_INT x;

  if (precision && precision < HOST_BITS_PER_WIDE_INT)
    x = sext_hwi (val[0], precision);
  else if (len == 0)
    {
      gcc_assert (precision == 0);
      return true;
    }
  else
    x = val[0];

  return len == 1 && x == 0;

but I think it really ought to be just:

  return len == 1 && val[0] == 0;
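
Under (2), only the operations that can leave the upper bits in a
non-canonical state would need a fix-up, something like (a sketch,
reusing sext_hwi):

  /* Hypothetical post-write step: keep the bits above the precision
     equal to a sign extension of the value.  */
  if (precision % HOST_BITS_PER_WIDE_INT != 0)
    val[len - 1] = sext_hwi (val[len - 1],
                             precision % HOST_BITS_PER_WIDE_INT);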

>   When the precision is 0, all the bits in the LEN elements of
>   VEC are significant with no undefined bits.  Precisionless
>   constants are limited to being one or two HOST_WIDE_INTs.  When two
>   are used the upper value is 0, and the high order bit of the first
>   value is set.  (Note that this may need to be generalized if it is
>     ever necessary to support 32-bit HWIs again).

I didn't understand this.  When are two HOST_WIDE_INTs needed for
"precision 0"?

> #define addr_max_bitsize (64)
> #define addr_max_precision \

These should either be lower-case C++ constants or upper-case macros.
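
I.e. either something like:

  static const unsigned int addr_max_bitsize = 64;

or:

  #define ADDR_MAX_BITSIZE 64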

>  /* VAL is set to a size that is capable of computing a full
>     multiplication on the largest mode that is represented on the
>     target.  Currently there is a part of tree-vrp that requires 2x +
>     2 bits of precision where x is the precision of the variables
>     being optimized.  */
>  HOST_WIDE_INT val[WIDE_INT_MAX_ELTS];

This comment seems redundant with the one above WIDE_INT_MAX_ELTS
and likely to get out of date.

>     So real hardware only looks at a small part of the shift amount.
>     On IBM machines, this tends to be 1 more than what is necessary
>     to encode the shift amount.  The rest of the world looks at only
>     the minimum number of bits.  This means that only 3 gate delays
>     are necessary to set up the shifter.

I agree that it makes sense for wide_int to provide a form of shift
in which the shift amount is truncated.  However, I strongly believe
wide-int.h should not test SHIFT_COUNT_TRUNCATED directly.  It should
be up to the callers to decide when truncation is needed (and to what width).
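
I.e. callers that want the target semantics would do the masking
themselves, something like (a sketch, with lshift standing in for the
patch's shift routine):

  if (SHIFT_COUNT_TRUNCATED)
    cnt &= GET_MODE_BITSIZE (mode) - 1;
  result = x.lshift (cnt);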

We really need to get rid of the #include "tm.h" in wide-int.h.
MAX_BITSIZE_MODE_ANY_INT should be the only partially-target-dependent
thing in there.  If that comes from tm.h then perhaps we should put it
into a new header file instead.

> /* Return THIS as a signed HOST_WIDE_INT.  If THIS does not fit in
>    PREC, the information is lost. */
> HOST_WIDE_INT
> to_shwi (unsigned int prec = 0) const

Just dropping the excess bits seems dangerous.  I think we should assert
instead, at least when prec is 0.
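
I.e. something like (a sketch, assuming a fits_shwi_p predicate along
the lines of fits_uhwi_p):

  HOST_WIDE_INT
  to_shwi (unsigned int prec = 0) const
  {
    if (prec == 0)
      gcc_checking_assert (fits_shwi_p ());
    ...
  }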

> /* Return true if THIS is negative based on the interpretation of SGN.
>    For UNSIGNED, this is always false.  This is correct even if
>    precision is 0.  */
> inline bool
> wide_int::neg_p (signop sgn) const

It seems odd that you have to pass SIGNED here.  I assume you were doing
it so that the caller is forced to confirm signedness in the cases where
a tree type is involved, but:

* neg_p kind of implies signedness anyway
* you don't require this for minus_one_p, so the interface isn't consistent
* at the rtl level signedness isn't a property of the "type" (mode),
  so it seems strange to add an extra hoop there

> /* Return true if THIS fits in an unsigned HOST_WIDE_INT with no
>    loss of precision.  */
> inline bool
> wide_int_ro::fits_uhwi_p () const
> {
>   return len == 1 || (len == 2 && val[1] == 0);
> }

This doesn't look right, since len == 1 could mean that you have a
gazillion-bit all-ones number.  Also, the val[1] == 0 check seems
to assume the upper bits are defined when the precision isn't a multiple
of the HWI size (although as above I think that's a good thing and should be
enforced).
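
With the extension enforced, I'd expect something more like this
sketch:

  inline bool
  wide_int_ro::fits_uhwi_p () const
  {
    /* Everything fits if the precision is within one HWI; otherwise
       a single HWI fits only if it is nonnegative, and two fit only
       if the upper one is zero (a pure zero extension of the lower).  */
    return (precision <= HOST_BITS_PER_WIDE_INT
            || (len == 1 && val[0] >= 0)
            || (len == 2 && val[1] == 0));
  }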

sign_mask has:

>   gcc_unreachable ();
> #if 0
>   return val[len - 1] >> (HOST_BITS_PER_WIDE_INT - 1);
> #endif

Maybe remove this?

The inline div routines do:

>   if (overflow)
>     *overflow = false;

but then just pass overflow to divmod_internal.  Seems better to initialise
*overflow there instead.

div_floor has:

>     return divmod_internal (true, val, len, p1, s, cl, p2, sgn,
>                           &remainder, false, overflow);
>
>     if (quotient.neg_p (sgn) && !remainder.zero_p ())
>       return quotient - 1;
>     return quotient;
>   }

where the last bit is unreachable.
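Presumably the call was meant to assign to quotient rather than
return, i.e.:

    quotient = divmod_internal (true, val, len, p1, s, cl, p2, sgn,
                                &remainder, false, overflow);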

> /* Divide DIVISOR into THIS producing the remainder.  The result is
>    the same size as the operands.  The sign is specified in SGN.
>    The output is floor truncated.  OVERFLOW is set to true if the
>    result overflows, false otherwise.  */
> template <typename T>
> inline wide_int_ro
> wide_int_ro::mod_floor (const T &c, signop sgn, bool *overflow = 0) const

It's really the quotient that's floor-truncated, not the output
(the remainder).  I was a bit confused at first why you'd need to
truncate an integer remainder.  Same for the other functions.

> #ifdef DEBUG_WIDE_INT
>   debug_vwa ("wide_int_ro:: %d = (%s == %s)\n", result, *this, s, cl, p2);
> #endif

I think these are going to bitrot quickly if we #if 0 them out.
I think we should either use:

  if (DEBUG_WIDE_INT)
    debug_vwa ("wide_int_ro:: %d = (%s == %s)\n", result, *this, s, cl, p2);

or drop them.

The implementations of the general to_shwi1 and to_shwi2 functions look
identical.  I think the comment should explain why two functions are needed.

> /* Negate THIS.  */
> inline wide_int_ro
> wide_int_ro::operator - () const
> {
>   wide_int_ro r;
>   r = wide_int_ro (0) - *this;
>   return r;
> }
>
> /* Negate THIS.  */
> inline wide_int_ro
> wide_int_ro::neg () const
> {
>   wide_int_ro z = wide_int_ro::from_shwi (0, precision);
>
>   gcc_checking_assert (precision);
>   return z - *this;
> }

Why do we need both of these, and why are they implemented slightly
differently?

> template <int bitsize>
> inline bool
> fixed_wide_int <bitsize>::multiple_of_p (const wide_int_ro &factor,
>                                        signop sgn,
>                                        fixed_wide_int *multiple) const
> {
>   return wide_int_ro::multiple_of_p (factor, sgn,
>                                    reinterpret_cast <wide_int *> (multiple));
> }

The patch has several instances of this kind of reinterpret_cast.
It looks like an aliasing violation.
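
An aliasing-safe alternative would be to compute into a temporary and
copy (a sketch, assuming fixed_wide_int can be assigned from a
wide_int_ro):

  wide_int tmp;
  bool result = wide_int_ro::multiple_of_p (factor, sgn, &tmp);
  if (multiple)
    *multiple = tmp;
  return result;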

The main thing that's changed since the early patches is that we now
have a mixture of wide-int types.  This seems to have led to a lot of
boiler-plate forwarding functions (or at least it felt like that while
moving them all out of the class).  And that in turn seems to be because
you're trying to keep everything as member functions.  E.g. a lot of the
forwarders are from a member function to a static function.

Wouldn't it be better to have the actual classes be light-weight,
with little more than accessors, and do the actual work with non-member
template functions?  There seem to be three grades of wide-int:

  (1) read-only, constant precision  (from int, etc.)
  (2) read-write, constant precision  (fixed_wide_int)
  (3) read-write, variable precision  (wide_int proper)

but we should be able to hide that behind templates, with compiler errors
if you try to write to (1), etc.

To take one example, the reason we can't simply use things like
std::min on wide ints is because signedness needs to be specified
explicitly, but there's a good reason why the standard defined
std::min (x, y) rather than x.min (y).  It seems like we ought
to have smin and umin functions alongside std::min, rather than
make them member functions.  We could put them in a separate namespace
if necessary.
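
I.e. something like (a sketch, using the patch's lts_p/ltu_p
comparisons):

  template <typename T>
  inline T
  smin (const T &a, const T &b)
  {
    return a.lts_p (b) ? a : b;  /* signed minimum */
  }

  template <typename T>
  inline T
  umin (const T &a, const T &b)
  {
    return a.ltu_p (b) ? a : b;  /* unsigned minimum */
  }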

I might have a go at trying this last part next week, unless Richard is
already in the process of rewriting things :-)

Thanks,
Richard


Attachment: reformat-wide-int.diff.bz2