On Mon, Aug 2, 2021 at 12:43 PM Richard Sandiford
<richard.sandif...@arm.com> wrote:
>
> Richard Biener via Gcc-patches <gcc-patches@gcc.gnu.org> writes:
> > On Fri, Jul 30, 2021 at 5:59 PM Richard Sandiford via Gcc-patches
> > <gcc-patches@gcc.gnu.org> wrote:
> >>
> >> This patch adds a simple class for holding A/B fractions.
> >> As the comments in the patch say, the class isn't designed
>> to have nice numerical properties at the extremes.
> >>
> >> The motivating use case was some aarch64 costing work,
> >> where being able to represent fractions was much easier
> >> than using single integers and avoided the rounding errors
> >> that would come with using floats.  (Unlike things like
> >> COSTS_N_INSNS, there was no sensible constant base factor
> >> that could be used.)
> >>
> >> Tested on aarch64-linux-gnu and x86_64-linux-gnu.  OK to install?
> >
> > Hmm, we use the sreal type for profiles.  I don't see any overflow/underflow
> > handling in your class - I suppose you're going to use it on integer types
> > given we're not allowed to use native FP?
>
> Yeah, I'm going to use it on integer types.  And it's not designed
> to have nice properties at extremes, including handling underflow and
> overflow.

So maybe assert that it doesn't overflow?  In particular, the
numerator/denominator pair is prone to overflowing in a fractional
representation.

There's the option to round or to ICE.  Or rather, the only real option
is to round (or to use a more expensive arbitrary-precision
representation).

So the question is whether the fractional behavior is better in more
cases than the sreal behavior (I can easily believe it is).
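
To make the overflow point concrete, here's a throwaway toy (purely
illustrative, made-up names, not the class from the patch) showing that
even with gcd reduction, summing fractions whose denominators are
pairwise coprime leaves the denominator as the product of all of them:

  /* Toy fraction type, for illustration only; no overflow handling.  */
  #include <cstdint>
  #include <cstdio>
  #include <numeric>   /* std::gcd, C++17.  */

  struct toy_fraction
  {
    uint64_t num, den;
  };

  /* a/b + c/d = (a*d + c*b) / (b*d), reduced by the gcd.  */
  static toy_fraction
  toy_add (toy_fraction a, toy_fraction b)
  {
    uint64_t n = a.num * b.den + b.num * a.den;
    uint64_t d = a.den * b.den;
    uint64_t g = std::gcd (n, d);
    return { n / g, d / g };
  }

  int
  main ()
  {
    toy_fraction sum = { 0, 1 };
    for (uint64_t p : { 2u, 3u, 5u, 7u, 11u, 13u, 17u })
      {
        sum = toy_add (sum, { 1, p });
        printf ("den = %llu\n", (unsigned long long) sum.den);
      }
    /* The denominators are pairwise coprime, so the reduction never
       fires: after these seven terms the denominator is already 510510,
       and the product of the first sixteen primes no longer fits in
       uint64_t.  */
    return 0;
  }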

> I want to use it in costing code, where we already happily multiply
> and add “int”-sized costs without worrying about overflow.  I'll be
> using uint64_t for the fractions though, just in case. :-)
>
> sreal doesn't help because it's still significand/exponent.  That matters
> because…
>
> > I mean, how exactly does
> > the class solve the problem of rounding errors?
>
> …I wanted something that represented the results exactly (barring any of the
> integer ops overflowing).  This makes it meaningful to compare costs for
> equality.  It also means we can use ordered comparisons without having
> to introduce a fudge factor to cope with one calculation having different
> intermediate rounding from the other.

I think you're underestimating how quickly your denominator will
overflow.  So I suppose all factors of all possible denominators are
known?  In fact, what's your main source for the divisions?  The VF?

> E.g. aarch64 has code like:
>
>       if (scalar_cycles_per_iter < sve_estimate)
>         {
>           unsigned int min_cost
>             = orig_body_cost * estimated_poly_value (BYTES_PER_SVE_VECTOR);
>           if (body_cost < min_cost)
>             {
>               if (dump_enabled_p ())
>                 dump_printf_loc (MSG_NOTE, vect_location,
>                                  "Increasing body cost to %d because the"
>                                  " scalar code could issue within the limit"
>                                  " imposed by predicate operations\n",
>                                  min_cost);
>               body_cost = min_cost;
>               should_disparage = true;
>             }
>         }
>
> I want to be able to keep this while making scalar_cycles_per_iter and
> sve_estimate non-integral.
>
> Thanks,
> Richard
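
For what it's worth, the exact ordered comparison being argued for here
presumably boils down to a cross-multiplication, along these lines (a
sketch only, with toy names rather than the patch's class, and leaning
on GCC's unsigned __int128 so the cross products themselves can't
overflow):

  #include <cstdint>

  struct toy_fraction
  {
    uint64_t num, den;   /* den assumed nonzero.  */
  };

  /* a/b < c/d  iff  a*d < c*b; form the cross products in 128 bits so
     64-bit numerators and denominators can't overflow the comparison.  */
  static bool
  toy_less (toy_fraction a, toy_fraction b)
  {
    return ((unsigned __int128) a.num * b.den
            < (unsigned __int128) b.num * a.den);
  }

With something like that (or an overloaded operator<), the test above
could stay written as "if (scalar_cycles_per_iter < sve_estimate)", and
two mathematically equal costs really do compare equal, with no fudge
factor.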
