Martin Jambor <mjam...@suse.cz> writes:

> Hi,
>
> Because the simplified way of extracting value ranges from functions
> does not look at scalar constants (as one of the versions had been
> doing before) but instead relies on the value range within the jump
> function already capturing the constant, I have added a verifier to
> check that this is indeed so.  After the fixes in the previous
> patches, the verifier passes bootstrap and the testsuite.
>
> The verifier is, however, a bit lenient when it comes to pranges.
> The problematic case is when the ADDR_EXPR of a DECL yields a
> non-NULL prange adjusted with known alignment, but this is then
> converted to an integer because that is what the compiled source
> does, as is the case for example in testcase
> gcc.c-torture/compile/pr71109.c.  As long as we track the address in
> a prange, we do capture the non-nullness:
>
>   [prange] void (*<Txxxx>) (void) [1, +INF] MASK 0xfffffffffffffffe VALUE 0x0
>
> but when the range is folded into an integer we get
>
>   Value range: [irange] int [-INF, +INF] MASK 0xfffffffe VALUE 0x0
>
> which can contain zero.  I have not looked in detail into whether
> there is anything we can do about this case, or what that would be.

Let's ignore the mask for a moment.  This range:

[prange] void (*<Txxxx>) (void) [1, +INF] MASK 0xfffffffffffffffe VALUE 0x0

...cast to an int *does* include 0.  For example, 0x100000000UL cast
to a signed 32-bit int is 0.
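
To make the truncation concrete (a standalone snippet, not GCC code;
the signed narrowing is reduction modulo 2^32, which C++20 guarantees
and GCC has always implemented):

  #include <cstdint>
  #include <cassert>

  int main ()
  {
    // A 64-bit value that is clearly non-zero...
    std::uint64_t addr = 0x100000000ULL;        // 2^32
    // ...becomes 0 once truncated to a signed 32-bit int.
    std::int32_t as_int = static_cast<std::int32_t> (addr);
    assert (as_int == 0);
    return 0;
  }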

However, when we take into account the mask, 0 is clearly not possible
as the lowest bit is known to be 0.  This is an inherent problem we have
with sub-ranges and value/mask pairs not always being kept in sync.  The
reason we don't is that, at one time, doing so caused a 5% slowdown in
VRP.  See this comment:

  // The mask inherent in the range is calculated on-demand.  For
  // example, [0,255] does not have known bits set by default.  This
  // saves us considerable time, because setting it at creation incurs
  // a large penalty for irange::set.  At the time of writing there
  // was a 5% slowdown in VRP if we kept the mask precisely up to date
  // at all times.  Instead, we default to -1 and set it when
  // explicitly requested.  However, this function will always return
  // the correct mask.
  //
  // This also means that the mask may have a finer granularity than
  // the range and thus contradict it.  Think of the mask as an
  // enhancement to the range.  For example:
  //
  // [3, 1000] MASK 0xfffffffe VALUE 0x0
  //
  // 3 is in the range endpoints, but is excluded per the known 0 bits
  // in the mask.

We haven't tested the penalty recently, but I doubt it's cheap to keep
these bits in sync at all times.
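
As an illustration of the comment's example, here is a throwaway
snippet (not GCC code), assuming the usual CCP-style convention that
bits set in MASK are unknown while bits clear in MASK are known to
equal the corresponding bit of VALUE:

  #include <cstdint>
  #include <cassert>

  // A constant C is consistent with a VALUE/MASK pair iff all of its
  // known bits (those clear in MASK) agree with VALUE.
  static bool
  consistent_p (std::uint32_t c, std::uint32_t value, std::uint32_t mask)
  {
    return (c & ~mask) == (value & ~mask);
  }

  int main ()
  {
    // [3, 1000] MASK 0xfffffffe VALUE 0x0: bit 0 is known to be 0.
    std::uint32_t mask = 0xfffffffe, value = 0x0;
    assert (!consistent_p (3, value, mask));  // 3 is odd, so excluded.
    assert (consistent_p (4, value, mask));   // 4 is even, so allowed.
    return 0;
  }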

set_range_from_bitmask() is supposed to adjust ranges based on known
bitmasks, and it handles the common cases.  However, it seems we don't
have a case for excluding 0 from the range when the mask excludes the
lowest bit.  Adding this should fix the problem.
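
For reference, the kind of adjustment this would involve can be
sketched outside of GCC roughly like this (a hypothetical helper using
the same value/mask convention as above, not the actual
set_range_from_bitmask() implementation): raise the lower bound to the
nearest value consistent with the known bits, which drops 0 from the
range whenever the bitmask rules it out.

  #include <cstdint>
  #include <cassert>

  // Hypothetical sketch: raise LB to the smallest value >= LB whose
  // known bits (those clear in MASK) agree with VALUE.  Only meant for
  // masks where a few low bits are known, so the search stays short.
  static std::uint32_t
  tighten_lb (std::uint32_t lb, std::uint32_t value, std::uint32_t mask)
  {
    while ((lb & ~mask) != (value & ~mask))
      ++lb;
    return lb;
  }

  int main ()
  {
    // With the low bit known to be 1 (only odd values possible), a
    // lower bound of 0 is raised to 1, i.e. 0 drops out of the range.
    assert (tighten_lb (0, 0x1, 0xfffffffe) == 1);
    return 0;
  }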

Aldy
