https://gcc.gnu.org/bugzilla/show_bug.cgi?id=77898

--- Comment #7 from Marc Glisse <glisse at gcc dot gnu.org> ---
(In reply to Martin Sebor from comment #6)
> I meant a subrange of the i variable (i.e., a subrange of int).  The range
> of every variable is necessarily bounded by its type so returning a range of
> [INT_MIN, INT_MAX] for an int isn't terribly helpful.

Do you actually get that range when calling get_range_info on the int i_6 (as
opposed to _2 or i.0_1, which are different variables of different types)? If
so, it looks like a bug.
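
For context, here is a minimal sketch of the kind of code under discussion (a hypothetical reconstruction, not the PR's actual testcase; the SSA names `i_6`, `i.0_1` and `_2` quoted above are GCC's per-compilation naming, so the numbering here is only illustrative):

```c
#include <stddef.h>

static char buf[10];

/* The int variable i gives rise to an SSA name such as i_6 (type
   int), while the conversion feeding the pointer arithmetic
   introduces distinct SSA names (e.g. i.0_1 and _2) of
   unsigned/sizetype.  get_range_info records a range per SSA name,
   so each of those names can carry a different range.  */
size_t
f (int i)
{
  if (i > 4)
    i = 4;
  if (i < 0)
    i = 0;
  /* Here i is known to lie in [0, 4].  */
  return __builtin_object_size (buf + i, 0);
}
```

Depending on pass ordering, `__builtin_object_size` here may fold to a constant or to `(size_t)-1` ("don't know"); which one is exactly the issue being debated.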

> I understand that the range is the most accurate it can be at this stage
> (after EVRP and before VRP1) and the get_range_info() function doesn't have
> enough smarts to indicate whether there's a chance the range might improve
> (e.g., after inlining or even with LTO).

I can't think of many cases where you know there is no chance of improving the
range, but the result is not constant.

> I suspect your suggestion is what I'm going to have to go with.  What
> bothers me about it is that it means embedding assumptions into the
> tree-object-size pass about the number of times it runs, and throwing away
> possibly optimal results computed in the initial runs of the pass and only
> using the last one, even if the last one is no better than the first.
> 
> In general this approach also denies downstream clients of a pass the
> benefit of gradually refining their results based on gradual refinements of
> the results provided by it.  In this case, it means preventing the
> tree-object-size pass from returning a potentially useful if not optimal
> size of an object based on the type (but not the value) of the offset. 
> Instead, the tree-object-size pass must return a "don't know" value even
> though it "knows" that the value is in some range.

Hmm, now it sounds like you want to set a value range on the lhs of the __bos
call. Is there some reason not to do that?
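
One way to picture the refinement being asked for: even when the offset is not constant, its value range bounds the remaining object size, so a range (rather than "don't know") could in principle be attached to the `__bos` result. A minimal sketch of that interval arithmetic, with made-up helper names rather than any actual tree-object-size API:

```c
#include <stddef.h>

/* For __builtin_object_size mode 0 (maximum remaining size), the
   smallest possible offset yields the largest remainder; for mode 2
   (minimum remaining size), the largest offset yields the smallest.
   Offsets past the end of the object clamp to 0.  */
static size_t
remaining_max (size_t objsize, size_t off_min)
{
  return off_min > objsize ? 0 : objsize - off_min;
}

static size_t
remaining_min (size_t objsize, size_t off_max)
{
  return off_max > objsize ? 0 : objsize - off_max;
}
```

For a 10-byte object with an offset known to lie in [2, 4], this gives a result range of [6, 8] instead of "unknown".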
