https://gcc.gnu.org/bugzilla/show_bug.cgi?id=65875
--- Comment #9 from rguenther at suse dot de <rguenther at suse dot de> ---
On Tue, 28 Apr 2015, jakub at gcc dot gnu.org wrote:

> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=65875
>
> --- Comment #8 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
> (In reply to Richard Biener from comment #4)
> > For h we get into the loop PHI handling code which drops to INF-1 if it
> > iterates "too much".  The rest probably ripples down from that.
> >
> > I can't see where that [1, 0x7ffffff] issue happens.
>
> Current trunk, -O2 -fdump-tree-vrp-details on the testcase has in vrp1 dump:
>
>   <bb 2>:
>   g.0_9 = g;
>   if (g.0_9 < 0)
>     goto <bb 3>;
>   else
>     goto <bb 9>;
>
>   <bb 3>:
>   _12 = -g.0_9;
>   i_13 = (long int) _12;
>   goto <bb 9>;
>
> and
>
> Visiting statement:
> _12 = -g.0_25;
> Found new range for _12: [1, +INF(OVF)]
> marking stmt to be not simulated again
>
> Visiting statement:
> i_13 = (long int) _12;
> Found new range for i_13: [1, +INF(OVF)]
> marking stmt to be not simulated again
>
> The point was that the cast from 32-bit signed to 64-bit signed also
> should imply that the value is not bigger than INT_MAX, and that is what
> we would do if the range for _12 was say [1, 0x7fffffff].

Yeah, but we _explicitly_ special-case the +INF(OVF) case in the source
range, assuming "arbitrary" overflow, and thus use +INF(OVF) in the
destination range as well.  Probably for warnings or whatever (I don't
like that OVF stuff anyway).

> And for h, the point was that if only constants are assigned to the
> variable in a loop, then no matter how many iterations the loop has, the
> resulting value will only be one of the constants (thus the smallest
> range covering those).  Or in this particular case, as the h = 1
> assignment is only in an endless loop, we could have computed just
> [0, 0] (but that is probably too rare to care about).

But h also gets 1 subtracted from it.  It is the PHI node

  h_2 = PHI <0(7), h_21(19)>

that causes the "issue" via the

      /* To prevent infinite iterations in the algorithm, derive ranges
         when the new value is slightly bigger or smaller than the
         previous one.  We don't do this if we have seen a new executable
         edge; this helps us avoid an overflow infinity for conditionals
         which are not in a loop.  If the old value-range was VR_UNDEFINED
         use the updated range and iterate one more time.  */
      if (edges > 0
          && gimple_phi_num_args (phi) > 1
          && edges == old_edges
          && lhs_vr->type != VR_UNDEFINED)

code as we go from

  Visiting PHI node: h_2 = PHI <0(7), h_21(19)>
      Argument #0 (7 -> 8 executable)
          0: [0, 0]
      Argument #1 (19 -> 8 executable)
          h_21: [0, 0]
  Meeting
    [0, 0]
  and
    [0, 0]
  to
    [0, 0]

to

  Simulating statement (from ssa_edges): h_2 = PHI <0(7), h_21(19)>

  Visiting PHI node: h_2 = PHI <0(7), h_21(19)>
      Argument #0 (7 -> 8 executable)
          0: [0, 0]
      Argument #1 (19 -> 8 executable)
          h_21: [0, 1]
  Meeting
    [0, 0]
  and
    [0, 1]
  to
    [0, 1]
  Intersecting
    [0, 9223372036854775806]
  and
    [-INF, +INF]
  to
    [0, 9223372036854775806]
  Found new range for h_2: [0, 9223372036854775806]

as the range grows we "drop" to +INF - 1 (to give the next iteration the
chance to compute whether it will overflow -- previously we dropped to
+INF(OVF) immediately).

Yes, we can do some more iterating, or instead of dropping right away to
+INF - 1 we could go towards +INF in log (+INF) steps.  It's all a
question of compile-time vs. accuracy in rare(?) cases.

Yes, if we have a way to statically compute a good range estimate (like
we try with adjust_range_with_scev) then that's of course even better.
I don't see anything obvious here though.
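
For context, the GIMPLE quoted in comment #8 corresponds to a source
pattern along the following lines (a reconstructed sketch, not
necessarily the PR's actual testcase; the function name foo is made up):

  int g;
  long i;

  void
  foo (void)
  {
    if (g < 0)
      i = -g;  /* negate in int (_12 = -g.0_9;), then widen
                  (i_13 = (long int) _12;)  */
  }

Because the negation is done in 32-bit int, -g can only overflow for
g == INT_MIN, which is where the +INF(OVF) in the range of _12 comes
from; for every other negative g the result lies in [1, INT_MAX].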
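
A minimal sketch of the clamping Jakub asks for: whatever the overflow
annotation says, the value of a 32-bit signed source still fits its
type, so the widened range can be intersected with [INT_MIN, INT_MAX].
The struct and helper below are illustrative only, not GCC internals:

  #include <limits.h>

  struct range { long min, max; };  /* [min, max]; +INF encoded as LONG_MAX  */

  /* Intersect a range obtained from a 32-bit signed value with what
     that type can actually hold, so [1, +INF] becomes [1, 0x7fffffff].  */
  static struct range
  clamp_to_int_range (struct range src)
  {
    if (src.min < INT_MIN)
      src.min = INT_MIN;
    if (src.max > INT_MAX)
      src.max = INT_MAX;
    return src;
  }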
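
The "go towards +INF in log (+INF) steps" alternative could look roughly
like the following (only a sketch, assuming non-negative bounds as in
the dump above; widen_upper_bound is a made-up name, not a GCC
function).  Instead of dropping a growing PHI bound straight to
+INF - 1, double it each round, so saturation takes roughly 63
iterations on a 64-bit type instead of one:

  #include <limits.h>

  /* Widen a growing upper bound exponentially rather than jumping to
     +INF - 1 immediately; reaches LONG_MAX - 1 in O(log LONG_MAX)
     rounds.  */
  static long
  widen_upper_bound (long cur_max)
  {
    if (cur_max >= LONG_MAX / 2)
      return LONG_MAX - 1;  /* another doubling would overflow: saturate  */
    return cur_max > 0 ? cur_max * 2 : 1;
  }

This trades extra propagation iterations for the chance that a loop exit
condition pins the range before the bound saturates -- exactly the
compile-time vs. accuracy question above.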