https://gcc.gnu.org/bugzilla/show_bug.cgi?id=85315
--- Comment #12 from Andrew Macleod <amacleod at redhat dot com> ---
Maybe I'm a little dense.  If we are presuming that &x + (a + b) implies
a + b == 0, then we should also assume that &x + a implies a == 0, and if we
can make those assumptions, then &x + 1 is garbage because we can assume
1 == 0.  And if a and b are both unsigned, then I guess we can also assume
b == -a, e.g. a == b == 0x80000000, i.e. (MAX_UINT + 1) / 2?

Now, if we decided to actually do this... I see this IL:

  <bb 2> :
  x.0_1 = x;
  y = x.0_1;
  a.1_2 = a;
  b.2_3 = b;
  _4 = a.1_2 + b.2_3;
  _5 = (long unsigned int) _4;
  _6 = _5 * 4;
  _7 = &y + _6;

The clear implication is that _6 == 0 in this expression?  If we implemented
that in the operator_pointer_plus::op1_range routine, and then were to
back-substitute, we'd get:

  (_6)[0,0] = _5 * 4                   ->  _5 == [0,0]
  (_5)[0,0] = (long unsigned int) _4   ->  _4 == [0,0]
  (_4)[0,0] = a.1_2 + b.2_3

(The first step is exact: _5 is a zero-extended 32-bit value, so _5 * 4
cannot wrap in 64 bits, and a non-wrapping product of 0 forces _5 == 0.)
That last statement gives us nothing additional, other than a potential
relationship to track, I suppose: a.1_2 == -b.2_3 for signed operands.
But it would record that _4 is [0,0] when we calculate an outgoing range.

Regardless, it seems another straightforward place to do this would be in
statement folding.  Isn't the basic assumption that

  _7 = &y + _6;

implies _6 is always 0?  That would enable us to fold this to

  _7 = &y;

and then _6 is unused and the other statements would ultimately just go
away.  So why not make folding simply throw away the "+ _6" part, since it
is now being forced to be 0?  We can't really assume that it is [0,0], but
then not use that information to optimize.
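For concreteness, here is a hedged sketch (my own illustration, not the PR's testcase) of the source shape that lowers to IL like the above, and of where the "offset must be 0" premise comes from:

```c
/* Hypothetical source producing IL of the form _7 = &y + _6.
   Since y is a single int, the only offsets that keep &y + i within
   bounds are i == 0 (and i == 1 for a one-past-the-end pointer that
   is never dereferenced) -- anything else is undefined behavior,
   hence the temptation to deduce a + b == 0 from the statement. */
int x;

int read_elt(unsigned a, unsigned b)
{
    int y = x;
    int *p = &y + (a + b);  /* in bounds only for a + b == 0 or 1 */
    return *p;              /* dereference: only a + b == 0 is valid */
}
```

Note the one-past-the-end wrinkle: forming &y + 1 is legal C, which is exactly why "&x + 1 is garbage" above shows the blanket offset-is-zero assumption is too strong.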
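On the unsigned remark: with 32-bit wraparound, a + b == 0 holds exactly when b == -a (mod 2^32), and the halfway value is the one symmetric example. A small standalone check (nothing from the PR itself):

```c
#include <stdint.h>

/* With 32-bit unsigned arithmetic, a + b wraps to 0 whenever b == -a.
   The symmetric case is a == b == 0x80000000, i.e. (MAX_UINT + 1) / 2,
   since 0x80000000 + 0x80000000 == 2^32 == 0 (mod 2^32). */
uint32_t wrapped_sum(uint32_t a, uint32_t b)
{
    return a + b;  /* reduced modulo 2^32 */
}
```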
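The first back-substitution step (_6 == [0,0] implying _5 == [0,0]) deserves a word, since multiplication by 4 is not invertible under wrapping in general. Here it is exact, because _5 is a zero-extension of a 32-bit value. A toy demonstration of that fact (my own sketch, not range-ops code):

```c
#include <stdbool.h>
#include <stdint.h>

/* _5 = (uint64_t)_4 zero-extends a 32-bit value, so _5 <= 0xffffffff
   and _5 * 4 <= 0x3fffffffc, which fits in 64 bits: the multiply can
   never wrap.  A non-wrapping product is 0 only when a factor is 0,
   so _6 == 0 really does force _5 == 0 (and hence _4 == 0). */
bool step_is_exact(uint32_t v4)
{
    uint64_t v5 = (uint64_t)v4;  /* _5 = (long unsigned int) _4 */
    uint64_t v6 = v5 * 4;        /* _6 = _5 * 4, no 64-bit wrap possible */
    return (v6 == 0) == (v4 == 0);
}
```

Had _5 been a full 64-bit unknown, _5 * 4 == 0 would also admit _5 in {2^62, 2^63, 3 * 2^62}, and the range deduction would be weaker.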