https://gcc.gnu.org/bugzilla/show_bug.cgi?id=98834
--- Comment #7 from Richard Biener <rguenth at gcc dot gnu.org> ---
So for the missed optimization we run into

  /* 5) For aggregate copies translate the reference through them if
     the copy kills ref.  */
  else if (data->vn_walk_kind == VN_WALKREWRITE
...
      /* Adjust *ref from the new operands.  */
      ao_ref rhs1_ref;
      ao_ref_init (&rhs1_ref, rhs1);
      if (!ao_ref_init_from_vn_reference (&r, ao_ref_alias_set (&rhs1_ref),
                                          ao_ref_base_alias_set (&rhs1_ref),
                                          vr->type, vr->operands))
        return (void *)-1;
      /* This can happen with bitfields.  */
      if (maybe_ne (ref->size, r.size))
        return (void *)-1;

because the IL looks like

  __xD.2835 = __xD.2753._M_dataD.2625;
  __xx_11 = MEM <intD.9> [(struct _TupleD.2456 *)&__xD.2835];

and we try to express the load in terms of the RHS of the aggregate copy,
but we end up with __xD.2753._M_dataD.2625 itself (there's no subsetting
component ref on the original load), which loads 64 bits, not the 32 that
were requested.

The code tries to handle variable index accesses and thus doesn't simply
compute base + offset and a corresponding MEM_REF to look up.

The following seems to work but is otherwise untested:

diff --git a/gcc/tree-ssa-sccvn.c b/gcc/tree-ssa-sccvn.c
index e3806e55457..c47bd19a1fa 100644
--- a/gcc/tree-ssa-sccvn.c
+++ b/gcc/tree-ssa-sccvn.c
@@ -3306,7 +3306,17 @@ vn_reference_lookup_3 (ao_ref *ref, tree vuse, void *data_,
 	return (void *)-1;
       /* This can happen with bitfields.  */
       if (maybe_ne (ref->size, r.size))
-	return (void *)-1;
+	{
+	  /* If the access lacks some subsetting simply apply that by
+	     shortening it.  That in the end can only be successful
+	     if we can pun the lookup result which in turn requires
+	     exact offsets.  */
+	  if (known_eq (r.size, r.max_size)
+	      && known_lt (ref->size, r.size))
+	    r.size = r.max_size = ref->size;
+	  else
+	    return (void *)-1;
+	}
       *ref = r;
       /* Do not update last seen VUSE after translating.  */
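
For reference, a minimal sketch of the source-level pattern that produces
this shape of IL (the type, member, and function names here are invented
for illustration, not taken from the PR's testcase): a 64-bit aggregate is
copied into a local and then a 32-bit value is loaded through the copy's
address, which is what VN has to translate through and, with the change
above, should be able to shorten to the requested 32 bits.

/* Hypothetical reduction; the real testcase goes through tuple-like
   wrappers.  Compile with e.g. -O2 -fdump-tree-fre1-details and check
   whether the 32-bit load gets forwarded from w instead of being
   reloaded from x.  */

struct Pair { int first, second; };        /* 64 bits in total.  */
struct Wrapper { struct Pair _M_data; };

int
foo (struct Wrapper w)
{
  struct Pair x = w._M_data;               /* aggregate copy, 64 bits  */
  return *(int *) &x;                      /* 32-bit load from the copy */
}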