https://gcc.gnu.org/bugzilla/show_bug.cgi?id=103168

--- Comment #5 from hubicka at kam dot mff.cuni.cz ---
> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=103168
> 
> --- Comment #4 from Richard Biener <rguenth at gcc dot gnu.org> ---
> (In reply to Jan Hubicka from comment #3)
> > This is a simple (and fairly common) case we don't optimize:
> > struct a {
> >         int a;
> >         static __attribute__ ((noinline))
> >         int ret (int v) {return v;}
> > 
> >         __attribute__ ((noinline))
> >         int inca () {return a++;}
> > };
> > int
> > test()
> > {
> >         struct a av;
> >         av.a=1;
> >         int val = av.ret (0) + av.inca();
> >         av.a=2;
> >         return val + av.ret(0) + av.inca();
> > }
> > 
> > While ret is const, because it is in a COMDAT group we only detect it as
> > pure, which makes GVN give up on proving that its value did not change
> > across the av.a=2 store.  We could easily work this out from the modref
> > summary (which says that the function makes no memory loads).
> 
> This case is a bit different since it just exposes that we do not perform
> any sort of alias walking for calls in VN.  In fact even with modref we'd
> need to perform multiple walks with the ao_refs of all stored "pieces".
> At least that should be doable though.

Yep, it was my original intention to point this out :)
> 
> If you can provide a cut&paste place to walk & create those ao_refs I could
> see to cook up the relevant VN bits.  But for next stage1 of course.
The following should work (pretty much the same loop is in dse_optimize_call,
but for stores instead of loads):
  bool unknown_memory_access = false;
  if (modref_summary *summary = get_modref_function_summary (stmt, NULL))
    {
      /* First check whether we can do something useful.
         Like for DSE this is easy to precompute in the summary;
         I will be happy to implement that.  */
      for (auto base_node : summary->loads->bases)
        if (base_node->all_bases || unknown_memory_access)
          {
            unknown_memory_access = true;
            break;
          }
        else
          for (auto ref_node : base_node->refs)
            if (ref_node->all_refs)
              {
                unknown_memory_access = true;
                break;
              }

      /* Do the walking.  */
      if (!unknown_memory_access)
        for (auto base_node : summary->loads->bases)
          for (auto ref_node : base_node->refs)
            if (ref_node->all_refs)
              unknown_memory_access = true;
            else
              for (auto access_node : ref_node->accesses)
                if (access_node.get_ao_ref (stmt, &ref))
                  {
                    ref.base_alias_set = base_node->base;
                    ref.ref_alias_set = ref_node->ref;
                    ... do the ref walking
                  }
                else if (access_node.get_call_arg (stmt))
                  ... we do not know the offset but can still
                  walk using ptr_deref_may_alias_ref_p_1
                else
                  /* Unlikely to happen (for example when the number of
                     args in the call stmt mismatches the actual
                     function body).  */
                  unknown_memory_access = true;
    }
  if (unknown_memory_access)
    ... even here walking makes sense, to skip stores for which
    ref_maybe_used_by_call_p is false
  ... do the walk for function arguments passed by value
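
To make the first phase (the "can we do something useful" scan) concrete,
here is a standalone sketch of just that part.  It is not GCC code: the
`*_t` structs below are hypothetical, simplified stand-ins for
modref_tree / modref_base_node / modref_ref_node / modref_access_node,
modelling only the shape of the summary:

```cpp
#include <cassert>
#include <vector>

/* Hypothetical stand-ins for the modref summary nodes; only the
   fields the scan below needs.  */
struct access_node_t { bool known_offset; };
struct ref_node_t { bool all_refs; std::vector<access_node_t> accesses; };
struct base_node_t { bool all_bases; std::vector<ref_node_t> refs; };
struct loads_t { std::vector<base_node_t> bases; };

/* Return true if any load in the summary is described imprecisely
   (all_bases or all_refs), i.e. the per-access walk would be pointless.  */
static bool
has_unknown_memory_access (const loads_t &loads)
{
  for (const base_node_t &b : loads.bases)
    {
      if (b.all_bases)
        return true;
      for (const ref_node_t &r : b.refs)
        if (r.all_refs)
          return true;
    }
  return false;
}
```

The second phase would then iterate the same nested structure and build
one ao_ref per access, as in the sketch above.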