jdoerfert added a comment.

@rjmccall Thanks for the quick response! I tried to describe my view below. To 
avoid confusion on my side, could you maybe describe which attribute you think 
should be created, derived, and used at which point in the optimization 
pipeline?

In D69498#1724232 <https://reviews.llvm.org/D69498#1724232>, @rjmccall wrote:

> The kind of tracking that you're doing of convergent operations is one 
> example of a basically limitless set of "taint" analyses, of which half a 
> dozen already exist in LLVM.


Yes, there are a lot, and, to make things worse, I added another one to rule 
them all. So, at least in the medium term, there should be only one 
"(non)convergent" analysis.

> I object to the idea that an arbitrary number of unrelated tests need to be 
> changed every time somebody invents a new taint analysis.

Maybe the problem is that I don't understand why the effect on the tests is so 
bad, given that it is the same effect any new default attribute has (= very 
minimal and mechanical).
Or maybe I have less of an issue with these changes because I expect other 
attributes to be added as defaults soon, and they will have a similar effect on 
the tests.
(To mention one with a made-up name: `forward_progress_guarantee` for functions 
that carry this property, instead of pretending they all do throughout the 
middle-end and hoping the front-ends deal with it accordingly.)

> I also object to the idea that frontends have a responsibility to proactively 
> compensate for every new weird target-specific representation change just to 
> get the optimizer to perform standard transformations that fully respect the 
> documented semantics of LLVM IR.

I generally agree, and this has to be announced properly, but I don't think it 
will require "much change" (there are default attribute sets in the frontends 
already, I suppose); see the sketch after this paragraph.
It is also, arguably, in the interest of many/all frontends to be able to 
produce correct (GPU) parallel code by minimizing the chance of accidental 
errors.
Finally, we basically already have the situation you object to every time we 
fix an "error" in the IR semantics that allowed a transformation in the general 
case (forward progress comes to mind again, as does dereferenceable_globally in 
D61652 <https://reviews.llvm.org/D61652>, ...).
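
To illustrate how small the frontend side could be, here is a rough sketch 
(the helper name is made up, and `noconvergent` as a string attribute is only 
an assumed spelling) of a GPU-aware frontend marking a function it knows to be 
non-convergent:

```cpp
// Rough sketch only: the helper name is invented and `noconvergent` is the
// hypothetical attribute under discussion, spelled as a string attribute.
#include "llvm/IR/Function.h"

// A GPU-aware frontend would call this for every function it can prove does
// not depend on convergent behavior (no convergent intrinsics, no unknown
// callees), much like it already adds other default attributes today.
static void markKnownNonConvergent(llvm::Function &F) {
  F.addFnAttr("noconvergent");
}
```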

In D69498#1724245 <https://reviews.llvm.org/D69498#1724245>, @rjmccall wrote:

> If you can come up with a way to make this change that doesn't require 
> changes to non-GPU frontends or tests, I agree that treating functions as 
> convergent by default makes sense for GPU targets.


Do you object to `noconvergent` as an alternative to `convergent`?
If not, the one "problem" that pops up with not having it as a default 
attribute is that the middle-end would then need to check the target in 
addition to the `noconvergent` attribute, right?
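
For concreteness, a rough sketch (helper name and attribute spelling are made 
up; only the general shape matters) of the query a middle-end transformation 
would then need:

```cpp
// Rough sketch only: helper name and attribute spelling are invented.
#include "llvm/ADT/Triple.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/Module.h"

// What a transformation would have to ask before hoisting/sinking a call if
// `noconvergent` were an ordinary (non-default) attribute: the target has to
// be consulted in addition to the attribute. With a default attribute, the
// last line alone would suffice.
static bool mayBeConvergent(const llvm::Function &F) {
  llvm::Triple T(F.getParent()->getTargetTriple());
  bool IsGPU = T.isNVPTX() || T.getArch() == llvm::Triple::amdgcn ||
               T.getArch() == llvm::Triple::r600;
  if (!IsGPU)
    return false; // Non-GPU targets have no convergent operations to preserve.
  // On GPU targets, the absence of `noconvergent` forces us to assume the
  // function (transitively) contains convergent operations.
  return !F.hasFnAttribute("noconvergent");
}
```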

