1. While it is true that the tensor operations are the bottleneck, the high-level code dictates which tensors get computed. For example, you can do dead code elimination to remove unnecessary tensor operations (this is a real need right now), or deforestation (which requires an effect analysis) to fuse tensor operations into one graph, which can then be further optimized by all the graph-level passes. Right now we have dead code elimination, but it is simply incorrect - it assumes there are no effects, and @jroesch needs it fixed.
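To make the effect problem concrete, here is a minimal sketch of effect-aware dead code elimination on a toy let-based IR. All of the names here (Let, Call, Var, EFFECTFUL) are hypothetical illustrations, not TVM's actual Relay IR: the point is only that a binding may be deleted when its result is unused *and* its right-hand side is pure.

```python
from dataclasses import dataclass

@dataclass
class Var:
    name: str

@dataclass
class Call:
    op: str
    args: list  # list of Var

@dataclass
class Let:
    var: Var
    value: Call
    body: object  # a Let or a Var

# Ops assumed (for this sketch) to have side effects, e.g. writing a
# reference. A purity-only DCE would wrongly drop these when unused.
EFFECTFUL = {"ref_write", "print"}

def collect_uses(expr):
    """Free-variable names used anywhere in expr."""
    if isinstance(expr, Var):
        return {expr.name}
    return {a.name for a in expr.value.args} | collect_uses(expr.body)

def dce(expr):
    """Remove bindings that are both unused and pure; keep the rest."""
    if isinstance(expr, Var):
        return expr
    body = dce(expr.body)
    if expr.var.name in collect_uses(body) or expr.value.op in EFFECTFUL:
        return Let(expr.var, expr.value, body)
    return body  # pure and unused: safe to delete
```

For example, `dce(Let(Var("t"), Call("mul", []), Let(Var("w"), Call("ref_write", []), Var("z"))))` drops the unused pure binding of `t` but keeps the unused `ref_write` binding, which is exactly the case the current pass gets wrong.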
2. I don't think there is much value in CFA. However, we do need pointer analysis, because the AD pass creates references (including references to closures that mutate references) everywhere, and AAM brings pointer analysis for free.
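A sketch of why the pointer analysis comes for free in an AAM-style analysis (this is illustrative pseudocode, not TVM code): abstract locations are allocation sites, of which there are finitely many, and the abstract store maps each location to the set of values it may hold. That store *is* the points-to map, with no separate analysis needed.

```python
from collections import defaultdict

store = defaultdict(set)   # abstract location -> set of abstract values
env = {}                   # variable name -> abstract location

def alloc(site):
    """Abstract allocation: the location is just the (finite) program point."""
    return ("loc", site)

def new_ref(site, value):
    loc = alloc(site)
    store[loc].add(value)  # weak update: join, never overwrite
    return loc

def ref_write(loc, value):
    store[loc].add(value)  # again a weak update, soundly accumulating

def ref_read(loc):
    return store[loc]      # every value this reference may point to

# Two references allocated at the same program point share one abstract
# location, so the analysis conservatively merges their contents:
r1 = new_ref("site1", "grad_a")
r2 = new_ref("site1", "grad_b")
assert ref_read(r1) == {"grad_a", "grad_b"}
```

The closures created by AD fit the same scheme: a closure's captured references are just more abstract locations in the store, so no special machinery is needed.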
3. Again, AAM brings pointer analysis for free, because AAM maps every variable to a location (which is exactly what pointer analysis does). Also, the dataflow framework does not handle closures, or we would stick with it. For backward analysis, I don't see a problem: the idea of AAM is to map every variable to one of finitely many abstract locations, and you can run your backward transition over those locations in just the same way.
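To illustrate the backward direction, here is a standard liveness computation over a fixed, finite set of locations, iterated to a fixed point (a generic textbook sketch, not tied to any TVM API). Once every variable is mapped to finitely many locations, the backward transfer `live_in = use ∪ (live_out - def)` terminates for the same reason the forward case does.

```python
def liveness(blocks, succ):
    """blocks: name -> (use set, def set); succ: name -> successor names.
    Returns live_in for every block, computed backward to a fixed point."""
    live_in = {b: set() for b in blocks}
    live_out = {b: set() for b in blocks}
    changed = True
    while changed:
        changed = False
        for b in blocks:
            out = set().union(*(live_in[s] for s in succ[b])) if succ[b] else set()
            use, defs = blocks[b]
            inn = use | (out - defs)
            if inn != live_in[b] or out != live_out[b]:
                live_in[b], live_out[b] = inn, out
                changed = True
    return live_in

# Hypothetical three-block straight-line program:
# b0 defines x, b1 uses x, b2 does nothing.
blocks = {"b0": (set(), {"x"}), "b1": ({"x"}, set()), "b2": (set(), set())}
succ = {"b0": ["b1"], "b1": ["b2"], "b2": []}
```

Here `liveness(blocks, succ)` reports `x` live into `b1` but not into `b0`, since `b0` defines it; the transition is backward, but the domain is the same finite location set.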
Happy new year!

https://github.com/apache/incubator-tvm/issues/4468#issuecomment-570034759