Apologies for not following the conversation in detail in real time. Here are
some thoughts on how we can make sure a UMA-integrated accelerator is also a
Collage-supported 'backend'.

 - The registration of patterns will need to support the existing triple of
(pattern name, pattern, predicate), since the predicates are necessary to
control support based on dtypes, shapes, backend version, etc. No big deal;
there's a small sketch of this after the list.
 - I'm assuming those triples will continue to end up in either the global
pattern table registry, or can otherwise be retrieved by a system like Collage
which wishes to bypass the 'eager' UMA partitioning with its own search. Again
no big deal, we just need to know where to look.
 - Though not significant to Collage, I assume the order of application of the 
partitioning patterns matches the registration order?
 - Collage requires external codegen compiler names to be 1:1 with already 
registered target kinds with the same kind name. It also requires instances of 
those targets to be provided in the build targets list, even if those instances 
are nothing other than Target("my_backend") with no extra attributes. But the 
target kinds may also support additional attributes, and the various 
transitions into external codegen code have been changed to ensure the matching 
Target instance has been pushed as the Target.current() so that codegen can 
retrieve and extract any attributes to guide compilation (there's a sketch of
this after the list). I think that matches some of the conversation above,
except that the attributes can be fetched via Target.current().get_attr("foo"),
but I might have missed the point in that sub-thread.
 - Collage assumes a regular build of an IRModule will respect any existing
"Compiler"-attributed functions already in the module. I think all that means
is that the UMA partitioner should leave existing partitions alone while still
triggering the appropriate custom downstream compilation for them, and given
the partitioner uses the existing passes I think that should all Just Work
(the check is sketched after this list).
 - Collage assumes it can do its partitioning before any other
backend-specific passes. I'm assuming, however, that some of the Relay pass
phases mentioned can run before partitioning. If so, I'm guessing we'd need to
first apply those pre-partitioning phases in a deterministic order (in the hope
that they compose sensibly), then partition using Collage, then run the
post-partitioning phases as usual.
 - Collage uses the list of available Targets to guide its search, but if I
understand correctly UMA uses the registration order of backends to enforce a
fixed partitioning order. Perhaps this suggests the Collage partitioner should
be integrated as a user-controlled alternative to the default 'eager'
partitioner supplied by UMA (presumably a loop of the usual Relay
MergeComposite/AnnotateTarget/MergeCompilerRegions?/PartitionGraph passes for
each backend, sketched after this list). That way the user can use the same
construct-and-register-backends-of-interest API.
 - I'm surprised by the emphasis on going via TIR. Are we explicitly saying
that any BYOC integrations which don't need or want to go via TIR don't fall
under the UMA integration API? If so, that will make Collage/UMA integration
harder, since Collage would have to account for both UMA-style and
original-style integrations.
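
To make the first two points concrete, here is a minimal sketch of the
pattern-triple registration and lookup I have in mind, assuming the existing
register_pattern_table/get_pattern_table machinery is still what UMA writes
into ("my_backend" and the float32 predicate are placeholders):

```python
from tvm.relay.dataflow_pattern import is_op, wildcard
from tvm.relay.op.contrib.register import register_pattern_table, get_pattern_table


def _fp32_only(expr):
    # Example predicate: only claim the match when the result is float32.
    return expr.checked_type.dtype == "float32"


@register_pattern_table("my_backend")  # "my_backend" is a placeholder backend name
def my_backend_patterns():
    conv2d_relu = is_op("nn.relu")(is_op("nn.conv2d")(wildcard(), wildcard()))
    # The existing (pattern name, pattern, predicate) triple.
    return [("my_backend.conv2d_relu", conv2d_relu, _fp32_only)]


# Collage (or any other search) can bypass the eager partitioner and fetch
# the same triples by compiler name:
patterns = get_pattern_table("my_backend")
```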
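
And here is roughly what I mean by the Target requirements, again only a
sketch with a placeholder "my_backend" kind (the kind itself would have to be
registered, e.g. via TVM_REGISTER_TARGET_KIND on the C++ side) and a
hypothetical "foo" attribute, using the Python-side attrs view rather than
get_attr:

```python
import tvm

# Instances of the matching target kind go into the build targets list, even
# if they carry no extra attributes (assumes the "my_backend" kind exists):
targets = [
    tvm.target.Target("llvm"),
    tvm.target.Target("my_backend"),  # placeholder kind name
]
# lib = tvm.relay.build(mod, target=targets)  # illustrative only


def read_backend_attr():
    # Inside the external codegen for "my_backend", the matching Target
    # instance has been pushed as the current target, so attributes can be
    # read back to guide compilation.
    target = tvm.target.Target.current(allow_none=False)
    return target.attrs["foo"] if "foo" in target.attrs else None  # "foo" is hypothetical
```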
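
For the 'respect existing partitions' point, the property I'm relying on is
just that already-claimed functions carry the "Compiler" attribute; something
like this (a sketch, not UMA code) is all a later partitioner needs to check
before deciding whether to touch a function:

```python
from tvm import relay


def claimed_functions(mod):
    # Functions already handed to an external compiler carry the "Compiler"
    # function attribute; a later partitioner should leave these alone but
    # still let the custom downstream compilation run for them.
    return [
        gvar
        for gvar, func in mod.functions.items()
        if isinstance(func, relay.Function)
        and func.attrs is not None
        and "Compiler" in func.attrs
    ]
```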
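
Finally, this is what I'm picturing as the default 'eager' partitioner that
Collage would be a user-controlled alternative to, assuming it really is just
the usual Relay BYOC passes applied per backend in registration order (the
pass names are the existing Relay ones; the helper itself is hypothetical):

```python
from tvm import relay, transform
from tvm.relay.op.contrib.register import get_pattern_table


def eager_partition(mod, backend_names):
    # Apply the standard BYOC partitioning passes once per backend, in
    # registration order, so earlier backends get first claim on operators.
    for name in backend_names:
        seq = transform.Sequential(
            [
                relay.transform.MergeComposite(get_pattern_table(name)),
                relay.transform.AnnotateTarget(name),
                relay.transform.MergeCompilerRegions(),
                relay.transform.PartitionGraph(),
            ]
        )
        mod = seq(mod)
    return mod
```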

Thanks,
-m
