Thanks for the discussions. I think this is a good opportunity to discuss how we can flow target information through the compilation. Putting down some thoughts below.

## How to flow target information through compilation

One of the main design goals we want to move towards is the ability to incrementally transform the code (some of the transformations may not be done in the official build pipeline). Take BYOC as an example: in the future we might invoke a custom pass that slices out a subgraph and generates a function that requires a specific target lowering (e.g. CUDA). The diagram below from the TensorIR blitz course shows one example of such a flow:


![](upload://wn3kf0wLrHHJxQBel74YV6RpY1L.png)
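As a concrete illustration of such a customized step, the existing BYOC passes already produce functions that carry their own compilation constraint. A minimal sketch (using "dnnl" purely as an example external codegen) might look like:

```python
import tvm
from tvm import relay
from tvm.relay.op.contrib import dnnl  # registers which ops the "dnnl" codegen supports

# Build a small Relay module to partition.
x = relay.var("x", shape=(1, 16), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x], relay.nn.relu(x)))

# Slice the supported subgraph out into separate functions, each marked with a
# "Compiler" attribute that constrains how it must be lowered later.
mod = relay.transform.AnnotateTarget("dnnl")(mod)
mod = relay.transform.MergeCompilerRegions()(mod)
mod = relay.transform.PartitionGraph()(mod)
```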

In summary, there are two goals:
- G0: Ability to configure a single standard compilation path.
- G1: Ability to enable incremental customization (via the Python API), attach 
constraints (such as BYOC) and then send the module back to the build function 
for further lowering.

G0 is certainly sufficient for some of the use cases, like tvmc. However, it is 
also important for us to take inspiration from them and think more about making 
G1 a first-class citizen. A natural consequence of G1 is that we will need to 
preserve certain "target constraint" information in the IRModule (so previous 
transformations' decisions are self-contained), either as an attribute of a 
function (e.g. this function has to be compiled for CUDA) or of the IRModule.
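One hedged sketch of what attaching such a constraint could look like (the attribute key here is illustrative, not a settled convention):

```python
import tvm
from tvm import relay

# A function produced by some earlier custom pass.
x = relay.var("x", shape=(1, 16), dtype="float32")
func = relay.Function([x], relay.nn.relu(x))

# Record the earlier decision ("this function must be lowered for CUDA") as a
# function attribute, so the IRModule stays self-contained for later passes.
func = func.with_attr("target", tvm.target.Target("cuda"))

mod = tvm.IRModule.from_expr(func)
```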

It would be great for us to collectively think about how to standardize for G1 
while still having the ability to support G0.

## CompilationConfig and Composite Target

Back to CompilationConfig itself. I agree with @zxybazh that it looks quite 
like a special case of a composite target, and it is useful to discuss whether 
or not we can simply merge it into a structured Target.

Coming back to the definition of a target: if we look at LLVM's target triple, 
roughly `<arch><sub>-<vendor>-<os>-<env>`, we can see that it also contains 
runtime choice information such as the libc ABI, the OS type, and so on. So one 
could argue that choices like the `tvm_runtime` type and the packed function API 
can be part of a composite target (although they do not need to be in the leaf 
"c" target).
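To make that concrete, a hypothetical composite-target description folding these options in might look like the dict below. This is a sketch only: the `runtime`, `executor`, and `interface-api` fields are illustrative extensions, not attributes the current "composite" target kind accepts.

```python
# Hypothetical sketch: extra top-level fields would require extending the
# schema of the "composite" target kind.
composite_config = {
    "kind": "composite",
    "runtime": "c++",           # tvm runtime type choice
    "executor": "graph",        # graph vs. AOT executor choice
    "interface-api": "packed",  # packed-function calling convention choice
    "devices": [
        {"kind": "cuda", "arch": "sm_80"},
        {"kind": "llvm", "mcpu": "cortex-a72"},
    ],
}
# If such fields were supported, one could imagine simply constructing:
#   tgt = tvm.target.Target(composite_config)
```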

The advantage of having a CompilationConfig class:
- It is a structured class with explicit fields.

The advantages of making CompilationConfig a composite Target:
- We still have structured fields with target configurations.
- We get the benefit of being able to tag and record targets.
- A CompilationConfig can appear as a field or sub-target of something else. 
Imagine that we need to offload a subgraph to another customized compilation 
flow, which may need its own specification of the heterogeneous "targets".
- Same API argument (Target) for both graph-level compilation and operator-level 
compilation (see the sketch below).
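On the last point, a minimal sketch assuming the usual Relay and TE entry points: both the graph-level and the operator-level builds already accept a Target, so a composite Target could flow through either unchanged.

```python
import tvm
from tvm import relay, te

tgt = tvm.target.Target("llvm")

# Operator-level compilation takes a Target.
A = te.placeholder((128,), name="A", dtype="float32")
B = te.compute((128,), lambda i: A[i] + 1.0, name="B")
op_lib = tvm.build(te.create_schedule(B.op), [A, B], target=tgt)

# Graph-level compilation takes the same kind of argument.
x = relay.var("x", shape=(4,), dtype="float32")
graph_mod = tvm.IRModule.from_expr(relay.Function([x], relay.nn.relu(x)))
graph_lib = relay.build(graph_mod, target=tgt)
```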




