hi @aca88 @ds1231h @cgerum,

thanks for your comments! first off, it looks like I had some code committed 
that compiled on my mac but maybe not more broadly. the fix seems to be simple 
(use ostringstream instead of stringstream), so please pull from the branch 
again and see whether it compiles for you now. I just retested the branch and 
it does work for me when I run `python test_graph.py` after compiling.
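
to make that concrete, here's a minimal standalone sketch of the kind of 
substitution involved (the `EmitDecl` helper below is made up for illustration, 
not the actual codegen code):

```cpp
// Minimal sketch: std::ostringstream is a write-only string stream, which is
// all a code generator needs when building up strings.
#include <iostream>
#include <sstream>
#include <string>

std::string EmitDecl(const std::string& name, int size_bytes) {
  std::ostringstream os;  // was std::stringstream
  os << "static uint8_t " << name << "[" << size_bytes << "];";
  return os.str();
}

int main() {
  std::cout << EmitDecl("sid_2", 1024) << std::endl;
  return 0;
}
```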

@aca88 says:
> Also looking at the example output provided, I see [sid_2 being allocated](https://github.com/areusch/incubator-tvm/blob/aot-experiment/sample-output.txt#L125) but never being used.

correct. in this prototype, it actually replaces sid_2 with p0_param, a 
statically allocated tensor! a TODO cleanup is to omit the sid_2 allocation, as 
it's not needed here. I plan to merge support for linked parameters in [PR 
6917](https://github.com/apache/incubator-tvm/pull/6917).
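
just to illustrate what I mean (made-up values, with a plain malloc standing in 
for the real storage allocation; not the actual generated code):

```cpp
// Illustrative sketch only: p0_param / sid_2 mirror names from the sample
// output, but this is not the generated code itself.
#include <cstdio>
#include <cstdlib>

// Linked parameter: the weights live in the binary itself, so no runtime
// allocation or parameter loading is needed.
static const float p0_param[4] = {0.1f, 0.2f, 0.3f, 0.4f};

int main() {
  // Without linked parameters, the generated code would allocate a storage id
  // (sid_2) and copy the weights into it at load time:
  float* sid_2 = static_cast<float*>(malloc(sizeof(p0_param)));
  // With linked parameters, the operator call takes p0_param directly, which
  // is why the sid_2 allocation above becomes dead code (the TODO cleanup).
  printf("first weight: %f\n", p0_param[0]);
  free(sid_2);
  return 0;
}
```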

> I have a question about the return values of the function calls. More specifically, the output of [`fused_layout_transform_2`](https://github.com/areusch/incubator-tvm/blob/aot-experiment/sample-output.txt#L139). Is the return tensor [`values[1]`](https://github.com/areusch/incubator-tvm/blob/aot-experiment/sample-output.txt#L130) or is it [`subcall_ret_value`](https://github.com/areusch/incubator-tvm/blob/aot-experiment/sample-output.txt#L136)?

interesting point. you're right that right now, if an operator function returns 
something, we don't support that and throw it away. fortunately, operator 
functions only operate on their parameters: the typical "return value" of an 
operator function seems to be the last entry of `subcall_values` (i.e. 
`subcall_values[-1]` in python indexing). `rv` is meant to catch low-level 
errors from e.g. `TVMBackendAllocWorkspace`, which indicate bigger problems 
such as out-of-memory. `subcall_ret_value` would be the value to look at if we 
did support a PackedFunc-style return; that is an open question going forward.
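
to sketch the convention I'm describing (a made-up operator with a simplified 
signature, not the real generated code):

```cpp
// Hypothetical sketch of the calling convention discussed above; the names
// subcall_values / subcall_ret_value follow the sample output, but
// fused_example and its signature are invented for illustration.
#include <cstdio>

static int fused_example(void** values, int num_values, void** ret_value) {
  // The operator writes its result into its last argument.
  const float* in = static_cast<const float*>(values[0]);
  float* out = static_cast<float*>(values[num_values - 1]);
  for (int i = 0; i < 4; ++i) out[i] = in[i] + 1.0f;  // stand-in computation
  (void)ret_value;  // a PackedFunc-style return value would be written here
  return 0;         // 0 == success; nonzero signals e.g. out-of-memory
}

int main() {
  float input[4] = {1, 2, 3, 4};
  float output[4] = {0, 0, 0, 0};
  void* subcall_values[2] = {input, output};  // inputs first, output last
  void* subcall_ret_value = nullptr;          // currently thrown away
  int rv = fused_example(subcall_values, 2, &subcall_ret_value);
  if (rv != 0) {
    fprintf(stderr, "operator failed with low-level error %d\n", rv);
    return rv;
  }
  // the operator's "result" is whatever it wrote via subcall_values[1],
  // i.e. the last entry; subcall_ret_value stays unused.
  printf("output[0] = %f\n", output[0]);
  return 0;
}
```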

@ds1231h says:
> BTW, does it mean the operators in the graph or sub-graph will be described in C++ in a graph AOT compiler?

using TIR would mean we are agnostic to the TVM backend, though in practice I 
believe we would likely use LLVM or C; really, any backend that is suitable for 
use from `target_host` would work. even with this prototype, the operator 
implementations can be generated by the LLVM backend.

> And will the operator still program in python DLS or in C++ if we want to add 
> new operators?

what do you mean by "python DLS"? I don't think this should affect adding new 
operators.

@cgerum says:

> Is there a migration Path from your work to P1 or would one need to implement 
> it from scratch.

Somewhere between the two. The overall design can be reused, but everything 
here needs to be ported to generate TIR instead of C++. although this prototype 
doesn't show it, the AOT runtime eventually needs to generate an implementation 
of the [Module-based model runtime interface](https://discuss.tvm.apache.org/t/discuss-module-based-model-runtime-interface/5025). 
to that end, one limitation of TIR is its inability to return values from 
functions; I believe this is being worked on alongside user-defined data type 
support in TIR.

Hope this helps!
Andrew
