Hello everyone, I have a question about compiling a PyTorch 1.9
retinanet_resnet50_fpn model, more specifically while compiling this line
(github.com/pytorch/vision/blob/v0.10.0/torchvision/models/detection/_utils.py#L205):
**Traced jit graph:**
**aten::slice: Tensor slice(const Tensor& self, i
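For context, here is a minimal sketch (not my actual script; the wrapper class, input size, and input name are placeholders) of how a torchvision detection model is typically traced and handed to the TVM PyTorch frontend, which is where ops like `aten::slice` from the traced graph get converted:

```python
import torch
import torchvision
from tvm import relay

class TraceWrapper(torch.nn.Module):
    """Unpack the detection output dict so torch.jit.trace sees plain tensors."""
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, inp):
        out = self.model(inp)
        return out[0]["boxes"], out[0]["scores"], out[0]["labels"]

model = TraceWrapper(
    torchvision.models.detection.retinanet_resnet50_fpn(pretrained=True)
)
model.eval()

inp = torch.rand(1, 3, 800, 800)  # placeholder input size
with torch.no_grad():
    script_module = torch.jit.trace(model, inp)

# Conversion of the traced graph (including aten::slice) happens here.
mod, params = relay.frontend.from_pytorch(
    script_module, [("input0", (1, 3, 800, 800))]
)
```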
---

Hi @areusch,
Thanks a lot. Your reply makes it clear.
---
In my mind, this part of Ansor is quite similar to AutoTVM; I'm not sure whether
this has been explained in these two papers.

Recently there has also been another work on the cost model of Ansor:
https://openreview.net/forum?id=aIfp8kLuvc9
cc @merrymercy
---
Ansor uses its XGBoost-based cost model in an advanced manner: each prediction
is a sum of several XGBoost calls, and to train the model a "pack-sum" loss is
used.

Training a cost model in this way seems interesting. Can anyone explain the
mechanism in detail? The only thing I have found so far is the source code.
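In case a toy example helps frame the question, here is a small sketch of the pack-sum idea as I understand it (synthetic data and made-up shapes, not the actual auto_scheduler code): each row is a feature vector for one innermost statement, a program's prediction is the sum of its rows' predictions, and a custom XGBoost objective applies a squared-error loss to that sum against the program-level label.

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
n_stmts, n_feats, n_programs = 200, 16, 40

X = rng.normal(size=(n_stmts, n_feats))            # per-statement features
pack_ids = rng.integers(0, n_programs, n_stmts)    # which program ("pack") each row belongs to
pack_labels = rng.uniform(0.1, 1.0, n_programs)    # program-level labels (e.g. throughput)

dtrain = xgb.DMatrix(X)

def pack_sum_squared_error(preds, dtrain):
    # A program's prediction is the sum of its statements' predictions.
    pack_preds = np.bincount(pack_ids, weights=preds, minlength=n_programs)
    residual = pack_preds - pack_labels             # error per program
    grad = 2.0 * residual[pack_ids]                 # d loss / d (row prediction)
    hess = np.full_like(grad, 2.0)
    return grad, hess

booster = xgb.train(
    {"max_depth": 4, "eta": 0.1, "base_score": 0.0},
    dtrain,
    num_boost_round=50,
    obj=pack_sum_squared_error,
)

# At prediction time, the program score is again the sum over its statements.
stmt_preds = booster.predict(dtrain)
program_preds = np.bincount(pack_ids, weights=stmt_preds, minlength=n_programs)
```

(This is just my reading of the idea; the details in the auto_scheduler source may differ, which is exactly what I'd like explained.)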
---

The compiler output is a tree of `runtime.Module`. DSO-exportable means a
`runtime.Module` in that tree whose `type_key` is `c` or `llvm`. TVM links
directly against LLVM and invokes the LLVM APIs to generate code.
When you call `export_library`, TVM traverses the tree of `runtime.Module`.
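Here is a small sketch of what that looks like from Python (a toy Relay model and an assumed output file name), in case it helps to poke at the tree yourself:

```python
import tvm
from tvm import relay

# A toy Relay function, just to get a compiled artifact whose module tree we can inspect.
x = relay.var("x", shape=(1, 8), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x], relay.nn.relu(x)))

lib = relay.build(mod, target="llvm")    # graph-executor factory module
rt_mod = lib.get_lib()                   # root of the runtime.Module tree

print(rt_mod.type_key)                   # "llvm" here, i.e. DSO-exportable
for sub in rt_mod.imported_modules:      # child modules in the tree
    print(sub.type_key)

# export_library walks the tree: c/llvm nodes become object code linked into
# the shared library, while other modules are serialized into it as blobs.
lib.export_library("deploy.so")
```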