Dear All,
I am wondering how the execution order of operators is defined at runtime in TVM.
For example, in the following graph, add1 and add2 are parallel; how does the TVM runtime schedule these on hardware? (Surely it depends on the target HW, but assume we have HW that is capable of executing operators in parallel.)
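Here is a minimal Relay sketch of the kind of graph I mean (the shapes and constants are just illustrative, not taken from a real model):

```python
import tvm
from tvm import relay

x = relay.var("x", shape=(4,), dtype="float32")
add1 = relay.add(x, relay.const(1.0))  # independent of add2
add2 = relay.add(x, relay.const(2.0))  # independent of add1
out = relay.multiply(add1, add2)       # joins the two parallel branches
func = relay.Function([x], out)
print(func)  # neither add has a data dependence on the other
```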
Why use `const constexpr` instead of `constexpr` in `TVM_DECLARE_FINAL_OBJECT_INFO`?
---
[Visit Topic](https://discuss.tvm.ai/t/why-use-const-constexpr-instead-of-constexpr-in-tvm-declare-final-object-info/6571/1) to respond.
I could be wrong (and I don't always have access to CUDA to check), but my impression is that the library you pass to graph_runtime is specialized to the precise schedule.
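If it helps, here is a rough sketch of how one might dump the generated CUDA source with the relay.build API of that era (the toy dense layer below is just a placeholder, not from the original thread):

```python
import tvm
from tvm import relay

# Toy network: a single dense layer, just to have something to compile.
x = relay.var("x", shape=(1, 16), dtype="float32")
w = relay.var("w", shape=(8, 16), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x, w], relay.nn.dense(x, w)))

with tvm.transform.PassContext(opt_level=3):
    graph, lib, params = relay.build(mod, target="cuda")

# The CUDA kernels are attached as an imported module of the host library;
# get_source() should return the generated .cu text for this exact schedule.
print(lib.imported_modules[0].get_source())
```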
---
[Visit Topic](https://discuss.tvm.ai/t/how-to-see-actual-cuda-file-generated-by-tvm/6562/4) to respond.
Have you solved the issue?
---
[Visit Topic](https://discuss.tvm.ai/t/solved-onnx-error-when-deploying-on-raspberry-pi-4/3382/6) to respond.