This is where the problem lies. You need to provide the target context by calling
relay.build with the target argument.
But TVM does not require this, for flexibility. In some tutorials, the compute
graph is built not through the op strategy but directly through the wrapped
functions (cfunc & sfunc).
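The dispatch that the op strategy performs can be pictured as a registry keyed by (op, target). Below is a hypothetical pure-Python sketch (not TVM's actual implementation; all names are illustrative) of why a missing registration produces a "schedule not registered for target" error:

```python
# Hypothetical sketch of op-strategy-style dispatch; NOT TVM's real code.
# A registry maps (op_name, target) to a (compute, schedule) pair.

_strategy = {}

def register_strategy(op_name, target, compute, schedule):
    """Register compute/schedule implementations for an op on a target."""
    _strategy[(op_name, target)] = (compute, schedule)

def lookup_strategy(op_name, target):
    """Mimic the lookup that fails with 'schedule not registered'."""
    try:
        return _strategy[(op_name, target)]
    except KeyError:
        raise RuntimeError(f"schedule not registered for {target}")

# Register 'add' only for llvm; looking it up for 'mytarget' would fail.
register_strategy("add", "llvm",
                  compute=lambda a, b: a + b,
                  schedule=lambda f: f)

compute, _ = lookup_strategy("add", "llvm")
print(compute(2, 3))  # 5
```

If the graph is built through wrapped compute/schedule functions directly, this lookup is bypassed, which is why the tutorials that do so work without a registered strategy.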
It seems right. Maybe your target is wrong. Use this to check your current target:
```
print(tvm.target.Target.current(allow_none=False))
```
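`Target.current` only returns a target while one is active; relay.build pushes the target you pass onto a context stack, and a `with target:` scope does the same. A simplified pure-Python model of that stack (an illustration of the idea, not TVM's implementation):

```python
# Simplified model of a target context stack, illustrating why
# Target.current(allow_none=False) raises outside an active target scope.
# This is NOT TVM's code, just the mechanism.

class Target:
    _stack = []  # currently active targets, innermost last

    def __init__(self, name):
        self.name = name

    def __enter__(self):
        Target._stack.append(self)
        return self

    def __exit__(self, *exc):
        Target._stack.pop()

    @classmethod
    def current(cls, allow_none=True):
        if cls._stack:
            return cls._stack[-1]
        if allow_none:
            return None
        raise ValueError("Target context required")

with Target("llvm"):
    print(Target.current().name)  # llvm
print(Target.current())  # None once the scope is exited
```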
---
[Visit Topic](https://discuss.tvm.ai/t/schedule-not-registered-for-mytarget/6675/2) to respond.
I think I solved this problem on the VTA target. It is a simple pass bug causing
the extern function 'sort' error.
https://discuss.tvm.ai/t/vta-a-workaround-for-deploying-faster-r-cnn-on-target-ext-dev-vta-and-arm-cpu/6516
---
It seems that the annotate_target tutorial has not been uploaded yet. I am
wondering how one runtime communicates with another, something like a host
runtime and a kernel runtime. Does the host runtime use JIT to generate the
kernel's code? Or do they use other synchronization mechanisms?
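One way to picture the interaction, sketched below in pure Python with hypothetical names (this is not TVM's actual API): the external module is compiled ahead of time, and at run time the host runtime simply looks up its functions by name and calls them, rather than JIT-generating kernel code.

```python
# Hypothetical sketch of host/kernel runtime interaction; names are
# illustrative, not TVM's actual API. The external module is built ahead
# of time; the host looks up and calls its functions by symbol name.

class ExternalModule:
    """Stands in for a runtime module produced by an external codegen."""
    def __init__(self, functions):
        self._functions = functions  # symbol name -> callable

    def get_function(self, name):
        return self._functions.get(name)

class HostRuntime:
    """Stands in for the host graph runtime holding imported modules."""
    def __init__(self, modules):
        self.modules = modules

    def invoke(self, name, *args):
        # Search imported external modules for the symbol, then call it.
        for mod in self.modules:
            fn = mod.get_function(name)
            if fn is not None:
                return fn(*args)
        raise RuntimeError(f"function {name} not found")

ext = ExternalModule({"ext_double": lambda x: x * 2})
host = HostRuntime([ext])
print(host.invoke("ext_double", 21))  # 42
```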
I guess external_mods holds the modules that need external codegen tools. The
official document shows the codegen sequence:
https://docs.tvm.ai/dev/codebase_walkthrough.html
```
/*!
 * \brief Lower the external function using external codegen tools.
 * \return The runtime modules for each needed external codegen tool.
 */
```