@t-vi Thanks for the reply :) What I mean is: as described in the [relay quick
start](https://docs.tvm.ai/tutorials/relay_quick_start.html), by using
graph_runtime.create() we can get the module, and then module.get_output()
gives us the result. But I want to know exactly which schedule optimizations
are applied.
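For context, here is a condensed sketch of that flow, following the relay quick start linked above (the single-op network, input shape, and target here are placeholders, not the model from the tutorial):

```python
import numpy as np
import tvm
from tvm import relay
from tvm.contrib import graph_runtime

# Placeholder network: a single ReLU standing in for a real model.
x = relay.var("x", shape=(1, 3, 224, 224), dtype="float32")
func = relay.Function([x], relay.nn.relu(x))
mod = tvm.IRModule.from_expr(func)

# Build for CUDA at the optimization level used in the quick start.
with relay.build_config(opt_level=3):
    graph, lib, params = relay.build(mod, target="cuda")

# Run through the graph runtime and fetch the result.
ctx = tvm.gpu(0)
module = graph_runtime.create(graph, lib, ctx)
module.set_input("x", np.random.uniform(size=(1, 3, 224, 224)).astype("float32"))
module.set_input(**params)
module.run()
out = module.get_output(0).asnumpy()

# The CUDA source generated for this build is attached to `lib`:
print(lib.imported_modules[0].get_source())
```

The last line is the part relevant to the thread title: it dumps the CUDA kernels that relay.build generated for this module.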
Given that it happens after 60 steps, this might not be ROCm but rather the
xgboost module. In that case, upgrading to the pre-release or downgrading helps:
https://github.com/apache/incubator-tvm/issues/4953#issuecomment-619255802
That said, we also fixed a potential segfault in the AMDGPU LLVM backend.
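For what it's worth, a quick way to check which xgboost version the tuner is actually picking up; the 0.90 pin in the comment below is only an example of a downgrade, not a tested recommendation:

```python
# Print the xgboost version visible to TVM's autotuner. If it is one of the
# releases reported as problematic in the issue above, pin a different one,
# e.g. `pip install "xgboost==0.90"` (downgrade) or `pip install --pre xgboost`.
import xgboost
print(xgboost.__version__)
```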
You can get the code from the device module as in the [Tensor Expression
tutorial](https://docs.tvm.ai/tutorials/tensor_expr_get_started.html#inspect-the-generated-code).
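Roughly, the relevant part of that tutorial boils down to this (vector add built for CUDA; the names and split factor are just the tutorial's illustrative choices):

```python
import tvm
from tvm import te

# Vector add, scheduled for the GPU.
n = te.var("n")
A = te.placeholder((n,), name="A")
B = te.placeholder((n,), name="B")
C = te.compute(A.shape, lambda i: A[i] + B[i], name="C")

s = te.create_schedule(C.op)
bx, tx = s[C].split(C.op.axis[0], factor=64)
s[C].bind(bx, te.thread_axis("blockIdx.x"))
s[C].bind(tx, te.thread_axis("threadIdx.x"))

fadd = tvm.build(s, [A, B, C], "cuda", target_host="llvm", name="myadd")

# The host module wraps the device module; the CUDA source lives on the latter.
dev_module = fadd.imported_modules[0]
print(dev_module.get_source())   # generated CUDA kernel
print(fadd.get_source())         # host-side code, for comparison
```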
Best regards
Thomas
---
Hi, I'm new to TVM. How can I see the actual CUDA file generated by TVM?