Now I would like to use te.extern to add an extern op-level library, like
cuDNN. For complex ops like conv2d or dense, we can add new dispatch rules
in the op strategy, like `@softmax_strategy.register(["cuda", "gpu"])`, and then
redefine the compute with te.extern. But for some injective or broadcast ops…
Hi @xqdan, when you say "not in MindSpore for now", do you mean AKG is still a
standalone codegen toolkit, or has it already been integrated into
your internal TensorFlow/PyTorch versions?
---
Are there any details about the TVM + MKLDNN BERT integration work?
I would like to take a look to see its potential connection with Ansor.
---
[Visit
Topic](https://discuss.tvm.ai/t/rfc-ansor-an-auto-scheduler-for-tvm-autotvm-v2-0/7005/24)
to respond.
The proposal looks good. Notably, the config will need to evolve as we migrate
to Ansor, so perhaps we could try to keep it opaque, or find a way to upgrade
it later.
---
[Visit
Topic](https://discuss.tvm.ai/t/rfc-canonicalizing-autotvm-log-format/7038/11)
to respond.
I've thought about this some more, and I'm changing my stance with respect to
ProtoBuf. While adding a Python class schema is a less invasive change than
introducing ProtoBuf and allows us to stick to the current log format exactly,
protos do have the added benefit of being language-neutral.
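As a point of comparison, here is a minimal sketch of what a plain Python class schema for a tuning-log record could look like. The field names below (`schema_version`, `workload`, `costs`, etc.) are illustrative assumptions, not the actual autotvm log format:

```python
from dataclasses import dataclass, field, asdict
import json

# Hypothetical log-record schema as a plain Python class; the fields are
# illustrative, not the real autotvm format.
@dataclass
class TuningRecord:
    schema_version: str
    target: str
    workload: str
    config: dict = field(default_factory=dict)
    costs: list = field(default_factory=list)

rec = TuningRecord("0.1", "cuda", "conv2d", {"tile_x": 8}, [0.0012])
line = json.dumps(asdict(rec))                 # one JSON object per log line
restored = TuningRecord(**json.loads(line))    # round-trips losslessly
```

A `.proto` definition would carry the same fields but generate serializers for every supported language, which is the language-neutrality trade-off discussed above.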
We don't have a Vim plugin for the FFI Navigator yet, but you are welcome to
contribute one :slight_smile:
---
[Visit
Topic](https://discuss.tvm.ai/t/language-server-tool-to-navigate-across-packedfunc-ffi-for-ides-like-vscode-and-emacs/5226/28)
to respond.
Is there a guide for Vim?
---
[Visit
Topic](https://discuss.tvm.ai/t/language-server-tool-to-navigate-across-packedfunc-ffi-for-ides-like-vscode-and-emacs/5226/27)
to respond.
@tobegit3hub thanks for this great work. I am trying to export an autotuned
model with TVMDSOOp, but I am stuck on how to register the func_name with the
tf_op.
```
with tvm.transform.PassContext(opt_level=3):
    graph, lib, params = relay.build_module.build(
        mod, target=target, params=params)
```