Hello. To get started, I have example code for an intrinsic function like this:
```python
from __future__ import absolute_import, print_function
import tvm
from tvm import te
import numpy as np
from tvm.topi.utils import get_const_tuple

ctx = tvm.context("cpu", 0)
M = 8
factor = 4
# The post is truncated here; the placeholder was presumably declared
# along these lines:
A = te.placeholder((M, factor), name="A")
```
Have you figured this out? I have just run into the same problem.
---
[Visit Topic](https://discuss.tvm.apache.org/t/buffer-bind-scope-mismatch/9570/2) to respond.
I have done the following:

```python
extern_mod = relay.transform.AnnotateTarget(['nuc_fpga'])(mod)
extern_mod = relay.transform.MergeCompilerRegions()(extern_mod)
extern_mod = relay.transform.PartitionGraph()(extern_mod)
print("extern_mod:", extern_mod)
```

The output is:

```
%0 = nn.conv2d(%input0, %
```
From your example it's hard to judge whether `nuc_fpga_conv2d` is invoked correctly. You may first check the partitioned graph to see if `nn.conv2d` is partitioned into a function with `kCompiler="nuc_fpga"`.
---
[Visit Topic](https://discuss.tvm.apache.org/t/question-byoc-replace-nn-conv2
Thanks for your reply.
But when I run the model:
```python
rlib = tvm.runtime.module.load_module(dso_path)
ctx = tvm.cpu()
rt_mod = graph_executor.GraphModule(rlib['default'](ctx))
```
the network does not use nuc_fpga_conv2d; I don't see the "Calling From nuc_fpga_conv2d" message printed by nuc_fpga_conv2d(). It
The replacement happens in the codegen, which is launched during the build process, so it hasn't happened yet at the line where you printed `extern_mod`.
In addition, you should not see `nuc_fpga_conv2d` in the Relay graph anyway, because `nuc_fpga_conv2d` is not a Relay op. The implementation of
`nuc_
I use the latest version of TVM from GitHub. Now I want to customize the operator instead of nn.conv2d.
I have changed these places:
**1.src/relay/backend/contrib/nuc_fpga/codegen.cc**
```cpp
GenerateBodyOutput GenerateOpCall(const CallNode* call) {
  const auto* op_node = call->op.as
```
Sorry about that. Although we have a strong desire to support TensorCore in Ansor, I currently don't have the bandwidth to work on this topic.
As far as I know, some folks are working on the new TensorIR, based on which TVM will get a new infrastructure that combines the current AutoTVM and Auto