Starting from this IR:
```python
# from tvm.script import ir as I
# from tvm.script import tir as T
@I.ir_module
class Module:
@T.prim_func
def main(A: T.Buffer((T.int64(1), T.int64(784)), "float32"), B:
T.Buffer((T.int64(16), T.int64(784)), "float32"),
T_matmul: T.Buffer((
```
After starting fresh, it seems the last error was coming from some
previously unsynced modifications, so I think you can forget about it.
Now I am getting this error `Failed to find the codegen tool for
relay.ext.ccompiler` (full below).
The codegen is registered with
`TVM_REGISTER_GLOBAL("`
Hi,
I am planning to integrate a new accelerator to TVM.
I have been following the BYOC
[doc](https://tvm.apache.org/docs/dev/how_to/relay_bring_your_own_codegen.html)
and [blog
post](https://tvm.apache.org/2020/07/15/how-to-bring-your-own-codegen-to-tvm)
thinking this was sufficient to
It seems like I am missing some bits in what I said above; here is where I am
now when running a build:
```
DEBUG:autotvm:Finish loading 35 records
Traceback (most recent call last):
File "pt_relay_conv2d.py", line 69, in
mod = graph_executor.GraphModule(lib['default'](dev))
File
"/
```
Right, those were my initial thoughts.
Although my initial question was: *can the generated code not bother about
filling the output tensor with some values?*
But I realise the question is a stupid one, as `out` is already allocated;
whether its content is relevant or not shouldn't create any major issue.
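For what it's worth, here is a tiny pure-Python sketch of that point (the kernel name and shapes are made up, not actual BYOC-generated code): the caller allocates `out`, the kernel unconditionally writes every element, so whatever `out` held beforehand is irrelevant.

```python
# Illustrative stand-in for a generated kernel: it receives a
# pre-allocated output buffer and overwrites every element, so the
# buffer's previous contents never matter.
def generated_add(a, b, out):
    for i in range(len(out)):
        out[i] = a[i] + b[i]  # unconditional write to each slot

a = [1.0, 2.0, 3.0, 4.0]
b = [10.0, 20.0, 30.0, 40.0]
out = [-99.0] * 4  # "garbage" initial values

generated_add(a, b, out)
print(out)  # the garbage is fully overwritten: [11.0, 22.0, 33.0, 44.0]
```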
Thank you for answering, this helps quite a bit.
> I don’t understand this question, but a Relay program must have an output,
> and you cannot print message inside the Relay function.
Let's say that for a `relay.nn.conv2d` function, we produce a C function that
prints some information. The motiv
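To make the question concrete, a hypothetical sketch in plain Python (the function name, layout, and shapes are all invented for illustration): the external implementation can emit a message as a side effect while still returning the tensor output Relay expects.

```python
def external_conv2d_1x1(data, weight):
    """Hypothetical external implementation of a 1x1 convolution on a
    flat [channels][pixels] layout; it prints a message as a side
    effect while still returning an output tensor."""
    print(f"external_conv2d_1x1: {len(weight)} out channels, "
          f"{len(data[0])} pixels")
    out = []
    for w_row in weight:  # one output channel per weight row
        out.append([
            sum(w * d for w, d in zip(w_row, pixel))
            for pixel in zip(*data)  # iterate over pixel columns
        ])
    return out

data = [[1.0, 2.0], [3.0, 4.0]]    # 2 input channels, 2 pixels
weight = [[1.0, 0.0], [0.5, 0.5]]  # 2 output channels
result = external_conv2d_1x1(data, weight)
print(result)  # [[1.0, 2.0], [2.0, 3.0]]
```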
Hi,
I am following the BYOC example for the C codegen, I have a few questions:
1. During testing, do I need to recompile the whole of TVM every time I make a
modification to the codegen? (provided I added the flag for my codegen in the
TVM cmake file)
2. Once I compiled tvm with my C codegen, how
Can we get an update on how this should be done?
There is an example in the doc [reading model from a
file](https://tvm.apache.org/docs/how_to/compile_models/from_tensorflow.html#sphx-glr-how-to-compile-models-from-tensorflow-py)
but I would like to know how to do it with a model object.