Yes. `target = "llvm -mcpu=cascadelake -libs=mkldnn"`
In this case MKL_VERBOSE=1 also works. In my opinion, it seems that MKL and
MKLDNN are not completely separated but have some overlapping parts.
By the way, I cannot understand why MKLDNN_VERBOSE=1 doesn't work.
At Relay build time, I saw t
We had a discussion before regarding whether to put layout into the type
system. At the time the conclusion was not to complicate the type system, but
to keep layout as a separate property handled during the layout inference
pass. Part of the reason is that layout is not a must-have, and is not quite
well-defined, e.g., w
Did you add `-libs=mkl,mkldnn` in your target?
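For context, here is a minimal sketch of building with those contrib libraries enabled. The `mod`/`params` objects and the `cascadelake` CPU flag are placeholders taken from this thread, not requirements:

```python
import tvm
from tvm import relay

# Sketch only: `mod` and `params` are assumed to come from a frontend import,
# e.g. relay.frontend.from_onnx(...).
target = "llvm -mcpu=cascadelake -libs=mkl,mkldnn"

with tvm.transform.PassContext(opt_level=3):
    # With -libs set, supported ops such as nn.dense are offloaded to the
    # named contrib libraries instead of TVM-generated kernels.
    lib = relay.build(mod, target=target, params=params)
```

Note that `MKL_VERBOSE=1` / `MKLDNN_VERBOSE=1` have to be set in the environment of the process that runs the model for the libraries to print anything.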
---
Hi @fantasyRqg,
Thanks for your reply on this. I quickly looked at the reference link, and I am
not sure the TVM + Vitis workflow uses the entire TVM native optimisation and
FPGA CU scheduling and all.
@jtuyls / @mak, please comment on this?
Thanks and Regards,
Raju
---
Since we don't define a layout attribute for such operators, nor a layout
attribute on the tensor, there is no symbolic representation for their tensor
layouts. In other words, you cannot get something like
```
%1 = nn.dense(...); /* output layout = NC */
```
The way I can think of is just inf
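To illustrate the same point with a rough sketch (shapes are made up): a conv2d call carries an explicit `data_layout` attribute that can be read back, whereas dense has no layout attribute at all, so its NC output layout is only a convention:

```python
from tvm import relay

data = relay.var("data", shape=(1, 3, 224, 224))
weight = relay.var("weight", shape=(16, 3, 3, 3))
conv = relay.nn.conv2d(data, weight, kernel_size=(3, 3), data_layout="NCHW")
print(conv.attrs.data_layout)  # "NCHW" -- the layout is symbolically represented

x = relay.var("x", shape=(1, 64))
w = relay.var("w", shape=(10, 64))
d = relay.nn.dense(x, w)
# DenseAttrs only carries `units` and `out_dtype`; there is no field that
# would correspond to /* output layout = NC */ in the example above.
```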
You're on the right track. Generally, the way to run something on Hexagon is
to run an app on the CPU, and have it offload code to Hexagon via the FastRPC
mechanism. If your Hexagon code has a function `foo`, and you want to call it
from the CPU, you create the IDL description of `foo`'s interf
@cali I am not sure if there is a better way to achieve it. Maybe you can add a
bool member **drop_init** in **CommReducerNode**. Once it is true, you are safe
to drop the initialization in the MakeReduction function.
---
Hi,
I use TVM for an accelerator that works only with OpenCL. I would like to
import an external library (containing my code) or an OpenCL file (containing
my code) with TVM and the OpenCL target.
Could you give me an idea of how to do that?
I can already use an external library on x86 with
Thank you for your answer. The goal is to avoid doing the initialization when
the tensor I pass as an argument is already initialized to zero.
---
Hi,
I would like to use tensorize to replace all the code, but I get a
"segmentation fault". My tensorize works if I use it after a split, but when I
try to replace all the code I get this error. Do you know what this can come
from?
Below is my code; I am using the tensorize_all variable to switch
Here is my guess:
[How to optimize GEMM on CPU ---
Parallel](https://tvm.apache.org/docs/tutorials/optimize/opt_gemm.html).
Following the link above, you will find out how TVM optimizes computation with
**`Parallel`**.
Maybe using two cores works the same way.
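To make that concrete, below is a small sketch (not the tutorial code itself; the matrix size and names are illustrative) of the `parallel` schedule primitive that tutorial uses:

```python
import tvm
from tvm import te

n = 1024
A = te.placeholder((n, n), name="A")
B = te.placeholder((n, n), name="B")
k = te.reduce_axis((0, n), name="k")
C = te.compute((n, n), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")

s = te.create_schedule(C.op)
s[C].parallel(C.op.axis[0])  # spread the outer row loop across CPU cores
func = tvm.build(s, [A, B, C], target="llvm")
```

The number of worker threads actually used can then be capped with the `TVM_NUM_THREADS` environment variable.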
---
Hi.
I have a question about USE_MKLDNN.
My build options:
- LLVM: ON
- BLAS: none
- USE_MKL: /opt/intel/mkl
- USE_MKLDNN: ON

Even though I set MKLDNN_VERBOSE=1, no output about MKLDNN is printed during
the TVM Relay build or the module run...
TVM uses MKL (MKLDNN) for the dense layer.
But why does this happen?