[Apache TVM Discuss] [Questions] Mkldnn verbose doesn't work

2021-03-05 Thread Gnupdev via Apache TVM Discuss
Yes. target = "llvm -mcpu=cascadelake -libs=mkldnn". In this case MKL_VERBOSE=1 also works. In my opinion, it seems that MKL and MKLDNN are not completely separated but have some overlapping parts. By the way, I cannot understand why MKLDNN_VERBOSE=1 doesn't work. At Relay build time, I saw t
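One thing worth double-checking in situations like this is when the verbose switch is set. A minimal sketch (assuming the environment variable names Intel documents, MKL_VERBOSE and MKLDNN_VERBOSE; they are read by the libraries at runtime, so they must be in the environment before the libraries initialize):

```python
import os

# Set these before MKL/oneDNN initialize (ideally before importing tvm),
# otherwise the libraries may never see them.
os.environ["MKL_VERBOSE"] = "1"     # traces MKL calls such as cblas_sgemm
os.environ["MKLDNN_VERBOSE"] = "1"  # traces MKLDNN/oneDNN primitive execution
```

Alternatively, export them in the shell before launching Python at all.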

[Apache TVM Discuss] [Questions] [Relay][topi/nn] Ways to get layout of each operator

2021-03-05 Thread Yizhi Liu via Apache TVM Discuss
We had a discussion before regarding whether to put layout as part of the type system. At the time, the conclusion was not to complicate the type system, but to keep layout as a separate property during the layout inference pass. Part of the reason is that layout is not a must-have and is not quite well-defined, e.g., w

[Apache TVM Discuss] [Questions] Mkldnn verbose doesn't work

2021-03-05 Thread Haichen Shen via Apache TVM Discuss
Did you add `-libs=mkl,mkldnn` in your target? --- [Visit Topic](https://discuss.tvm.apache.org/t/mkldnn-verbose-doesnt-work/9315/2) to respond. You are receiving this because you enabled mailing list mode. To unsubscribe from these emails, [click here](https://discuss.tvm.apache.org/em
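For reference, a hedged sketch of what such a target string could look like (the `-mcpu` value and the commented build call are illustrative, not taken from this thread):

```python
# Illustrative target string enabling both MKL and MKLDNN offload in TVM.
target = "llvm -mcpu=cascadelake -libs=mkl,mkldnn"

# Typical usage would be something like:
#   lib = tvm.relay.build(mod, target=target, params=params)

# -libs takes a comma-separated list, so both names must appear for
# both libraries to be used:
libs = target.split("-libs=")[1].split(",")
```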

[Apache TVM Discuss] [Questions] Vitis-AI Integration: How to use both DPU cores?

2021-03-05 Thread venkataraju koppada via Apache TVM Discuss
Hi @fantasyRqg, thanks for your reply on this. I quickly looked at the reference link, and I am not sure the TVM + Vitis workflow uses the entire TVM native optimisation, FPGA CU scheduling, and all. @jtuyls / @mak, please comment on this? Thanks and Regards, Raju --- [Visit Topic](https:

[Apache TVM Discuss] [Questions] [Relay][topi/nn] Ways to get layout of each operator

2021-03-05 Thread Cody H. Yu via Apache TVM Discuss
Since we don't define a layout attribute for such operators, nor a layout attribute in the tensor type, there's no symbolic representation for their tensor layouts. In other words, you cannot get something like ``` %1 = nn.dense(...); /* output layout = NC */ ``` The way I can think of is just inf
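That kind of inference could be sketched in plain Python (a hypothetical helper, not a TVM API; the op names and the attrs dict below are assumptions for illustration):

```python
# Hypothetical layout lookup: some ops imply a fixed output layout by
# definition, others (e.g. conv2d) carry it in their attributes, and the
# rest simply have no symbolic layout.
FIXED_LAYOUTS = {
    "nn.dense": "NC",
    "nn.batch_flatten": "NC",
}

def infer_layout(op_name, attrs=None):
    if op_name in FIXED_LAYOUTS:
        return FIXED_LAYOUTS[op_name]   # layout implied by the op definition
    if attrs and "data_layout" in attrs:
        return attrs["data_layout"]     # ops that store layout as an attribute
    return None                         # unknown: no symbolic layout exists
```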

[Apache TVM Discuss] [Questions] Implementation of Hexagon Runtime for Target

2021-03-05 Thread Krzysztof Parzyszek via Apache TVM Discuss
You're on the right track. Generally, the way to run something on Hexagon is to run an app on the CPU and have it offload code to Hexagon via the FastRPC mechanism. If your Hexagon code has a function `foo` and you want to call it from the CPU, you create the IDL description of `foo`'s interf

[Apache TVM Discuss] [Questions] Disable initialization in te.compute

2021-03-05 Thread leeexyz via Apache TVM Discuss
@cali I am not sure if there is a better way to achieve it. Maybe you can add a bool member **drop_init** in **CommReducerNode**. Once it is true, you are safe to drop the initialization in the MakeReduction function. --- [Visit Topic](https://discuss.tvm.apache.org/t/disable-initialization-in-te-compute/
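The idea behind such a flag could be sketched like this (pure-Python pseudocode for the proposed behavior, not the actual C++ MakeReduction implementation):

```python
# Hypothetical sketch: a drop_init flag that skips emitting the init store
# when the output buffer is known to be pre-zeroed.
def make_reduction(init_value, updates, drop_init=False):
    stmts = []
    if not drop_init:
        stmts.append(("init", init_value))   # emitted only when init is needed
    stmts.extend(("update", u) for u in updates)
    return stmts
```

With `drop_init=True` only the update statements remain, which matches the goal of skipping initialization for a pre-zeroed tensor.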

[Apache TVM Discuss] [Questions] Export an external library with OpenCL

2021-03-05 Thread cali via Apache TVM Discuss
Hi, I use TVM for an accelerator that works only with OpenCL. I would like to import an external library (containing my code) or an OpenCL file (containing my code) with TVM and the OpenCL target. Could you give me an idea of how to do that? I can already use an external library on x86 with

[Apache TVM Discuss] [Questions] Disable initialization in te.compute

2021-03-05 Thread cali via Apache TVM Discuss
Thank you for your answer. The goal is to avoid doing the initialization if I pass as an argument a tensor that is already initialized to zero. --- [Visit Topic](https://discuss.tvm.apache.org/t/disable-initialization-in-te-compute/9252/3) to respond.

[Apache TVM Discuss] [Questions] Use Tensorize to replace all code

2021-03-05 Thread cali via Apache TVM Discuss
Hi, I would like to use tensorize to replace all the code, but I get a "segmentation fault". My tensorize works if I use it after a split, but when I try to replace all the code I get this error. Do you know where this can come from? Below is my code; I am using the tensorize_all variable to switch

[Apache TVM Discuss] [Questions] Vitis-AI Integration: How to use both DPU cores?

2021-03-05 Thread Eye via Apache TVM Discuss
Here is my guess: [How to optimize GEMM on CPU --- Parallel](https://tvm.apache.org/docs/tutorials/optimize/opt_gemm.html). Following the link above, you will find out how TVM optimizes computation with **`Parallel`**. Maybe using two cores works the same way. --- [Visit Topic](https://discuss.tvm.apac
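As a rough pure-Python analogue of what `s[C].parallel(...)` does in that tutorial (a conceptual sketch, not TVM code): the rows of the output matrix are independent, so they can be computed on separate cores.

```python
from concurrent.futures import ThreadPoolExecutor

def matmul_row(args):
    # One output row depends only on one row of A and all of B.
    a_row, B = args
    return [sum(a * b for a, b in zip(a_row, col)) for col in zip(*B)]

def parallel_matmul(A, B, workers=2):
    # Distribute independent output rows across worker threads,
    # mirroring how TVM parallelizes the outer row axis of the GEMM.
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(matmul_row, ((row, B) for row in A)))
```

Whether the same trick makes both DPU cores busy depends on how the Vitis-AI runtime schedules subgraphs, which this sketch does not model.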

[Apache TVM Discuss] [Questions] Mkldnn verbose doesn't work

2021-03-05 Thread Gnupdev via Apache TVM Discuss
Hi. I have a question about USE_MKLDNN. My build options: LLVM ON, BLAS none, USE_MKL /opt/intel/mkl, USE_MKLDNN ON. Even though I set MKLDNN_VERBOSE=1, no output about MKLDNN is printed during TVM relay build or module run... TVM uses MKL (MKLDNN) for the dense layer. But why does this happen?