Hi.
First Question
My target is "llvm -mcpu=cascadelake".
In this situation, does the TVM runtime (compiler?) use the AVX-512 unit? (I am not
using AutoTVM or the auto-scheduler.)
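For reference, one way to check whether AVX-512 instructions were actually emitted is to build a small dense workload and search the generated assembly for `zmm` registers, which only appear in AVX-512 code. This is a sketch, assuming TVM's Python API is available; the 512x512 dense layer is an arbitrary example:

```python
import tvm
from tvm import relay

# A minimal dense workload to compile for the Cascade Lake target.
data = relay.var("data", shape=(1, 512), dtype="float32")
weight = relay.var("weight", shape=(512, 512), dtype="float32")
mod = tvm.IRModule.from_expr(
    relay.Function([data, weight], relay.nn.dense(data, weight)))

target = tvm.target.Target("llvm -mcpu=cascadelake")
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target)

# Inspect the host assembly; zmm registers indicate AVX-512 usage.
asm = lib.get_lib().get_source("asm")
print("uses AVX-512:", "zmm" in asm)
```

Even without tuning, the default x86 schedules vectorize, so the LLVM backend can select AVX-512 instructions when the target CPU supports them.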
Second Question
1. 2 cores (16 threads) @ 2.3 GHz
2. 4 cores (16 threads) @ 2.8 GHz
I set TVM_NUM_THREADS=16 and ran benchmarks.
Setup 2 is a little slower than setup 1.
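When comparing two machines like this, thread placement can skew the timings: with TVM_NUM_THREADS=16, worker threads share a small number of physical cores and may migrate between them. Pinning the process to fixed logical CPUs makes the comparison fairer. A sketch (`benchmark.py` is a placeholder for the actual benchmark script):

```shell
# Pin the benchmark to logical CPUs 0-15 so thread migration and
# SMT-sibling contention do not add run-to-run fluctuation.
export TVM_NUM_THREADS=16
taskset -c 0-15 python benchmark.py
```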
In PyTorch, the graph is converted to an MKL-DNN graph (using to_mkldnn),
and then the graph is compiled.
In TVM, is the same process included in relay.build?
That is: normal graph -> MKL-DNN graph.
If that is right, where can I find a graph (dense layer) with MKL-DNN applied?
I already know that TVM only applies MKL
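For comparison, the PyTorch conversion mentioned above looks roughly like this (a sketch, assuming torch with MKL-DNN support is installed; the linear layer is an arbitrary example):

```python
import torch
from torch.utils import mkldnn as mkldnn_utils

# Convert a module's parameters to MKL-DNN (blocked) layout;
# inputs must also be converted with .to_mkldnn() before forward.
model = torch.nn.Linear(512, 512).eval()
mkldnn_model = mkldnn_utils.to_mkldnn(model)

x = torch.randn(1, 512).to_mkldnn()
y = mkldnn_model(x).to_dense()  # convert the result back to a normal tensor
```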
Oh
If I do not use AutoTVM to tune my graph, is MKL-DNN not applied?
My understanding of AutoTVM is that it tunes graph operations such as 'for' loops
(using TVM schedule primitives).
So are mkldnn and the -libs options used like TVM schedule primitives?
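For context, the -libs flag is part of the target string and is independent of AutoTVM tuning: it offloads supported operators to the external library instead of TVM-generated schedules. A sketch, assuming TVM was built with MKL-DNN support (the workload shapes are arbitrary):

```python
import tvm
from tvm import relay

# -libs=mkldnn asks Relay to dispatch supported ops (e.g. dense)
# to MKL-DNN; no AutoTVM tuning log is involved in this step.
target = "llvm -mcpu=cascadelake -libs=mkldnn"

data = relay.var("data", shape=(1, 512), dtype="float32")
weight = relay.var("weight", shape=(512, 512), dtype="float32")
mod = tvm.IRModule.from_expr(
    relay.Function([data, weight], relay.nn.dense(data, weight)))

with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target)
```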
---
Yes. target = "llvm -mcpu=cascadelake -libs=mkldnn"
In this case MKL_VERBOSE=1 also works. In my opinion, it seems that MKL and
MKL-DNN are not completely separated but have some overlapping parts.
By the way, I cannot understand why MKLDNN_VERBOSE=1 doesn't work.
At Relay build time, I saw t
Hi.
I have a question about USE_MKLDNN.
My build options:
LLVM ON
BLAS none
USE_MKL /opt/intel/mkl
USE_MKLDNN ON
Even though I set MKLDNN_VERBOSE=1, no output about MKL-DNN is printed
during the TVM Relay build or the module run...
TVM uses MKL (MKL-DNN) for the dense layer.
But why does this happen?
Thank you for replying!
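As a side note, the verbose switches for these libraries are environment variables that must be set before the process starts. One thing worth checking: after MKL-DNN was renamed to DNNL (and later oneDNN), the variable changed name, so MKLDNN_VERBOSE may simply be ignored by newer versions. A sketch (`run_module.py` is a placeholder for the actual script):

```shell
# MKL prints one line per BLAS call it actually executes; if nothing
# is printed, the library was never called for the ops in the graph.
MKL_VERBOSE=1 python run_module.py

# Older MKL-DNN reads MKLDNN_VERBOSE; after the rename to DNNL/oneDNN
# the variable became DNNL_VERBOSE (later ONEDNN_VERBOSE).
MKLDNN_VERBOSE=1 DNNL_VERBOSE=1 python run_module.py
```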
Using MKL_VERBOSE=1, I found that TVM_NUM_THREADS does not affect the threads
used by MKL.
So I used MKL_NUM_THREADS and that resolved the problems (fluctuation, slowness).
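This matches how the two thread pools are configured: TVM_NUM_THREADS only controls TVM's own worker pool, while MKL spawns its own OpenMP threads, capped by MKL_NUM_THREADS. A sketch (`benchmark.py` is a placeholder):

```shell
# Size both pools explicitly so TVM-generated ops and offloaded
# MKL calls each use a known number of threads.
export TVM_NUM_THREADS=16
export MKL_NUM_THREADS=16
python benchmark.py
```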
With and without -libs=mkl, the inference time measures approximately the same.
While searching for the reason,