Oh, if I do not use AutoTVM to tune my graph, will MKLDNN not be applied?
As far as I know, AutoTVM tunes graph operations such as 'for' loops using TVM schedule primitives.
So are MKLDNN and the `-libs` option used like TVM schedule primitives?
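What I have in mind is something like this (just a sketch, assuming a Relay `mod` and its `params` are already loaded; the `-mcpu` value is only an example):

```
import tvm
from tvm import relay

# `mod` and `params` are assumed to exist (e.g. from relay.frontend.from_*).
# -libs=mkldnn asks TVM to offload supported operators (such as dense)
# to the MKLDNN library instead of generating its own kernels for them.
target = "llvm -mcpu=skylake-avx512 -libs=mkldnn"
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)
```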
---
Thank you @leeexyz. Yeah, we can use `tvm.tir.const` or a new buffer. I mean: is there any mechanism to prevent users from using plain Python variables within an `if_scope`? For example, an error message telling users to use `tvm.tir.const`, since it's quite easy to confuse the Python variable with a TIR expression.
---
I'm not sure why MKLDNN_VERBOSE=1 doesn't work. The warning shown during compilation is fine: it just means that AutoTVM didn't find a log record corresponding to "dense_mkldnn.x86".
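One thing worth checking (an assumption on my side): the variable has to be in the environment before the MKLDNN (oneDNN) library is loaded, so exporting it inside the script after TVM is already imported may be too late. For example:

```
import os

# Set before importing tvm / loading the compiled module, otherwise the
# library may initialize without picking the variable up.
os.environ["MKLDNN_VERBOSE"] = "1"

import tvm  # import deliberately placed after setting the env var
```

Setting it in the shell (`MKLDNN_VERBOSE=1 python run.py`) avoids the ordering issue entirely.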
---
[Visit Topic](https://discuss.tvm.apache.org/t/mkldnn-verbose-doesnt-work/9315/4) to respond.
Hi @SYangDong, use `b = tvm.tir.const(100, dtype="float32")` instead of the assignment directly.
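A minimal sketch of the fix in context, reusing the code from your post:

```
import tvm
from tvm import te

ib = tvm.tir.ir_builder.create()
n = te.size_var("n")
A = ib.pointer("float32", name="A")
tmod = tvm.tir.truncmod
b = tvm.tir.const(100, dtype="float32")  # a TIR node, not a Python float
with ib.for_range(0, n, name="i") as i:
    with ib.if_scope(tmod(i, 2) == 0):
        A[i] = b  # the store carries the constant into the generated IR
print(ib.get())  # inspect the finished TIR statement
```

`tvm.tir.const` creates a TIR node, so the store shows up in the generated IR; a plain Python assignment inside `if_scope` only rebinds a Python name and leaves no trace in the IR.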
---
[Visit Topic](https://discuss.tvm.apache.org/t/if-scope-in-ir-builder/9332/2) to respond.
@leeexyz I can see it now too! This is really helpful! Thank you so much!
---
[Visit Topic](https://discuss.tvm.apache.org/t/te-tensorize-elementwise-sum/9335/3) to respond.
[quote="JosseVanDelm, post:1, topic:9335"]
```
Ab = tvm.tir.decl_buffer(a.shape, a.dtype, name="A", offset_factor=1, strides=[2, 1])
Bb = tvm.tir.decl_buffer(b.shape, b.dtype, name="B", offset_factor=1, strides=[2, 1])
Cb = tvm.tir.decl_buffer(c.shape, c.dtype, name="C", offset_factor=1, strides=[2, 1])
```
[/quote]
---
[quote="cron, post:1, topic:9083"]
…side of `te.extern` is blocking any kind of optimization which leads from introducing this stage into an…
[/quote]
Hi, I am also trying to apply some schedule primitive methods on tensor.ExternOp, but it seems this is not supported, according to this post (from 3 years ago).
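For reference, here's a minimal sketch of the kind of ExternOp stage I mean (the packed-function name `my_extern_add_one` is just a placeholder); since the whole stage is one opaque call, there is nothing inside it for schedule primitives to transform:

```
import tvm
from tvm import te

n = te.var("n")
A = te.placeholder((n,), name="A")
# The body of an ExternOp is a single opaque extern call, so TVM's
# schedule primitives (split, tensorize, ...) cannot see inside it.
C = te.extern(
    (n,),
    [A],
    lambda ins, outs: tvm.tir.call_packed("my_extern_add_one", ins[0], outs[0]),
    name="C",
)
s = te.create_schedule(C.op)
print(tvm.lower(s, [A, C], simple_mode=True))
```
---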
Hi everyone,
I'm currently trying to tensorize the schedule for a very simple [4,4] matrix element-wise sum (add), to be performed as four [2,2] matrix additions by an intrinsic function. I've looked into adapting the tutorial on [Tensorization](https://tvm.apache.org/docs/tutorials/languag…).
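Roughly, this is the direction I'm taking (a sketch, not my exact code; the extern symbol `add2x2` stands in for the real intrinsic implementation):

```
import tvm
from tvm import te

M = 4
A = te.placeholder((M, M), name="A")
B = te.placeholder((M, M), name="B")
C = te.compute((M, M), lambda i, j: A[i, j] + B[i, j], name="C")

def intrin_add2x2():
    a = te.placeholder((2, 2), name="a")
    b = te.placeholder((2, 2), name="b")
    c = te.compute((2, 2), lambda i, j: a[i, j] + b[i, j], name="c")
    # Symbolic row strides let the intrinsic match non-compact [2,2]
    # tiles cut out of the larger [4,4] buffers.
    Ab = tvm.tir.decl_buffer(a.shape, a.dtype, name="a_buf", offset_factor=1, strides=[te.var("sa"), 1])
    Bb = tvm.tir.decl_buffer(b.shape, b.dtype, name="b_buf", offset_factor=1, strides=[te.var("sb"), 1])
    Cb = tvm.tir.decl_buffer(c.shape, c.dtype, name="c_buf", offset_factor=1, strides=[te.var("sc"), 1])

    def intrin_func(ins, outs):
        aa, bb = ins
        cc = outs[0]
        ib = tvm.tir.ir_builder.create()
        # "add2x2" is a placeholder for the real extern kernel.
        ib.emit(tvm.tir.call_extern("int32", "add2x2",
                                    cc.access_ptr("w"),
                                    aa.access_ptr("r"),
                                    bb.access_ptr("r")))
        return ib.get()

    return te.decl_tensor_intrin(c.op, intrin_func, binds={a: Ab, b: Bb, c: Cb})

s = te.create_schedule(C.op)
io, jo, ii, ji = s[C].tile(C.op.axis[0], C.op.axis[1], x_factor=2, y_factor=2)
s[C].tensorize(ii, intrin_add2x2())
print(tvm.lower(s, [A, B, C], simple_mode=True))
```
---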
Hi, I am new to TVM and I want to ask a question:
when building a lib with relay.build, we can use time_evaluator to measure the time cost;
how do we measure the time when using create_executor?
I think:
> start = time.time()
> tvm_output = intrp.evaluate()(tvm.nd.array(x.astype(dtype)), **params)
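Concretely, something like this (assuming `intrp`, `x`, `dtype`, and `params` are already defined); caching the function returned by `intrp.evaluate()` keeps compilation out of the timed loop:

```
import time
import tvm

run = intrp.evaluate()  # compile once, reuse the returned function
run(tvm.nd.array(x.astype(dtype)), **params)  # warm-up run

n_repeat = 10
start = time.time()
for _ in range(n_repeat):
    tvm_output = run(tvm.nd.array(x.astype(dtype)), **params)
elapsed_ms = (time.time() - start) / n_repeat * 1e3
print("mean time per run: %.3f ms" % elapsed_ms)
```
---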
Hi, I'm using the ir_builder to construct a CUDA kernel, but I encounter a problem with if_scope:
```
import tvm
from tvm import te

ib = tvm.tir.ir_builder.create()
n = te.size_var("n")
A = ib.pointer("float32", name="A")
tmod = tvm.tir.truncmod
with ib.for_range(0, n, name="i") as i:
    with ib.if_scope(tmod(i, 2) == 0):
        b = 100.0  # plain Python assignment; this is where the problem shows up
        A[i] = b
```