You mean combining the two kernels? But actually I want to cut the `if` statements.
---
[Visit Topic](https://discuss.tvm.apache.org/t/ops-become-slow-when-using-te-var/11486/5) to respond.
Yes, I guess so. However, in most cases I think those `if`s are unnecessary, so I want to know the assertion statements that would let me avoid them.
Here's an example, where `N` is the `te.var`. You can clearly see the duplicated `if`:
```cuda
extern "C" __global__ void default_function_kernel0(float* __restrict__ T_s
```
I wrote some ops with te and topi, and found that when using te.var to represent the op's input shape, they became much slower than with a constant shape, even under the same schedule.
When I looked into the generated CUDA code, there were lots of `if` statements involving the var, which make it much slower. So I wonder if there is a way to avoid them.
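For reference, a minimal toy case that reproduces the effect (a sketch, not the original op; the split/bind schedule here is assumed):

```python
import tvm
from tvm import te

n = te.var("N")                      # symbolic extent instead of a constant
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] + 1.0, name="B")

s = te.create_schedule(B.op)
xo, xi = s[B].split(B.op.axis[0], factor=64)
s[B].bind(xo, te.thread_axis("blockIdx.x"))
s[B].bind(xi, te.thread_axis("threadIdx.x"))

# With N unknown, N % 64 cannot be proven zero, so the lowered code keeps a
# tail guard like `if (likely(xo * 64 + xi < N))`; with a constant N that is
# a multiple of 64, the bound prover can drop it.
print(tvm.lower(s, [A, B], simple_mode=True))
```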
Perhaps adding renaming logic [here](https://github.com/apache/tvm/blob/main/src/auto_scheduler/compute_dag.cc#L1206) may not work, because that is just the print of `... = tuple(name.op.axis) + tuple(name.op.reduce_axis)`, while the following steps in [step print](https://github.com/apache/
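For context, the printed lines in question look roughly like the following toy (illustrative only; the matmul and the tensor names are assumptions):

```python
import tvm
from tvm import te

# A toy matmul so the printed-style lines below have ops to bind to.
A = te.placeholder((128, 128), name="A")
B = te.placeholder((128, 128), name="B")
k = te.reduce_axis((0, 128), name="k")
C = te.compute((128, 128), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")
s = te.create_schedule(C.op)

# The axis-unpacking assignment the printer emits, followed by transform
# steps that keep reusing those variable names:
C_i, C_j, C_k = tuple(C.op.axis) + tuple(C.op.reduce_axis)
C_i_o, C_i_i = s[C].split(C_i, factor=32)
s[C].reorder(C_i_o, C_j, C_k, C_i_i)
```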
OK, thank you so much. :grinning: BTW, do you know whether a te.Schedule contains the compute information or just the schedule? I mean the `sch` in `sch, args = task.apply_best(log_file)`.
I ask because I tried to change `args` and got an error.
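For context, a minimal sketch of the flow in question (the workload, shapes, and log file name are assumptions, not my real code):

```python
import tvm
from tvm import te, auto_scheduler

@auto_scheduler.register_workload
def matmul(n):
    A = te.placeholder((n, n), name="A")
    B = te.placeholder((n, n), name="B")
    k = te.reduce_axis((0, n), name="k")
    C = te.compute((n, n), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")
    return [A, B, C]

task = auto_scheduler.SearchTask(func=matmul, args=(1024,), target="llvm")

# `sch` is an ordinary te.Schedule built over the task's own ComputeDAG, and
# `args` are that DAG's tensors; tvm.build lowers the pair together, which is
# presumably why substituting different tensors into `args` fails.
sch, args = task.apply_best("matmul.json")   # assumes this tuning log exists
func = tvm.build(sch, args, target="llvm")
```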
---
[Visit Topic](https://discuss.tvm.apache.org/t/pr
Yes, I realized the same issue: since the auto_scheduler does the fusion, the DAG ops can't correspond to those we wrote in TE and TOPI. But the printed schedule uses the DAG ops, so we can't simply match or reuse the printed schedule.
However, when I looked into the DAG, I found it doesn't change
Thank you! Your explanation about TE tensors is very clear.
However, I think those `T_reshape.op` may refer to different ops, because I use several topi.reshape calls. Here's my code:
```python
def function():
    A = te.placeholder((1, 3, 5, 5), name="A", dtype="float32")
    kernel = te
```
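Since the snippet is cut off, here is a hedged reconstruction of this kind of workload (the second placeholder, its shape, and the exact op order are assumptions):

```python
import tvm
from tvm import te, topi, auto_scheduler

@auto_scheduler.register_workload
def function():
    A = te.placeholder((1, 3, 5, 5), name="A", dtype="float32")
    kernel = te.placeholder((16, 3, 3, 3), name="kernel", dtype="float32")
    conv = topi.nn.conv2d_nchw(A, kernel, stride=1, padding=1, dilation=1)
    r1 = topi.reshape(conv, (1, 16, 25))   # first T_reshape
    r2 = topi.reshape(r1, (1, 400))        # second T_reshape, a distinct op
    return [A, kernel, r2]
```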
I tried to write a complicated new op and tune it with auto_scheduler. In the description of the op I use topi.reshape several times, and I use topi.conv2d as well.
The new op works well with auto_scheduler, but when I look at the printed schedule, I am confused.
> PadInput_i0, PadInp
It seems that in TE, the index operator `[]` is not the same as `[]` in numpy.
When I use `A[:1000]` (where `A` is a te.Tensor), I get a TensorSlice object, but I want a Tensor.
So can anyone tell me how to use TensorSlice, or how to get the same `[]` indexing behavior as numpy? Many thanks.
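For concreteness, a sketch of two common ways to materialize a slice as a real Tensor (the shapes here are made up; `topi.strided_slice` is the numpy-like route):

```python
import tvm
from tvm import te, topi

A = te.placeholder((2000,), name="A", dtype="float32")

slice_ = A[:1000]   # a te.TensorSlice, meant for indexing inside compute exprs

# Two ways to get an actual te.Tensor holding the first 1000 elements:
B = topi.strided_slice(A, begin=[0], end=[1000], strides=[1])  # numpy-style slice
C = te.compute((1000,), lambda i: A[i], name="C")              # explicit copy
```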
---
It has been over a year since TVM v0.7 came out, so when will v0.8 be released? Maybe in the next month or two? I am looking forward to it.
---
[Visit Topic](https://discuss.tvm.apache.org/t/when-will-tvm-v0-8-release/11283/1) to respond.
OK. And I wonder whether auto_scheduler has any plans to support dynamic shapes, or at least a shape range, for operator tuning? BTW, will auto_scheduler support dynamic batch sizes for model tuning in the future? Thanks
---
[Visit Topic](https://discuss.tvm.apache.org/t/can-tvm-support-auto-scheduler
Hello, I also have problems with this part. Can you share your C code showing how to call the asm code from plain C? Many thanks.
---
[Visit Topic](https://discuss.tvm.apache.org/t/how-to-compile-the-autotvm-generated-assembly-code-for-the-cpu/5542/7) to respond.
@donglaxiche Hello, I have solved this problem. You should check out TVM at an earlier commit, for example `git checkout 0b2f30aef2c1c1ed4ec504157b54ceaab182e9ab`. Then it works.
---
[Visit Topic](https://discuss.tvm.apache.org/t/no-match-for-call-to-const-std-hash-spv-builtin-const-spv-builtin/10669/
I met this problem too. @tqchen, can you help us? Thanks.
---
[Visit Topic](https://discuss.tvm.apache.org/t/no-match-for-call-to-const-std-hash-spv-builtin-const-spv-builtin/10669/2) to respond.