@manupa-arm @matt-arm
So let me ask more directly. Would the following ordering solve the problem I described before: first pattern-match your offloadable pattern, and then, within the extracted composite, replace the native Relay operators with your new ethosu.conv2d Relay operator?
cc @mbrookhart He may have some insights.
---
[Visit
Topic](https://discuss.tvm.apache.org/t/question-on-fuzzy-path-matching-matching-arbitrary-number-and-type-of-nodes-in-path/11493/2)
to respond.
You are receiving this because you enabled mailing list mode.
To unsubscribe from these emails.
Hello TVM developers and community,
I have been working on running inference with TVM on CPU only, specifically on
ARM big.LITTLE CPU cores.
I am wondering: on an ARM big.LITTLE configuration, is it possible for TVM to
capture the communication cost between the big cores and the little cores?
I'm sorry to disturb you again. Does that mean relay.build won't produce a Tensor
Expression, and will instead lower Relay IR -> Relay primitives -> TIR directly?
---
[Visit
Topic](https://discuss.tvm.apache.org/t/question-on-how-to-manual-schedule-and-optimize-the-front-model/11472/9)
to respond.
No. As far as I know, a primitive function is a Relay Function in which each
CallNode's op is an OpNode, not a
FunctionNode.
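To illustrate the distinction above, here is a hypothetical toy sketch in plain Python (these are not TVM's real classes, just stand-ins for OpNode, FunctionNode, and CallNode) of the rule that a primitive function's calls all target operators rather than functions:

```python
# Toy stand-ins for Relay's node kinds (hypothetical, not the TVM API).
from dataclasses import dataclass

@dataclass
class OpNode:            # a built-in operator, e.g. "add"
    name: str

@dataclass
class FunctionNode:      # a user-defined / composite function
    name: str

@dataclass
class CallNode:          # a call site; `op` is what is being called
    op: object

def is_primitive(calls):
    """A function is 'primitive' if every call targets an OpNode."""
    return all(isinstance(c.op, OpNode) for c in calls)

primitive_body = [CallNode(OpNode("add")), CallNode(OpNode("multiply"))]
composite_body = [CallNode(FunctionNode("fused_conv2d_relu"))]
print(is_primitive(primitive_body))  # True
print(is_primitive(composite_body))  # False
```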
---
[Visit
Topic](https://discuss.tvm.apache.org/t/question-on-how-to-manual-schedule-and-optimize-the-front-model/11472/
Thank you very much. I would like to confirm further: are the primitive
functions Tensor Expressions?
---
[Visit
Topic](https://discuss.tvm.apache.org/t/question-on-how-to-manual-schedule-and-optimize-the-front-model/11472/7)
to respond.
"Schedule" includes some Primitives and generates IR at last, "Pass" can be
understood as a function which can modify the IR.
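As a plain-Python illustration of the "pass = function that transforms the IR" idea (a hypothetical toy IR as a list of tuples, not TVM's real data structures):

```python
# Toy sketch (not TVM): an "IR" is a list of ops; a "pass" is a function
# that takes the IR and returns a transformed IR.
def constant_fold_pass(ir):
    """Replace ("add", const, const) ops with a folded ("const", value)."""
    out = []
    for op in ir:
        if op[0] == "add" and isinstance(op[1], int) and isinstance(op[2], int):
            out.append(("const", op[1] + op[2]))
        else:
            out.append(op)
    return out

ir = [("add", 2, 3), ("load", "x", None)]
print(constant_fold_pass(ir))  # [('const', 5), ('load', 'x', None)]
```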
---
[Visit
Topic](https://discuss.tvm.apache.org/t/question-on-how-to-manual-schedule-and-optimize-the-front-model/11472/6)
to respond.
A more profound issue we may encounter in symbolic-shape cases: we need
a mechanism to embed the domains of variables in the IR. CC: @tqchen
---
[Visit
Topic](https://discuss.tvm.apache.org/t/a-failed-example-of-using-compute-at-based-on-tvmscript/11489/8)
to respond.
Let me explain how compute-at / reverse-compute-at works.
Basically, we do integer set analysis to determine the loop domain of the
block being moved (in our example, block C). In our particular case, the
inferred domain is:
```python
[0 : T.min(1, OH - i), 0 : OW]
= [0 : 1, 0 : OW]
```
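As a plain-Python sketch of why that first extent simplifies (this is not TVM's arith module, just the interval arithmetic spelled out): for every `i` in the loop range `0 <= i < OH`, the extent `min(1, OH - i)` is exactly 1, so `[0 : min(1, OH - i)]` collapses to `[0 : 1]`.

```python
# Plain-Python sketch: inside the loop, OH - i >= 1, so min(1, OH - i) == 1
# and the inferred extent of the moved block's first axis is always 1.
OH = 8  # example loop bound; any positive value behaves the same
extents = [min(1, OH - i) for i in range(OH)]
print(extents)  # [1, 1, 1, 1, 1, 1, 1, 1]
```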
Has anyone met the same problem, or can anyone help me figure it out? I would
appreciate it very much.
---
[Visit
Topic](https://discuss.tvm.apache.org/t/question-about-reshape-of-schedule/11503/2)
to respond.
If I want to do reshape + reorder, the result does not meet my expectation. Split
can't work in this demo when doing reorder.
```python
src_shape = (1, 16, 16000, 21) # 16449
dst_shape = (1, 256000, 21) # 263184 263169 256000
fp16 = "float16"
src_tensor = tvm.placeholder(src_shape, dtype=fp16,
```
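One quick sanity check on the shapes above (a generic observation about reshape, not specific to this schedule): reshape only rearranges elements, so source and destination shapes must cover the same total element count, which they do here.

```python
# Generic reshape sanity check: total element counts must match.
from math import prod

src_shape = (1, 16, 16000, 21)
dst_shape = (1, 256000, 21)
print(prod(src_shape), prod(dst_shape))  # 5376000 5376000
```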
When I tried to compile a YOLOv3 model for Zynq following the TVM - Vitis AI
YoloV3 tutorial:
https://github.com/Xilinx/Vitis-AI/blob/master/external/tvm/examples/external_yolov3_tutorial.ipynb
I hit the "LLVM ERROR: out of memory Aborted (core dumped)" error when it came
to "lib.export_library
@junrushao1994 Please take a look :) :blush:
---
[Visit
Topic](https://discuss.tvm.apache.org/t/inferbound-error-domain-already-inferred-of-split-op/11499/2)
to respond.
Hi all,
Recently I met an infer-bound error on a split op:
```
TVMError: Check failed: match: iter_var(blockIdx.x, , blockIdx.x) domain
already inferred, cannot prove their extents are the same
floordiv{any_dim|any_dim>=0}*{any_dim|any_dim>=0})*({any_dim|any_dim>=0} -
(floordiv({any_dim|any_d
```